Replies: 1 comment
-
Hi Daniele. Thanks for your interest in the project! Yes, you can generate images with whichever service you'd like and then use this code base to get the robot to paint that image. We are actively working on the project, but we don't have a public roadmap yet. Two main areas of research are improving interactivity with the robot and improving the shapes of the brush strokes the robot can make. Hopefully this will all be in the code base in a month or two!
-
Amazing, amazing, amazing!
Love this project.
We're working on a generative art project in the field of painting. Yesterday we were discussing how it could be possible to "break the wall" and paint physically, and voilà, your amazing project appeared!
Is the project strictly tied to Stable Diffusion? If we have our own knowledge, instructions, and ways to generate our paintings using something different like Midjourney/DALL-E, so that we start from a "ready-to-use" image, what kind of approach do you suggest? Using a Euclidean distance to get as close as possible to the real painting? Any other tips?
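(Side note on the Euclidean-distance idea above: here is a minimal, illustrative sketch of a pixel-space L2 comparison between a target image and a photo of the current canvas. The function and file names are hypothetical and are not part of this repository's API; it only shows one way such a distance could be computed.)

```python
# Illustrative only: pixel-space Euclidean (L2) distance between a target
# image and a photo/render of the current canvas. Names are hypothetical,
# not part of this code base.
import numpy as np
from PIL import Image

def l2_distance(target_path: str, canvas_path: str) -> float:
    """Mean per-pixel Euclidean distance between two same-sized RGB images."""
    target = np.asarray(Image.open(target_path).convert("RGB"), dtype=np.float32) / 255.0
    canvas = np.asarray(Image.open(canvas_path).convert("RGB"), dtype=np.float32) / 255.0
    if target.shape != canvas.shape:
        raise ValueError("Resize the images to the same dimensions before comparing.")
    # Per-pixel L2 over the RGB channels, averaged over the whole image.
    return float(np.sqrt(((target - canvas) ** 2).sum(axis=-1)).mean())

# Example usage (hypothetical file names):
# distance = l2_distance("generated_target.png", "canvas_photo.png")
```

A plain pixel-wise L2 is sensitive to small misalignments and color shifts between the physical painting and the digital target, so perceptual metrics (e.g., an LPIPS-style feature distance) are often more forgiving for this kind of comparison.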
Are you currently working on the project? Do you have a roadmap or anything else?
Thank you!