Conceptual Draft
Following a sketching session, we select our favorite concept and generate image-to-image variations to explore different possibilities. The image below showcases our top nine sketches, each reflecting a unique iteration of the concept. This process allows us to refine the design and choose the best direction moving forward.
AI-generated sketch variations
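To make this step concrete, here is a minimal sketch of how such image-to-image variations could be produced with the diffusers library. The checkpoint, prompt, strength, and file names are illustrative assumptions, not the exact settings used in this project.

```python
# Illustrative sketch only: model, prompt, strength, and file names are
# assumptions, not the project's actual settings.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The hand-drawn sketch that seeds all variations.
init_image = load_image("sketch.png").resize((768, 512))
prompt = "industrial design concept sketch, clean line work, studio lighting"

# Nine variations from nine seeds; lower strength stays closer to the sketch.
for seed in range(9):
    generator = torch.Generator("cuda").manual_seed(seed)
    variation = pipe(
        prompt=prompt,
        image=init_image,
        strength=0.55,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    variation.save(f"variation_{seed}.png")
```

Keeping the prompt fixed and varying only the seed yields a spread of sketches that still read as the same underlying concept.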
Sketch to AI Render
At this stage, we present the preprocessed images of our selected concept. On the left are the two preprocessed images used for the AI-rendered outputs: the first was processed with the Canny preprocessor, the second with the SoftEdge preprocessor. These images were then fed into ControlNet to guide image generation without adding any color information; color and other details were introduced through prompt engineering.
Selected Sketches to AI Render variations
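For reference, below is a hedged sketch of this preprocessing and ControlNet step using diffusers, OpenCV, and controlnet_aux. The Canny thresholds, the HED detector standing in for the SoftEdge preprocessor, the ControlNet checkpoint, and the prompt are all assumptions chosen for illustration.

```python
# Illustrative sketch: preprocess a selected sketch with Canny and a
# SoftEdge-style detector, then let ControlNet guide generation while the
# prompt alone supplies color and material details. All model names,
# thresholds, and the prompt are assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import HEDdetector  # soft-edge-style preprocessor

sketch = load_image("selected_sketch.png")

# Canny preprocessor: a hard, binary edge map with no color information.
edges = cv2.Canny(np.array(sketch), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# SoftEdge-style preprocessor: softer, continuous edges.
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
softedge_image = hed(sketch)

# ControlNet conditions the diffusion model on the edge map; color comes
# only from the text prompt.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "product render, matte black body, brushed aluminum details, studio lighting"
render = pipe(prompt, image=canny_image, num_inference_steps=30).images[0]
render.save("controlnet_render.png")
```

Swapping canny_image for softedge_image (with a matching SoftEdge ControlNet checkpoint) would produce the second set of renders in the same way.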
Refined final AI Concept
Results of the project
More design options
Through the image-to-image process, we can significantly expand our design options and multiply creative possibilities.
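As a rough illustration of how this multiplies options, the sketch below sweeps the denoising strength over a single source render: low values give close variations, higher values bolder departures. The model, prompt, and parameter values are assumptions.

```python
# Illustrative sketch: vary strength and seed over one source render to
# fan out design options. Model, prompt, and values are assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

source = load_image("selected_render.png")
prompt = "product design concept, alternative colorway, studio lighting"

# Low strength = subtle tweaks to the source; high strength = bolder departures.
for strength in (0.3, 0.5, 0.7):
    for seed in range(3):
        generator = torch.Generator("cuda").manual_seed(seed)
        option = pipe(
            prompt=prompt, image=source, strength=strength, generator=generator
        ).images[0]
        option.save(f"option_s{int(strength * 100)}_seed{seed}.png")
```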
Refinement with Flux and SD
We create the final image by generating multiple variations of the render with Stable Diffusion, refining them with Flux, and finally upscaling the result with Stable Diffusion once more.
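Below is a minimal sketch of this three-stage chain, assuming the diffusers library with SD 1.5, FLUX.1-dev, and the Stable Diffusion x4 upscaler as stand-ins for the exact checkpoints used; the prompt and parameter values are illustrative.

```python
# Illustrative three-stage chain: SD img2img variations -> Flux refinement
# -> SD upscaling. Checkpoints, prompt, and parameters are assumptions.
import torch
from diffusers import (
    FluxImg2ImgPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionUpscalePipeline,
)
from diffusers.utils import load_image

prompt = "refined product concept, photorealistic materials, studio lighting"

# Stage 1: Stable Diffusion image-to-image variation of the chosen render
# (in practice, several variations are generated and the best one is kept).
sd = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base = load_image("refined_concept.png").resize((768, 512))
variation = sd(prompt=prompt, image=base, strength=0.4).images[0]
del sd
torch.cuda.empty_cache()

# Stage 2: refine with Flux at low strength so the composition is preserved
# while detail and coherence improve.
flux = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
refined = flux(
    prompt=prompt,
    image=variation,
    height=variation.height,
    width=variation.width,
    strength=0.3,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
del flux
torch.cuda.empty_cache()

# Stage 3: upscale with the Stable Diffusion x4 upscaler. The output is 4x
# the input size, so we downscale first to keep memory manageable.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
final = upscaler(prompt=prompt, image=refined.resize((384, 256))).images[0]
final.save("final_concept.png")
```

Running the refinement at low strength keeps Flux in a polishing role, while the final upscaling pass recovers the resolution needed for presentation.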