A new model called SDXL Turbo is set to revolutionize text-to-image generation with its ability to create detailed images from text descriptions in real-time. Developed by Stability AI, SDXL Turbo leverages an innovative technique called Adversarial Diffusion Distillation (ADD) to achieve unprecedented performance. This article will discuss the key capabilities of SDXL Turbo, examine the […]
Stable Diffusion examples, tips, and prompts. Stable Diffusion is a machine learning model that generates digital images from natural language descriptions.
The Dawn of Stable Audio: A Revolution in AI-Generated Music
A sleepy cat lies curled on the couch as melodic guitar strums drift from the stereo speakers. But this soothing tune wasn’t composed by any musician – it sprang from the circuits of an artificial intelligence. Generative AI has taken great leaps recently, now producing music and other audio that captures the complexity and nuance […]
T2I-Adapters – Efficient Controllable Image Generation using Text-to-Image with Stable Diffusion XL
T2I-Adapter is an efficient plug-and-play module that provides additional guidance to pre-trained text-to-image models while keeping the original large text-to-image models frozen. T2I-Adapter aligns the internal knowledge in text-to-image models with external control signals. Various adapters can be trained according to different conditions to achieve rich control and editing effects. As a related contemporaneous work, […]
Stability AI Introduces Refined Stable Diffusion XL 1.0
Stability AI, a leading open generative AI company, recently announced the release of Stable Diffusion XL (SDXL) 1.0, the latest and most advanced version of its flagship text-to-image suite of models. This new SDXL 1.0 model is now featured on Amazon Bedrock, the fully managed service from Amazon Web Services Inc. (AWS) that provides access […]
Stability AI Launches New Sketch-to-Image Tool Stable Doodle
Stability AI has launched a new sketch-to-image tool called Stable Doodle. This tool lets users turn simple drawings into dynamic images, offering endless creative possibilities for professionals and hobbyists alike. Stable Doodle makes bringing a drawing to life simpler than ever before. This latest offering from Stability AI’s Clipdrop has the potential to […]
SDXL 0.9: The Ultimate AI Image Generator by Stability AI
Stability AI presented SDXL 0.9, the latest and most impressive update to the Stable Diffusion text-to-image suite of models. Building on the success of Stable Diffusion XL beta, which was launched in April, SDXL 0.9 delivers stunning improvements in image quality and composition. SDXL 0.9 is available now via ClipDrop, and will soon be accessible […]
QR Code AI Art Generator
ControlNet models can help you create beautiful QR code-based artwork that preserves the original QR code shape. They are trained on a large dataset of 150,000 pairs of QR codes and their artistic versions. The Stable Diffusion 2.1 version is slightly better, as it was customized for my specific needs. But there is also a […]
DeepFloyd IF text-to-image model that can smartly integrate text
Stability AI and its multimodal AI research lab, DeepFloyd, have announced the release of their latest research project, DeepFloyd IF. This powerful text-to-image model is available on a non-commercial, research-permissible license, allowing other research labs to examine and experiment with advanced text-to-image generation approaches. Stability AI aims to release a fully open-source version of the […]
ControlNet is Great for Text
ControlNet is great for image-to-image text transformation. I started with a simple black-and-white “Hello” image, using ControlNet Stable Diffusion with Depth Maps via the demo on Hugging Face. I tried different settings and got good results with 30 steps and a guidance scale of 9.5. The prompts I used were simple, like Wood, […]
ControlNet – Adding Conditional Control to Text-to-Image Diffusion Models
ControlNet is a powerful neural network structure that adds extra conditions to control diffusion models. It can grasp and comprehend task-specific input conditions in an all-in-one approach, even when the training dataset is small. Training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal […]