Stable Diffusion
Developed by engineers and researchers from CompVis, Stability AI, and LAION, the Stable Diffusion model is released under the Creative ML OpenRAIL-M license, which means it can be used both commercially and non-commercially. Read more about Stable Diffusion on lablab.ai
Powerful Transformer
- It can be run on a consumer-grade graphics card, enabling makers to create next-generation AI applications and art
Flexible
- The model also supports image-to-image style transfer, upscaling, and generating images from a simple sketch; a short code example follows this list
API
- We will provide an API, but we encourage you to also explore environments that allow you to experiment with the code, such as Google Colab and Hugging Face
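For illustration, here is a minimal, hedged sketch of the image-to-image workflow mentioned above. It assumes the Hugging Face diffusers library, a CUDA GPU, and the public CompVis/stable-diffusion-v1-4 checkpoint; it is not an official hackathon endpoint.

```python
# Minimal image-to-image sketch (assumes diffusers, Pillow, and a CUDA GPU).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # public checkpoint; swap in your own model
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a detailed watercolor landscape, golden hour",
    image=init,          # called init_image in older diffusers releases
    strength=0.75,       # how far the result may drift from the input sketch
    guidance_scale=7.5,
).images[0]
result.save("styled.png")
```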
How we judge
The challenge for this hackathon is to create an innovative solution using Stable Diffusion. Show us how AI makes something drastically better or solves a previously unsolved problem, and you are in a good position.
Prizes
• 🏆 Prize pool of $2000 in Digital Ocean credits to be distributed among the winning teams!
We thank Digital Ocean for sponsoring this event.
• All winning projects will be featured on the lablab.ai blog, newsletter, and social media channels.
• Additionally, each submitted project comes with a certificate of completion!
The details 🧐
Join the lablab.ai community during this 48-hour makers event to innovate and build next-generation applications. Stable Diffusion is a truly transformative technology that will change the way we design and create.
🗓️ Where and when
The hackathon starts on November 11th and ends on November 13th. Over the weekend, you'll have the opportunity to learn from lablab.ai experts during workshops, keynotes, and mentoring sessions.
😅 How about teams?
If you don’t have a team you will be able to match and team up with other participants around the world. Finding & creating teams can be done from the dashboard you can access after you enroll. We also recommend checking our Discord server to find teammates and discuss ideas.
🦸🏼‍♂️ Who should participate?
We are looking for both solutions where Stable Diffusion is used as a core component of the application lifecycle, and solutions where it is additionally trained or fine-tuned to solve a problem. We are looking for people who are passionate about building the future of AI and want to learn more about the latest developments in the field.
🔐 Access to API
We will provide API endpoints, but we also encourage you to explore environments that allow you to experiment with the code, such as Google Colab, Replicate, and Hugging Face. If you are looking for extended compute for training, come talk with us; if you have a good idea, we might be able to help you out.
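As a starting point, the snippet below is a minimal text-to-image sketch you could run in one of those environments. It assumes the diffusers library and the public CompVis/stable-diffusion-v1-4 checkpoint (plus a Hugging Face token if the checkpoint is gated); the API endpoints we provide may differ.

```python
# Minimal text-to-image sketch (assumes diffusers and a consumer GPU,
# such as the ones available on Google Colab).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "an astronaut riding a horse on the moon, digital art",
    guidance_scale=7.5,        # how strongly to follow the prompt
    num_inference_steps=50,    # more steps = slower but usually cleaner
).images[0]
image.save("astronaut.png")
```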
🛠️ How to participate in the hackathon
The hackathon will take place online on the lablab.ai platform and the lablab.ai Discord server. Please register for both in order to participate. To participate, click the "Enroll" button at the bottom of the page and read our Hackathon Guidelines.
🧠 Get prepared
To get prepared for the hackathon, we recommend starting at our Stable Diffusion technology page, where you can find all relevant information about Stable Diffusion, such as tutorials and boilerplates.
Examples
Here are some examples of how this powerful model can be used.
Stable Diffusion Photoshop plugin.
Stable Diffusion-powered tool for prompt-based inpainting.
Prompt engineering and image generation tool by Arjun Patel, Cohere Hackathon #3 winner.
Speakers, Mentors and Organizers
Hackathon FAQ
Who can join the Hackathon?
We welcome domain experts from all industries, not just AI or tech. Successful AI solutions require a combination of technical expertise and domain knowledge. Coding experience is recommended.
Do I need a team?
You are welcome to join as a team or solo. If solo, we encourage you to look for a team before the event. We recommend joining the Deep Learning Labs Discord channel: https://discord.gg/gCuBwBB35k and posting in the #looking-for-team channel to get to know your potential future team members.
Do I need a Github account?
It is recommended that at least one team member has a GitHub account. You can create one for free if you don't already have one.
I have other questions.
Feel free to reach us on social media, or through our Discord channel.
Event Schedule
- To be announced
Winner Submissions 🏆

Stable Diffusion Creator Tool
This tool lets you create videos based on stable key frames and interpolates the results using custom APIs.
Fast Path
Submitted concepts, prototypes and pitches
Submissions from the teams that participated in the Stable Diffusion: The Future of Text-to-Image Models event and made it to the end 👊

Stable Diffusion Creator Tool
This tool lets you create videos based on stable key frames and interpolates the results using custom APIs.
Fast Path

forensic ai
Forensic.ai is an AI portrait generator. It helps police officers easily generate portrait images of suspects based on victims' reports. Our system provides a form with fields describing the physical characteristics of a suspect, such as age, race, gender, weight, and height. These parameters are concatenated by a prompt engineering algorithm into a prompt that is fed into Stable Diffusion to generate a portrait image (an illustrative sketch of this idea follows below).
SD4FUN
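As a rough, hypothetical illustration of the prompt construction Forensic.ai describes (not the team's actual code), the form fields might be concatenated along these lines; the field names and wording are assumptions:

```python
# Hypothetical sketch of turning form fields into a Stable Diffusion prompt.
def build_suspect_prompt(age, race, gender, weight_kg, height_cm):
    return (
        f"police sketch portrait of a {age}-year-old {race} {gender}, "
        f"about {height_cm} cm tall and {weight_kg} kg, "
        "front view, neutral expression, photorealistic"
    )

prompt = build_suspect_prompt(35, "caucasian", "male", 80, 180)
# The prompt is then passed to a Stable Diffusion pipeline, e.g. pipe(prompt).images[0]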

Fast Ads
Creating an online advertisement is not simple for small businesses, since they usually do not have full-time employees with graphic design skills. Fast-Ads is an AI tool that automates this online ad creation process.
Advert-AI

Same Same But Different
Designers typically face blockers in the initial stages of finding inspiration for their work. Without the risk of running into copyright or plagiarism issues, SSBD aims to provide a safe space to get similar-styled images with full originality.
DDC

Stable CAD Generator
Our project is an innovative CAD generator using Stable Diffusion. It classifies the given image using TensorFlow's MobileNetV2; after the image is successfully identified, it passes the result to the Stable Diffusion API, which generates a CAD/blueprint based on MobileNetV2's classification (an illustrative sketch of this flow follows below). This can solve the design problem of many people who work in engineering, architecture, or anything related to design, even though our main focus is on CADs. It can also be used as an educational tool in universities. We also plan on developing it further; for now this is only a prototype, and its full potential is yet to come. For more information, please check the presentation and the demonstration video, and don't forget to look at the GitHub repo.
Chuck Norris
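A hedged sketch of the two-stage flow described above (classification with MobileNetV2, then a Stable Diffusion prompt built from the predicted label); the file name and prompt wording are assumptions, not the team's implementation:

```python
# Sketch: classify an image with MobileNetV2, then build a blueprint-style prompt.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions,
)

model = MobileNetV2(weights="imagenet")
img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
label = decode_predictions(model.predict(x), top=1)[0][0][1]  # e.g. "sports_car"

# The predicted label is then used to prompt Stable Diffusion for a CAD-style rendering.
prompt = f"technical CAD blueprint of a {label.replace('_', ' ')}, orthographic views"
```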

Dreamixer
Dreamixer is a Stable Diffusion-based AI tool for generating comic strips from text. Dreamixer aims to democratize the comic strip creation process, so that anyone with an idea and the urge to express it as a comic strip is not blocked by a lack of graphic design skills. If you can express your idea as a comic strip script, giving the scene setting and dialogue, Dreamixer will do the rest: it will generate the comic strip with the given scene setting and characters in a consistent style, and it will also append the dialogue spoken by characters as text bubbles. Later on, we plan to give our users the ability to edit specific portions of the comic strip using in-painting and other advanced Stable Diffusion features.
Morpheus

AI and early child development
Visual storytelling is one of the many ways educators assist in a child's growth. However, educators often have to improvise when trying to find relevant material (movies or story books) that can help guide the child through a specific situation while respecting the child's needs and accessibility accommodations. For example, a child with physical differences may experience a situation differently from another child, and an educator may be keen on addressing it. Developing a solution that can create custom images for storytelling based on the unique characteristics of the child can therefore be very empowering for the educational system.
Xena
Novel view generation
Synthesizing a novel view from different angles is of interest in many fields, such as VR, video enhancement, and games. However, this process has limitations when given an image of an object from an unrecognizable angle. The proposed method combines image-to-image translation with Stable Diffusion by giving text as an additional parameter.
Aqua learning
Show me a Story
This app generates story text with GPT-2 and creates visual images of the story. As the story keeps being generated, that is, as more sentences are written, the app creates images to visually represent it (a minimal sketch of this flow follows below).
AI4Lokal
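A minimal sketch of the flow described above, assuming the transformers and diffusers libraries with their public gpt2 and CompVis/stable-diffusion-v1-4 checkpoints; the app's own models and splitting logic may differ:

```python
# Sketch: generate story text with GPT-2, then render one image per sentence.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

storyteller = pipeline("text-generation", model="gpt2")
painter = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

story = storyteller("Once upon a time", max_length=60)[0]["generated_text"]
for i, sentence in enumerate(s.strip() for s in story.split(".") if s.strip()):
    image = painter(sentence + ", storybook illustration").images[0]
    image.save(f"scene_{i}.png")
```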
Immersive Books
Our application allows book readers to have a more immersive experience and grants artists, authors, and publishers the ability to craft their own story by fine-tuning their models.
The Prompt Engineers

Chrolove
Pictures generated by Stable Diffusion are used for mental health therapeutic purposes. There are two main services: the free in-app plan and the premium chromotherapy in the metaverse. The main goal of the application is to ask users about their current mood and the reasons behind it, and to generate an image based on theories of mental health and chromotherapy to improve overall mental health. The application starts by asking about the overall mood of the user: Great, Good, Okay, Bad, or Awful. Based on these answers, the model generates an image that soothes and improves the user's mood throughout the day. The metaverse is then used to provide immersive chromotherapy for all. It consists of a panorama or full-screen version of the picture trained and generated using Stable Diffusion. Throughout the chromotherapy, relaxation-stimulating colors provide immersive therapeutic experiences, whether through the VR feature on a mobile phone, the VR feature of a VR headset, the AR feature of an Apple device, or other devices compatible with Spatial XR.
Accelerate Mental Health Engineering

TEXT TO HUMAN AI
We are building a text-to-video human avatar generator. It is a pipeline of AI tools that works in 3 steps: 1. Text-to-Speech (creating an audio file from text input); 2. Text-to-Video (creating a video from text input); 3. Text-to-Human (using deepfake technology to make the video less random and merge the audio with the subject's lips). This is just the beginning of AI media and the next natural step for AI. It is better than most use cases because you have much more control over the subject, who can actually interact with their environment, be trained more easily, and move, which opens many more doors for use cases than other technology in the same bracket. We have text-to-text, we have text-to-image, and we have text-to-video. Welcome to Text-to-Human.
VISIO

A machine making a human
Name/Alias of the artist: Michael. Name of the prompt used/title of the artwork: "A machine making a human, 4k digital art". Name of tool used: API: newnative.ai. To make prompts more in line with what the user likes, I took inspiration from a simulated annealing algorithm. The user begins by giving a prompt of what they'd like to see. From there, a list of synonyms is produced, along with 100 common words and acronyms. The prompt is tweaked slightly, and different words are added, removed, or moved within the prompt based on user feedback at each stage. This results in an evolving image prompt that moves toward the user's preference (an illustrative sketch of this loop follows below).
Team Get It Done
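A hedged sketch of that prompt-evolution loop; the function names, mutation operations, and scoring hook are illustrative assumptions rather than the artist's actual code:

```python
# Sketch: evolve a prompt toward the user's preference, simulated-annealing style.
import random

def mutate(prompt, vocabulary):
    """Add, remove, or swap one word, mirroring the described tweaking step."""
    words = prompt.split()
    op = random.choice(["add", "remove", "swap"])
    if op == "add" or len(words) < 2:
        words.insert(random.randrange(len(words) + 1), random.choice(vocabulary))
    elif op == "remove":
        words.pop(random.randrange(len(words)))
    else:
        words[random.randrange(len(words))] = random.choice(vocabulary)
    return " ".join(words)

def evolve(prompt, vocabulary, rate_image, steps=10):
    """Keep a mutated prompt whenever the user rates its image at least as highly."""
    best_score = rate_image(prompt)        # e.g. user feedback on the generated image
    for _ in range(steps):
        candidate = mutate(prompt, vocabulary)
        score = rate_image(candidate)
        if score >= best_score:
            prompt, best_score = candidate, score
    return prompt
```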

Dimensational
Dimensational is a text-to-image topographic texture generator: by simply typing the type of environment you wish to generate, you can almost immediately create a myriad of beautiful artistic patterns that replicate the beauty of our planet and many others.
Dimensational
Talk In threeD
This generative solution provides near real-time (less than 20 seconds) text-to-3D model generation, compared to the current state-of-the-art model DreamFusion, which takes 1 hour of inference time for decent results. This will disrupt the way artists do 3D modelling in the future.
Metaverse-Streamer
Teams: Stable Diffusion: The Future of Text-to-Image Models
Check out the roster and find teams to join at Stable Diffusion: The Future of Text-to-Image Models.