A new text-to-image model called Stable Diffusion is set to change the game once again ⚡️. Stable Diffusion is a latent diffusion model capable of generating detailed images from text descriptions. It can also be used for tasks such as inpainting, outpainting, and image-to-image translation. Join us for an epic Stable Diffusion makers event, 11-13 November 👊
Our AI hackathon brought together a diverse group of participants who collaborated to develop a variety of impressive projects based on Stable Diffusion:
774 Participants
39 Teams
15 AI Applications
This event has now ended, but you can still register for upcoming events on lablab.ai. We look forward to seeing you at the next one!
Submissions from the teams participating in the Stable Diffusion Hackathon and making it to the end 👊
We are building a text-to-video human avatar generator. It is a pipeline of AI tools that works in three steps: 1. Text-to-Speech (creating an audio file from text input); 2. Text-to-Video (creating a video from text input); 3. Text-to-Human (using deepfake technology to make the video less random and merge the audio with the subject's lips). This is just the beginning of AI media and the natural next step for AI. It is better than most comparable approaches because you have much more control over the subject, who can actually interact with their environment, be trained more easily, and move, which opens many more doors for use cases than other technology in the same bracket. We have text-to-text, we have text-to-image, and we have text-to-video. Welcome to Text-to-Human.
VISIO
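As a rough illustration of how the three steps could be chained, here is a minimal sketch; synthesize_speech, generate_video, and lip_sync are hypothetical placeholders for the team's actual TTS, text-to-video, and deepfake lip-sync components.

```python
# Hypothetical orchestration of the three-step Text-to-Human pipeline.
# The three helpers below are placeholders, not real library calls.

def synthesize_speech(text: str, out_path: str) -> str:
    """Step 1: Text-to-Speech -- render the script as an audio file."""
    ...  # e.g. a TTS engine writes out_path ("narration.wav")
    return out_path

def generate_video(text: str, out_path: str) -> str:
    """Step 2: Text-to-Video -- render a raw video clip from the same script."""
    ...  # e.g. a diffusion-based text-to-video model writes out_path
    return out_path

def lip_sync(video_path: str, audio_path: str, out_path: str) -> str:
    """Step 3: Text-to-Human -- merge the audio with the subject's lips."""
    ...  # e.g. a Wav2Lip-style model aligns mouth movements to the audio
    return out_path

def text_to_human(script: str) -> str:
    audio = synthesize_speech(script, "narration.wav")
    video = generate_video(script, "raw_clip.mp4")
    return lip_sync(video, audio, "avatar.mp4")
```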
Pictures generated by Stable Diffusion are used for mental health therapeutic purposes. There are two main services: the free in-app plan and the premium chromotherapy experience in the metaverse. The main goal of the application is to ask users about their current mood and the reasons behind it, then generate an image based on theories of mental health and chromotherapy to improve overall mental health. The application starts by asking about the overall mood of the user: Great, Good, Okay, Bad, or Awful. Based on these answers, the model generates an image that soothes and improves the user's mood throughout the day. The metaverse service then provides immersive chromotherapy for all: a panorama or full-screen version of the picture generated with Stable Diffusion. Throughout the chromotherapy session, relaxation-stimulating colors provide immersive therapeutic experiences, whether through the VR feature on a mobile phone, a VR headset, the AR feature on an Apple device, or other devices compatible with Spatial XR.
Accelerate Mental Health Engineering
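To illustrate the mood-to-image idea, here is a minimal sketch assuming the Hugging Face diffusers library; the mood-to-prompt mapping is illustrative, not the team's actual chromotherapy prompts.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative mood-to-prompt mapping; the team's actual chromotherapy
# prompts are not public, so these are placeholders.
MOOD_PROMPTS = {
    "Great": "sunlit meadow, warm golden tones, soft light, calming digital art",
    "Good":  "gentle ocean waves at sunset, pastel colors, serene atmosphere",
    "Okay":  "quiet forest path, soft green hues, diffuse morning light",
    "Bad":   "cozy room with warm candlelight, deep blue and amber tones",
    "Awful": "slow drifting clouds over calm water, muted soothing colors",
}

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def soothing_image(mood: str, reason: str):
    # Combine the selected mood with the user's stated reason.
    prompt = f"{MOOD_PROMPTS[mood]}, reflecting on {reason}"
    return pipe(prompt).images[0]

soothing_image("Okay", "a stressful day at work").save("chromotherapy.png")
```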
Our application allows book readers to have a more immersive experience and grants artists, authors, and publishers the ability to craft their own story by fine-tuning their models.
The Prompt Engineers
This tool lets you create videos based on stable keyframes and interpolate the results using custom APIs.
Fast Path
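One common way to realize this is to generate stable keyframes from fixed noise latents and interpolate between them; here is a minimal sketch assuming the diffusers library (the team's custom APIs are not shown).

```python
import torch
from diffusers import StableDiffusionPipeline

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
    """Spherical interpolation between two latent noise tensors."""
    a, b = v0.float().flatten(), v1.float().flatten()
    omega = torch.acos(torch.clamp(torch.dot(a / a.norm(), b / b.norm()), -1.0, 1.0))
    out = (torch.sin((1 - t) * omega) * v0.float()
           + torch.sin(t * omega) * v1.float()) / torch.sin(omega)
    return out.to(v0.dtype)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Two fixed noise latents act as the "stable keyframes" for a 512x512 image.
shape = (1, pipe.unet.config.in_channels, 64, 64)
z0 = torch.randn(shape, device="cuda", dtype=torch.float16)
z1 = torch.randn(shape, device="cuda", dtype=torch.float16)

for i in range(8):
    z = slerp(i / 7, z0, z1)  # walk smoothly from keyframe 0 to keyframe 1
    frame = pipe("a misty forest at dawn", latents=z).images[0]
    frame.save(f"frame_{i:02d}.png")
```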
This generative solution provides near-real-time (less than 20 seconds) text-to-3D-model generation, compared to the current state-of-the-art model DreamFusion, which takes about an hour of inference time for decent results. This will disrupt the way artists do 3D modelling in the future.
Metaverse-Streamer
This app generates story text with GPT-2 and creates visual images of the story. As the story keeps being generated, that is, as more sentences are written, the app creates images to visually represent them.
AI4Lokal
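A minimal sketch of the story-to-images loop, assuming the transformers and diffusers libraries; the model names, seed sentence, and naive sentence splitting are illustrative.

```python
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

generator = pipeline("text-generation", model="gpt2")
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a short story continuation from a seed sentence.
story = generator("Once upon a time in a floating city,",
                  max_length=80, num_return_sequences=1)[0]["generated_text"]

# Naive sentence split; each sentence becomes an image prompt.
for i, sentence in enumerate(s.strip() for s in story.split(".") if s.strip()):
    sd(f"{sentence}, storybook illustration").images[0].save(f"scene_{i}.png")
```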
Creating an online advertisement is not simple for small businesses, since they usually do not have full-time employees with graphic design skills. Fast-Ads is an AI tool that automates this online ad creation process.
Advert-AI
Designers typically face blockers in the initial stages of finding inspiration for their work. SSBD aims to provide a safe space to get similarly styled images with full originality, without the risk of running into copyright or plagiarism issues.
DDC
Forensic.ai is an AI portrait generator. It helps police officers easily generate portrait images of suspects based on victims' reports. Our system provides a form with fields describing the physical characteristics of a suspect, such as age, race, gender, weight, and height. These parameters are concatenated by a prompt-engineering step into a prompt that is fed to Stable Diffusion to generate a portrait image.
SD4FUN
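A minimal sketch of how the form fields might be concatenated into a Stable Diffusion prompt; the field names and prompt template are illustrative, not the team's exact algorithm.

```python
def build_portrait_prompt(report: dict) -> str:
    """Concatenate the report fields into a single portrait prompt."""
    return (
        f"police sketch style portrait of a {report['age']}-year-old "
        f"{report['race']} {report['gender']}, approximately {report['height']} tall, "
        f"{report['weight']}, {report['distinguishing_features']}, "
        "front facing, neutral background, photorealistic"
    )

report = {
    "age": 35,
    "race": "caucasian",
    "gender": "male",
    "height": "180 cm",
    "weight": "medium build",
    "distinguishing_features": "short dark hair, scar on left cheek",
}
prompt = build_portrait_prompt(report)
# The prompt is then passed to a Stable Diffusion pipeline,
# e.g. StableDiffusionPipeline(...)(prompt).images[0]
```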
Visual storytelling is one of the many ways educators assist in a child's growth. However, educators often have to improvise when trying to find relevant material (movies or storybooks) that can help guide the child through a specific situation while matching the child's needs and accessibility accommodations. For example, a child with physical differences may experience a situation differently from another child, and an educator may be keen to address that situation. Therefore, developing a solution that can create custom images for storytelling based on the unique characteristics of the child can be very empowering for the educational system.
Xena
Synthesizing novel views from different angles is of interest in many fields such as VR, video enhancement, and games. However, this process has limitations when given an image of an object from an unrecognizable angle. The proposed method combines image-to-image translation with Stable Diffusion by supplying text as an additional parameter.
Aqua learning
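A minimal sketch of this idea using the diffusers image-to-image pipeline, with the text prompt supplied as the extra parameter; the prompt and strength are illustrative, and recent diffusers releases name the input argument image while older ones used init_image.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Source view of the object from an awkward/unrecognizable angle.
init_image = Image.open("object_view.png").convert("RGB").resize((512, 512))

# The text prompt steers the translation toward the desired new viewpoint.
novel_view = pipe(
    prompt="the same object photographed from the side, studio lighting",
    image=init_image,      # called init_image in older diffusers releases
    strength=0.6,          # how far to move away from the source view
    guidance_scale=7.5,
).images[0]
novel_view.save("novel_view.png")
```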
Dimensational is a text-to-image topographic texture generator: by simply typing in the type of environment you wish to generate, you can almost immediately create a myriad of beautiful artistic patterns that replicate the beauty of our planet and many others.
Dimensational
Dreamixer is a Stable Diffusion-based AI tool that generates comic strips from text. Dreamixer aims to democratize the comic-strip creation process so that anyone with an idea and the desire to express it as a comic strip is not blocked by a lack of graphic design skills. If you can express your idea as a comic-strip script giving the scene setting and dialogue, Dreamixer will do the rest. It will generate the comic strip with the given scene setting and characters in a consistent style, and it will also append the dialogue spoken by characters as text bubbles. Later on, we plan to give our users the ability to edit specific portions of the comic strip using inpainting and other advanced Stable Diffusion features.
Morpheus
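A minimal sketch of how consistent-style panels with text bubbles could be produced, assuming diffusers for the panels and Pillow for the speech bubbles; the style suffix, example script, and bubble drawing are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline
from PIL import ImageDraw

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A shared style suffix keeps the panels visually consistent.
STYLE = "comic book panel, flat colors, bold ink outlines"

script = [
    {"scene": "a robot waking up in a tidy kitchen", "dialogue": "Where is my coffee?"},
    {"scene": "the robot staring at an empty coffee jar", "dialogue": "This is a disaster."},
]

for i, beat in enumerate(script):
    panel = pipe(f"{beat['scene']}, {STYLE}").images[0]
    # Very simple speech bubble: white box with the dialogue text on top.
    draw = ImageDraw.Draw(panel)
    draw.rectangle([10, 10, 380, 60], fill="white", outline="black")
    draw.text((20, 25), beat["dialogue"], fill="black")
    panel.save(f"panel_{i}.png")
```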
Name/Alias of the artist: Michael
Name of the prompt used/title of the artwork: A machine making a human, 4k digital art
Name of tool used: API: newnative.ai
To make prompts more in line with what the user likes, I took inspiration from a simulated annealing algorithm. The user begins by giving a prompt of what they'd like to see. From there, a list of synonyms is produced, along with 100 common words and acronyms. The prompt is tweaked slightly, and different words are added, removed, or moved within the prompt based on user feedback at each stage. This results in an image prompt that evolves toward the user's preference.
Team Get It Done
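A minimal sketch of the annealing-inspired loop described above; the word pool, mutation choices, and cooling schedule are illustrative stand-ins for the author's actual implementation.

```python
import random

# Illustrative pool of common words/modifiers to splice into prompts.
COMMON_WORDS = ["4k", "digital art", "cinematic", "detailed", "surreal",
                "vibrant", "minimalist", "dramatic lighting"]

def mutate(prompt: str, n_tweaks: int) -> str:
    """Tweak the prompt slightly: add, remove or move words."""
    words = prompt.split()
    for _ in range(max(1, n_tweaks)):
        op = random.choice(["add", "remove", "move"])
        if op == "add" or len(words) < 3:
            words.insert(random.randrange(len(words) + 1), random.choice(COMMON_WORDS))
        elif op == "remove":
            words.pop(random.randrange(len(words)))
        else:
            w = words.pop(random.randrange(len(words)))
            words.insert(random.randrange(len(words) + 1), w)
    return " ".join(words)

prompt = "a machine making a human"
temperature = 3.0  # higher temperature = bigger prompt changes early on
for step in range(5):
    candidate = mutate(prompt, round(temperature))
    # In the real tool the user compares generated images; here we ask directly.
    if input(f"Prefer '{candidate}' over '{prompt}'? [y/N] ").strip().lower() == "y":
        prompt = candidate
    temperature *= 0.7  # cool down so later tweaks stay small
```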
Our project is an innovative CAD generator using Stable Diffusion. It classifies the given image using TensorFlow's MobileNetV2; after the image is successfully identified, it passes the result on to the Stable Diffusion API, which generates a CAD drawing/blueprint based on MobileNetV2's classification. This can help many people who work in engineering, architecture, or anything relating to design, even though our main focus is on CADs. It can also be used as an educational tool in universities. We plan to develop it further; for now this is only a prototype, and its full potential is yet to come. For more information, please check the presentation and the demonstration video, and don't forget to look at the GitHub repo.
Chuck Norris
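A minimal sketch of the classify-then-prompt flow, assuming Keras' pretrained MobileNetV2 and an illustrative blueprint-style prompt template.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Step 1: classify the uploaded photo with MobileNetV2.
model = MobileNetV2(weights="imagenet")
img = image.load_img("uploaded_photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
label = decode_predictions(model.predict(batch), top=1)[0][0][1]  # e.g. "desk"

# Step 2: turn the label into a blueprint-style prompt for Stable Diffusion.
prompt = (f"technical CAD blueprint of a {label}, orthographic front, top and "
          "side views, precise measurements, white lines on blue background")
# The prompt is then sent to the Stable Diffusion API to render the drawing.
```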