
Frequence bridges the gap between high-performance AI inference and real-time generative art. Built on a decoupled architecture, it uses a React frontend that captures audio frequencies via the Web Audio API and renders them on an HTML5 canvas at 60 FPS. Instead of relying on manual UI sliders, Frequence acts as an "AI VJ": user prompts are routed through a FastAPI gateway to a Qwen 2.5 7B model running on an AMD MI300X GPU via vLLM. The model analyzes the current visual state and returns a mathematical JSON delta that instantly adjusts parameters such as rotation, particle physics, and color palettes. This lets users type commands like "make it aggressive" or "warm sunset vibe" and see immediate, context-aware visual transformations.
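The key design choice in this loop is that the model never returns pixels or code, only a small JSON delta describing how the current visual state should change. A minimal sketch of how the gateway might merge such a delta into the canvas state, in Python since the gateway is FastAPI (the field names, ranges, and clamping policy here are illustrative assumptions, not Frequence's actual schema):

```python
import json

# Hypothetical parameter ranges -- Frequence's real schema is not shown here.
LIMITS = {
    "rotation_speed": (-5.0, 5.0),    # radians per second
    "particle_count": (0, 10_000),
    "hue_shift": (0.0, 360.0),        # degrees
}

def apply_delta(state: dict, delta_json: str) -> dict:
    """Merge a model-produced JSON delta into the visual state.

    Each value is clamped to a safe range so a bad completion
    cannot break the renderer; unknown fields are ignored.
    """
    delta = json.loads(delta_json)
    new_state = dict(state)
    for key, value in delta.items():
        if key not in LIMITS:
            continue  # drop fields the renderer does not understand
        lo, hi = LIMITS[key]
        new_state[key] = max(lo, min(hi, float(value)))
    return new_state

state = {"rotation_speed": 0.5, "particle_count": 2000, "hue_shift": 30.0}
# A plausible completion for the prompt "make it aggressive":
reply = '{"rotation_speed": 3.2, "particle_count": 8000, "hue_shift": 0, "glow": 9}'
state = apply_delta(state, reply)
```

Because the delta is declarative and clamped server-side, a hallucinated field or out-of-range value degrades gracefully instead of crashing the 60 FPS render loop.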
10 May 2026