10
2
Pakistan
1 year of experience
Hi, I’m Zain, a passionate Full Stack Developer focused on building dynamic, user-friendly applications. My expertise spans frontend and backend technologies, including React, Next.js, Firebase, and MongoDB. I am committed to crafting efficient, scalable solutions to real-world problems. I have participated in multiple hackathons and coding competitions, including the Meta Hacker Cup, NASA Space City Hacks, Calico Berkeley, and the GEMMA 2 AI Challenge; these experiences have honed my problem-solving skills and my ability to collaborate effectively in fast-paced environments.
Gaia is a web application designed to provide safety and support to women who find themselves alone in potentially risky situations. The app offers an AI chat companion that gives the user the feeling of being in company, providing both emotional reassurance and a sense of security. In addition, Gaia can help users call emergency services instantly if they are in danger. The app also features a map that highlights the most dangerous areas based on real-time emergency call data, so users can avoid risky locations; the same data-driven approach could also help law enforcement focus their efforts on areas with higher safety concerns. The current version covers only London, using data from the city's official website.
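The risk map above can be illustrated with a small sketch: ranking areas by their volume of emergency calls. This is a hypothetical illustration, not Gaia's actual implementation; the call-record shape ({ area }) and the function name rankAreasByRisk are assumptions.

```javascript
// Hypothetical sketch: rank areas by emergency-call volume so a map
// layer can highlight the riskiest ones. The { area } record shape
// is an assumption, not Gaia's real data model.
function rankAreasByRisk(calls, topN = 3) {
  // Count how many calls occurred in each area.
  const counts = new Map();
  for (const { area } of calls) {
    counts.set(area, (counts.get(area) || 0) + 1);
  }
  // Sort areas by descending call count and keep the top N.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([area, count]) => ({ area, count }));
}
```

For example, feeding in three calls from Camden and one from Soho with topN = 2 would rank Camden first with a count of 3. A real version would also weight calls by recency and severity rather than raw counts.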
Signify is a web-based application designed to bridge communication gaps between people who use sign language and those who don't. It leverages machine learning and artificial intelligence to detect sign language gestures via a webcam, translate them into text, and provide real-time speech output for accessibility. The project aims to improve inclusivity and make communication easier for the deaf and hard-of-hearing community.

Key features:
- Real-time gesture recognition: the app captures gestures with a webcam and processes them through a machine learning model that recognizes sign language. Detected gestures are displayed as text on the screen, so users can see the translation.
- Text-to-speech integration: the app speaks the translated gesture using the browser's built-in speech synthesis, giving immediate audio feedback and enabling a seamless communication experience.
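The two features above can be sketched as follows. This is a hypothetical illustration under stated assumptions: the gesture labels, the per-gesture score array produced by the model, and the names decodeGesture and speak are mine, not Signify's actual code. Only the speech part uses a real browser API (the Web Speech API's speechSynthesis).

```javascript
// Hypothetical labels; a real model would define its own vocabulary.
const LABELS = ['hello', 'thanks', 'yes', 'no'];

// Turn the model's per-gesture scores into a label, ignoring
// low-confidence frames so noise isn't spoken aloud.
function decodeGesture(scores, threshold = 0.6) {
  let best = -1;
  let bestScore = threshold;
  scores.forEach((score, i) => {
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  });
  return best === -1 ? null : LABELS[best];
}

// Speak the detected word using the browser's built-in speech
// synthesis (no-op outside a browser environment).
function speak(text) {
  if (typeof window !== 'undefined' && window.speechSynthesis) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}
```

In a real pipeline, each webcam frame would be run through the model, the decoded label shown on screen, and speak() called only when the label changes, so the app doesn't repeat the same word every frame.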