
Phi-3 Model Family

The Phi-3 model family, developed by Microsoft, encompasses a range of small language models (SLMs) designed to offer high-quality AI capabilities with a focus on efficiency and accessibility. These models are particularly suited for applications where computational resources are limited, such as mobile devices or edge deployments. The Phi models balance performance and size, making them ideal for a variety of use cases, from natural language understanding to coding tasks.

General
Release date: June 2023
Author: Microsoft
Website: Phi-3 open models
Type: Small Language Models

Key Models and Features

  • Phi-3 Mini: A compact model with 3.8 billion parameters, trained on 3.3 trillion tokens. It provides strong performance across various benchmarks, including MMLU and GSM-8K, and is capable of running locally on smartphones and other edge devices.

  • Phi-3 Small: Featuring 7 billion parameters, this model includes additional capabilities for handling longer context lengths (up to 128K tokens). It offers enhanced performance in reasoning tasks and is fine-tuned with supervised and preference optimization techniques.

  • Phi-3 Medium: A larger variant with 14 billion parameters, designed for more complex applications that require robust reasoning and data analysis capabilities.
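
The models listed above are published as standard Hugging Face checkpoints, so a minimal loading sketch with the transformers library looks roughly like the following; the exact model id, dtype, and generation settings are illustrative assumptions rather than part of the description above.

    # Minimal sketch: loading Phi-3 Mini via Hugging Face transformers.
    # Model id and settings are assumptions; check the official model card for exact usage
    # (older transformers versions may additionally need trust_remote_code=True).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    # Chat-style prompt built with the tokenizer's chat template
    messages = [{"role": "user", "content": "Explain what a small language model is in one sentence."}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

    outputs = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))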

Training and Data

The training data for Phi models is meticulously curated, combining publicly available high-quality documents, synthetic “textbook-like” data, and chat-format supervised data. This approach ensures the models have a strong foundation in reasoning, coding, and general knowledge while maintaining efficiency in processing and storage requirements.

Applications and Use Cases

  • Edge and Mobile Deployment: The small size and efficient design of the Phi models make them suitable for deployment on devices with limited computational power, such as smartphones or IoT devices. They can operate offline, which is crucial for applications in remote or disconnected environments (a minimal offline-inference sketch follows this list).

  • High-Risk Scenarios: While the models are designed to minimize biases and handle sensitive data responsibly, they are not recommended for high-stakes applications like legal advice or financial decision-making without additional safeguards.
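
As a rough sketch of the offline, on-device usage described in the Edge and Mobile Deployment point above, the snippet below runs a quantized GGUF build of Phi-3 Mini with the llama-cpp-python package; the file name, thread count, and prompt are assumptions for illustration, not official guidance.

    # Sketch: fully offline inference with a quantized Phi-3 Mini build (assumed file name).
    from llama_cpp import Llama

    llm = Llama(
        model_path="phi-3-mini-4k-instruct-q4.gguf",  # hypothetical local quantized checkpoint
        n_ctx=4096,   # context window
        n_threads=4,  # tune for the target device
    )

    # No network access is required once the model file is on the device.
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize this sensor log in one line: temp=71F, door=closed."}],
        max_tokens=48,
    )
    print(result["choices"][0]["message"]["content"])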

Availability and Licensing

The Phi models are available on platforms like Hugging Face and Microsoft Azure AI Model Catalog. They are released under open licenses, allowing developers to integrate them into various applications while adhering to responsible AI practices.

👉 For more detailed information, you can refer to the technical report and resources available on Microsoft Research and Hugging Face.

Microsoft Phi-3 AI technology Hackathon projects

Discover innovative solutions crafted with Microsoft Phi-3 AI technology, developed by our community members during our engaging hackathons.

Qubic Liquidation Guardian


Qubic Liquidation Guardian is a hybrid Track 1 + Track 2 project built by CrewX that brings real-time liquidation protection, institutional-grade risk analysis, and automated alerting to the Qubic Network. The problem is simple: DeFi liquidations happen instantly, but users do not get instant signals. As a result, borrowers lose capital, protocols lose liquidity, and investors hesitate to adopt new systems without safety infrastructure. Inspired by this gap, Qubic Liquidation Guardian provides a complete safety layer over lending protocols deployed on the Nostromo Launchpad.

At its core, the system includes an on-chain event listener and a real-time risk scoring engine, which analyzes:

  • Health Factor
  • Liquidation Proximity
  • Total Debt Exposure
  • Active Positions

These metrics are combined into a 0–100 Risk Score, dynamically updated for each borrower. Based on the score, users are automatically classified into Low, Medium, High, and Critical risk tiers, enabling rapid decision-making.

The platform also includes advanced features such as:

  • Whale Watch: Detect large-value transactions to anticipate market shifts
  • Smart Alerts: Severity-based notifications connected to any tool
  • Auto-Airdrop: Rewards for users who resolve high-risk positions
  • Crash Simulator: A built-in testing environment to simulate -70% market dumps, rebounds, and full resets to verify protocol safety

Qubic Liquidation Guardian is designed to strengthen the Nostromo ecosystem by improving investor confidence, increasing protocol safety, and enabling risk-aware liquidity management. With over 35 production-ready API endpoints, an edge-distributed database, and a Next.js 15 architecture, the application is fully deployable and already live for testing. Ultimately, this project delivers exactly what new chains and protocols need: speed, stability, transparency, and automation—making Qubic safer for everyone.
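
The project description includes no code, but the scoring logic it outlines (combining health factor, liquidation proximity, debt exposure, and active positions into a 0–100 score with four tiers) can be sketched roughly as below. The weights, normalization ranges, and thresholds are assumptions for illustration only, and the real system is a Next.js application rather than Python.

    # Illustrative sketch of a 0-100 borrower risk score with tier classification.
    # Weights, normalization ranges, and thresholds are assumptions, not the project's actual values.
    def risk_score(health_factor, liquidation_proximity, total_debt, active_positions):
        # Lower health factor and closer liquidation mean higher risk; clamp each signal to [0, 1].
        hf_risk = min(max(1.5 - health_factor, 0.0), 1.0)           # risky below ~1.5
        proximity_risk = min(max(liquidation_proximity, 0.0), 1.0)  # 0 = far from liquidation, 1 = at liquidation
        debt_risk = min(total_debt / 1_000_000, 1.0)                # normalize against $1M exposure
        position_risk = min(active_positions / 20, 1.0)             # normalize against 20 open positions

        score = 100 * (0.4 * hf_risk + 0.3 * proximity_risk + 0.2 * debt_risk + 0.1 * position_risk)
        return round(score, 1)

    def risk_tier(score):
        if score >= 75:
            return "Critical"
        if score >= 50:
            return "High"
        if score >= 25:
            return "Medium"
        return "Low"

    score = risk_score(health_factor=1.1, liquidation_proximity=0.8, total_debt=250_000, active_positions=3)
    print(score, risk_tier(score))  # a borrower close to liquidation lands in a higher tier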

The Intelligent Home


An Intelligent Home is a modern living environment where everyday household systems—lighting, climate control, security, entertainment, and appliances—are interconnected through a network of smart devices and sensors. These components communicate seamlessly, enabling the home to monitor its own state and respond to user needs automatically. The goal is to create a living space that enhances comfort, convenience, and safety while reducing manual effort.

At the center of an Intelligent Home is a smart home hub, which acts as the system’s brain. It manages communication between devices, processes real-time sensor data, and allows users to interact with the environment through voice commands, mobile apps, or automated routines. Through machine learning, the home can recognize patterns—such as daily schedules or common behaviors—and adjust settings automatically, like dimming lights in the evening or pre-cooling before residents arrive.

A defining feature of an Intelligent Home is its ability to be context-aware. Using sensors that track motion, temperature, occupancy, and environmental changes, the home adapts to real-time conditions. For example, lighting can adjust based on natural sunlight, thermostats can adapt to user comfort levels, and security systems can differentiate between routine activity and potential threats. This awareness enables the home to evolve and provide increasingly personalized experiences.

Another key aspect is connectivity and interoperability. Intelligent Homes support a broad ecosystem of devices and technologies using standards such as Wi-Fi, Zigbee, Z-Wave, and Matter, ensuring that products from different manufacturers work together seamlessly. This flexibility allows homeowners to expand, upgrade, or customize their setup without being locked into a single brand. A unified network enables synchronized automation—like having lights, security, and climate systems work in harmony based on a single trigger or routine.
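
As a rough illustration of the kind of context-aware routine described above, the sketch below dims lighting from a sunlight reading, pre-cools before arrival, and arms security when the home is empty; the hub interface, device names, and thresholds are hypothetical, not tied to any specific product.

    # Hypothetical hub routine: context-aware lighting, climate, and security (illustrative only).
    class Hub:
        """Stub standing in for a real smart home hub API (e.g., a Matter/Zigbee bridge)."""
        def set_brightness(self, device, percent):
            print(f"{device} -> {percent}% brightness")
        def set_thermostat(self, zone, target_c):
            print(f"{zone} thermostat -> {target_c} C")
        def arm_security(self, mode):
            print(f"security armed ({mode})")

    def evaluate_routines(sensors, hub):
        # Dim artificial lighting when natural sunlight is strong.
        if sensors["sunlight_lux"] > 10_000:
            hub.set_brightness("living_room_lights", 20)
        elif sensors["sunlight_lux"] < 1_000:
            hub.set_brightness("living_room_lights", 80)
        # Pre-cool before residents arrive, based on a learned schedule.
        if sensors["minutes_until_arrival"] <= 30:
            hub.set_thermostat("main_floor", target_c=22.5)
        # Arm security when the home becomes unoccupied.
        if not sensors["occupied"]:
            hub.arm_security(mode="away")

    evaluate_routines({"sunlight_lux": 500, "minutes_until_arrival": 15, "occupied": False}, Hub())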

Smarter-Health-Choices


This project aims to predict yearly medical insurance premium costs using machine learning algorithms based on individual health and demographic data. Leveraging a dataset of nearly 1,000 entries, it incorporates various health factors such as age, BMI, existing medical conditions, and lifestyle habits to build an accurate prediction model. Through data preprocessing, visualization, and model training using libraries like pandas, seaborn, and scikit-learn, this project demonstrates the real-world application of AI in the healthcare and insurance domain. The model helps users understand how different health parameters affect insurance premiums, encouraging informed financial planning and healthier lifestyle choices. This solution has the potential to enhance transparency in insurance pricing and empower better decision-making.

Results & Visualizations: A feature impact bar chart shows how average premium varies across features like Gender, Smoking Status, and Exercise.

Future Scope:

  • Deploy it as a web application using Streamlit or Flask so users can input values and get predictions.
  • Add more granular health data like cholesterol, blood pressure, and stress levels.
  • Implement explainable AI (XAI) tools like SHAP or LIME to explain individual predictions.
  • Train on larger, real-world datasets from hospitals or insurance providers.

Conclusion: This project showcases the impact of machine learning in healthcare decision-making. By predicting insurance premiums based on user health profiles, it helps users understand cost drivers and promotes healthier living. While the model provides a valuable estimate, it should complement, not replace, professional advice.
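
The description names pandas and scikit-learn but includes no code; a minimal sketch of the kind of pipeline it implies is shown below, using hypothetical column names and a few synthetic placeholder rows rather than the project's actual dataset or chosen model.

    # Sketch of a premium-prediction pipeline (columns, rows, and model choice are illustrative).
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Placeholder rows standing in for the ~1,000-entry dataset described above.
    df = pd.DataFrame({
        "age": [25, 40, 52, 33, 61, 47],
        "bmi": [22.1, 28.4, 31.0, 24.7, 27.2, 35.5],
        "smoker": ["no", "yes", "no", "no", "yes", "yes"],
        "exercise": ["regular", "rare", "rare", "regular", "rare", "never"],
        "annual_premium": [4800, 9200, 8100, 5200, 11800, 13400],
    })

    X, y = df.drop(columns=["annual_premium"]), df["annual_premium"]
    pre = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["smoker", "exercise"])],
        remainder="passthrough",  # numeric columns pass straight through
    )

    model = Pipeline([("prep", pre), ("reg", RandomForestRegressor(random_state=0))])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
    model.fit(X_train, y_train)
    print(model.predict(X_test))  # estimated yearly premiums for held-out profiles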

SupplyGenius Pro


Core Features

1. Document Processing & Analysis
  - Automated analysis of supply chain documents
  - Extraction of key information (parties, dates, terms)
  - Compliance status verification
  - Confidence scoring for extracted data

2. Demand Forecasting & Planning
  - AI-powered demand prediction
  - Time series analysis with confidence intervals
  - Seasonal pattern recognition
  - Multi-model ensemble forecasting (LSTM, Random Forest)

3. Inventory Optimization
  - Real-time inventory level monitoring
  - Dynamic reorder point calculation
  - Holding cost optimization
  - Stockout risk prevention

4. Risk Management
  - Supply chain disruption simulation
  - Real-time risk monitoring
  - Automated mitigation strategy generation
  - Risk score calculation

5. Supplier Management
  - Supplier performance tracking
  - Lead time optimization
  - Pricing analysis
  - Automated purchase order generation

6. Financial Analytics
  - ROI calculation
  - Cost optimization analysis
  - Financial impact assessment
  - Budget forecasting

7. Real-time Monitoring
  - Live metrics dashboard
  - WebSocket-based alerts
  - Performance monitoring
  - System health tracking

8. Security Features
  - JWT-based authentication
  - Role-based access control
  - Rate limiting
  - Secure API endpoints

Technical Capabilities

1. AI Integration
  - IBM Granite 13B model integration
  - RAG (Retrieval Augmented Generation)
  - Custom AI toolchains
  - Machine learning pipelines

2. Data Processing
  - Real-time data processing
  - Time series analysis
  - Statistical modeling
  - Data visualization

3. Performance Optimization
  - Redis caching
  - Async operations
  - Rate limiting
  - Load balancing

4. Monitoring & Logging
  - Prometheus metrics
  - Detailed logging
  - Performance tracking
  - Error handling
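
To make the inventory-optimization feature concrete, here is a brief sketch of one common way to compute a dynamic reorder point (average demand over the lead time plus a safety-stock term); the formula choice, service level, and numbers are assumptions for illustration, not taken from SupplyGenius Pro itself.

    # Illustrative reorder point: demand during lead time + safety stock (z * sigma * sqrt(lead time)).
    # The ~95% service level (z = 1.65) and the example numbers are assumptions for this sketch.
    import math

    def reorder_point(avg_daily_demand, demand_std_dev, lead_time_days, z=1.65):
        safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)
        return avg_daily_demand * lead_time_days + safety_stock

    # Example: 120 units/day on average, daily std dev of 30, 7-day supplier lead time.
    print(round(reorder_point(avg_daily_demand=120, demand_std_dev=30, lead_time_days=7)))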

EdgeWise-Offline AI Content Moderation


EdgeWise is an AI-driven content moderation solution designed specifically for educational platforms. Leveraging advanced AI models, our tool operates entirely offline on edge devices such as IoT devices, ensuring robust performance even in environments with limited or no internet access. The architecture of EdgeWise consists of several key components:

Synthetic Data Generation: We use the Meta-Llama-3.2-80B-Instruct-Turbo model to generate synthetic training data tailored to specific content moderation categories such as spam, inappropriate content, and misleading information.

Fine-Tuned Model: The generated data is used to fine-tune the Phi model. This fine-tuned model is lightweight, optimized for edge devices, and includes specialized LoRA adapters for efficient inference.

Edge Deployment: The fine-tuned model is deployed locally on devices using a Streamlit-based application. This application is designed to work entirely offline, providing real-time text categorization and content filtering without relying on external APIs or cloud services.

Privacy and Security: By processing all data locally, EdgeWise ensures that user information remains private and secure. The architecture is robust, cost-effective, and highly customizable, allowing it to adapt to various educational environments and needs.

This combination of advanced AI, local deployment, and a focus on privacy makes EdgeWise an ideal solution for creating safe, secure, and inclusive online learning environments globally.
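
As a rough sketch of the edge deployment step described above, the snippet below loads a Phi base model, attaches a LoRA adapter with the peft library, and runs a moderation prompt locally; the base model id, adapter path, and category labels are assumptions, not EdgeWise's actual artifacts.

    # Sketch: local inference with a LoRA-adapted Phi model (paths and labels are hypothetical).
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed base checkpoint
    adapter_dir = "./edgewise-lora-adapter"       # hypothetical local adapter directory

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_dir)  # attach the fine-tuned LoRA weights

    prompt = "Classify this student forum post as spam, inappropriate, misleading, or safe:\n'Buy followers now!!!'"
    inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
    outputs = model.generate(**inputs, max_new_tokens=8)
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))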