
This project implements a secure federated learning system for healthcare institutions, allowing hospitals to collaboratively train machine learning models without sharing raw patient data. Each hospital trains a local model and sends only its model updates to a central server, where authentication, cryptographic hashing, and HMAC signatures verify the integrity and authenticity of each submission. The system defends against malicious updates, model poisoning, and impersonation by fake hospitals, while maintaining an audit trail for compliance. Federated Averaging aggregates only the verified updates, preserving model accuracy while enforcing strong security guarantees. This hybrid AI-security framework demonstrates a practical, privacy-preserving approach to collaborative healthcare analytics.
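The verification-then-aggregation flow described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the pre-shared key registry (`SECRET_KEYS`), the hospital identifiers, and the helper names are all hypothetical, and it assumes each update is a flat NumPy weight vector signed with HMAC-SHA256 over its SHA-256 digest.

```python
import hashlib
import hmac

import numpy as np

# Hypothetical registry of pre-shared per-hospital keys; in a real
# deployment these would be provisioned and rotated securely.
SECRET_KEYS = {"hospital_a": b"example-pre-shared-key"}


def sign_update(weights: np.ndarray, key: bytes) -> str:
    """HMAC-SHA256 signature over the SHA-256 digest of the weight bytes."""
    digest = hashlib.sha256(weights.tobytes()).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()


def verify_update(hospital_id: str, weights: np.ndarray, signature: str) -> bool:
    """Reject updates from unknown hospitals or with invalid signatures."""
    key = SECRET_KEYS.get(hospital_id)
    if key is None:
        return False  # unregistered (fake) hospital
    expected = sign_update(weights, key)
    return hmac.compare_digest(expected, signature)


def federated_average(verified_updates: list[np.ndarray]) -> np.ndarray:
    """Unweighted Federated Averaging over the updates that passed verification."""
    return np.mean(np.stack(verified_updates), axis=0)
```

A tampered weight vector or an unknown hospital ID fails `verify_update` and is simply excluded before `federated_average` runs, which is the mechanism by which poisoned or forged updates are kept out of the global model.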
7 Feb 2026