Hate speech has become a serious problem in online communication, especially in online games and on live-streaming platforms, where users are shielded by anonymity. This behaviour discourages many people from using those platforms. The goal of this project is to help existing voice-communication platforms combat hate speech, harassment and toxic behaviour. Our solution is to process each user's microphone audio and assess whether their speech is obscene, toxic, threatening, insulting or otherwise harmful, using machine-learning tools such as Whisper for speech-to-text and text-classification models for toxicity detection. Our target audience is video-game companies, live-streaming platforms and social media. We believe our product can help them reduce hate speech in their communities and thereby offer a higher quality of service.
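
The core pipeline is short: audio from a user's microphone is transcribed with Whisper, and the transcript is then scored by a text-classification model. The snippet below is a minimal sketch of that idea, assuming the openai-whisper package and an off-the-shelf toxicity classifier from the Hugging Face Hub (unitary/toxic-bert); the model names, threshold and flagging rule are illustrative assumptions, not the product's final choices.

```python
# Minimal sketch of the proposed pipeline: microphone audio -> Whisper transcript
# -> text-classification score. Model names, the threshold and the flagging rule
# are illustrative assumptions, not the product's final choices.
import whisper
from transformers import pipeline

# Speech-to-text model (the small "base" checkpoint keeps the sketch lightweight).
asr_model = whisper.load_model("base")

# Off-the-shelf toxicity classifier from the Hugging Face Hub (assumed here;
# every label this model emits names a category of toxic language).
toxicity_classifier = pipeline("text-classification", model="unitary/toxic-bert")

def assess_clip(audio_path: str, threshold: float = 0.8) -> dict:
    """Transcribe one audio clip and flag it when the toxicity score is high."""
    transcript = asr_model.transcribe(audio_path)["text"]
    result = toxicity_classifier(transcript)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return {
        "transcript": transcript,
        "label": result["label"],
        "score": result["score"],
        # Naive rule for the sketch: a single score threshold decides the flag.
        "flagged": result["score"] >= threshold,
    }

if __name__ == "__main__":
    # Hypothetical clip captured from a user's voice chat.
    print(assess_clip("voice_chat_clip.wav"))
```

In a real deployment the transcription and classification would run continuously on short audio windows rather than on saved files, and flagged clips would be handed off to the platform's own moderation tooling.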