Manual call center communication is time-consuming, repetitive, and costly. By implementing an AI-driven healthcare call center like HeyDoctor!, we can improve the patient experience, reallocate staff resources, and reduce operating costs.

For this submission, we split the project into two parts: the input side and the output side. On the input side, we used the OpenAI Whisper API to convert the caller's speech to text; the resulting transcript is sent to our backend service to generate a response. On the output side, we used the OpenAI GPT-3.5-turbo API as the reasoning engine that powers the assistant: the dialog transcribed by Whisper is passed to GPT-3.5-turbo to generate a reply, and that reply is sent to the ElevenLabs API to produce a realistic voice. The frontend is built with Svelte and the backend with FastAPI; both services are deployed on Vercel.
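
The backend flow described above (speech to text, reply generation, text to speech) could look roughly like the following minimal FastAPI sketch. It assumes the `openai` Python client (v1+) and the ElevenLabs REST text-to-speech endpoint; the route name, voice ID, and key handling are placeholders for illustration, not the project's actual code.

```python
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import Response
from openai import OpenAI
import os
import requests

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

ELEVENLABS_API_KEY = os.environ["ELEVENLABS_API_KEY"]  # placeholder key handling
VOICE_ID = "your-voice-id"                              # placeholder ElevenLabs voice

@app.post("/call-turn")  # hypothetical route for one turn of the conversation
async def call_turn(audio: UploadFile = File(...)):
    # 1. Input side: speech to text with the Whisper API
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=(audio.filename, await audio.read()),
    )

    # 2. Reasoning engine: generate the assistant's reply with GPT-3.5-turbo
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful healthcare call-center assistant."},
            {"role": "user", "content": transcript.text},
        ],
    )
    reply = chat.choices[0].message.content

    # 3. Output side: text to speech with the ElevenLabs REST API
    tts = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_API_KEY, "Content-Type": "application/json"},
        json={"text": reply},
    )
    return Response(content=tts.content, media_type="audio/mpeg")
```

In this sketch the Svelte frontend would record the caller's audio, POST it to the endpoint, and play back the returned audio, keeping the browser client thin while the FastAPI service on Vercel handles all three API calls.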