ETHOS - Evaluating Trustworthiness and Heuristic Objectives in Systems - is a project that addresses the critical issue of AI alignment. As AI grows more sophisticated, it is clear that alignment is one of the most significant challenges in developing the technology: unaligned AI has the potential to cause catastrophic damage to society and humanity as a whole. ETHOS is an API designed to tackle this problem. It evaluates the trustworthiness and heuristic objectives of AI systems, from language models to autonomous agents and chatbots, and allows responses to be adjusted in real time so that those systems remain aligned with the goals of humanity.

The need for AI alignment grows more urgent as AI systems become more prevalent in our daily lives. These systems are used in everything from social media algorithms to self-driving cars, and they have the potential to affect many aspects of our lives. If they are not aligned with our values and goals, they could cause significant harm.

One of the most serious threats posed by unaligned AI is the potential for these systems to become adversarial. Adversarial AI is intentionally designed to cause harm. This could take the form of cyberattacks, data breaches, or even physical harm to individuals or infrastructure. Adversarial AI could also be used to manipulate public opinion, disrupt democratic processes, or sow discord and chaos.

ETHOS mitigates these risks by ensuring that AI systems stay aligned with their intended purpose. By evaluating an AI system's trustworthiness, ETHOS can detect when the system deviates from that purpose and adjust its responses in real time. This can prevent AI systems from becoming adversarial and ensure that they work in the best interests of society.
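The evaluate-then-adjust loop described above can be sketched in a few lines. This is an illustrative mock-up only: the function names (`evaluate_trust`, `adjust_response`), the keyword heuristic, and the threshold are assumptions for the sketch, not the real ETHOS API.

```python
# Hypothetical sketch of a real-time evaluate-and-adjust loop.
# All names and heuristics here are illustrative, not the actual ETHOS API.

FLAGGED_TERMS = {"exploit", "attack payload", "credential dump"}

def evaluate_trust(response: str) -> float:
    """Return a trust score in [0, 1]; lower when flagged terms appear."""
    hits = sum(term in response.lower() for term in FLAGGED_TERMS)
    return max(0.0, 1.0 - 0.5 * hits)

def adjust_response(response: str, threshold: float = 0.8) -> str:
    """Pass the response through if trusted; otherwise withhold it."""
    if evaluate_trust(response) >= threshold:
        return response
    return "[response withheld: failed alignment check]"

print(adjust_response("Here is how to plan your trip."))
print(adjust_response("Here is an attack payload for the server."))
```

In a production system the keyword check would be replaced by a richer evaluator (for example, a classifier scoring a response against the system's stated objective), but the control flow - score each response, then pass, rewrite, or withhold it before it reaches the user - is the core of the approach.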