This project automates the generation of standardized docstrings for Python functions by leveraging AI to analyze code and extract relevant information. The goal is to ensure consistent and clear documentation across the codebase, facilitating better understanding and maintenance of test cases.

The process begins by iterating through all Python files in a specified directory, reading the source code, and parsing it into an abstract syntax tree (AST). For each unittest function identified, the code is sent to an AI model, which generates a comprehensive docstring based on a predefined template. This template includes sections for the test title, test areas, subareas, environment/setup, test steps, expected results, post-execution steps, and exceptions. The AI model uses additional information from a JSON file containing descriptions of various APIs and classes to enhance the generated docstrings.

The resulting docstrings are then inserted back into the source code, and the updated files are saved. This automated approach ensures thorough and uniform documentation, significantly improving the readability and maintainability of the codebase.