TruEra TruLens AI technology Top Builders

Explore the top contributors showcasing the highest number of TruEra TruLens AI technology app submissions within our community.

TruLens: Tools for Neural Network Development and Explainability

TruLens provides a set of tools for developing and monitoring neural nets, including large language models. These include TruLens-Eval, for evaluating LLMs and LLM-based applications, and TruLens-Explain, for deep learning explainability. TruLens-Eval and TruLens-Explain are housed in separate packages and can be used independently.

General

  • Author: TruEra
  • Repository: https://github.com/truera/trulens
  • Type: LLM Development and Explainability Tool

TruLens-Eval

TruLens-Eval contains instrumentation and evaluation tools for large language model (LLM) based applications. It supports the iterative development and monitoring of a wide range of LLM applications by wrapping your application to log key metadata across the entire chain (or off chain if your project does not use chains) on your local machine. Importantly, it also gives you the tools you need to evaluate the quality of your LLM-based applications.
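The wrap-and-log idea described above can be sketched in plain Python. This is a conceptual illustration only, not the TruLens API: a hypothetical `instrument` decorator captures each call's inputs, output, and latency into a local record list, the way an instrumented app logs metadata to your machine.

```python
import functools
import time

def instrument(records):
    """Illustrative decorator: wrap an app function and log each call's
    inputs, output, and latency into a local list of records."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            records.append({
                "app": fn.__name__,
                "input": {"args": args, "kwargs": kwargs},
                "output": result,
                "latency_s": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator

records = []

@instrument(records)
def answer(question):
    # Stand-in for an LLM call in a real application.
    return f"Echo: {question}"

answer("What does TruLens do?")
```

Once calls are logged this way, each record carries everything needed for after-the-fact evaluation of the application's behavior.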

Key Features:

  • Evaluation: TruLens supports the evaluation of inputs, outputs, and internals of your LLM application using any model, including LLMs. It offers various out-of-the-box feedback functions for evaluation, such as groundedness, relevance, and toxicity. The framework is easily extensible for custom evaluation requirements.

  • Tracking: TruLens contains instrumentation for any LLM application, including question answering, retrieval-augmented generation, agent-based applications, and more. This instrumentation allows for the tracking of a wide variety of usage metrics and metadata. TruLens' instrumentation can be applied to any LLM application without being tied down to a specific framework. Additionally, deep integrations with LangChain and Llama-Index allow the capture of internal metadata and text. Anything that is tracked by the instrumentation can be evaluated!
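To make the evaluation feature concrete, here is a minimal, self-contained sketch of what a custom feedback function looks like conceptually. It is not the TruLens API: a toy `relevance_feedback` scores a response in [0, 1] by the fraction of prompt words it echoes back, standing in for the model-based scorers (groundedness, relevance, toxicity) the framework ships with.

```python
def relevance_feedback(prompt: str, response: str) -> float:
    """Toy feedback function: fraction of prompt words that also appear
    in the response, as a crude relevance proxy scored in [0, 1]."""
    prompt_words = set(prompt.lower().split())
    response_words = set(response.lower().split())
    if not prompt_words:
        return 0.0
    return len(prompt_words & response_words) / len(prompt_words)

score = relevance_feedback("capital of France",
                           "The capital of France is Paris.")
```

Real feedback functions typically delegate the scoring to a model (including an LLM), but they share this shape: take some part of the app's inputs/outputs, return a score.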

Learn more about TruLens-Eval

TruLens-Explain

TruLens-Explain is a cross-framework library for deep learning explainability. It provides a uniform abstraction layer over TensorFlow, PyTorch, and Keras, allowing for input and internal explanations.
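The core idea behind input explanations can be illustrated without any deep learning framework. The sketch below is an assumption-laden toy, not TruLens-Explain itself: it approximates each input's gradient with central finite differences, which is the simplest form of the gradient-based attribution that explainability libraries compute over TensorFlow, PyTorch, or Keras models.

```python
def input_attributions(f, x, eps=1e-6):
    """Approximate d f / d x_i by central differences for each input
    dimension -- a minimal stand-in for gradient-based attribution."""
    attrs = []
    for i in range(len(x)):
        up = list(x)
        up[i] += eps
        down = list(x)
        down[i] -= eps
        attrs.append((f(up) - f(down)) / (2 * eps))
    return attrs

# Toy "model": a weighted sum, so the attributions recover the weights.
weights = [0.5, -2.0, 3.0]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))
attrs = input_attributions(model, [1.0, 1.0, 1.0])
```

For this linear toy model the attribution of each input is simply its weight; real explainers apply the same principle (sensitivity of the output to each input) to deep networks, including their internal layers.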

Read more about TruLens-Explain

Getting Started

TruLens Use Case Guides

TruLens Tutorials


TruEra TruLens AI technology Hackathon projects

Discover innovative solutions crafted with TruEra TruLens AI technology, developed by our community members during our engaging hackathons.

TruLens-MindShield

TruLens-MindShield: An AI-Assisted Firewall for Cognitive Integrity.

Overview

  • Problem Statement: Address the issue of cognitive manipulation and the algorithmic promotion of toxicity in social media and other digital platforms.
  • Tool Utilization: Employ TruLens-Eval for real-time evaluation of content and TruLens-Explain to provide insight into why certain content is deemed manipulative or toxic. Use TruEra for fine-tuning and ensuring the model performs at its peak capabilities.

Innovation

  • Technical Feasibility: Create a browser extension or app that integrates with social media platforms to provide real-time filtering and explanations.
  • Unique Features: The system will include a user-customizable dashboard to set personal filters for what kinds of content they wish to be shielded from.

Why It's Unique

  • Gap Analysis: There is a lack of tools aimed specifically at protecting users from manipulative or emotionally charged content.
  • Transparency Factor: TruLens-Explain will provide users with the rationale behind the filtering decisions, offering transparency and control.

Where It Has Potential

  • Real-world Relevance: The pervasive influence of toxic and manipulative content is a pressing issue in today's digital landscape.
  • Tool Efficacy: The project aims to fully utilize TruLens and TruEra for real-time evaluation and fine-tuning.
  • Performance Metrics: Success would be evaluated based on user satisfaction and the ability to accurately filter and explain problematic content in real time.

MindShield aims to offer users a protective layer against the cognitive hazards of the digital world, from outrage mobs to manipulative bots. By providing both a real-time filter and transparent explanations for its actions, it addresses many of the concerns faced online, making it a relevant and potentially impactful project.