
ProofWeaver is an AI-powered evidence retrieval and claim-analysis system built to counter the accelerating spread of misinformation. Information now moves faster than any manual fact-checking workflow, leaving readers without tools that verify claims quickly or clearly. Most platforms surface search results but fail to explain why a claim may be true, false, or misleading. ProofWeaver fills this gap by combining semantic search, vector retrieval, and structured LLM reasoning into a single evidence engine.

The system works in two stages. First, a user submits a claim, which is converted into an embedding with a sentence-transformer model. The system then performs a semantic search against Qdrant Cloud, retrieving the most relevant evidence snippets along with similarity scores that show how closely each snippet matches the claim. In the second stage, an LLM-powered AI Agent analyzes those snippets and generates a structured explanation that includes summaries, contradictions, missing evidence, and an approximate confidence score.

Interaction is simple: users type a claim, click “Explain Claim,” and immediately see the evidence, similarity scores, and a complete AI explanation. A screen recording in the demo shows the indexing, semantic search, and explanation steps operating in real time.

The market for misinformation detection and explainable AI is expanding rapidly. The total addressable market includes media outlets, social networks, academic institutions, governments, and NGOs. Revenue can come from SaaS subscriptions, enterprise dashboards, and API integrations for platforms that require verification tools. Competing systems are slow, manual, or opaque; ProofWeaver’s unique advantage is transparent evidence retrieval and structured explanation in one workflow.

Future expansions include multimodal search, multiple domain-specific collections, real-time ingestion, and enterprise tools for bulk claim analysis.
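The two-stage pipeline can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not the production code: the tiny hand-made vectors, snippet texts, and the `draft_explanation` helper are hypothetical stand-ins for the sentence-transformer embeddings, Qdrant Cloud similarity search, and LLM agent that ProofWeaver actually uses.

```python
from dataclasses import dataclass
import math

@dataclass
class Snippet:
    text: str
    vector: list  # in ProofWeaver, a sentence-transformer embedding stored in Qdrant

def cosine(a, b):
    # Similarity score in [-1, 1]; Qdrant computes this server-side.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(claim_vec, snippets, top_k=3):
    # Stage 1: rank evidence snippets by similarity to the claim embedding.
    scored = [(cosine(claim_vec, s.vector), s.text) for s in snippets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

def draft_explanation(claim, evidence):
    # Stage 2 placeholder: in ProofWeaver an LLM agent fills these fields;
    # here we just assemble the structured shape the agent returns.
    return {
        "claim": claim,
        "summary": f"Top evidence: {evidence[0][1]}",
        "contradictions": [],
        "missing": [],
        "confidence": round(evidence[0][0], 2),
    }

corpus = [
    Snippet("Study A reports the effect.", [0.9, 0.1, 0.0]),
    Snippet("Study B found no effect.",    [0.8, 0.2, 0.1]),
    Snippet("Unrelated sports news.",      [0.0, 0.1, 0.9]),
]

claim_vec = [1.0, 0.0, 0.0]  # pretend embedding of the user's claim
results = retrieve(claim_vec, corpus, top_k=2)
report = draft_explanation("The effect is real.", results)
```

In the real system, `retrieve` is a single Qdrant query and `draft_explanation` is the LLM call; the point of the sketch is the data flow, claim vector in, ranked snippets plus a structured report out.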
19 Nov 2025