
This project is a reusable, end-to-end misinformation audit workflow built entirely in Opus using no-code/low-code blocks. The pipeline automates Intake → Understand → Decide → Review → Deliver for multi-source public data: topic-driven Google News and Reddit pulls. Incoming items are normalized to a single schema and deduplicated so downstream logic operates on clean records.

An Agent layer transforms content into structured semantic records: concise summaries, extracted entities, and explicit factual claims. A two-tier decision system then evaluates each claim: fast deterministic rules (missing source, too-short content, spam patterns) and an agentic AI trust scorer (0–1) that explains its rationale and highlights risky statements. Scoring runs in parallel and is aggregated into a final risk signal.

Items below the trust threshold, or with conflicting evidence, are routed to a human reviewer, who can accept, reject, or request reprocessing. Every run emits a machine-readable audit artifact (JSON) capturing inputs, normalized fields, rules fired, agent rationales, reviewer actions, timestamps, and final verdicts, ensuring full traceability for compliance and post-hoc analysis.

Designed for speed and clarity, the workflow minimizes unnecessary human labor while preserving oversight on edge cases. It is modular, easily extended to new sources (X, YouTube, RSS), and adaptable to domains such as finance or healthcare. Deliverables include the runnable Opus workflow, demo run outputs (Sheets & JSON), and a concise operator-ready README. The result demonstrates practical, explainable automation that helps teams scale trustworthy information monitoring.
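To make the "normalize to a single schema and deduplicate" step concrete, here is a minimal Python sketch. The workflow itself is built from no-code blocks, so this is only an illustration; the field names (`source`, `url`, `title`, `body`, `published_at`) and the URL-plus-title dedup key are assumptions, not the schema the workflow actually uses.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical unified record schema; field names are illustrative.
@dataclass(frozen=True)
class Item:
    source: str        # e.g. "google_news" or "reddit"
    url: str
    title: str
    body: str
    published_at: str  # ISO 8601 timestamp

def dedup_key(item: Item) -> str:
    # Key on normalized URL + title so the same story pulled from
    # two feeds collapses to one record.
    basis = item.url.rstrip("/").lower() + "|" + item.title.strip().lower()
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()

def dedupe(items: list[Item]) -> list[Item]:
    # Keep the first occurrence of each key, preserving input order.
    seen: set[str] = set()
    out: list[Item] = []
    for it in items:
        k = dedup_key(it)
        if k not in seen:
            seen.add(k)
            out.append(it)
    return out
```

Hashing the normalized key rather than storing raw strings keeps the seen-set compact when runs pull thousands of items.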
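The two-tier decision can be sketched as cheap deterministic checks followed by a trust-score threshold. A minimal sketch, assuming a dict-shaped record, a spam regex, and a 0.6 cutoff; the rule names mirror the examples above, but the specific checks, patterns, and threshold are hypothetical, not the workflow's real configuration.

```python
import re

# Tier 1: fast deterministic rules; each returns True when it fires.
RULES = {
    "missing_source": lambda rec: not rec.get("url"),
    "too_short": lambda rec: len(rec.get("body", "")) < 80,
    "spam_pattern": lambda rec: bool(
        re.search(r"(?i)click here|act now", rec.get("body", ""))
    ),
}

def fired_rules(rec: dict) -> list[str]:
    return [name for name, check in RULES.items() if check(rec)]

def final_risk(rec: dict, trust_score: float, threshold: float = 0.6) -> str:
    # Tier 2: any hard rule firing, or a trust score (0-1) below the
    # threshold, routes the item to a human reviewer.
    if fired_rules(rec) or trust_score < threshold:
        return "needs_review"
    return "auto_pass"
```

Keeping the rules in a named dict means the audit artifact can record exactly which rules fired, not just a boolean outcome.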
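The per-run audit artifact could take a shape like the following. This is only a sketch of the fields the description lists (inputs, rules fired, agent rationales, reviewer actions, timestamps, verdicts); the key names and flat layout are assumptions, not the artifact's actual schema.

```python
import datetime
import json

def audit_artifact(rec: dict, rules: list[str], rationale: str,
                   reviewer_action: str, verdict: str) -> str:
    # Serialize one item's full decision trail as machine-readable JSON.
    # Key names are illustrative placeholders.
    return json.dumps({
        "input": rec,
        "rules_fired": rules,
        "agent_rationale": rationale,
        "reviewer_action": reviewer_action,
        "verdict": verdict,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }, indent=2)
```

Because each artifact is self-describing JSON, a compliance review can replay why any verdict was reached without access to the live workflow.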
19 Nov 2025