
As AI agents become autonomous, they need a native way to pay for information — not just consume it. Today, AI agents can't pay each other, APIs can't charge AI, and when an LLM gives you an answer, there's no way to verify which sources it paid, what it cost, or whether the payment happened at all. Pay-per-Thought solves all three.

Send a query and a USDC budget. A pipeline of specialized AI agents kicks in:

1. Claude breaks the query into micro-tasks with specific API endpoints and estimated costs.
2. Gemini reviews each task using Function Calling and decides whether to authorize or reject the payment — mid-flight, before any money moves.
3. AgentRuntime executes approved tasks via Circle x402 micropayments on Arc Testnet, paying each API in real USDC.
4. LLaMA 3.1 synthesizes all results into a final answer.
5. A Proof of Thought is generated — a cryptographic receipt with the query hash, answer hash, every TX hash, the total cost, and the models used — permanently verifiable on ArcScan.

Key innovations:

• LLMs never touch money. Strict architectural separation: LLMs produce JSON only; the Python runtime holds the keys and signs every transaction via CircleClient.
• On-chain reputation. A Vyper smart contract on Arc Testnet tracks a trust_score per API provider, updated after every task. Provider addresses are derived deterministically from their domains via SHA-256.
• Proof of Thought. Not a log file — a shareable, on-chain receipt that proves what the AI did, what it cost, and what it found.
• 111 real transactions on Arc Testnet. $0.054 USDC spent. 0 failures. No mocks. No simulations.

Tech: Arc Testnet · Circle Web3 SDK · Vyper 0.4.3 · LangGraph · Claude · Gemini · LLaMA via Featherless · React + FastAPI

We didn't simulate an AI economy. We ran one — on-chain.
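The mid-flight authorization step (2) is essentially a budget gate applied per task. Here is a minimal sketch in Python; the `Task` dataclass, `authorize` function, and `max_per_task` cap are illustrative stand-ins, since in the real pipeline the approve/reject decision comes from Gemini via Function Calling rather than a fixed rule:

```python
from dataclasses import dataclass


@dataclass
class Task:
    """One micro-task proposed by the planner: where to spend, and how much."""
    endpoint: str
    estimated_cost_usdc: float


def authorize(task: Task, remaining_budget: float, max_per_task: float = 0.01) -> bool:
    """Mid-flight gate: approve only if the estimated cost is positive and
    fits both the per-task cap and the remaining USDC budget.
    (Stand-in rule; the real decision is made by an LLM reviewer.)"""
    return 0 < task.estimated_cost_usdc <= min(max_per_task, remaining_budget)


# A cheap task inside budget is approved; an expensive one is rejected.
cheap = Task("https://api.example.com/data", 0.005)
pricey = Task("https://api.example.com/data", 0.02)
```

The key property is that rejection happens before any money moves: the runtime only signs payments for tasks that pass this gate.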
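The Proof of Thought receipt (step 5) is, at its core, a bundle of cryptographic commitments plus pointers to on-chain payments. A minimal sketch, assuming SHA-256 commitments over the query and answer and a plain JSON receipt; the field names and the `build_proof_of_thought` helper are illustrative, not the project's actual schema:

```python
import hashlib
import json


def sha256_hex(data: str) -> str:
    """SHA-256 commitment to a string, as a hex digest."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()


def build_proof_of_thought(query, answer, tx_hashes, total_cost_usdc, models):
    """Assemble a verifiable receipt: the hashes commit to what was asked
    and what was answered, and tx_hashes point at the USDC payments that
    anyone can look up on the block explorer."""
    return {
        "query_hash": sha256_hex(query),
        "answer_hash": sha256_hex(answer),
        "tx_hashes": list(tx_hashes),
        "total_cost_usdc": total_cost_usdc,
        "models": list(models),
    }


proof = build_proof_of_thought(
    query="What is the current weather in Lisbon?",
    answer="Sunny, 21°C.",
    tx_hashes=["0xabc...", "0xdef..."],  # placeholder TX hashes
    total_cost_usdc=0.0012,
    models=["claude", "gemini", "llama-3.1"],
)
print(json.dumps(proof, indent=2))
```

Because the receipt contains only hashes and public transaction IDs, it can be shared without revealing the raw query or answer, while still letting a verifier check both against the originals.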
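Deterministic provider addressing can be sketched in a few lines. The post only states that addresses are derived from the domain via SHA-256, so the details here (lowercasing the domain, keeping the last 20 bytes as an EVM-style address) are assumptions for illustration:

```python
import hashlib


def provider_address(domain: str) -> str:
    """Derive a deterministic 20-byte, EVM-style address for an API provider
    from its domain name: SHA-256 of the lowercased domain, last 20 bytes.
    (Byte selection is an assumption; the post specifies only SHA-256.)"""
    digest = hashlib.sha256(domain.lower().encode("utf-8")).digest()
    return "0x" + digest[-20:].hex()
```

Determinism is the point: any party can recompute a provider's address from its domain alone, so the reputation contract needs no registry.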
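The reputation mechanism can be mirrored off-chain for illustration. The real contract is written in Vyper and its exact scoring rule is not described in the post, so the update rule below (+1 per successful task, -2 per failure, floored at 0) and the `ReputationBook` class are purely hypothetical:

```python
class ReputationBook:
    """Off-chain mirror of a per-provider trust_score mapping.
    Assumed rule: +1 on success, -2 on failure, never below 0."""

    def __init__(self):
        self.trust_score = {}

    def record(self, provider: str, success: bool) -> int:
        """Update a provider's score after a task and return the new value."""
        score = self.trust_score.get(provider, 0)
        score = score + 1 if success else max(0, score - 2)
        self.trust_score[provider] = score
        return score


book = ReputationBook()
```

Failures costing more than successes earn means a provider cannot cheaply rebuild trust after botched tasks, which is the usual design goal for such scores.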
26 Apr 2026