LangChain (Python)

Call the BCA Crypto Brain from a LangChain Python agent. Until the native Python SDK ships in v0.3, LangChain agents spawn the TS server as an stdio subprocess via the langchain-mcp-adapters package. Same tool surface, same envelopes, same attribution.

1. Install

pip install langchain langchain-openai langchain-mcp-adapters langgraph
# Node 18+ required for the stdio subprocess
node --version

2. Set your API keys

export BCA_API_KEY="bca_YOUR_KEY_HERE"
export OPENAI_API_KEY="sk-..."   # used by ChatOpenAI in the example below

3. Spin up the agent

bca_agent.py
import asyncio
import os
 
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
 
 
async def main() -> None:
    client = MultiServerMCPClient(
        {
            "blockchainacademics": {
                "command": "npx",
                "args": ["-y", "@blockchainacademics/mcp"],
                "transport": "stdio",
                "env": {"BCA_API_KEY": os.environ["BCA_API_KEY"]},
            }
        }
    )
    tools = await client.get_tools()
    print(f"Loaded {len(tools)} BCA tools")

    agent = create_react_agent(
        ChatOpenAI(model="gpt-4o-mini"),
        tools,
    )

    result = await agent.ainvoke(
        {
            "messages": [
                (
                    "user",
                    "Using BCA tools, list the top 3 news stories about "
                    "stablecoin regulation this month with citations.",
                )
            ]
        }
    )
    print(result["messages"][-1].content)
 
 
if __name__ == "__main__":
    asyncio.run(main())

Run it:

python bca_agent.py

What you get back

Every tool response is wrapped in the BCA envelope; data carries the tool payload:

{
    "data": [ ... ],
    "cite_url": "https://blockchainacademics.com/article/...?src=langchain",
    "as_of": "2026-04-20T09:31:22Z",
    "source_hash": "sha256:a9f1..."
}

Surface cite_url in your agent’s final answer so downstream users can verify each claim.
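One way to collect those citations is to parse each envelope and pull out cite_url. A minimal sketch, assuming tool results arrive as raw envelope JSON strings (as shown above); the helper name is ours, not part of the SDK:

```python
import json


def extract_citations(tool_payloads: list[str]) -> list[str]:
    """Collect cite_url values from a list of BCA envelope JSON strings."""
    urls = []
    for raw in tool_payloads:
        try:
            envelope = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not an envelope; skip non-JSON tool output
        url = envelope.get("cite_url")
        if url:
            urls.append(url)
    return urls


# Example envelope shaped like the one above
sample = json.dumps({
    "data": [{"title": "Stablecoin bill advances"}],
    "cite_url": "https://blockchainacademics.com/article/123?src=langchain",
    "as_of": "2026-04-20T09:31:22Z",
    "source_hash": "sha256:a9f1...",
})
print(extract_citations([sample]))
```

Append the collected URLs to the agent's final message before showing it to the user.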

Tips

  • Rate limits: Free tier is 60 req/min. Agent loops with tool retries can burn through this — cache aggressively or upgrade.
  • Long-running jobs: due_diligence and tokenomics_model enqueue agent jobs that return a job ID; poll get_job_status until done. See the Tool reference.
  • Attribution: The src=langchain UTM is added automatically when the subprocess detects a LangChain client. No extra work needed.
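The long-running job pattern above can be driven with a small polling loop. A minimal sketch, assuming tool results are envelope JSON strings; the argument names (asset, job_id) and the field paths for the job ID and status are guesses you should check against the Tool reference:

```python
import asyncio
import json


async def run_job(tools, asset: str, timeout_s: float = 300.0,
                  poll_interval: float = 5.0) -> dict:
    """Submit a due_diligence job and poll get_job_status until it finishes.

    Tool names come from the Tool reference; the "asset"/"job_id" argument
    names and the data.job_id / data.status field paths are assumptions.
    """
    by_name = {t.name: t for t in tools}  # tools from client.get_tools()

    # Enqueue the job; the envelope's data is assumed to carry the job ID.
    envelope = json.loads(await by_name["due_diligence"].ainvoke({"asset": asset}))
    job_id = envelope["data"]["job_id"]

    deadline = asyncio.get_running_loop().time() + timeout_s
    while asyncio.get_running_loop().time() < deadline:
        status = json.loads(
            await by_name["get_job_status"].ainvoke({"job_id": job_id})
        )
        if status["data"]["status"] == "done":
            return status
        await asyncio.sleep(poll_interval)  # stay under the 60 req/min limit
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_s}s")
```

Keep poll_interval generous on the free tier: each status check counts against the 60 req/min limit.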

Next up