# AI Agent Integration
This guide covers integrating Litmus with AI pipelines for automated experiment submission and result processing.
## Use Cases
- Drug discovery pipelines — Validate ML predictions with wet lab data
- Automated screening — Submit batches of experiments programmatically
- Feedback loops — Use results to improve ML models
- Research assistants — AI agents that help users design experiments
## Architecture
```text
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  ML Prediction  │────▶│  Spec Generator  │────▶│   Litmus API    │
│      Model      │     │   (Your Code)    │     │                 │
└─────────────────┘     └──────────────────┘     └────────┬────────┘
                                                          │
                                                          ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│ Model Retraining│◀────│  Results Parser  │◀────│    Webhooks     │
└─────────────────┘     └──────────────────┘     └─────────────────┘
```
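The Spec Generator box is code you own. As a rough sketch of that step, assuming a hypothetical prediction dict (the `protocol` and `parameters` fields are illustrative; only the `metadata` block matches the documented example below):

```python
def build_spec(prediction: dict, batch_id: str) -> dict:
    """Turn one model prediction into a Litmus experiment spec (sketch).

    `suggested_protocol` and `conditions` are hypothetical fields of your
    model's output; consult the experiment spec schema for the real
    top-level spec fields.
    """
    return {
        "protocol": prediction["suggested_protocol"],  # hypothetical
        "parameters": prediction["conditions"],        # hypothetical
        "metadata": {
            "submitter_type": "ai_agent",
            "agent_identifier": "your-agent-name-v1.0",
            "tags": ["automated", batch_id],
        },
    }
```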
## Identifying AI-Submitted Experiments
Mark experiments as AI-submitted:
```json
{
  "metadata": {
    "submitter_type": "ai_agent",
    "agent_identifier": "your-agent-name-v1.0",
    "tags": ["automated", "batch-123"]
  }
}
```
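If every spec in a run should carry the same metadata, a small helper keeps it consistent. A minimal sketch; the `batch_id` parameter and merge behavior are conveniences of this example, not part of the API:

```python
def tag_spec(spec: dict, batch_id: str) -> dict:
    """Attach AI-agent metadata to an experiment spec (hypothetical helper)."""
    metadata = spec.setdefault("metadata", {})
    metadata.update({
        "submitter_type": "ai_agent",
        "agent_identifier": "your-agent-name-v1.0",
        "tags": ["automated", batch_id],
    })
    return spec
```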
## Batch Submission
Submit multiple experiments efficiently:
```python
import asyncio
import aiohttp

async def submit_batch(experiments: list[dict], api_key: str) -> list[dict]:
    headers = {"X-API-Key": api_key}
    async with aiohttp.ClientSession(headers=headers) as session:
        tasks = []
        for spec in experiments:
            # Validate first; skip specs that fail validation
            async with session.post(
                "https://api.litmus.science/validate", json=spec
            ) as resp:
                validation = await resp.json()
            if not validation["valid"]:
                continue
            # Queue the submission; queued requests run concurrently below
            tasks.append(
                session.post("https://api.litmus.science/experiments", json=spec)
            )
        responses = await asyncio.gather(*tasks)
        # Read response bodies before the session closes
        return [await resp.json() for resp in responses]
```
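To run it as a one-off script (`specs` and the placeholder key are yours to supply):

```python
import asyncio

# specs: your list of experiment spec dicts, built elsewhere
results = asyncio.run(submit_batch(specs, api_key="YOUR_API_KEY"))
print(f"Submitted {len(results)} experiments")
```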
## Webhook Processing
Handle results automatically:
```python
import hashlib
import hmac
import os

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Shared secret from your webhook configuration; the env var name is up to you
WEBHOOK_SECRET = os.environ["LITMUS_WEBHOOK_SECRET"]

@app.post("/litmus/webhook")
async def handle_webhook(request: Request):
    # Verify signature before trusting the payload
    payload = await request.body()
    signature = request.headers.get("X-Litmus-Signature")
    if not signature or not verify_signature(payload, signature):
        raise HTTPException(401, "Invalid signature")

    data = await request.json()
    if data["event"] == "completed":
        experiment_id = data["experiment_id"]
        # Fetch full results
        results = await fetch_results(experiment_id)
        # Update your database
        await update_training_data(experiment_id, results)
        # Trigger retraining if batch complete
        await check_batch_complete(experiment_id)
        # fetch_results, update_training_data, and check_batch_complete
        # are your own functions
    return {"status": "ok"}

def verify_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        WEBHOOK_SECRET.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)
```
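For local testing you can sign a request against your own handler. A sketch; the payload shape is inferred from the handler above and the `exp_123` ID is made up:

```python
import hashlib
import hmac
import json

payload = json.dumps({"event": "completed", "experiment_id": "exp_123"}).encode()
digest = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
headers = {"X-Litmus-Signature": f"sha256={digest}"}
# POST `payload` with these headers to http://localhost:8000/litmus/webhook
```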
## Rate Limiting
The AI agent tier is limited to 500 requests per minute and 5,000 requests per day. For larger volumes, contact sales@litmus.science.
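A simple way to stay under the limit is to back off on HTTP 429 responses. A sketch, not official client code; whether the API returns a `Retry-After` header is an assumption:

```python
import asyncio
import aiohttp

async def post_with_backoff(session: aiohttp.ClientSession, url: str,
                            spec: dict, max_retries: int = 5) -> dict:
    """Retry on 429 with exponential backoff."""
    for attempt in range(max_retries):
        async with session.post(url, json=spec) as resp:
            if resp.status != 429:
                return await resp.json()
            # Honor Retry-After if present (assumption), else back off exponentially
            delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        await asyncio.sleep(delay)
    raise RuntimeError(f"Gave up after {max_retries} retries on {url}")
```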
## Best Practices
- Validate before submitting — Catch errors early with `/validate`
- Use webhooks — Don't poll for status updates
- Handle failures gracefully — Some experiments will fail
- Tag experiments — Use metadata for tracking and analysis
- Process results asynchronously — Return 200 quickly from webhook handlers (see the sketch after this list)
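One way to return 200 quickly is FastAPI's `BackgroundTasks`: queue the heavy work and respond immediately. A variant of the handler above, reusing `verify_signature` and the hypothetical helpers from that example:

```python
from fastapi import BackgroundTasks, FastAPI, HTTPException, Request

app = FastAPI()

@app.post("/litmus/webhook")
async def handle_webhook(request: Request, background: BackgroundTasks):
    payload = await request.body()
    signature = request.headers.get("X-Litmus-Signature")
    if not signature or not verify_signature(payload, signature):
        raise HTTPException(401, "Invalid signature")
    data = await request.json()
    if data["event"] == "completed":
        # Queue the slow work; the 200 response goes out right away
        background.add_task(process_results, data["experiment_id"])
    return {"status": "ok"}

async def process_results(experiment_id: str):
    results = await fetch_results(experiment_id)        # your own helper
    await update_training_data(experiment_id, results)  # your own helper
    await check_batch_complete(experiment_id)           # your own helper
```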
## MCP Integration
For AI assistants (Claude, ChatGPT), we provide an MCP server with tools for:
- `intake.draft_from_text` — Convert natural language to structured intake
- `intake.validate` — Validate against schema
- `routing.match_labs` — Find best-fit operators
- `intake.submit` — Submit to platform
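The tools chain together in that order. A purely illustrative sketch; the `call_tool` helper and all argument shapes here are hypothetical, not the actual MCP interface:

```python
# Hypothetical pseudo-client, for illustrating the tool flow only
draft = call_tool("intake.draft_from_text",
                  {"text": "Run a solubility assay on compound X"})
report = call_tool("intake.validate", {"intake": draft})
if report["valid"]:
    labs = call_tool("routing.match_labs", {"intake": draft})
    call_tool("intake.submit", {"intake": draft, "lab": labs[0]})
```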
See the MCP documentation for setup instructions.