# AI Agent Integration
This guide covers integrating Litmus with AI pipelines for automated experiment submission and result processing.
## Use Cases
- Drug discovery pipelines — Validate ML predictions with wet lab data
- Automated screening — Submit batches of experiments programmatically
- Feedback loops — Use results to improve ML models
- Research assistants — AI agents that help users design experiments
## Architecture

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  ML Prediction  │────▶│  Spec Generator  │────▶│   Litmus API    │
│      Model      │     │   (Your Code)    │     │                 │
└─────────────────┘     └──────────────────┘     └────────┬────────┘
                                                          │
                                                          ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│ Model Retraining│◀────│  Results Parser  │◀────│    Webhooks     │
└─────────────────┘     └──────────────────┘     └─────────────────┘
```

## Identifying AI-Submitted Experiments
Mark experiments as AI-submitted:
```json
{
  "metadata": {
    "submitter_type": "ai_agent",
    "agent_identifier": "your-agent-name-v1.0",
    "tags": ["automated", "batch-123"]
  }
}
```

## Batch Submission
Submit multiple experiments efficiently:
```python
import asyncio
import aiohttp

async def submit_batch(experiments: list[dict], api_key: str):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for spec in experiments:
            # Validate first
            async with session.post(
                "https://api.litmus.science/validate",
                json=spec,
                headers={"X-API-Key": api_key}
            ) as resp:
                validation = await resp.json()
                if not validation["valid"]:
                    continue
            # Submit
            tasks.append(
                session.post(
                    "https://api.litmus.science/experiments",
                    json=spec,
                    headers={"X-API-Key": api_key}
                )
            )
        results = await asyncio.gather(*tasks)
        return results
```

## Webhook Processing
Handle results automatically:
```python
import hashlib
import hmac
import os

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Shared secret configured in your Litmus webhook settings
WEBHOOK_SECRET = os.environ["LITMUS_WEBHOOK_SECRET"]

@app.post("/litmus/webhook")
async def handle_webhook(request: Request):
    # Verify signature
    payload = await request.body()
    signature = request.headers.get("X-Litmus-Signature", "")
    if not verify_signature(payload, signature):
        raise HTTPException(401, "Invalid signature")

    data = await request.json()
    if data["event"] == "completed":
        experiment_id = data["experiment_id"]
        # Fetch full results
        results = await fetch_results(experiment_id)
        # Update your database
        await update_training_data(experiment_id, results)
        # Trigger retraining if batch complete
        await check_batch_complete(experiment_id)

    return {"status": "ok"}

def verify_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)
```

## Rate Limiting
The AI agent tier is limited to 500 requests/minute and 5,000 requests/day. For larger volumes, contact sales@litmus.science.
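When a batch job approaches these limits, client-side backoff keeps it within quota instead of dropping requests. A minimal sketch — the 429 status code and `Retry-After` header are assumptions here, not confirmed Litmus behavior:

```python
import time

import requests

API_URL = "https://api.litmus.science"

def post_with_backoff(path: str, payload: dict, api_key: str, max_retries: int = 5):
    """POST with exponential backoff on rate-limit responses.

    Assumes the API returns HTTP 429 when the limit is hit; adjust if
    Litmus signals rate limiting differently.
    """
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.post(
            f"{API_URL}{path}",
            json=payload,
            headers={"X-API-Key": api_key},
        )
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if present, otherwise back off exponentially
        delay = float(resp.headers.get("Retry-After", delay))
        time.sleep(delay)
        delay *= 2
    raise RuntimeError(f"Still rate-limited after {max_retries} retries")
```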
## Best Practices

- Validate before submitting — Catch errors early with `/validate`
- Use webhooks — Don't poll for status updates
- Handle failures gracefully — Some experiments will fail
- Tag experiments — Use metadata for tracking and analysis
- Process results asynchronously — Return 200 quickly from webhook handlers
## Edison Scientific Integration
Edison Scientific provides AI-powered hypothesis generation by analyzing scientific literature. This is ideal for AI pipelines that need to generate testable hypotheses from research questions.
### Starting an Edison Run
```python
import requests
import time

API_URL = "https://api.litmus.science"
headers = {"X-API-Key": "lk_your_api_key"}

# Start hypothesis generation
response = requests.post(
    f"{API_URL}/cloud-labs/edison/start",
    json={
        "query": "Novel mechanisms for enhancing antibiotic efficacy",
        "experiment_type": "MIC_MBC_ASSAY"
    },
    headers=headers
)
run_id = response.json()["run_id"]

# Poll for completion
while True:
    status = requests.get(
        f"{API_URL}/cloud-labs/edison/runs/{run_id}",
        headers=headers
    ).json()
    if status["status"] == "COMPLETED":
        hypothesis = status["result"]["hypothesis"]
        print(f"Generated: {hypothesis['statement']}")
        break
    elif status["status"] == "FAILED":
        print(f"Error: {status['error']}")
        break
    time.sleep(5)
```

### Saving to Hypothesis Library
```python
# Save generated hypothesis for reuse
hypothesis = requests.post(
    f"{API_URL}/hypotheses",
    json={
        "title": "Edison: Antibiotic efficacy",
        "statement": status["result"]["hypothesis"]["statement"],
        "null_hypothesis": status["result"]["hypothesis"]["null_hypothesis"],
        "experiment_type": "MIC_MBC_ASSAY",
        "edison_response": status["result"]
    },
    headers=headers
).json()
```

See the Edison API Reference for full endpoint documentation.
## Cloud Labs Integration
For fully automated experiment execution, Litmus integrates with cloud laboratories (ECL, Strateos). This enables end-to-end automation from hypothesis to results.
### Automated Workflow
```python
# 1. Generate hypothesis with Edison
edison_result = run_edison_query("Novel antimicrobial compounds")

# 2. Translate to cloud lab protocol
translation = requests.post(
    f"{API_URL}/cloud-labs/translate",
    json={
        "intake": edison_result["intake_draft"],
        "provider_id": "ecl"
    },
    headers=headers
).json()

# 3. Submit experiment
experiment = requests.post(
    f"{API_URL}/experiments",
    json=edison_result["intake_draft"],
    headers=headers
).json()

# 4. Trigger cloud lab execution
submission = requests.post(
    f"{API_URL}/cloud-labs/experiments/{experiment['experiment_id']}/translate",
    json={"provider_id": "ecl"},
    headers=headers
).json()
```

See the Cloud Labs API Reference for provider capabilities and protocol formats.
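The workflow above assumes a `run_edison_query` helper that wraps the start-and-poll pattern from the Edison section. A minimal sketch, reusing the same `API_URL`/`headers` setup and the `MIC_MBC_ASSAY` experiment type from the earlier example (both assumptions you would adapt):

```python
import time

import requests

API_URL = "https://api.litmus.science"
headers = {"X-API-Key": "lk_your_api_key"}

def run_edison_query(query: str, poll_interval: float = 5.0) -> dict:
    """Start an Edison run and block until it finishes.

    Returns the run's `result` payload, which the workflow above
    assumes includes an `intake_draft`.
    """
    run_id = requests.post(
        f"{API_URL}/cloud-labs/edison/start",
        json={"query": query, "experiment_type": "MIC_MBC_ASSAY"},
        headers=headers,
    ).json()["run_id"]

    while True:
        status = requests.get(
            f"{API_URL}/cloud-labs/edison/runs/{run_id}",
            headers=headers,
        ).json()
        if status["status"] == "COMPLETED":
            return status["result"]
        if status["status"] == "FAILED":
            raise RuntimeError(status["error"])
        time.sleep(poll_interval)
```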
## MCP Integration
For AI assistants (Claude, ChatGPT), we provide an MCP server with tools for:
- `intake.draft_from_text` — Convert natural language to structured intake
- `intake.validate` — Validate against schema
- `routing.match_labs` — Find best-fit operators
- `intake.submit` — Submit to platform
See the MCP documentation for setup instructions.