Interactive Experiment Platform
Automate · Simulate · Collaborate
Enter Platform
Internal Testing
Meet Carrier

What Carrier Can Do

A research platform built for experiments that mix humans, bots, and AI agents.

01
Mixed Human-AI Chatrooms

Humans + Bots + LLMs Together

Real participants chat alongside scripted bots and LLM-powered agents in the same room — indistinguishable or with visible type badges.

Multi-Agent Real-time Chat Configurable Behavior
Chamber: Group Deliberation
H1
Alex Human
I think we should prioritize the environmental factors. The data from Section 2 supports this.
B1
ResearchBot Scripted
Interesting point. 73% of studies in this area reached similar conclusions.
AI
Claude LLM Agent
Building on Alex's point — the environmental factors interact with socioeconomic variables in Table 4. Would the group like to explore that connection?
2 Humans · 1 Bot · 1 LLM · Chamber 2/3 · Chat · 4:32
02
Participant Role System

Communicator · Mediator · Processor

Every participant — human or AI — is assigned a role that determines their capabilities. Communicators chat directly. Mediators observe and orchestrate. Processors assist with AI-powered feedback.

Communicator Mediator Processor
Chamber Participants
H1
Alex Human Communicator
H2
Jordan Human Communicator
B1
FacilitatorBot Scripted Mediator
AI
Claude LLM Processor
03
Dynamic Human-AI Collaboration

Fully Configurable AI Interaction

Define exactly how participants interact with AI at every stage. Three modes — Review, Generate, and Real-time Assist — transition automatically based on configurable triggers. Researchers control when AI activates, what it sees, and how participants respond.

Review Mode Generate Mode Real-time Assist Phase Transitions
Processor — Review Phase
Your draft:
"Climate change affects biodiversity through multiple pathways..."
AI Feedback:
"Consider adding a specific example — e.g., coral bleaching — to strengthen the claim."
Accept Dismiss Revise
Phase: Review · Next: Generate (after 3 messages)
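The mode progression described above can be sketched as a small rule table. This is a minimal illustration: keys like `mode` and `advance_after` are assumptions for the sketch, not Carrier's actual configuration schema.

```python
# Sketch of the Review -> Generate -> Real-time Assist progression.
# Keys like "mode" and "advance_after" are illustrative assumptions,
# not Carrier's actual configuration schema.
phases = [
    {"mode": "review", "advance_after": {"messages": 3}},
    {"mode": "generate", "advance_after": {"messages": 5}},
    {"mode": "realtime_assist", "advance_after": None},  # terminal mode
]

def current_mode(message_count, phases=phases):
    """Return the active AI mode after a given number of messages."""
    remaining = message_count
    for phase in phases:
        trigger = phase["advance_after"]
        if trigger is None or remaining < trigger["messages"]:
            return phase["mode"]
        remaining -= trigger["messages"]
    return phases[-1]["mode"]  # fallback if every phase has a trigger
```

With these thresholds, the session stays in Review for the first 3 messages, moves to Generate for the next 5, then settles into Real-time Assist.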
04
Segment System

11 Activity Types

Chain different activity types into timed sequences within a single session. Chat, vote, rank, survey, watch media, complete tasks — each with its own timing and transition rules.

Timed Transitions Sync Modes Agent Overrides
Segment Library
Ch
Chat
Se
Selection
Rk
Ranking
Su
Survey
In
Input
Md
Media
Sl
Slide
Tk
Task
Tm
Timer
Ix
Instruction
+ Attention Check
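A chained session behaves like a timed playlist of segments. A minimal sketch, assuming hypothetical type names and keys rather than Carrier's real segment schema:

```python
# Illustrative segment chain; type names and keys are assumptions, not
# Carrier's real schema. Each segment runs for its duration, then the
# session advances to the next one.
session = [
    {"type": "instruction", "duration_s": 60},
    {"type": "chat", "duration_s": 600},
    {"type": "ranking", "duration_s": 180},
    {"type": "survey", "duration_s": 300},
]

def segment_at(elapsed_s, session=session):
    """Return the segment type active at a given elapsed time, or None."""
    boundary = 0
    for seg in session:
        boundary += seg["duration_s"]
        if elapsed_s < boundary:
            return seg["type"]
    return None  # session finished
```

Per-segment timing rules like these are what make transitions deterministic and reproducible across runs.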

Explore More Capabilities

Click any card to see details

Batch LLM Annotation

Upload CSV datasets, annotate with multiple AI models at scale.

Upload CSV datasets and process with multiple AI models simultaneously. Built-in cost estimation, batch APIs, and multi-repetition for inter-rater reliability. Choose from 20+ literature-based templates or create custom annotation tasks.
CSV Upload Multi-Provider Cost Estimation 20+ Templates
Annotator — Sentiment Analysis
Input Text · Model · Result
"The product was great..." · GPT-4o · Positive · 0.94
"Disappointing quality..." · Claude 3.5 · Negative · 0.91
72% · 1,440 / 2,000 rows
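Multi-repetition reliability boils down to aggregating several labels per row. A hedged sketch of one common aggregation (majority vote plus agreement rate); this is an assumption about the mechanism, not Carrier's implementation:

```python
from collections import Counter

# Majority label plus a simple agreement rate across repetitions.
# Illustrative only, not Carrier's actual aggregation code.
def aggregate(labels):
    """Return the majority label and the fraction of repetitions agreeing."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)
```

For example, three runs on one row labeled Positive, Positive, Negative would aggregate to Positive with 2/3 agreement.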

Experiment Builder

Visual drag-and-drop experiment design — no code required.

Visual 3-pane builder. Drag segments and participants from a library onto a timeline canvas, configure properties in the inspector panel. Preview and validate experiments before deploying.
Drag & Drop 3-Pane Layout Visual Timeline
Deliberation Study · Validate · Preview · Deploy
Library
Segments
💬 Chat
☑️ Select
📊 Rank
📋 Survey
Chamber Line A
Intro
2 seg
Discussion
3 seg
Debrief
1 seg
Discussion — Segments
Chat · 10m
Select · 3m
Survey · 5m
Inspector
Chamber Name
Discussion
Participants
👤 2 Humans
🤖 1 Bot

AI Design Assistant

Describe your experiment in natural language — the AI builds it for you.

An agentic LLM assistant that co-designs experiments through conversation. Describe what you need — "add a 3-person chat with a mediator bot" — and it configures chambers, segments, triggers, and participant slots. It understands the full experiment model and can scaffold complex designs from a brief description.
Natural Language Agentic Design Auto-Config
AI Assistant + Experiment Builder
AI Assistant
You
I need a group deliberation with 3 humans and a mediator bot that summarizes every 5 messages
Assistant
Done! I've created a chamber with 3 communicators and 1 mediator bot. Added a periodic trigger (every 5 messages) with summary broadcast. Want me to add a post-survey?
You
Yes, and add a ranking segment after the chat
Generated Config
Chamber Line A
Chamber 1 — Deliberation
3H + 1 Mediator Bot
Chat · 15m → Rank · 3m → Survey · 5m
Trigger: periodic (5 msgs) → summary broadcast

Smart Participant Matching

Automatic grouping by condition, survey response, or queue order.

Match participants into chatrooms automatically using three strategies. Survey-based assignment groups people by their pre-survey answers. Counterbalancing ensures even distribution across conditions. FIFO matches in queue order for speed.
Survey-Based Counterbalance FIFO
Matching Queue
P1
Alex Condition A matched
P2
Jordan Condition A matched
P3
Sam Condition B waiting
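Two of the three strategies are simple to sketch. The function names below are illustrative, not Carrier's API:

```python
from itertools import cycle

# Counterbalancing: round-robin assignment so conditions stay evenly
# filled. FIFO: group participants into full rooms in arrival order.
# Both are illustrative sketches, not Carrier's matching code.
def counterbalance(participants, conditions):
    """Assign each participant a condition in round-robin order."""
    return dict(zip(participants, cycle(conditions)))

def fifo_groups(queue, size):
    """Match participants into full groups in arrival order."""
    full = len(queue) - len(queue) % size
    return [queue[i:i + size] for i in range(0, full, size)]
```

Survey-based assignment works the same way as FIFO, except the queue is first partitioned by pre-survey answers before grouping.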

Multi-Provider LLMs

OpenAI, Anthropic, and Google models side by side.

Run the same experiment with different LLM providers to compare responses. Configure temperature, context window, system prompts, and response logic per agent. Supports GPT-4o, Claude, Gemini, and custom endpoints.
OpenAI Anthropic Google Custom API
Agent Configuration
Agent A
GPT-4o
temp: 0.7
Agent B
Claude 3.5
temp: 0.7
Agent C
Gemini Pro
temp: 0.7
Same prompt · Same context · Compare outputs
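A side-by-side comparison like the one above is only meaningful if everything except the model is held constant. A minimal sketch, with keys and model identifiers assumed for illustration rather than taken from Carrier's real schema:

```python
# Config mirrors the mock UI above; keys and model identifiers are
# illustrative assumptions, not Carrier's real agent schema.
agents = [
    {"name": "Agent A", "provider": "openai", "model": "gpt-4o", "temperature": 0.7},
    {"name": "Agent B", "provider": "anthropic", "model": "claude-3.5", "temperature": 0.7},
    {"name": "Agent C", "provider": "google", "model": "gemini-pro", "temperature": 0.7},
]

def is_controlled(agents, keys=("temperature",)):
    """True when every agent shares the listed settings, so outputs
    differ only by provider and model."""
    return all(len({a[k] for a in agents}) == 1 for k in keys)
```

The same check extends to system prompts and context windows: add those keys and the comparison stays controlled.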

Live Monitoring Dashboard

Real-time session tracking, alerts, and data export.

Watch every active session in real time. Get alerts for disconnects, long waits, and drop-outs. Pause, resume, or end individual participant sessions. Export all data as CSV, XLSX, or JSON.
Real-time Alerts Session Control Export
Dashboard
12
Active
3
Waiting
47
Completed
⚠ P-0847 disconnected 45s ago — Chamber 2

No-Code Bot Scripting

13+ trigger types for scripted agent behavior — no code required.

Build sophisticated bot behavior with trigger-response rules. Keywords, regex, timed events, message counts, cross-bot chains, activity timeouts, and more. Triggers can fire conditionally, chain to other triggers, and have cooldowns and probability controls.
Keyword & Regex Timed Triggers Chain Reactions No Code
Bot Configuration
keyword "hello" → "Welcome to the study!"
time 30s → "Any initial thoughts?"
msg-count = 10 → "Let's summarize"
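As data, a trigger table like the one above might look like this. The rule fields are hedged assumptions, not Carrier's trigger schema, which per the text also supports chains, cooldowns, and probability controls:

```python
import re

# Hedged sketch of trigger-response matching; rule fields are
# illustrative, not Carrier's actual trigger schema.
rules = [
    {"type": "keyword", "pattern": r"\bhello\b", "response": "Welcome to the study!"},
    {"type": "msg_count", "count": 10, "response": "Let's summarize"},
]

def fire(rules, message, msg_count):
    """Return the responses of every rule triggered by this message."""
    responses = []
    for rule in rules:
        if rule["type"] == "keyword" and re.search(rule["pattern"], message, re.IGNORECASE):
            responses.append(rule["response"])
        elif rule["type"] == "msg_count" and msg_count == rule["count"]:
            responses.append(rule["response"])
    return responses
```

Timed triggers follow the same shape, keyed on elapsed time instead of message content or count.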

Template Library

Reusable experiment configs with peer-review workflow.

Start from pre-built templates or save your own. Public templates go through admin peer review. Browse by experiment type, participant count, and chamber structure. One-click fork to customize.
Peer-Reviewed Quick-Start Fork & Customize
Template Library
Group Deliberation · 3 chambers · 2H + 1AI · Reviewed
Dyad Conversation · 2 chambers · 2H · Reviewed
Human-AI Collab · 3 chambers · 1H + 2AI · Pending
↑ Click any card above to expand its details and mock UI

Research Infrastructure That Grows With the Field

AI models evolve fast. Your research platform should keep up. Carrier is built so experiments are reproducible across model versions, scalable from pilot studies to large-scale deployments, and cumulative — each new model adds to a growing, comparable body of results.

Reproducible

Every experiment is a saved configuration — same chambers, same roles, same prompts. Re-run with a different model and compare results directly. No ambiguity about what changed.

Scalable

From a 4-person pilot to hundreds of concurrent sessions. Batch annotation processes thousands of rows. Matching, chatrooms, and data export handle the volume without changing your design.

Cumulative

New model released? Plug it in and re-run. Every experiment adds to a growing body of comparable results — same design, different models, tracked over time.