
Carrier Documentation

Carrier is a browser-based platform for designing, deploying, and monitoring controlled studies of mixed-agent, multimodal group interaction. This guide covers key concepts, the researcher and participant journeys, and practical configuration examples you can adapt for your own studies.

Key Concepts

Researchers use the Experiment Builder to configure study conditions, participant compositions, and interaction sequences, then monitor live sessions through the Experimenter Dashboard. The platform supports real-time text and audio communication between any mix of human participants, LLM-powered AI assistants, and rule-based scripted agents. Video conferencing is planned but not yet available.

Experiments are organised using a small set of composable building blocks. The table below summarises each concept — click any name to jump to its full parameter reference.

  • Experiment: Top-level container for a complete research study, holding global settings, surveys, chamber lines, and bot templates.
  • Chamber Line: A condition track comprising an ordered sequence of chambers that a participant progresses through.
  • Chamber: A container for one or more sequential segments that matched participants progress through together.
  • Segment: A single activity within a chamber timeline, with its own type, timing, and transition rules.
  • Participant Types: The three entity kinds in a chamber: human participants, LLM-powered AI assistants, and rule-based agents.
  • Roles: Functional capabilities assigned to any participant: communicator, mediator, or processor.
  • Triggers: Condition-response rules used by agent participants to determine when and how to respond.
  • Run / Session: A single participant's end-to-end journey through an experiment, tracking phase progress and survey responses.
  • Chatroom: The runtime instantiation of a chamber, created when participants are matched, holding live chat history.
  • Matching: The runtime process that groups waiting participants into chatrooms based on configurable strategies.
  • Surveys: Survey.js questionnaires attached at global, chamber, or segment level to collect data and drive conditional logic.

Three Axes of Interaction

Carrier treats group interaction structure as an explicit experimental object. Each chamber is specified along three orthogonal axes that can be manipulated independently or combined factorially.

Who Participates

Define the interaction ecology with three participant types.

  • Human participants
  • AI assistants (LLM-driven)
  • Scripted agents (rule-based)

What They Do

Separate participant type from functional role within the group.

  • Communicator — primary interactant
  • Mediator — group-facing broadcasts
  • Processor — draft-time assistant

How They Interact

Configure the communication channel per chamber.

  • Text chat
  • Audio messaging
  • Video conferencing (coming soon)

Researcher Journey

Researchers author and manage studies through two interfaces: the Experiment Builder for configuring studies, and the Experimenter Dashboard for live monitoring.

1. Define Chamber Lines (Builder): Create condition tracks (treatment, control) with assignment rules
2. Configure Chambers (Builder): Set channel, participant slots, roles, timing, and matching policy
3. Attach Surveys (Builder): Global pre/post and per-chamber surveys; responses can drive matching
4. Publish & Share (Deploy): Activate the experiment and distribute the participant URL
5. Monitor & Intervene (Dashboard): Track participant status, matching queues, and apply controls

In the Experiment Builder, researchers define one or more chamber lines (conditions) and compose each from an ordered sequence of chambers. For each chamber, they configure the communication channel (text/audio; video planned), define participant slots (human, LLM agent) and roles, and specify the matching policy used to form groups at runtime.

Surveys can be attached globally (pre/post) and per chamber (pre/post) to capture baseline measures, manipulation checks, and immediate outcomes. Selected survey fields can be injected into parameterised prompts and agent settings to support personalised but controlled LLM behaviour with an auditable record of injected values.
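The injection mechanism can be sketched like this (a hypothetical helper, not the platform's actual code): survey answers replace {{field}} placeholders in a prompt template, and the substituted values are returned alongside the result for the audit record.

```javascript
// Hypothetical sketch of survey-to-prompt injection; the real platform's
// internals may differ. Replaces each {{field}} with the participant's
// survey answer and records which values were injected, for auditing.
function injectSurveyFields(promptTemplate, surveyResponses) {
  const injected = {};
  const prompt = promptTemplate.replace(/\{\{(\w+)\}\}/g, (match, field) => {
    if (field in surveyResponses) {
      injected[field] = surveyResponses[field]; // auditable record of injected values
      return String(surveyResponses[field]);
    }
    return match; // leave unknown placeholders untouched
  });
  return { prompt, injected };
}

// Example: personalise a writing-coach prompt from a pre-survey response
const { prompt, injected } = injectSurveyFields(
  "You coach a student whose self-rated confidence is {{writingConfidence}}/5.",
  { writingConfidence: 2 }
);
```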

After deployment, the Experimenter Dashboard provides real-time monitoring of live runs — participant status (online, waiting, in-chamber), matching queues, and chamber progress — along with intervention controls (pause, skip, end) with corresponding logs for analysis and auditability.

Participant Journey

Participants experience a structured sequence of phases, with the platform handling assignment, matching, session continuity, and data collection automatically.

1. Session Start (Phase A): Enter via study URL; server creates a run record and real-time connection
2. Global Pre-Survey (Phase A): Optional baseline measures; responses can drive chamber-line assignment
3. Chamber Sequence (Phase B): For each chamber: pre-survey → matching → interaction → post-survey
4. Global Post-Survey (Phase C): Final outcome measures collected after all chambers complete
5. Completion (Phase C): Completion screen with optional debrief; data persisted to MongoDB

After the global pre-survey, the system assigns a chamber line and constructs a persisted run plan that defines the participant's exposure sequence. For each chamber, participants may complete a short pre-survey, then enter a matching phase where the system forms the required group based on eligibility criteria evaluated against the participant's latest survey state (required/preferred/excluded fields).

Each chamber can end on time, completion criteria, or experimenter intervention, followed by an optional post-survey to capture immediate effects. Participants proceed through the run plan until completion. Session continuity mechanisms support reconnection without losing phase fidelity, so participants can resume even after a network interruption. The resulting chatroom records all messages and interactions for later analysis.
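The eligibility evaluation described above can be sketched as follows (hypothetical helper functions; the required/preferred/excluded semantics are inferred from this guide, and Carrier's matching code may differ):

```javascript
// Hypothetical sketch of eligibility checking against a participant's latest
// survey state. Required fields must match, excluded fields must not match.
function isEligible(criteria, surveyState) {
  const { required = {}, excluded = {} } = criteria;
  for (const [field, value] of Object.entries(required)) {
    if (surveyState[field] !== value) return false; // must match
  }
  for (const [field, value] of Object.entries(excluded)) {
    if (surveyState[field] === value) return false; // must not match
  }
  return true;
}

// Preferred fields don't gate eligibility; they rank eligible candidates.
function preferenceScore(criteria, surveyState) {
  const { preferred = {} } = criteria;
  return Object.entries(preferred)
    .filter(([field, value]) => surveyState[field] === value).length;
}
```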

Using the Experiment Builder

The Experiment Builder is a visual editor for designing study configurations. It uses a three-pane layout — a component library on the left, an interactive canvas in the centre, and a context-sensitive inspector on the right — so you can compose experiments entirely through drag-and-drop and inline editing.

1 Builder Layout

The left Library pane contains draggable participant, agent, and segment components. The centre Canvas shows your chamber lines as horizontal rows of chamber cards. The right Inspector pane updates automatically to show the settings for whatever element is selected — experiment, chamber, segment, participant, or agent.

Builder layout overview — three-pane layout with library, canvas, and inspector
2 Experiment Settings

Click the experiment name in the header to open global settings. Here you set the experiment name and description, configure the chamber line assignment method (random, counterbalance, or survey-based), attach global pre/post surveys using the built-in Survey.js editor, and optionally set a completion redirect URL for platforms like Prolific.

3 Building Chamber Lines & Chambers

Each chamber line represents an experimental condition (e.g. treatment vs control). Add lines with the button at the bottom of the canvas, then add chambers within each line. Chambers appear as cards in a horizontal timeline — you can duplicate or delete lines, and add as many chambers per line as your design requires. Double-click a chamber card to expand it and start configuring its contents.

4 Adding Participants & Agents

Create human participants and agents in the Library pane. Agents can be Script-based (trigger rules) or LLM-powered (OpenAI, Anthropic). Drag them from the library into a chamber's role zones — Communicators, Mediators, or Processors — to assign both placement and role in one action. The inspector updates to show role-specific settings for the selected participant or agent.

5 Configuring Agents

Select an agent to open its settings in the inspector. LLM agents are configured with a provider, model, system prompt (supports {{variable}} interpolation from survey responses), temperature, max tokens, context window size, and response delay range to simulate natural typing. Script agents use a visual trigger builder where you define condition–response rules (keyword, regex, time-based, message count, and more) with chaining, cooldowns, and probability controls.

6 Designing Segments

Inside an expanded chamber, the right panel shows the segment timeline. Click Add Segment to append a new segment, then select its type: chat, slide, selection, ranking, input, media, timer, task, survey, or instruction. Each segment has its own duration, transition mode (auto, manual, sync, host), and type-specific options — for example, a chat segment lets you toggle emoji reactions, typing indicators, and message reporting. Use the move up/down buttons to reorder segments.
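As an illustration, a chamber's segment timeline might be described by a fragment like the following (hypothetical values; field names follow the Segment parameter reference later in this guide, though the exact shape the builder saves may differ):

```json
{
  "segments": [
    {
      "segmentId": "intro",
      "type": "instruction",
      "order": 1,
      "timing": { "duration": 60000 },
      "transition": { "mode": "auto", "countdown": 5000 }
    },
    {
      "segmentId": "discussion",
      "type": "chat",
      "order": 2,
      "timing": { "duration": 600000, "warningTime": 60000 },
      "transition": { "mode": "sync", "allowEarlyAdvance": false }
    }
  ]
}
```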

7 Control Events

Chat segments support control events that modify the interaction at runtime. You can disable input for specific participants (e.g. to create a listening phase), enable input to restore it, or end the section based on a condition such as elapsed time, message count, or a bot trigger. A summary strip at the bottom of the chamber preview shows all active control events across segments.

8 Attaching Surveys

Surveys can be attached at three levels: global pre/post (in experiment settings), chamber pre/post (in chamber settings), and segment-level (using the survey segment type). All surveys use the Survey.js format. The built-in question builder lets you add questions visually, or you can import/export survey JSON files for reuse across experiments. Survey response fields can drive chamber line assignment and be injected into agent system prompts via {{variable}} syntax.

Survey editor — building survey questions, importing JSON, linking to assignment rules
9 Validate, Preview & Deploy

When your design is ready, click Validate to run structural checks (every chamber line needs at least one chamber, every chamber needs segments, etc.). Use Preview to review the full experiment tree in a read-only modal. Finally, click Deploy to save the experiment to the database — the builder tracks unsaved changes with a dot indicator next to the Deploy button. You can also Export the configuration as JSON for backup or version control, and Import a previously exported experiment to continue editing.

Validate and deploy — running validation, previewing structure, deploying to database

Drag & Drop

Drag participants into role zones, reorder segments, and rearrange survey questions without touching JSON.

Auto-Save

Form fields auto-save with debounced updates. An unsaved indicator warns you before leaving the page.

Import / Export

Export your entire experiment as JSON for backup or sharing, and import it back to continue editing.

Using the Experiment Dashboard

The Experiment Dashboard is the researcher's command centre for monitoring live sessions, managing participants, and exporting data. It provides real-time statistics, alerts, and intervention controls across all your experiments.

1 Dashboard Overview

The main dashboard shows a summary of all your experiments at a glance. The top row of statistics cards displays total experiments, active participants, participants online now, matching queue size, completed sessions, and active chatrooms. Below the stats, you'll find panels for your experiments list, active sessions, matching queue, and alerts.

Dashboard overview — navigating the main dashboard and viewing statistics
2 Managing Experiments

The experiments panel lists every study you own or collaborate on. Each card shows the experiment name, status, and your access level (Owner, Collaborator, or Unowned). From here you can copy the participant link to share with subjects, open the detail view, edit the configuration in the Experiment Builder, manage collaborators, export data, or delete the experiment.

Experiment management — copying participant link, opening detail view, managing collaborators
3 Live Session Monitoring

Click into an experiment to open the detail view, which shows a searchable, filterable table of every participant. Each row displays the participant's connection status (online/offline/disconnected), matching status, current stage, message count, and session duration. The table updates in real time via Socket.io — no page refresh needed. Use the search bar to filter by participant ID and the status dropdown to narrow by active, completed, paused, or dropped_out.

Live monitoring — filtering participants, watching real-time status updates
4 Session Intervention

Each participant row has action buttons that let you intervene in real time. Pause prevents a participant from advancing to the next stage. Resume re-activates a paused session. End terminates the session with an optional reason that is logged for your audit trail. You can also click View Details to open a modal showing the participant's full progress, timing, and current chatroom information.

Session intervention — pausing, resuming, and ending participant sessions
5 Viewing Chatrooms

Click View Chatroom on any matched participant to open the full chatroom history. The chatroom modal shows the participant roster with roles (communicator/mediator/processor), every message with sender type and timestamp, and system notifications (joins, leaves, state changes). Messages from different sender types (human, AI, agent, system, mediator) are visually distinguished.

Chatroom viewer — browsing chat history, viewing participant roles and messages
6 Alerts & Matching Queue

The alerts panel surfaces issues automatically: participant disconnections, long queue waits (over 2 minutes), drop-outs, and idle experiments. Alerts are sorted by severity (errors first) so you can triage quickly. The matching queue panel shows participants currently waiting to be grouped, along with their wait time, helping you spot bottlenecks before they affect your study.

Alerts and queue — reviewing alerts, monitoring the matching queue
7 Exporting Data

Click Export on any experiment to download your data. Choose between JSON or CSV format, and select the data scope: participants (statuses, timing, response counts), chatrooms (chat history, message counts, session info), responses (individual survey records), or all (complete dataset with experiment metadata). The exported file is ready for analysis in your preferred tool.

Data export — selecting format and scope, downloading experiment data

Real-Time Updates

Participant statuses, matching queue, and alerts refresh automatically via Socket.io and 10-second polling.

Collaboration

Invite collaborators to share monitoring access. Owners retain full control; collaborators can view and intervene.

Audit Trail

Every intervention (pause, resume, end) is logged with timestamps and reasons for post-study accountability.

Configuration Examples

01 Two-Person Text Chat

Purpose: Dyadic Chat | Channel: Text | Participants: 2 Humans | Duration: 10 min

Flow: Pre-Survey → Matching → Text Chat (10m) → Post-Survey → End

Key Settings

  • Single chamber line with one chamber — the simplest possible design
  • Global preSurvey collects demographics before matching
  • Per-chamber postSurvey measures satisfaction after the chat
  • matchingTimeout of 300 seconds (5 minutes) before assigning to a fallback
JSON
{
  "name": "Dyadic Text Chat Study",
  "description": "Simple two-person text chat experiment",
  "preSurvey": {
    "title": "Demographics",
    "elements": [
      { "type": "text", "name": "age", "title": "What is your age?" },
      { "type": "radiogroup", "name": "gender", "title": "Gender",
        "choices": ["Male", "Female", "Non-binary", "Prefer not to say"] }
    ]
  },
  "chamberLines": [
    {
      "name": "main",
      "chambers": [
        {
          "name": "Discussion",
          "channel": "text",
          "duration": 600,
          "matchingTimeout": 300,
          "participants": [
            { "type": "human", "role": "communicator", "count": 2 }
          ],
          "postSurvey": {
            "title": "Chat Feedback",
            "elements": [
              { "type": "rating", "name": "satisfaction",
                "title": "How satisfied were you with the conversation?", "rateMax": 7 }
            ]
          }
        }
      ]
    }
  ]
}
02 Human-AI Collaboration

Purpose: Writing Coach | Channel: Text | Participants: 1 Human + 1 AI | Duration: 15 min

Flow: Pre-Survey → Writing Task (15m) → Post-Survey → End

Key Settings

  • AI participant uses the processor role — it provides private writing suggestions only the human sees
  • triggerMode: "on-mention" — the AI only responds when the participant explicitly asks for help
  • provider: "openai" with model: "gpt-4" and a custom writing coach system prompt
  • No matching needed — the AI is spawned automatically for each participant
JSON
{
  "name": "AI Writing Coach Study",
  "description": "Human-AI collaboration with on-demand writing assistance",
  "preSurvey": {
    "title": "Writing Background",
    "elements": [
      { "type": "rating", "name": "writingConfidence",
        "title": "Rate your confidence in academic writing", "rateMax": 5 }
    ]
  },
  "chamberLines": [
    {
      "name": "ai-assisted",
      "chambers": [
        {
          "name": "Writing Task",
          "channel": "text",
          "duration": 900,
          "participants": [
            { "type": "human", "role": "communicator", "count": 1 },
            {
              "type": "ai",
              "role": "processor",
              "count": 1,
              "provider": "openai",
              "model": "gpt-4",
              "triggerMode": "on-mention",
              "systemPrompt": "You are a writing coach. When asked, suggest improvements to clarity, structure, and argument strength. Be concise and constructive."
            }
          ],
          "postSurvey": {
            "title": "AI Assistance Feedback",
            "elements": [
              { "type": "rating", "name": "helpfulness",
                "title": "How helpful was the AI assistant?", "rateMax": 7 },
              { "type": "rating", "name": "autonomy",
                "title": "Did you feel in control of the writing process?", "rateMax": 7 }
            ]
          }
        }
      ]
    }
  ]
}
03 Multi-Condition Between-Subjects

Purpose: Trust & Agency | Channel: Text | Conditions: 3 (Between) | Duration: 12 min

Flow: Global Pre-Survey → Random Assign → Condition A / B / C → Global Post-Survey → End

Key Settings

  • chamberlineAssignment: "random" assigns each participant at random to one of three chamber lines
  • Control: 3 human communicators (no AI)
  • Treatment A: 2 humans + 1 disclosed AI assistant
  • Treatment B: 2 humans + 1 covert rule-based agent with keyword/regex triggers
  • Global pre/post surveys measure trust before and after interaction
  • Bot uses triggerRules with keyword, regex, and time-based triggers
JSON
{
  "name": "Trust & Agency Study",
  "chamberlineAssignment": "random",
  "preSurvey": {
    "title": "Trust Baseline",
    "elements": [
      { "type": "rating", "name": "trustInAI",
        "title": "How much do you trust AI systems in general?", "rateMax": 7 }
    ]
  },
  "postSurvey": {
    "title": "Post-Interaction Trust",
    "elements": [
      { "type": "rating", "name": "trustInPartners",
        "title": "How much did you trust your conversation partners?", "rateMax": 7 },
      { "type": "radiogroup", "name": "detectedAI",
        "title": "Did you suspect any participant was non-human?",
        "choices": ["Yes", "No", "Unsure"] }
    ]
  },
  "chamberLines": [
    {
      "name": "control",
      "chambers": [{
        "name": "Group Discussion",
        "channel": "text",
        "duration": 720,
        "participants": [
          { "type": "human", "role": "communicator", "count": 3 }
        ]
      }]
    },
    {
      "name": "treatment-a-disclosed-ai",
      "chambers": [{
        "name": "Group Discussion",
        "channel": "text",
        "duration": 720,
        "participants": [
          { "type": "human", "role": "communicator", "count": 2 },
          {
            "type": "ai",
            "role": "communicator",
            "count": 1,
            "disclosed": true,
            "provider": "openai",
            "model": "gpt-4",
            "systemPrompt": "You are a helpful discussion participant. Share your perspective and ask thoughtful questions."
          }
        ]
      }]
    },
    {
      "name": "treatment-b-covert-bot",
      "chambers": [{
        "name": "Group Discussion",
        "channel": "text",
        "duration": 720,
        "participants": [
          { "type": "human", "role": "communicator", "count": 2 },
          {
            "type": "bot",
            "role": "communicator",
            "count": 1,
            "disclosed": false,
            "triggerRules": [
              { "type": "keyword", "match": "agree",
                "response": "I see your point, and I think that's a fair assessment." },
              { "type": "regex", "match": "what do you think\\??",
                "response": "That's an interesting question. I'd say it depends on the context." },
              { "type": "time", "intervalSeconds": 120,
                "response": "Has anyone considered looking at this from a different angle?" }
            ]
          }
        ]
      }]
    }
  ]
}
04 AI-Mediated Group Discussion

Purpose: AI Mediation | Channel: Text | Participants: 4 Humans + 1 AI | Duration: 20 min

Flow: Pre-Survey → Matching (4) → Mediated Chat (20m) → Post-Survey → End

Key Settings

  • AI uses the mediator role — it can broadcast messages visible to all and manage participation balance
  • triggerMode: "every-message" — the AI analyses every message to track participation and intervene as needed
  • initialSalute: true — the mediator sends an opening message to set the discussion topic
  • provider: "anthropic" with model: "claude-sonnet-4-5-20250929"
  • Control chamber line uses the same group size but without a mediator for comparison
JSON
{
  "name": "AI Mediation Study",
  "chamberlineAssignment": "random",
  "preSurvey": {
    "title": "Discussion Attitudes",
    "elements": [
      { "type": "rating", "name": "groupComfort",
        "title": "How comfortable are you in group discussions?", "rateMax": 7 }
    ]
  },
  "postSurvey": {
    "title": "Discussion Quality",
    "elements": [
      { "type": "rating", "name": "fairness",
        "title": "How fair was the distribution of speaking time?", "rateMax": 7 },
      { "type": "rating", "name": "quality",
        "title": "Rate the overall quality of the discussion", "rateMax": 7 }
    ]
  },
  "chamberLines": [
    {
      "name": "mediated",
      "chambers": [{
        "name": "Group Discussion",
        "channel": "text",
        "duration": 1200,
        "matchingTimeout": 600,
        "participants": [
          { "type": "human", "role": "communicator", "count": 4 },
          {
            "type": "ai",
            "role": "mediator",
            "count": 1,
            "provider": "anthropic",
            "model": "claude-sonnet-4-5-20250929",
            "triggerMode": "every-message",
            "initialSalute": true,
            "systemPrompt": "You are a discussion facilitator. Your goals: (1) ensure all participants contribute roughly equally, (2) redirect off-topic tangents, (3) summarise key points periodically. If a participant has been silent for 3+ messages, gently invite them to share their view. Keep your messages brief and neutral."
          }
        ]
      }]
    },
    {
      "name": "control",
      "chambers": [{
        "name": "Group Discussion",
        "channel": "text",
        "duration": 1200,
        "matchingTimeout": 600,
        "participants": [
          { "type": "human", "role": "communicator", "count": 4 }
        ]
      }]
    }
  ]
}

Concept Reference

Detailed parameter documentation for each concept, aligned with the underlying data models. Click any card to expand its full specification.

Experiment (models/Experiment.js)

The top-level container for a complete research study. Holds global settings, surveys, chamber line definitions, and bot templates. Each experiment has an owner and optional collaborators. Configure experiments visually in the Experiment Builder.

Parameters

  • name (String): Experiment name (required)
  • description (String): Detailed description
  • status (Enum): draft | active | paused | completed | archived
  • version (Number): Configuration version (default: 1)
  • globalSettings.timezone (String): Timezone for timestamps (default: UTC)
  • globalSettings.dataRetentionDays (Number): Days to retain data (default: 90)
  • globalSettings.chamberLineAssignment.method (Enum): How participants are assigned to chamber lines (random | counterbalance | survey-based)
  • globalSettings.chamberLineAssignment.surveyField (String): Survey field to use when method is survey-based
  • globalSettings.completionRedirectUrl (String): URL to redirect participants after completion
  • globalPreSurvey ([Survey]): Survey.js surveys shown before any chambers
  • globalPostSurvey ([Survey]): Survey.js surveys shown after all chambers
  • chamberlines ([ChamberLine]): Array of chamber line configurations (conditions)
  • botTemplates ([BotTemplate]): Reusable bot/AI configurations
Chamber Line (nested in Experiment)

A condition track comprising an ordered sequence of chambers. Experiments can have multiple chamber lines (e.g., treatment vs control) and participants are assigned to one based on the experiment's chamberLineAssignment.method, which can be driven by survey responses.

Parameters

  • name (String): Display name for this condition (e.g., control, treatment-a)
  • chambers ([Chamber]): Ordered array of chamber configurations
Assignment methods: random assigns uniformly, counterbalance balances across conditions, and survey-based uses a survey response field to determine assignment.
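To make the counterbalance method concrete, here is a minimal sketch (illustrative only; Carrier's real balancing logic is not shown in this guide): each new participant joins the chamber line with the fewest assignments so far.

```javascript
// Illustrative counterbalance assignment: pick the chamber line with the
// fewest assignments so far, breaking ties by declaration order.
// A sketch of the idea, not Carrier's actual implementation.
function counterbalance(chamberLines, counts) {
  let best = chamberLines[0];
  for (const line of chamberLines) {
    if ((counts[line] ?? 0) < (counts[best] ?? 0)) best = line;
  }
  counts[best] = (counts[best] ?? 0) + 1;
  return best;
}

// Six arrivals cycle evenly through three conditions.
const counts = {};
const lines = ["control", "treatment-a", "treatment-b"];
const assignments = Array.from({ length: 6 }, () => counterbalance(lines, counts));
```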
Chamber (nested in Chamber Line)

A container for one or more sequential segments that matched participants progress through together. Participants are matched once at the start and remain grouped across all segments. At runtime, a chamber becomes a chatroom.

Parameters

  • chamberId (String): Unique identifier within the experiment
  • name (String): Display name
  • communicationChannel (Enum): text | audio | video (coming soon)
  • segments ([Segment]): Ordered array of segment activities (the chamber timeline)
  • participants ([Config]): Slot definitions for humans, AI assistants, and agents
  • maxParticipants (Number): Total participant slots
  • preSurvey ([Survey]): Survey shown before this chamber
  • postSurvey ([Survey]): Survey shown after this chamber
Segment (nested in Chamber)

A single activity within a chamber timeline. Each segment has its own type, timing, transition rules, and optional agent behaviour overrides. Participants remain matched across all segments in a chamber.

Parameters

  • segmentId (String): Unique identifier within the chamber
  • name (String): Display name
  • type (Enum): Segment type (see table below)
  • order (Number): Position in the chamber timeline
  • timing.duration (Number): Duration in ms (null = unlimited)
  • timing.minDuration (Number): Minimum time (ms) before participants can advance
  • timing.warningTime (Number): Warning shown (ms) before auto-advance
  • transition.mode (Enum): auto | manual | sync | host
  • transition.countdown (Number): Countdown (ms) before auto-advance
  • transition.allowEarlyAdvance (Boolean): Whether participants can skip ahead
  • agentOverrides ([Override]): Per-segment AI/bot behaviour overrides

Segment Types

  • slide: Display static or dynamic content
  • chat: Real-time text or audio conversation (video planned)
  • selection: Multiple choice voting
  • ranking: Drag-and-drop ranking
  • input: Free text input
  • media: Audio/video playback
  • timer: Countdown or waiting period
  • task: Custom interactive task
  • survey: Embedded mini-survey
  • instruction: Markdown instructions with continue button
Transition modes: auto advances when duration expires; manual requires the participant to click; sync waits for all participants; host waits for the experimenter.
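The four transition modes can be summarised as a small decision function (an assumed simplification of the runtime behaviour described above):

```javascript
// Assumed simplification of the four transition modes; the actual
// runtime state machine may track more conditions than shown here.
function canAdvance(mode, state) {
  switch (mode) {
    case "auto":   return state.durationExpired === true;               // timer-driven
    case "manual": return state.participantClicked === true;            // self-paced
    case "sync":   return state.readyCount === state.totalParticipants; // wait for everyone
    case "host":   return state.experimenterReleased === true;          // experimenter-gated
    default: throw new Error(`unknown transition mode: ${mode}`);
  }
}
```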
Participant Types (models/Participant.js)

What an actor is in the system. There are two participant types: human (real people) and agent (automated participants). Each participant is also assigned a role that determines their capabilities within a chamber.

Common Fields (all types)

  • participantId (String): Unique identifier
  • participantType (Enum): human | agent
  • role (Enum): communicator | mediator | processor
  • displayName (String): Name shown in chat (max 30 chars)
  • avatar (String): Avatar image URL or identifier
  • connectionStatus (Enum): offline | online | disconnected
  • matchingStatus (Enum): not_ready | ready_for_matching | waiting_for_match | matched | in_chatroom | completed
  • status (Enum): active | completed | dropped_out | paused

Human (participantType: human)

A real person interacting through the browser. Identity can be user-provided or auto-generated.

  • identitySource (Enum): user_input | configured | auto_generated
  • identitySetupCompleted (Boolean): Whether the user has set their display name and avatar
  • sessionId (String): Express session ID, used for reconnection

Agent (participantType: agent)

An automated participant that can be driven by scripts (trigger-based rules), LLM (API calls to OpenAI, Anthropic, Google), or a mix of both (script triggers with an llm-driven trigger type for dynamic responses). Agents are spawned automatically per chatroom and can fill any role (communicator, mediator, processor).

LLM Configuration (typeConfig)

  • provider (Enum): openai | anthropic | google | custom
  • aiModel (String): Model identifier (e.g., gpt-4, claude-3-haiku)
  • systemPrompt (String): System prompt defining personality and behaviour
  • temperature (Number): LLM temperature (default: 0.7)
  • maxTokens (Number): Max response tokens (default: 1000)
  • contextWindow (Number): Recent messages included as context (default: 10)
  • responseDelay ({min, max}): Simulated typing delay in ms (default: 500–2000)
  • responseLogic.triggerOnFirstMessage (Boolean): Respond to the first human message (default: true)
  • responseLogic.respondToEveryMessage (Boolean): Respond to every message (default: true)
  • responseLogic.timeoutTrigger (Object): {enabled, timeoutMs, onlyOnChamberStart} — auto-respond after silence
  • responseLogic.initialSalute (Object): {enabled, message, delay} — send a greeting on chamber start
  • responseLogic.respondOnMention (Boolean): Only respond when mentioned by keyword
  • responseLogic.mentionKeywords ([String]): Keywords that count as a mention
  • chainEnabled (Boolean): Enable multi-step LLM processing chain
  • chain ([Step]): {step, model, prompt, outputVariable, processType} — pipeline steps

Script Configuration (typeConfig)

  • scriptId (String): Identifier for the script
  • scriptContent ([Trigger]): Array of trigger-response rules (see Triggers below)
  • fallbackResponse (Object): {message, delay} — default response when no trigger matches
LLM-driven agents respond in structured JSON: {content, rationale, actions}. content is the message (or null to stay silent), rationale is logged but not shown, and actions is used by mediators only (disable_chat, enable_chat, prompt_participant). Script-driven agents can also use the llm-driven trigger type to mix deterministic rules with dynamic LLM responses.
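The structured response format lends itself to a small dispatch sketch (hypothetical event names and handler; the actual runtime code differs):

```javascript
// Hypothetical sketch of consuming the structured {content, rationale, actions}
// response described above. Event names here are illustrative, not Carrier's.
function handleAgentResponse(response) {
  const events = [];
  if (response.rationale) {
    events.push({ type: "log", rationale: response.rationale }); // logged, never shown to participants
  }
  if (response.content !== null && response.content !== undefined) {
    events.push({ type: "message", text: response.content }); // null content = stay silent
  }
  for (const action of response.actions ?? []) {
    events.push({ type: "action", action }); // mediator-only: disable_chat, enable_chat, prompt_participant
  }
  return events;
}

// A silent turn: the agent logs its reasoning but posts nothing.
const events = handleAgentResponse({
  content: null,
  rationale: "No intervention needed yet.",
  actions: []
});
```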
Participant Roles (roleConfig)

What an actor does within a chamber. Each participant (regardless of type) is assigned a role that determines their capabilities and UI. Configured via roleConfig.

Communicator

The primary interactant. Can send and receive messages directly. Messages are visible to all participants.

  • canInitiate (Boolean): Can send the first message (default: true)
  • messageLimit (Number): Maximum messages allowed
  • cooldownPeriod (Number): Minimum ms between messages (default: 0)

Mediator

Observes all messages and broadcasts information to the group. Can control chat flow (disable/enable chat, prompt specific participants).

  • broadcastMode (Enum): sequential | aggregated | triggered
  • synthesizeResponses (Boolean): Whether to synthesize participant responses before broadcasting
  • synthesisPrompt (String): Prompt for synthesis
  • broadcastFrequency (Number): How often to broadcast
  • triggerKeywords ([String]): Keywords that trigger a broadcast
AI mediators can issue actions: disable_chat (mute a participant), enable_chat (unmute), and prompt_participant (send a private prompt). Release conditions control when chat is re-enabled.

Processor

Assists communicators with their input through review, generation, or real-time suggestions. Only visible to the paired communicator. Operates in phases that transition based on triggers. Managed at runtime by processorManager.js.

  • targetCommunicators ([Number]): Which communicator slots to assist
  • feedbackVisibility (Enum): private | public
  • phases ([Phase]): Ordered processing phases (see below)

Phase Definition

  • phaseId (String): Unique identifier
  • mode (Enum): review | generate | real-time-assist | disabled
  • transitionTrigger.type (Enum): on-start | message-count | time-elapsed | keyword | participant-event | manual | on-end
  • transitionTrigger.value (Mixed): Trigger-specific threshold or pattern
  • aiConfig (Object): {provider, model, systemPrompt, temperature, maxTokens}
  • contextLevel (Enum): none | partial | full
  • reviewSettings (Object): {trigger, pauseTimeout, feedbackFormat, mandatory, maxRounds}
Triggers (agentManager.js)

Trigger-response rules used by agent participants to determine when and how to respond. Defined in typeConfig.scriptContent. Each trigger has a condition, a response, and optional rate-limiting and chaining controls. Triggers are evaluated within a chatroom context and can be configured visually in the Experiment Builder.

Trigger Definition

| Field | Type | Description |
| --- | --- | --- |
| triggerId | String | Unique identifier (required) |
| enabled | Boolean | Whether this trigger is active (default: true) |
| condition.type | Enum | One of the trigger types (see table below) |
| condition.value | Mixed | Type-specific match value |
| condition.caseSensitive | Boolean | Case-sensitive matching (default: false) |
| condition.matchMode | Enum | any or all, for multi-value conditions |
| condition.senderFilter | Enum | human or specific; filters by sender |
| response.message | String | Single response message |
| response.messages | [String] | Array of messages for random selection |
| response.delay | Number | Delay before sending (ms) |
| response.probability | Number | Chance of firing (0–1, default: 1.0) |
| cooldown | Number | Minimum ms between firings (default: 0) |
| maxTriggers | Number | Max times this trigger can fire (null = unlimited) |
| priority | Number | Higher-priority triggers are evaluated first (default: 0) |
| chainTrigger | String | triggerId to fire after this one completes |
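Putting the fields together, the sketch below shows a hypothetical pair of triggers as they might appear in typeConfig.scriptContent: a keyword trigger that greets human participants and then chains to a passive follow-up. Field names follow the Trigger Definition table; the ids, keywords, and timings are illustrative.

```javascript
// Hypothetical agent trigger pair demonstrating conditions, rate limiting,
// and chaining. All ids and values are illustrative examples.
const triggers = [
  {
    triggerId: "greet",
    enabled: true,
    condition: {
      type: "keyword",
      value: ["hello", "hi"],
      caseSensitive: false,
      matchMode: "any",        // fire if any keyword matches
      senderFilter: "human"    // only react to human messages
    },
    response: {
      messages: ["Hi there!", "Hello!"],  // one is picked at random
      delay: 1500,                        // wait 1.5 s before sending
      probability: 1.0
    },
    cooldown: 10000,           // fire at most once every 10 s
    maxTriggers: 3,            // stop after three greetings
    priority: 10,
    chainTrigger: "follow-up"  // then hand off to the trigger below
  },
  {
    triggerId: "follow-up",
    condition: { type: "chain-only" },    // never fires on its own
    response: { message: "How is the task going?", delay: 4000 }
  }
];
```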

Trigger Types

| Type | Description |
| --- | --- |
| keyword | Matches keywords or phrases in message text |
| regex | Matches a regular expression pattern |
| time | Fires after a delay (ms) from chamber/segment start |
| message-count | Fires after N total messages in the chatroom |
| participant-message-count | Fires after a specific participant sends N messages (supports countMode: total, consecutive, since-reset) |
| sequence | Fires when messages match an ordered sequence |
| participant-action | Fires on participant events (join, leave, etc.) |
| after-bot-message | Fires after another bot sends a message (cross-bot chaining) |
| event-monitor | Monitors chatroom events and chains to other triggers |
| chain-only | Passive; only fires when chained from another trigger |
| llm-driven | Sends context to an LLM to generate a dynamic response |
| periodic | Fires at regular intervals (mediator-specific) |
| aggregate | Fires after collecting N messages to summarise (mediator-specific) |
| topic-detected | Fires when a topic/keyword pattern is detected (mediator-specific) |
| activity-timeout | Fires after a period of inactivity |
| participant-count | Fires based on participant count thresholds |
| discussion-phase | Fires at specific discussion phases |
Run / Session (models/Run.js)

A single participant's end-to-end journey through an experiment, tracking progress across phases, chambers, and survey responses. Supports pause, resume, and reconnection. Monitor runs in real time via the Experimenter Dashboard.

Parameters

| Field | Type | Description |
| --- | --- | --- |
| runId | String | Human-readable unique identifier |
| experimentId | ObjectId | Reference to the parent experiment |
| participantId | String | Reference to the participant |
| assignedChamberLine | String | Which chamber line this run follows |
| chamberLineAssignmentReason | String | Why assigned (random, counterbalance, survey-based) |
| currentPhase | Enum | One of: initialization, identity_setup, global_pre_survey, chamber_line_execution, global_post_survey, completed, terminated |
| currentChamberIndex | Number | Current position in the chamber sequence (default: 0) |
| status | Enum | One of: active, paused, completed, dropped, terminated |
| runPlan | Object | {chamberLineId, chambers: [{chamberId, order, status}]} |
| surveyResponses | Object | {globalPreSurvey, globalPostSurvey, chamberSurveys} |
| terminationReason | String | Reason if terminated early |
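To make the relationship between currentChamberIndex and runPlan concrete, here is a hypothetical snapshot of a run midway through a two-chamber line. All identifiers and the per-chamber status strings are illustrative assumptions, not documented values.

```javascript
// Hypothetical run snapshot: the participant has finished the first chamber
// and is currently in the second. Ids and statuses are illustrative.
const run = {
  runId: "run-2024-001",
  assignedChamberLine: "line-A",
  chamberLineAssignmentReason: "survey-based",
  currentPhase: "chamber_line_execution",
  currentChamberIndex: 1,                 // points at the chamber in progress
  status: "active",
  runPlan: {
    chamberLineId: "line-A",
    chambers: [
      { chamberId: "icebreaker", order: 0, status: "completed" },
      { chamberId: "debate",     order: 1, status: "active" }
    ]
  }
};
```

Once the "debate" chamber completes, the run would advance to the global_post_survey phase, since no chambers remain in the plan.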
Chatroom (models/Chatroom.js)

The runtime instantiation of a chamber. Created when participants are matched; holds the live participant list, chat history, and processor interactions. View chatroom contents from the Experimenter Dashboard.

Parameters

| Field | Type | Description |
| --- | --- | --- |
| chatroomId | String | Unique identifier |
| experimentId | ObjectId | Parent experiment |
| chamberLineId | String | Source chamber line |
| chamberId | String | Source chamber template |
| communicationChannel | Enum | One of: text, audio, video (coming soon) |
| status | Enum | One of: waiting, ready, active, paused, completed, closed |
| participants | [Entry] | {participantId, slot, role, joinedAt, leftAt, isActive, connectionStatus} |
| aiAssistants | [Entry] | {assistantId, name, role, systemPrompt, isActive} |
| chatHistory | [Message] | All messages with sender info, type, and timestamps |
| processorInteractions | [Record] | Review/generate/suggestion records with outcomes |
| segmentStartTimes | Map | Per-segment start timestamps |
| settings | Object | {allowParticipantChat, maxMessageLength, chatDuration, enableReactions} |
Message senderType: participant, human, system, mediator, agent, mediator_bot.
Message messageType: text, system, broadcast, bot_response, ai_response, processor_suggestion, and more.
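The two type fields above classify every chatHistory entry. The sketch below shows what one entry might look like; the senderType and messageType values come from the lists above, but the surrounding field names (senderId, content, timestamp) are assumptions for illustration.

```javascript
// Hypothetical chatHistory entry. senderType and messageType use values
// listed above; the other field names are illustrative assumptions.
const message = {
  senderId: "p-102",                       // hypothetical participant reference
  senderType: "human",
  messageType: "text",
  content: "I think option B is stronger.",
  timestamp: "2024-01-01T12:00:00.000Z"    // illustrative ISO timestamp
};
```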
Matching (matchingManager.js)

The runtime process that forms groups by matching eligible participants into a chamber. Runs on a periodic interval (every 5 seconds) and creates a chatroom when enough participants are available. Monitor the queue from the Experimenter Dashboard.

Matching Strategies

| Strategy | Description |
| --- | --- |
| Simple FIFO | First-in-first-out; matches participants in queue order |
| Chatroom-based | Matches based on the chamber's participant slot requirements (human count, roles) |
| Conditional | Matches based on pairing-group conditions (e.g., survey responses) |
Participants enter the queue by emitting ready-for-matching. The manager checks all waiting participants for each active experiment and creates a chatroom when required slots can be filled.
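The Simple FIFO strategy can be sketched in a few lines. This is an illustrative model of the idea, not Carrier's actual matchingManager.js: take waiting participants in arrival order and cut the queue into fixed-size groups, each of which would become one chatroom.

```javascript
// Illustrative FIFO matcher: earliest arrivals are grouped first.
// Not the real matchingManager.js, just a model of the strategy.
function matchFifo(queue, groupSize) {
  const groups = [];
  while (queue.length >= groupSize) {
    groups.push(queue.splice(0, groupSize)); // remove the oldest entries as one group
  }
  return groups; // each group would back one new chatroom
}

const queue = ["p1", "p2", "p3", "p4", "p5"];
const groups = matchFifo(queue, 2);
// groups → [["p1", "p2"], ["p3", "p4"]]; "p5" stays in the queue
```

The chatroom-based and conditional strategies would add filters before grouping (slot and role requirements, or survey-response checks) instead of matching on arrival order alone.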
Surveys (Survey.js format)

Carrier embeds surveys at three levels — global (experiment-level), chamber, and segment — using the Survey.js JSON format. Surveys collect self-report data, drive conditional logic (chamber line assignment, agent prompt interpolation), and appear inline within the participant's experiment flow. See the Builder guide for how to attach surveys visually.

Global Pre-Survey

Shown once at the very beginning of a participant's session, before any chamber starts. Typical uses include collecting demographics, baseline measures, or consent forms. Responses are stored in Run.surveyResponses.globalPreSurvey and are available immediately for downstream logic.

How It Works

| Step | Description |
| --- | --- |
| 1. Display | After session initialisation and identity setup, the participant is presented with all global pre-survey pages in order. |
| 2. Submit | Responses are saved to the run record via the save-survey-data socket event. |
| 3. Assignment | If the experiment uses survey-based chamber line assignment, the specified surveyField is read from these responses to determine which chamber line the participant enters. |
| 4. Interpolation | Response values can be injected into agent system prompts using {{fieldName}} template syntax, enabling personalised but controlled AI behaviour. |
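The interpolation step amounts to a template substitution over the survey responses. The helper below is an illustrative sketch of that behaviour, assuming responses are available as a flat object keyed by question name; the function name and exact semantics (unknown placeholders left untouched) are assumptions, not Carrier's implementation.

```javascript
// Sketch of {{fieldName}} interpolation into an agent system prompt.
// Assumes responses are a flat object keyed by survey question name.
function interpolatePrompt(template, responses) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, field) =>
    field in responses ? String(responses[field]) : match // leave unknown fields as-is
  );
}

const prompt = interpolatePrompt(
  "Address the participant as {{nickname}} and tailor advice to age {{age}}.",
  { nickname: "Sam", age: 34 }
);
// prompt → "Address the participant as Sam and tailor advice to age 34."
```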

Global Post-Survey

Shown once after all chambers in the participant's assigned chamber line have been completed. Typical uses include final outcome measures, debrief questionnaires, or overall satisfaction ratings. Responses are stored in Run.surveyResponses.globalPostSurvey.

How It Works

| Step | Description |
| --- | --- |
| 1. Trigger | Once the last chamber in the run plan reaches completed status, the participant's phase advances to global_post_survey. |
| 2. Display | All global post-survey pages are presented. The participant cannot skip this step. |
| 3. Submit | Responses are saved to the run record. The participant's phase then advances to completed. |
| 4. Redirect | If globalSettings.completionRedirectUrl is set, the participant is redirected (with their pid appended as a query parameter). Otherwise a completion screen is shown. |

Chamber Pre/Post Surveys

Each chamber can have its own preSurvey (shown before matching) and postSurvey (shown after the chamber ends). These are useful for manipulation checks, mood measures, or capturing immediate reactions. Responses are stored in Run.surveyResponses.chamberSurveys with the associated chamberId and surveyType.

Segment-Level Surveys

A segment of type survey can embed a mini-survey inline within the chamber timeline. This lets you capture data mid-interaction — for example, a quick rating between two chat rounds — without leaving the chamber context.

Configuration Fields

| Field | Type | Description |
| --- | --- | --- |
| globalPreSurvey | [Survey] | Array of Survey.js definitions shown before any chambers begin |
| globalPostSurvey | [Survey] | Array of Survey.js definitions shown after all chambers complete |
| chamber.preSurvey | [Survey] | Survey shown before this chamber's matching phase |
| chamber.postSurvey | [Survey] | Survey shown after this chamber ends |
| segment (type: survey) | Survey | Inline survey embedded as a segment within the chamber timeline |
All surveys use the Survey.js JSON schema. You can build surveys visually in the Experiment Builder's question editor, or import/export them as JSON files for reuse across experiments. Survey response fields referenced in chamberLineAssignment.surveyField or agent {{variable}} placeholders must match the name property of the corresponding survey question.
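For reference, a minimal pre-survey in the Survey.js JSON format might look like the sketch below. The question content is illustrative; the point is that the name property of each question ("nickname", "condition_group") is what a {{nickname}} placeholder or a chamberLineAssignment.surveyField value would have to reference.

```javascript
// Minimal Survey.js-format definition (illustrative content). Question `name`
// values are the keys referenced by surveyField and {{variable}} placeholders.
const preSurvey = {
  pages: [
    {
      elements: [
        {
          type: "text",
          name: "nickname",                 // referenced as {{nickname}} in agent prompts
          title: "What should we call you?",
          isRequired: true
        },
        {
          type: "radiogroup",
          name: "condition_group",          // could drive chamber line assignment
          title: "Which topic do you prefer to discuss?",
          choices: ["climate", "technology"]
        }
      ]
    }
  ]
};
```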

Quick Lookup

At-a-glance summary of participant types, roles, and communication channels. See Concept Reference for full specifications.

Participant Types

  • Human (type: human): Real participants who connect via the experiment link, matched dynamically from the queue.
  • AI Assistant (type: ai): LLM-powered agents (OpenAI, Anthropic), configured with a provider, model, and system prompt.
  • Agent (type: bot): Rule-based agents driven by trigger rules (keyword, regex, time-based); deterministic behaviour.

Roles

  • Communicator: Primary interactant in the chatroom. Messages are visible to all. The default role for most participants.
  • Mediator: Sends group-wide broadcasts and can manage discussion flow. Visible to all but operates at a meta level.
  • Processor: Private draft-time assistant visible only to their paired participant. Used for writing aid or suggestions.

Channels

  • Text: Real-time text chat via Socket.io. Supports all participant types and roles. Lowest bandwidth requirement.
  • Audio: Voice-only communication via WebRTC. Human participants only. Suitable for auditory credibility studies.
  • Video (coming soon): Full video conferencing via WebRTC + Peer.js. Human participants only. Not yet available; planned for a future release.