Carrier Documentation
Carrier is a browser-based platform for designing, deploying, and monitoring controlled studies of mixed-agent, multimodal group interaction. This guide covers key concepts, the researcher and participant journeys, and practical configuration examples you can adapt for your own studies.
Overview
Key Concepts
Researchers use the Experiment Builder to configure study conditions, participant compositions, and interaction sequences, then monitor live sessions through the Experimenter Dashboard. The platform supports real-time text and audio communication between any mix of human participants, LLM-powered AI assistants, and rule-based scripted agents. Video conferencing is planned but not yet available.
Experiments are organised using a small set of composable building blocks. The table below summarises each concept — click any name to jump to its full parameter reference.
| Concept | Description |
|---|---|
| Experiment | Top-level container for a complete research study, holding global settings, surveys, chamber lines, and bot templates. |
| Chamber Line | A condition track comprising an ordered sequence of chambers that a participant progresses through. |
| Chamber | A container for one or more sequential segments that matched participants progress through together. |
| Segment | A single activity within a chamber timeline, with its own type, timing, and transition rules. |
| Participant Types | The three entity kinds in a chamber: human participants, LLM-powered AI assistants, and rule-based agents. |
| Roles | Functional capabilities assigned to any participant: communicator, mediator, or processor. |
| Triggers | Condition-response rules used by agent participants to determine when and how to respond. |
| Run / Session | A single participant's end-to-end journey through an experiment, tracking phase progress and survey responses. |
| Chatroom | The runtime instantiation of a chamber, created when participants are matched, holding live chat history. |
| Matching | The runtime process that groups waiting participants into chatrooms based on configurable strategies. |
| Surveys | Survey.js questionnaires attached at global, chamber, or segment level to collect data and drive conditional logic. |
Framework
Three Axes of Interaction
Carrier treats group interaction structure as an explicit experimental object. Each chamber is specified along three orthogonal axes that can be manipulated independently or combined factorially.
Who Participates
Define the interaction ecology with three participant types.
- Human participants
- AI assistants (LLM-driven)
- Scripted agents (rule-based)
What They Do
Separate participant type from functional role within the group.
- Communicator — primary interactant
- Mediator — group-facing broadcasts
- Processor — draft-time assistant
How They Interact
Configure the communication channel per chamber.
- Text chat
- Audio messaging
- Video conferencing (coming soon)
User Journeys
Researcher Journey
Researchers author and manage studies through two interfaces: the Experiment Builder for configuring studies, and the Experimenter Dashboard for live monitoring.
In the Experiment Builder, researchers define one or more chamber lines (conditions) and compose each from an ordered sequence of chambers. For each chamber, they configure the communication channel (text/audio; video planned), define participant slots (human, LLM agent) and roles, and specify the matching policy used to form groups at runtime.
Surveys can be attached globally (pre/post) and per chamber (pre/post) to capture baseline measures, manipulation checks, and immediate outcomes. Selected survey fields can be injected into parameterised prompts and agent settings to support personalised but controlled LLM behaviour with an auditable record of injected values.
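For instance, a demographic field collected in the global pre-survey can be referenced inside an agent's system prompt. The sketch below is illustrative — the exact nesting of the survey and bot-template JSON is an assumption, but the {{variable}} interpolation syntax follows this guide:

```json
{
  "globalPreSurvey": [{
    "pages": [{
      "elements": [
        { "type": "text", "name": "occupation", "title": "What is your occupation?" }
      ]
    }]
  }],
  "botTemplates": [{
    "systemPrompt": "You are chatting with a participant who works as {{occupation}}. Keep your tone professional and concise."
  }]
}
```

At runtime the value the participant entered under the question `name` (`occupation`) replaces the placeholder, and the injected value is recorded for auditability.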
After deployment, the Experimenter Dashboard provides real-time monitoring of live runs — participant status (online, waiting, in-chamber), matching queues, and chamber progress — along with intervention controls (pause, skip, end) with corresponding logs for analysis and auditability.
Runtime
Participant Journey
Participants experience a structured sequence of phases, with the platform handling assignment, matching, session continuity, and data collection automatically.
After the global pre-survey, the system assigns a chamber line and constructs a persisted run plan that defines the participant's exposure sequence. For each chamber, participants may complete a short pre-survey, then enter a matching phase where the system forms the required group based on eligibility criteria evaluated against the participant's latest survey state (required/preferred/excluded fields).
Each chamber can end on time, completion criteria, or experimenter intervention, followed by an optional post-survey to capture immediate effects. Participants proceed through the run plan until completion. Session continuity mechanisms support reconnection without losing phase fidelity, so participants can resume even after a network interruption. The resulting chatroom records all messages and interactions for later analysis.
Guide
Using the Experiment Builder
The Experiment Builder is a visual editor for designing study configurations. It uses a three-pane layout — a component library on the left, an interactive canvas in the centre, and a context-sensitive inspector on the right — so you can compose experiments entirely through drag-and-drop and inline editing.
The left Library pane contains draggable participant, agent, and segment components. The centre Canvas shows your chamber lines as horizontal rows of chamber cards. The right Inspector pane updates automatically to show the settings for whatever element is selected — experiment, chamber, segment, participant, or agent.
Click the experiment name in the header to open global settings. Here you set the experiment name and description, configure the chamber line assignment method (random, counterbalance, or survey-based), attach global pre/post surveys using the built-in Survey.js editor, and optionally set a completion redirect URL for platforms like Prolific.
Each chamber line represents an experimental condition (e.g. treatment vs control). Add lines with the button at the bottom of the canvas, then add chambers within each line. Chambers appear as cards in a horizontal timeline — you can duplicate or delete lines, and add as many chambers per line as your design requires. Double-click a chamber card to expand it and start configuring its contents.
Create human participants and agents in the Library pane. Agents can be Script-based (trigger rules) or LLM-powered (OpenAI, Anthropic). Drag them from the library into a chamber's role zones — Communicators, Mediators, or Processors — to assign both placement and role in one action. The inspector updates to show role-specific settings for the selected participant or agent.
Select an agent to open its settings in the inspector. LLM agents are configured with a provider, model, system prompt (supports {{variable}} interpolation from survey responses), temperature, max tokens, context window size, and a response delay range to simulate natural typing. Script agents use a visual trigger builder where you define condition–response rules (keyword, regex, time-based, message count, and more) with chaining, cooldowns, and probability controls.
Inside an expanded chamber, the right panel shows the segment timeline. Click Add Segment to append a new segment, then select its type: chat, slide, selection, ranking, input, media, timer, task, survey, or instruction. Each segment has its own duration, transition mode (auto, manual, sync, host), and type-specific options — for example, a chat segment lets you toggle emoji reactions, typing indicators, and message reporting. Use the move up/down buttons to reorder segments.
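A single timeline entry produced this way might look like the following sketch. Field names follow the Segment parameter reference later in this document; the values are illustrative:

```json
{
  "segmentId": "discussion-1",
  "name": "Group discussion",
  "type": "chat",
  "order": 0,
  "timing": { "duration": 300000, "minDuration": 60000, "warningTime": 30000 },
  "transition": { "mode": "auto", "countdown": 5000, "allowEarlyAdvance": false }
}
```

Durations are in milliseconds, so this segment runs for five minutes, warns participants 30 seconds before the end, and auto-advances after a 5-second countdown.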
Chat segments support control events that modify the interaction at runtime. You can disable input for specific participants (e.g. to create a listening phase), enable input to restore it, or end the section based on a condition such as elapsed time, message count, or a bot trigger. A summary strip at the bottom of the chamber preview shows all active control events across segments.
Surveys can be attached at three levels: global pre/post (in experiment settings), chamber pre/post (in chamber settings), and segment-level (using the survey segment type). All surveys use the Survey.js format. The built-in question builder lets you add questions visually, or you can import/export survey JSON files for reuse across experiments. Survey response fields can drive chamber line assignment and be injected into agent system prompts via {{variable}} syntax.
When your design is ready, click Validate to run structural checks (every chamber line needs at least one chamber, every chamber needs segments, etc.). Use Preview to review the full experiment tree in a read-only modal. Finally, click Deploy to save the experiment to the database — the builder tracks unsaved changes with a dot indicator next to the Deploy button. You can also Export the configuration as JSON for backup or version control, and Import a previously exported experiment to continue editing.
Drag & Drop
Drag participants into role zones, reorder segments, and rearrange survey questions without touching JSON.
Auto-Save
Form fields auto-save with debounced updates. An unsaved indicator warns you before leaving the page.
Import / Export
Export your entire experiment as JSON for backup or sharing, and import it back to continue editing.
Guide
Using the Experiment Dashboard
The Experiment Dashboard is the researcher's command centre for monitoring live sessions, managing participants, and exporting data. It provides real-time statistics, alerts, and intervention controls across all your experiments.
The main dashboard shows a summary of all your experiments at a glance. The top row of statistics cards displays total experiments, active participants, participants online now, matching queue size, completed sessions, and active chatrooms. Below the stats, you'll find panels for your experiments list, active sessions, matching queue, and alerts.
The experiments panel lists every study you own or collaborate on. Each card shows the experiment name, status, and your access level (Owner, Collaborator, or Unowned). From here you can copy the participant link to share with subjects, open the detail view, edit the configuration in the Experiment Builder, manage collaborators, export data, or delete the experiment.
Click into an experiment to open the detail view, which shows a searchable, filterable table of every participant. Each row displays the participant's connection status (online/offline/disconnected), matching status, current stage, message count, and session duration. The table updates in real time via Socket.io — no page refresh needed. Use the search bar to filter by participant ID and the status dropdown to narrow by active, completed, paused, or dropped_out.
Each participant row has action buttons that let you intervene in real time. Pause prevents a participant from advancing to the next stage. Resume re-activates a paused session. End terminates the session with an optional reason that is logged for your audit trail. You can also click View Details to open a modal showing the participant's full progress, timing, and current chatroom information.
Click View Chatroom on any matched participant to open the full chatroom history. The chatroom modal shows the participant roster with roles (communicator/mediator/processor), every message with sender type and timestamp, and system notifications (joins, leaves, state changes). Messages from different sender types (human, AI, agent, system, mediator) are visually distinguished.
The alerts panel surfaces issues automatically: participant disconnections, long queue waits (over 2 minutes), drop-outs, and idle experiments. Alerts are sorted by severity (errors first) so you can triage quickly. The matching queue panel shows participants currently waiting to be grouped, along with their wait time, helping you spot bottlenecks before they affect your study.
Click Export on any experiment to download your data. Choose between JSON or CSV format, and select the data scope: participants (statuses, timing, response counts), chatrooms (chat history, message counts, session info), responses (individual survey records), or all (complete dataset with experiment metadata). The exported file is ready for analysis in your preferred tool.
Real-Time Updates
Participant statuses, matching queue, and alerts refresh automatically via Socket.io and 10-second polling.
Collaboration
Invite collaborators to share monitoring access. Owners retain full control; collaborators can view and intervene.
Audit Trail
Every intervention (pause, resume, end) is logged with timestamps and reasons for post-study accountability.
Examples
Configuration Examples
Example 1 — Simple Chat with Pre/Post Surveys

Key Settings
- Single chamber line with one chamber — the simplest possible design
- Global preSurvey collects demographics before matching
- Per-chamber postSurvey measures satisfaction after the chat
- matchingTimeout of 300 seconds (5 minutes) before assigning to a fallback
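A minimal configuration for this design could be sketched as follows. This is illustrative — field names follow the Concept Reference, but the placement and unit of matchingTimeout are assumptions:

```json
{
  "name": "simple-chat-study",
  "globalPreSurvey": [{ "pages": [{ "elements": [{ "type": "text", "name": "age", "title": "Your age" }] }] }],
  "chamberlines": [{
    "name": "default",
    "chambers": [{
      "chamberId": "chat-1",
      "communicationChannel": "text",
      "maxParticipants": 2,
      "matchingTimeout": 300,
      "segments": [{ "segmentId": "s1", "type": "chat", "order": 0 }],
      "postSurvey": [{ "pages": [{ "elements": [{ "type": "rating", "name": "satisfaction", "title": "How satisfied were you?" }] }] }]
    }]
  }]
}
```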
Example 2 — AI Writing Coach (Processor)

Key Settings
- AI participant uses the processor role — it provides private writing suggestions only the human sees
- triggerMode: "on-mention" — the AI only responds when the participant explicitly asks for help
- provider: "openai" with model: "gpt-4" and a custom writing coach system prompt
- No matching needed — the AI is spawned automatically for each participant
Example 3 — Trust Study with Three Conditions

Key Settings
- chamberLineAssignment: "random" randomly assigns participants to one of three chamber lines
- Control: 3 human communicators (no AI)
- Treatment A: 2 humans + 1 disclosed AI assistant
- Treatment B: 2 humans + 1 covert agent with keyword/regex triggers
- Global pre/post surveys measure trust before and after interaction
- Bot uses triggerRules with keyword, regex, and time-based triggers
Example 4 — AI Discussion Mediator

Key Settings
- AI uses the mediator role — it can broadcast messages visible to all and manage participation balance
- triggerMode: "every-message" — the AI analyses every message to track participation and intervene as needed
- initialSalute: true — the mediator sends an opening message to set the discussion topic
- provider: "anthropic" with model: "claude-sonnet-4-5-20250929"
- Control chamber line uses the same group size but without a mediator for comparison
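The mediator for this design might be sketched as follows. Field names follow the Agent and Mediator references below; the prompt and values are illustrative:

```json
{
  "participantType": "agent",
  "role": "mediator",
  "displayName": "Moderator",
  "typeConfig": {
    "provider": "anthropic",
    "aiModel": "claude-sonnet-4-5-20250929",
    "systemPrompt": "You are a neutral discussion mediator. Encourage quieter participants to contribute and keep the group on topic.",
    "responseLogic": {
      "respondToEveryMessage": true,
      "initialSalute": { "enabled": true, "message": "Welcome! Let's begin with today's topic.", "delay": 2000 }
    }
  },
  "roleConfig": {
    "broadcastMode": "triggered",
    "synthesizeResponses": false
  }
}
```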
Reference
Concept Reference
Detailed parameter documentation for each concept, aligned with the underlying data models. Click any card to expand its full specification.
The top-level container for a complete research study. Holds global settings, surveys, chamber line definitions, and bot templates. Each experiment has an owner and optional collaborators. Configure experiments visually in the Experiment Builder.
Parameters
| Field | Type | Description |
|---|---|---|
| name | String | Experiment name (required) |
| description | String | Detailed description |
| status | Enum | `draft`, `active`, `paused`, `completed`, `archived` |
| version | Number | Configuration version (default: 1) |
| globalSettings.timezone | String | Timezone for timestamps (default: UTC) |
| globalSettings.dataRetentionDays | Number | Days to retain data (default: 90) |
| globalSettings.chamberLineAssignment.method | Enum | How participants are assigned to chamber lines: `random`, `counterbalance`, `survey-based` |
| globalSettings.chamberLineAssignment.surveyField | String | Survey field to use when method is survey-based |
| globalSettings.completionRedirectUrl | String | URL to redirect participants after completion |
| globalPreSurvey | [Survey] | Survey.js surveys shown before any chambers |
| globalPostSurvey | [Survey] | Survey.js surveys shown after all chambers |
| chamberlines | [ChamberLine] | Array of chamber line configurations (conditions) |
| botTemplates | [BotTemplate] | Reusable bot/AI configurations |
A condition track comprising an ordered sequence of chambers. Experiments can have multiple chamber lines (e.g., treatment vs control), and participants are assigned to one based on the experiment's chamberLineAssignment.method, which can be driven by survey responses.
Parameters
| Field | Type | Description |
|---|---|---|
| name | String | Display name for this condition (e.g., control, treatment-a) |
| chambers | [Chamber] | Ordered array of chamber configurations |
`random` assigns uniformly, `counterbalance` balances across conditions, and `survey-based` uses a survey response field to determine assignment.
A container for one or more sequential segments that matched participants progress through together. Participants are matched once at the start and remain grouped across all segments. At runtime, a chamber becomes a chatroom.
Parameters
| Field | Type | Description |
|---|---|---|
| chamberId | String | Unique identifier within the experiment |
| name | String | Display name |
| communicationChannel | Enum | `text`, `audio` (`video` coming soon) |
| segments | [Segment] | Ordered array of segment activities (the chamber timeline) |
| participants | [Config] | Slot definitions for humans, AI assistants, and agents |
| maxParticipants | Number | Total participant slots |
| preSurvey | [Survey] | Survey shown before this chamber |
| postSurvey | [Survey] | Survey shown after this chamber |
A single activity within a chamber timeline. Each segment has its own type, timing, transition rules, and optional agent behaviour overrides. Participants remain matched across all segments in a chamber.
Parameters
| Field | Type | Description |
|---|---|---|
| segmentId | String | Unique identifier within the chamber |
| name | String | Display name |
| type | Enum | Segment type (see table below) |
| order | Number | Position in the chamber timeline |
| timing.duration | Number | Duration in ms (null = unlimited) |
| timing.minDuration | Number | Minimum time (ms) before participants can advance |
| timing.warningTime | Number | Warning shown (ms) before auto-advance |
| transition.mode | Enum | `auto`, `manual`, `sync`, `host` |
| transition.countdown | Number | Countdown (ms) before auto-advance |
| transition.allowEarlyAdvance | Boolean | Whether participants can skip ahead |
| agentOverrides | [Override] | Per-segment AI/bot behaviour overrides |
Segment Types
| Type | Description |
|---|---|
| slide | Display static or dynamic content |
| chat | Real-time text or audio conversation (video planned) |
| selection | Multiple choice voting |
| ranking | Drag-and-drop ranking |
| input | Free text input |
| media | Audio/video playback |
| timer | Countdown or waiting period |
| task | Custom interactive task |
| survey | Embedded mini-survey |
| instruction | Markdown instructions with continue button |
`auto` advances when the duration expires; `manual` requires the participant to click; `sync` waits for all participants; `host` waits for the experimenter.
What an actor is in the system. At the data-model level there are two participant types: human (real people) and agent (automated participants, whether script-driven or LLM-driven). Each participant is also assigned a role that determines their capabilities within a chamber.
Common Fields (all types)
| Field | Type | Description |
|---|---|---|
| participantId | String | Unique identifier |
| participantType | Enum | `human`, `agent` |
| role | Enum | `communicator`, `mediator`, `processor` |
| displayName | String | Name shown in chat (max 30 chars) |
| avatar | String | Avatar image URL or identifier |
| connectionStatus | Enum | `offline`, `online`, `disconnected` |
| matchingStatus | Enum | `not_ready`, `ready_for_matching`, `waiting_for_match`, `matched`, `in_chatroom`, `completed` |
| status | Enum | `active`, `completed`, `dropped_out`, `paused` |
Human
A real person interacting through the browser. Identity can be user-provided or auto-generated.
| Field | Type | Description |
|---|---|---|
| identitySource | Enum | `user_input`, `configured`, `auto_generated` |
| identitySetupCompleted | Boolean | Whether the user has set their display name and avatar |
| sessionId | String | Express session ID, used for reconnection |
Agent

An automated participant that can be driven by scripts (trigger-based rules), an LLM (API calls to OpenAI, Anthropic, Google), or a mix of both (script triggers with an llm-driven trigger type for dynamic responses). Agents are spawned automatically per chatroom and can fill any role (communicator, mediator, processor).
LLM Configuration (typeConfig)
| Field | Type | Description |
|---|---|---|
| provider | Enum | `openai`, `anthropic`, `google`, `custom` |
| aiModel | String | Model identifier (e.g., gpt-4, claude-3-haiku) |
| systemPrompt | String | System prompt defining personality and behaviour |
| temperature | Number | LLM temperature (default: 0.7) |
| maxTokens | Number | Max response tokens (default: 1000) |
| contextWindow | Number | Recent messages included as context (default: 10) |
| responseDelay | {min, max} | Simulated typing delay in ms (default: 500–2000) |
| responseLogic.triggerOnFirstMessage | Boolean | Respond to the first human message (default: true) |
| responseLogic.respondToEveryMessage | Boolean | Respond to every message (default: true) |
| responseLogic.timeoutTrigger | Object | {enabled, timeoutMs, onlyOnChamberStart} — auto-respond after silence |
| responseLogic.initialSalute | Object | {enabled, message, delay} — send a greeting on chamber start |
| responseLogic.respondOnMention | Boolean | Only respond when mentioned by keyword |
| responseLogic.mentionKeywords | [String] | Keywords that count as a mention |
| chainEnabled | Boolean | Enable multi-step LLM processing chain |
| chain | [Step] | {step, model, prompt, outputVariable, processType} — pipeline steps |
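Putting these fields together, a typeConfig for an on-mention LLM agent might look like the following sketch (values illustrative):

```json
{
  "provider": "openai",
  "aiModel": "gpt-4",
  "systemPrompt": "You are a friendly writing coach. Offer brief, concrete suggestions.",
  "temperature": 0.7,
  "maxTokens": 1000,
  "contextWindow": 10,
  "responseDelay": { "min": 500, "max": 2000 },
  "responseLogic": {
    "triggerOnFirstMessage": false,
    "respondToEveryMessage": false,
    "respondOnMention": true,
    "mentionKeywords": ["coach", "help"]
  }
}
```

With respondToEveryMessage set to false and respondOnMention set to true, the agent stays silent unless a message contains one of the mention keywords.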
Script Configuration (typeConfig)
| Field | Type | Description |
|---|---|---|
| scriptId | String | Identifier for the script |
| scriptContent | [Trigger] | Array of trigger-response rules (see Triggers below) |
| fallbackResponse | Object | {message, delay} — default response when no trigger matches |
LLM agent responses are structured as {content, rationale, actions}: content is the message (or null to stay silent), rationale is logged but not shown, and actions is used by mediators only (disable_chat, enable_chat, prompt_participant). Script-driven agents can also use the llm-driven trigger type to mix deterministic rules with dynamic LLM responses.
What an actor does within a chamber. Each participant (regardless of type) is assigned a role that determines their capabilities and UI. Configured via roleConfig.
Communicator
The primary interactant. Can send and receive messages directly. Messages are visible to all participants.
| Field (roleConfig) | Type | Description |
|---|---|---|
| canInitiate | Boolean | Can send the first message (default: true) |
| messageLimit | Number | Maximum messages allowed |
| cooldownPeriod | Number | Minimum ms between messages (default: 0) |
Mediator
Observes all messages and broadcasts information to the group. Can control chat flow (disable/enable chat, prompt specific participants).
| Field (roleConfig) | Type | Description |
|---|---|---|
| broadcastMode | Enum | `sequential`, `aggregated`, `triggered` |
| synthesizeResponses | Boolean | Whether to synthesize participant responses before broadcasting |
| synthesisPrompt | String | Prompt for synthesis |
| broadcastFrequency | Number | How often to broadcast |
| triggerKeywords | [String] | Keywords that trigger a broadcast |
Mediator actions include disable_chat (mute a participant), enable_chat (unmute), and prompt_participant (send a private prompt). Release conditions control when chat is re-enabled.
Processor
Assists communicators with their input through review, generation, or real-time suggestions. Only visible to the paired communicator. Operates in phases that transition based on triggers. Managed at runtime by processorManager.js.
| Field (roleConfig) | Type | Description |
|---|---|---|
| targetCommunicators | [Number] | Which communicator slots to assist |
| feedbackVisibility | Enum | `private`, `public` |
| phases | [Phase] | Ordered processing phases (see below) |
Phase Definition
| Field | Type | Description |
|---|---|---|
| phaseId | String | Unique identifier |
| mode | Enum | `review`, `generate`, `real-time-assist`, `disabled` |
| transitionTrigger.type | Enum | `on-start`, `message-count`, `time-elapsed`, `keyword`, `participant-event`, `manual`, `on-end` |
| transitionTrigger.value | Mixed | Trigger-specific threshold or pattern |
| aiConfig | Object | {provider, model, systemPrompt, temperature, maxTokens} |
| contextLevel | Enum | `none`, `partial`, `full` |
| reviewSettings | Object | {trigger, pauseTimeout, feedbackFormat, mandatory, maxRounds} |
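As an illustration, a processor roleConfig with two phases might be sketched like this — the processor stays disabled for the first three messages, then switches to real-time assistance (field names follow the tables above; values are illustrative):

```json
{
  "targetCommunicators": [0],
  "feedbackVisibility": "private",
  "phases": [
    {
      "phaseId": "warmup",
      "mode": "disabled",
      "transitionTrigger": { "type": "message-count", "value": 3 }
    },
    {
      "phaseId": "assist",
      "mode": "real-time-assist",
      "transitionTrigger": { "type": "on-end" },
      "aiConfig": {
        "provider": "openai",
        "model": "gpt-4",
        "systemPrompt": "Suggest brief improvements to the participant's draft."
      },
      "contextLevel": "partial"
    }
  ]
}
```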
Trigger-response rules used by agent participants to determine when and how to respond. Defined in typeConfig.scriptContent. Each trigger has a condition, a response, and optional rate-limiting and chaining controls. Triggers are evaluated within a chatroom context and can be configured visually in the Experiment Builder.
Trigger Definition
| Field | Type | Description |
|---|---|---|
| triggerId | String | Unique identifier (required) |
| enabled | Boolean | Whether this trigger is active (default: true) |
| condition.type | Enum | One of the trigger types (see table below) |
| condition.value | Mixed | Type-specific match value |
| condition.caseSensitive | Boolean | Case-sensitive matching (default: false) |
| condition.matchMode | Enum | `any`, `all` |
| condition.senderFilter | Enum | `human`, `specific` |
| response.message | String | Single response message |
| response.messages | [String] | Array for random selection |
| response.delay | Number | Delay before sending (ms) |
| response.probability | Number | Chance of firing (0–1, default: 1.0) |
| cooldown | Number | Minimum ms between firings (default: 0) |
| maxTriggers | Number | Max times this can fire (null = unlimited) |
| priority | Number | Higher priority triggers evaluated first (default: 0) |
| chainTrigger | String | triggerId to fire after this one completes |
Trigger Types
| Type | Description |
|---|---|
| keyword | Matches keywords or phrases in message text |
| regex | Matches a regular expression pattern |
| time | Fires after a delay (ms) from chamber/segment start |
| message-count | Fires after N total messages in the chatroom |
| participant-message-count | Fires after a specific participant sends N messages (supports countMode: total, consecutive, since-reset) |
| sequence | Fires when messages match an ordered sequence |
| participant-action | Fires on participant events (join, leave, etc.) |
| after-bot-message | Fires after another bot sends a message (cross-bot chaining) |
| event-monitor | Monitors chatroom events and chains to other triggers |
| chain-only | Passive — only fires when chained from another trigger |
| llm-driven | Sends context to an LLM to generate a dynamic response |
| periodic | Fires at regular intervals (mediator-specific) |
| aggregate | Fires after collecting N messages to summarise (mediator-specific) |
| topic-detected | Fires when a topic/keyword pattern is detected (mediator-specific) |
| activity-timeout | Fires after a period of inactivity |
| participant-count | Fires based on participant count thresholds |
| discussion-phase | Fires at specific discussion phases |
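Putting the definition fields together, a single keyword trigger for a scripted agent might be sketched as follows (values illustrative):

```json
{
  "triggerId": "greeting",
  "enabled": true,
  "condition": {
    "type": "keyword",
    "value": ["hello", "hi"],
    "caseSensitive": false,
    "matchMode": "any"
  },
  "response": {
    "messages": ["Hi there!", "Hello, nice to meet you!"],
    "delay": 1500,
    "probability": 0.9
  },
  "cooldown": 30000,
  "maxTriggers": 3,
  "priority": 1
}
```

When a human message contains "hello" or "hi", the agent waits 1.5 seconds, then replies with one of the two messages 90% of the time, at most three times per chatroom, with at least 30 seconds between firings.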
A single participant's end-to-end journey through an experiment, tracking progress across phases, chambers, and survey responses. Supports pause, resume, and reconnection. Monitor runs in real time via the Experimenter Dashboard.
Parameters
| Field | Type | Description |
|---|---|---|
| runId | String | Human-readable unique identifier |
| experimentId | ObjectId | Reference to the parent experiment |
| participantId | String | Reference to the participant |
| assignedChamberLine | String | Which chamber line this run follows |
| chamberLineAssignmentReason | String | Why assigned (random, counterbalance, survey-based) |
| currentPhase | Enum | `initialization`, `identity_setup`, `global_pre_survey`, `chamber_line_execution`, `global_post_survey`, `completed`, `terminated` |
| currentChamberIndex | Number | Current position in chamber sequence (default: 0) |
| status | Enum | `active`, `paused`, `completed`, `dropped`, `terminated` |
| runPlan | Object | {chamberLineId, chambers: [{chamberId, order, status}]} |
| surveyResponses | Object | {globalPreSurvey, globalPostSurvey, chamberSurveys} |
| terminationReason | String | Reason if terminated early |
The runtime instantiation of a chamber. Created when participants are matched; holds the live participant list, chat history, and processor interactions. View chatroom contents from the Experimenter Dashboard.
Parameters
| Field | Type | Description |
|---|---|---|
| chatroomId | String | Unique identifier |
| experimentId | ObjectId | Parent experiment |
| chamberLineId | String | Source chamber line |
| chamberId | String | Source chamber template |
| communicationChannel | Enum | `text`, `audio` (`video` coming soon) |
| status | Enum | `waiting`, `ready`, `active`, `paused`, `completed`, `closed` |
| participants | [Entry] | {participantId, slot, role, joinedAt, leftAt, isActive, connectionStatus} |
| aiAssistants | [Entry] | {assistantId, name, role, systemPrompt, isActive} |
| chatHistory | [Message] | All messages with sender info, type, and timestamps |
| processorInteractions | [Record] | Review/generate/suggestion records with outcomes |
| segmentStartTimes | Map | Per-segment start timestamps |
| settings | Object | {allowParticipantChat, maxMessageLength, chatDuration, enableReactions} |
Message senderType values: participant, human, system, mediator, agent, mediator_bot. Message messageType values: text, system, broadcast, bot_response, ai_response, processor_suggestion, and more.
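A single chatHistory entry might look like the sketch below. Only senderType and messageType are documented above; the other field names are assumptions for illustration:

```json
{
  "senderId": "agent-1",
  "senderType": "agent",
  "messageType": "bot_response",
  "content": "Thanks for sharing. Could you say more about that?",
  "timestamp": "2025-01-15T10:32:07.000Z"
}
```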
The runtime process that forms groups by matching eligible participants into a chamber. Runs on a periodic interval (every 5 seconds) and creates a chatroom when enough participants are available. Monitor the queue from the Experimenter Dashboard.
Matching Strategies
| Strategy | Description |
|---|---|
| Simple FIFO | First-in-first-out — matches participants in queue order |
| Chatroom-based | Matches based on chamber participant slot requirements (human count, roles) |
| Conditional | Matches based on pairing group conditions (e.g., survey responses) |
Participants enter the queue once their matching status becomes ready_for_matching. The matching manager checks all waiting participants for each active experiment and creates a chatroom when the required slots can be filled.
Carrier embeds surveys at three levels — global (experiment-level), chamber, and segment — using the Survey.js JSON format. Surveys collect self-report data, drive conditional logic (chamber line assignment, agent prompt interpolation), and appear inline within the participant's experiment flow. See the Builder guide for how to attach surveys visually.
Global Pre-Survey

Shown once at the very beginning of a participant's session, before any chamber starts. Typical uses include collecting demographics, baseline measures, or consent forms. Responses are stored in Run.surveyResponses.globalPreSurvey and are available immediately for downstream logic.
How It Works
| Step | Description |
|---|---|
| 1. Display | After session initialisation and identity setup, the participant is presented with all global pre-survey pages in order. |
| 2. Submit | Responses are saved to the run record via the save-survey-data socket event. |
| 3. Assignment | If the experiment uses survey-based chamber line assignment, the specified surveyField is read from these responses to determine which chamber line the participant enters. |
| 4. Interpolation | Response values can be injected into agent system prompts using {{fieldName}} template syntax, enabling personalised but controlled AI behaviour. |
Global Post-Survey
Shown once after all chambers in the participant's assigned chamber line have been completed. Typical uses include final outcome measures, debrief questionnaires, or overall satisfaction ratings. Responses are stored in Run.surveyResponses.globalPostSurvey.
How It Works
| Step | Description |
|---|---|
| 1. Trigger | Once the last chamber in the run plan reaches completed status, the participant's phase advances to global_post_survey. |
| 2. Display | All global post-survey pages are presented. The participant cannot skip this step. |
| 3. Submit | Responses are saved to the run record. The participant's phase then advances to completed. |
| 4. Redirect | If globalSettings.completionRedirectUrl is set, the participant is redirected (with their pid appended as a query parameter). Otherwise a completion screen is shown. |
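For example, a Prolific-style completion redirect might be configured like this (the URL and completion code are illustrative, not real endpoints):

```json
{
  "globalSettings": {
    "completionRedirectUrl": "https://app.prolific.com/submissions/complete?cc=ABC123"
  }
}
```

Since the participant's pid is appended as a query parameter, the final redirect would carry both the completion code and the participant identifier.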
Chamber Pre/Post Surveys
Each chamber can have its own preSurvey (shown before matching) and postSurvey (shown after the chamber ends). These are useful for manipulation checks, mood measures, or capturing immediate reactions. Responses are stored in Run.surveyResponses.chamberSurveys with the associated chamberId and surveyType.
Segment-Level Surveys
A segment of type survey can embed a mini-survey inline within the chamber timeline. This lets you capture data mid-interaction — for example, a quick rating between two chat rounds — without leaving the chamber context.
Configuration Fields
| Field | Type | Description |
|---|---|---|
| globalPreSurvey | [Survey] | Array of Survey.js definitions shown before any chambers begin |
| globalPostSurvey | [Survey] | Array of Survey.js definitions shown after all chambers complete |
| chamber.preSurvey | [Survey] | Survey shown before this chamber's matching phase |
| chamber.postSurvey | [Survey] | Survey shown after this chamber ends |
| segment (type: survey) | Survey | Inline survey embedded as a segment within the chamber timeline |
Field names referenced by chamberLineAssignment.surveyField or agent {{variable}} placeholders must match the name property of the corresponding survey question.
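For example, survey-based chamber line assignment works only if the question's name matches the configured surveyField exactly. A sketch (question wording and field name are illustrative):

```json
{
  "globalPreSurvey": [{
    "pages": [{
      "elements": [{
        "type": "radiogroup",
        "name": "ai_attitude",
        "title": "How do you feel about AI assistants?",
        "choices": ["positive", "neutral", "negative"]
      }]
    }]
  }],
  "globalSettings": {
    "chamberLineAssignment": {
      "method": "survey-based",
      "surveyField": "ai_attitude"
    }
  }
}
```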
Reference
Quick Lookup
At-a-glance summary of participant types, roles, and communication channels. See Concept Reference for full specifications.
Participant Types
- Human — Real participants who connect via the experiment link. Matched dynamically based on queue.
- AI Assistant — LLM-powered agents (OpenAI, Anthropic). Configured with provider, model, and system prompt.
- Agent — Rule-based agents using trigger rules (keyword, regex, time-based). Deterministic behaviour.
Roles
- Communicator Primary interactant in the chatroom. Messages are visible to all. The default role for most participants.
- Mediator Sends group-wide broadcasts and can manage discussion flow. Visible to all but operates at a meta level.
- Processor Private draft-time assistant visible only to their paired participant. Used for writing aid or suggestions.
Channels
- Text Real-time text chat via Socket.io. Supports all participant types and roles. Lowest bandwidth requirement.
- Audio Voice-only communication via WebRTC. Human participants only. Suitable for auditory credibility studies.
- Video (coming soon) Full video conferencing via WebRTC + Peer.js. Human participants only. Not yet available — planned for a future release.