Channel Sandbox

Test and debug your AI channel configuration in real time

The Channel Sandbox (also called the Playground) is a built-in testing environment that lets you chat directly with your AI channel before deploying it. It runs the exact same pipeline that your live widget or API uses, so you can be confident that what you see in the sandbox is what your users will experience.

What is the Sandbox?

The sandbox is a chat interface embedded on each Channel's detail page. You type a message, and the AI responds using your channel's full configuration — including its system prompt, knowledge base, tools, and model settings.
Use it to:
  • Verify that your system prompt produces the right tone and behavior
  • Confirm that knowledge base retrieval returns relevant answers
  • Test tool integrations (web scraper, contact lead, custom API tools)
  • Check that security guardrails are working
  • Debug unexpected responses before going live

Accessing the Sandbox

  1. Navigate to the Channel detail page for any channel you own or manage.
  2. Scroll down to the Channel Sandbox card in the right column.
  3. Type a message and press Send.
Admin users can access the sandbox for any channel. Tenant users can only access the sandbox for channels they own.

How It Works

When you send a message in the sandbox, the following happens:
  1. Authorization — The system verifies you have access to the channel.
  2. Quota check — If the channel belongs to a tenant, the system checks whether credits are available (see Credits & Billing).
  3. Security scan — Your message is checked against a prompt injection detection system. If an injection attempt is detected, the request is rejected with a safe fallback message.
  4. Agent processing — A `SiteAgent` is instantiated with your channel's configuration:
    • The system prompt is assembled from your channel's custom prompt, a truncated knowledge base excerpt (up to 4,000 characters), behavioral guidelines, and security rules.
    • Available tools are attached based on your channel settings (sitemap crawler, web scraper, knowledge search, custom API tools, contact lead tool).
    • A canary token is embedded in the system prompt to detect if the model leaks its instructions.
  5. Response — The AI generates a reply, which is cleaned of any leaked function-call markup before being displayed.
  6. Token usage — After each response, the sandbox shows the number of prompt tokens, completion tokens, and total tokens consumed.
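The steps above can be sketched in code. This is a minimal, illustrative outline only: the helper names (`detect_injection`, `run_agent`) and the dict shapes are assumptions, not the product's actual API.

```python
import re

def handle_sandbox_message(channel, user, message,
                           detect_injection, run_agent):
    """Hedged sketch of the sandbox pipeline; helpers are injected stand-ins."""
    # 1. Authorization: the user must own or manage the channel.
    if user["id"] not in channel["managers"]:
        return {"status": 403, "error": "no access to this channel"}
    # 2. Quota check: tenant-owned channels need available credits.
    if channel.get("tenant") and channel["credits"] <= 0:
        return {"status": 403, "error": "credits exhausted"}
    # 3. Security scan: rejected messages get a safe fallback reply.
    if detect_injection(message):
        return {"status": 200, "reply": "Sorry, I can't help with that."}
    # 4-5. Agent processing, then response cleaning of leaked markup.
    reply, usage = run_agent(channel, message)
    reply = re.sub(r"<function=[^>]*>", "", reply)
    # 6. Surface token usage alongside the reply.
    return {"status": 200, "reply": reply, "usage": usage}
```

Passing the security scanner and agent as callables keeps the sketch testable without reproducing the real detection rules.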

Security

The sandbox applies multiple security layers:
  • Prompt injection detection — Messages are scored against weighted regex patterns. Messages that exceed a threshold are blocked.
  • Tool result sanitization — Data returned by tools is stripped of injection attempts, normalized for Unicode homoglyph attacks, and wrapped in explicit delimiters that tell the model to treat it as data, not instructions.
  • Canary token — A unique token is embedded in the system prompt. The AI is instructed never to reveal it. Responses are scanned for canary leaks and credential-like patterns before delivery.
  • Response cleaning — All AI responses pass through `AiResponseHelper`, which strips any `<function=...>` tags that might leak from the model.
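To make the weighted-pattern scoring and canary check concrete, here is a hedged sketch. The patterns, weights, and threshold are illustrative assumptions; the real detector's rules are not documented here.

```python
import re

# Illustrative weighted regex patterns (assumed, not the real rule set).
INJECTION_PATTERNS = [
    (re.compile(r"ignore (all |your )?previous instructions", re.I), 0.8),
    (re.compile(r"reveal (the |your )?system prompt", re.I), 0.7),
    (re.compile(r"you are now", re.I), 0.3),
]
THRESHOLD = 0.6  # assumed blocking threshold

def injection_score(message: str) -> float:
    """Sum the weights of every pattern the message matches."""
    return sum(w for pat, w in INJECTION_PATTERNS if pat.search(message))

def is_blocked(message: str) -> bool:
    return injection_score(message) >= THRESHOLD

def leaks_canary(reply: str, canary: str) -> bool:
    """Scan an outgoing response for the embedded canary token."""
    return canary in reply
```

A single strong match (or several weak ones) pushes a message over the threshold, which is why benign questions that share a few words with an attack phrase still pass.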

Credits & Billing

Sandbox messages deduct credits from the channel's allocation, just like real API requests. Here is how it works:
  • Token-based pricing — Credits are deducted based on the number of input and output tokens, multiplied by the model's per-1K-token rate. See Billing & Credits for details.
  • Credit consumption order — Expiry credits are consumed first, then main balance credits. If your plan includes a protected assistant reserve, those credits are not available for sandbox use.
  • Quota enforcement — If the channel's allocated credits are exhausted, or the tenant's available balance is depleted, the sandbox returns a 403 Forbidden response with an error message.
  • Overage tracking — If deductions exceed the tenant's available credits, overage is tracked and may be billed later depending on your plan.
Each sandbox message shows the token count used, so you can monitor credit consumption in real time.
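The deduction math above can be sketched as follows. The rate value and function shape are assumptions for illustration; your plan defines the actual per-1K-token rates.

```python
def deduct_credits(prompt_tokens, completion_tokens, rate_per_1k,
                   expiry_credits, main_credits):
    """Hedged sketch of token-based credit deduction."""
    cost = (prompt_tokens + completion_tokens) / 1000 * rate_per_1k
    # Expiry credits are consumed before the main balance.
    from_expiry = min(cost, expiry_credits)
    remaining = cost - from_expiry
    from_main = min(remaining, main_credits)
    # Anything beyond the available balance is tracked as overage.
    overage = remaining - from_main
    return {"cost": cost, "from_expiry": from_expiry,
            "from_main": from_main, "overage": overage}
```

For example, 2,000 total tokens at a rate of 2.0 credits per 1K tokens costs 4.0 credits; with 3.0 expiry credits and 0.5 main credits available, 0.5 credits would be tracked as overage.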

Conversation Continuity

The sandbox maintains conversation context within a session:
  • Each browser session generates a unique session ID that persists while the page is open.
  • Conversation history is cached server-side for 24 hours, allowing the AI to remember context across messages.
  • The agent keeps up to 6 recent messages in context (roughly 3 user/assistant turns) to stay within token limits.
  • Refreshing the page starts a new session and a fresh conversation.
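The context window behaves like a simple sliding window over the cached history. A minimal sketch, assuming history is a list of role/content messages:

```python
# The doc states the agent keeps up to 6 recent messages
# (roughly 3 user/assistant turns).
MAX_CONTEXT_MESSAGES = 6

def trim_history(history):
    """Keep only the most recent messages to stay within token limits."""
    return history[-MAX_CONTEXT_MESSAGES:]
```

Older messages fall out of the window, so the AI will not recall details from early in a long sandbox conversation.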

Model Tier Resolution

The sandbox respects your billing plan's model tier settings:
  • Each channel has a configured AI model (for example, Llama 3.3 70B).
  • Each billing plan allows access to a specific model tier (`starter`, `pro`, or `business`).
  • If the channel's configured model belongs to a tier above what your plan allows, the model is silently downgraded to the best available model within your allowed tier.
  • This ensures the sandbox always works, even if your plan doesn't support the channel's configured model.
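The downgrade logic can be sketched as a tier comparison. The model names and the per-tier catalog below are placeholder assumptions; only the tier names come from this document.

```python
# Tier ordering from the doc: starter < pro < business.
TIER_RANK = {"starter": 0, "pro": 1, "business": 2}

def resolve_model(configured_model, configured_tier, plan_tier, best_in_tier):
    """Silently downgrade when the channel's model outranks the plan's tier."""
    if TIER_RANK[configured_tier] <= TIER_RANK[plan_tier]:
        return configured_model
    # Fall back to the best available model in the allowed tier.
    return best_in_tier[plan_tier]
```

Because the fallback always resolves to some model in the allowed tier, the sandbox never fails outright over a tier mismatch.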

Tips

  • Start simple — Test with your system prompt and knowledge base before adding tools. This helps isolate issues.
  • Test edge cases — Try ambiguous questions, off-topic queries, and inputs that might trigger injection attempts to verify your security settings.
  • Watch token usage — The per-message token counts help you estimate real-world costs before going live.
  • Use the clear button — Click the chat icon in the sandbox header to reset the conversation and start fresh.
  • Check tool responses — If you have web scraper or knowledge search enabled, ask questions that require those tools to verify they return useful data.