User feedback captures direct signals from end users about your AI capability’s performance. By linking feedback events to traces, you can correlate user perception with system behavior to understand exactly what went wrong and prioritize high-impact improvements.

How user feedback works

User feedback collection works across your server and client in the following way:
  1. Server: Your AI capability runs inside withSpan, which creates a trace. Extract traceId and spanId from the span and return them to the client alongside the AI response, as sketched after this list.
  2. Client: When users provide feedback (thumbs up/down, ratings, comments), send it to Axiom with the trace IDs. This links the feedback to the exact trace.
  3. Axiom Console: View feedback events and click through to see the corresponding AI trace to understand what happened.
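For example, the payload your server returns in step 1 might look like this (the shape is illustrative, not a required schema; the actual SDK types are covered below):
// Illustrative server response that carries trace context alongside the AI output
type ChatResponse = {
  response: string; // the AI-generated answer
  links: {
    traceId: string; // from span.spanContext().traceId
    spanId: string; // from span.spanContext().spanId
    capability: string; // for example, 'support-agent'
  };
};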

Types of feedback

Axiom AI SDK supports the following feedback types:
| Type | Description | Example |
| --- | --- | --- |
| thumb | Thumbs up (+1) or down (-1) | Response quality rating |
| number | Numeric value | Similarity score (0-1), star rating (1-5) |
| bool | Boolean true/false | “Was this helpful?” |
| text | Free-form string | User comments |
| enum | Constrained string value | Category selection |
| signal | No value; indicates an event occurred | “User copied response” |
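As an illustration, a thumbs-up event stored in your feedback dataset has roughly the following shape (field names are inferred from the examples and the query later on this page; this is not an exhaustive schema):
const exampleFeedbackEvent = {
  event: 'feedback',
  kind: 'thumb', // one of: thumb, number, bool, text, enum, signal
  value: 1, // +1 for thumbs up, -1 for thumbs down
  name: 'response-quality', // the name you give this feedback signal
  links: {
    traceId: '...', // ties the event to the AI trace
    capability: 'support-agent',
  },
};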

Prerequisites

Server-side configuration

On the server side, capture trace context with withSpan when you run your AI capability, and pass the trace and span IDs to the frontend using FeedbackLinks:
import { withSpan } from 'axiom/ai';
import type { FeedbackLinks } from 'axiom/ai/feedback';

async function runMyCapability(input: string) {
  return await withSpan({ capability: 'my-capability', step: 'generate' }, async (span) => {
    const links: FeedbackLinks = {
      traceId: span.spanContext().traceId,
      spanId: span.spanContext().spanId,
      capability: 'my-capability',
    };

    // Add your AI logic here: generateResponse is a placeholder
    const result = await generateResponse(input);

    return { result, links };
  });
}
FeedbackLinks ties feedback events to traces and lets you see what your AI capability did when a user provided feedback.
type FeedbackLinks = {
  traceId: string;      // Required: The trace ID from your AI capability
  capability: string;   // Required: The name of your capability
  spanId?: string;      // Optional: Link to a specific span
  step?: string;        // Optional: Step within the capability
  conversationId?: string; // Optional: Refers to `attributes.gen_ai.conversation_id`
  userId?: string;      // Optional: User providing feedback
};
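For example, in a multi-turn chat you might also populate the optional fields (the conversation and user identifiers below are hypothetical values from your own application):
const links: FeedbackLinks = {
  traceId: span.spanContext().traceId,
  spanId: span.spanContext().spanId,
  capability: 'my-capability',
  step: 'generate',
  conversationId: conversation.id, // maps to attributes.gen_ai.conversation_id
  userId: currentUser.id, // hypothetical session value from your app
};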

Client-side configuration

On the client side, initialize a feedback client with your Axiom credentials:
import { createFeedbackClient, Feedback } from 'axiom/ai/feedback';

const { sendFeedback } = createFeedbackClient({
  token: process.env.AXIOM_FEEDBACK_TOKEN,
  dataset: process.env.AXIOM_FEEDBACK_DATASET,
  url: process.env.AXIOM_URL
});
For browser-based feedback collection, prefix the environment variables as your framework requires. For example, use the NEXT_PUBLIC_ prefix for Next.js.
Store the following environment variables:
.env
AXIOM_FEEDBACK_TOKEN="API_TOKEN"
AXIOM_FEEDBACK_DATASET="DATASET_NAME"
AXIOM_URL="AXIOM_DOMAIN"
  • Replace API_TOKEN with the Axiom API token you have generated. For added security, store the API token in an environment variable.
  • Replace DATASET_NAME with the name of the Axiom dataset where you send your data.
  • Replace AXIOM_DOMAIN with the base domain of your edge deployment. For more information, see Edge deployments.
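In a Next.js app that sends feedback from the browser, for instance, the same configuration might look like this (the NEXT_PUBLIC_ variable names are illustrative):
.env
NEXT_PUBLIC_AXIOM_FEEDBACK_TOKEN="API_TOKEN"
NEXT_PUBLIC_AXIOM_FEEDBACK_DATASET="DATASET_NAME"
NEXT_PUBLIC_AXIOM_URL="AXIOM_DOMAIN"

const { sendFeedback } = createFeedbackClient({
  token: process.env.NEXT_PUBLIC_AXIOM_FEEDBACK_TOKEN,
  dataset: process.env.NEXT_PUBLIC_AXIOM_FEEDBACK_DATASET,
  url: process.env.NEXT_PUBLIC_AXIOM_URL,
});
Because NEXT_PUBLIC_ variables are embedded in the browser bundle, use a token scoped to ingest into the feedback dataset only.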

Send feedback

Use the Feedback helper to create feedback objects, and send them with sendFeedback. The links argument is the FeedbackLinks object your server returned:
// Thumbs up
await sendFeedback(
  links,
  Feedback.thumbUp({ name: 'response-quality' })
);

// Thumbs down with a comment
await sendFeedback(
  links,
  Feedback.thumbDown({
    name: 'response-quality',
    message: 'The answer was incorrect',
  })
);

// Using the generic thumb function
await sendFeedback(
  links,
  Feedback.thumb({
    name: 'response-quality',
    value: 'up', // or 'down'
    message: 'Very helpful!',
  })
);

Error handling

The feedback client logs errors to the JavaScript console by default. To handle errors differently, pass an onError callback:
const { sendFeedback } = createFeedbackClient(
  {
    token: process.env.AXIOM_FEEDBACK_TOKEN,
    dataset: process.env.AXIOM_FEEDBACK_DATASET,
    url: process.env.AXIOM_URL,
  },
  {
    onError: (error, context) => {
      // Log to your error tracking service
      console.error('Feedback failed:', error, context.links);
    },
  }
);

Example: Chat interface with feedback

This example shows a complete pattern for a chat interface with thumbs up/down feedback in Next.js.
The server-side code returns the trace and span IDs to the client-side code, which uses them to link the feedback to the trace. A sketch of the matching client component follows the server action:
/app/actions.ts
'use server';

import { withSpan } from 'axiom/ai';
import type { FeedbackLinks } from 'axiom/ai/feedback';

export async function chat(messages: Message[]) {
  return await withSpan({ capability: 'support-agent', step: 'respond' }, async (span) => {
    // Add your AI logic here: call OpenAI, Anthropic, etc.
    const response = await generateResponse(messages);

    // Extract trace context to pass to the client
    const links: FeedbackLinks = {
      traceId: span.spanContext().traceId,
      spanId: span.spanContext().spanId,
      capability: 'support-agent',
    };

    return { response, links };
  });
}
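A minimal sketch of that client component might look like this (the file path, component name, and NEXT_PUBLIC_ variables are illustrative):
/app/chat-message.tsx
'use client';

import { useState } from 'react';
import { createFeedbackClient, Feedback } from 'axiom/ai/feedback';
import type { FeedbackLinks } from 'axiom/ai/feedback';

const { sendFeedback } = createFeedbackClient({
  token: process.env.NEXT_PUBLIC_AXIOM_FEEDBACK_TOKEN,
  dataset: process.env.NEXT_PUBLIC_AXIOM_FEEDBACK_DATASET,
  url: process.env.NEXT_PUBLIC_AXIOM_URL,
});

export function ChatMessage({ response, links }: { response: string; links: FeedbackLinks }) {
  const [voted, setVoted] = useState<'up' | 'down' | null>(null);

  async function vote(value: 'up' | 'down') {
    setVoted(value); // update the UI immediately; send errors surface via onError
    await sendFeedback(links, Feedback.thumb({ name: 'response-quality', value }));
  }

  return (
    <div>
      <p>{response}</p>
      <button disabled={voted !== null} onClick={() => vote('up')}>Thumbs up</button>
      <button disabled={voted !== null} onClick={() => vote('down')}>Thumbs down</button>
    </div>
  );
}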

View feedback in Console

After collecting feedback, analyze it in the Axiom Console.

AI engineering tab

Use the AI engineering tab to analyze the feedback events for each capability:
  1. Click the AI engineering tab.
  2. Click Feedback in the sidebar.
  3. Select the capability from the dropdown.
  4. Optional: Click to filter the feedback events by name.
  5. Click the feedback event to see the details.
To determine what your capability did when a user gave their feedback:
  1. Click View in the Trace column to navigate to the corresponding AI trace.
  2. Analyze the trace in the waterfall view. For more information, see Traces.

Query tab

In the Query tab, query the feedback dataset like any other dataset. For example, to count the thumbs up and thumbs down for each capability:
['feedback']
| where event == 'feedback'
| summarize
    thumbs_up = countif(kind == 'thumb' and value == 1),
    thumbs_down = countif(kind == 'thumb' and value == -1)
  by capability = ['links.capability']
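To go one step further, you could compute a per-capability satisfaction rate (this assumes the same event schema as the query above):
['feedback']
| where event == 'feedback' and kind == 'thumb'
| summarize
    thumbs_up = countif(value == 1),
    thumbs_down = countif(value == -1)
  by capability = ['links.capability']
| extend satisfaction = todouble(thumbs_up) / (thumbs_up + thumbs_down)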
To determine what your capability did when a user gave their feedback:
  1. Click the feedback event in the list.
  2. In the event details panel, click the trace ID to navigate to the corresponding AI trace.
  3. Analyze the trace in the waterfall view. For more information, see Traces.

What’s next?

  • Learn how to use feedback insights to improve your capabilities in Iterate.
  • Set up evaluations to systematically test improvements in Evaluate.