Capture user ratings and comments on AI capability outputs to identify quality issues and prioritize improvements.
User feedback captures direct signals from end users about your AI capability’s performance. By linking feedback events to traces, you can correlate user perception with system behavior to understand exactly what went wrong and prioritize high-impact improvements.
User feedback collection works across your server and client in the following way:

- **Server:** Your AI capability runs inside `withSpan`, which creates a trace. Extract `traceId` and `spanId` from the span and return them to the client alongside the AI response.
- **Client:** When users provide feedback (thumbs up/down, ratings, comments), send it to Axiom together with the trace IDs. This links the feedback to the exact trace.
- **Axiom Console:** View feedback events and click through to the corresponding AI trace to understand what happened.
1. Create a dataset in Axiom dedicated to storing feedback data. Feedback events are stored separately from trace data.
2. Create an API token in Axiom with minimal permissions, because the token is exposed in the frontend. Grant ingest-only permissions scoped to the feedback dataset you created.
3. Install the Axiom AI SDK in your project. For more information, see Quickstart.
On the server side, capture trace context with `withSpan` when you run your AI capability, and pass the trace and span IDs to the frontend using `FeedbackLinks`:

`FeedbackLinks` ties each feedback event to a trace, letting you see what your AI capability did when a user provided feedback.
```typescript
type FeedbackLinks = {
  traceId: string;         // Required: The trace ID from your AI capability
  capability: string;      // Required: The name of your capability
  spanId?: string;         // Optional: Link to a specific span
  step?: string;           // Optional: Step within the capability
  conversationId?: string; // Optional: Refers to `attributes.gen_ai.conversation_id`
  userId?: string;         // Optional: User providing feedback
};
```
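As an illustrative sketch (not the SDK's actual API), the server-side handoff might look like the following. `SpanLike` is a hypothetical stand-in mirroring the OpenTelemetry `Span.spanContext()` shape, and `respondWithTraceIds` is a made-up helper name:

```typescript
// Minimal stand-in for the OpenTelemetry span-context shape (hypothetical;
// inside withSpan you would use the real span object).
interface SpanContext {
  traceId: string;
  spanId: string;
}

interface SpanLike {
  spanContext(): SpanContext;
}

// Hypothetical helper: bundle the AI answer with the identifiers the
// client needs to link feedback back to this trace.
function respondWithTraceIds(span: SpanLike, answer: string) {
  const { traceId, spanId } = span.spanContext();
  return { answer, traceId, spanId };
}

// Demonstrate the returned shape with a fake span:
const fakeSpan: SpanLike = {
  spanContext: () => ({ traceId: 'abc123', spanId: 'def456' }),
};

console.log(respondWithTraceIds(fakeSpan, 'Hello!'));
// → { answer: 'Hello!', traceId: 'abc123', spanId: 'def456' }
```

The client stores these IDs with the rendered response so that a later feedback action can reference them.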
- Replace `API_TOKEN` with the Axiom API token you generated. For added security, store the API token in an environment variable.
- Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
- Replace `AXIOM_DOMAIN` with the base domain of your edge deployment. For more information, see Edge deployments.
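To make these placeholders concrete, here is a hedged sketch of how a feedback event could be assembled for an ingest endpoint. The URL path, the event field names, and the `buildIngestRequest` helper are assumptions modeled on Axiom's dataset ingest API and the query example later in this guide, not the SDK's internals; in practice `sendFeedback` handles this for you:

```typescript
type FeedbackLinks = {
  traceId: string;
  capability: string;
  spanId?: string;
};

type ThumbFeedback = {
  event: 'feedback';
  kind: 'thumb';
  name: string;
  value: 1 | -1;
  message?: string;
};

// Hypothetical helper: build (but do not send) the HTTP request for one
// feedback event, so the config placeholders above slot together visibly.
function buildIngestRequest(
  token: string,   // API_TOKEN
  dataset: string, // DATASET_NAME
  domain: string,  // AXIOM_DOMAIN
  links: FeedbackLinks,
  feedback: ThumbFeedback
) {
  return {
    url: `https://api.${domain}/v1/datasets/${dataset}/ingest`,
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([{ ...feedback, links }]),
  };
}

const req = buildIngestRequest(
  'xaat-example-token',
  'feedback',
  'axiom.co',
  { traceId: 'abc123', capability: 'support-chat' },
  { event: 'feedback', kind: 'thumb', name: 'response-quality', value: 1 }
);
console.log(req.url);
// → https://api.axiom.co/v1/datasets/feedback/ingest
```

Keeping request construction pure like this also makes the payload shape easy to unit-test without network access.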
Use the `Feedback` helper to create feedback objects, and send them with `sendFeedback`. The helper supports several feedback kinds: thumbs up/down, numeric, boolean, text, enum, signal, and metadata. For example, for thumbs up/down:
```typescript
// Thumbs up
await sendFeedback(
  links,
  Feedback.thumbUp({ name: 'response-quality' })
);

// Thumbs down with a comment
await sendFeedback(
  links,
  Feedback.thumbDown({
    name: 'response-quality',
    message: 'The answer was incorrect',
  })
);

// Using the generic thumb function
await sendFeedback(
  links,
  Feedback.thumb({
    name: 'response-quality',
    value: 'up', // or 'down'
    message: 'Very helpful!',
  })
);
```
Using the Query tab, query the feedback dataset like any other dataset. For example, to count the thumbs up and thumbs down for each capability:
```kusto
['feedback']
| where event == 'feedback'
| summarize
    thumbs_up = countif(kind == 'thumb' and value == 1),
    thumbs_down = countif(kind == 'thumb' and value == -1)
  by capability = ['links.capability']
```
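The `countif` conditions above imply a numeric encoding for thumb feedback. A hypothetical sketch of that mapping (up → 1, down → -1), inferred from the query rather than taken from the SDK source, can be handy when writing your own queries or client-side aggregation:

```typescript
type ThumbDirection = 'up' | 'down';

// Map a thumb direction to the numeric value the example query matches on.
// The encoding (up → 1, down → -1) is inferred from the countif conditions.
function thumbValue(direction: ThumbDirection): 1 | -1 {
  return direction === 'up' ? 1 : -1;
}

console.log(thumbValue('up'));   // → 1
console.log(thumbValue('down')); // → -1
```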
To determine what your capability did when a user gave their feedback:
Click the feedback event in the list.
In the event details panel, click the trace ID to navigate to the corresponding AI trace.
Analyze the trace in the waterfall view. For more information, see Traces.