Manually instrumenting your generative AI apps using language-agnostic OpenTelemetry tooling gives you full control over your instrumentation while ensuring compatibility with Axiom’s AI engineering features.
Alternatively, instrument your app with the Axiom AI SDK. For more information on instrumentation approaches, see Introduction to Observe.
For more information on sending OpenTelemetry data to Axiom, see Send OpenTelemetry data to Axiom for examples in multiple languages.

Required attributes

Axiom’s conventions for AI spans are based on version 1.37 of the OpenTelemetry semantic conventions for generative client AI spans. Axiom requires the following attributes in your data to properly recognize your spans:
  • gen_ai.operation.name identifies AI spans. It’s also a required attribute in the OpenTelemetry specification.
  • gen_ai.capability.name provides context about the specific capability being used within the AI operation.
  • gen_ai.step.name allows you to break down the AI operation into individual steps for more granular tracking.
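For example, here is a minimal sketch of setting these three attributes on a manually created span. The tracer name and the capability and step values are illustrative placeholders:

import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('my-ai-app');

tracer.startActiveSpan('chat gpt-4', (span) => {
  // The three attributes Axiom uses to recognize and group AI spans
  span.setAttributes({
    'gen_ai.operation.name': 'chat',              // identifies the span as an AI operation
    'gen_ai.capability.name': 'customer_support', // illustrative capability name
    'gen_ai.step.name': 'respond_to_greeting',    // illustrative step name
  });
  span.end();
});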

gen_ai.operation.name

Axiom currently provides a custom UI for the following values of the gen_ai.operation.name attribute:
  • chat: Chat completion
  • execute_tool: Tool execution
Other possible values include:
  • generate_content: Multimodal content generation
  • embeddings: Vector embeddings
  • create_agent: Create AI agents
  • invoke_agent: Invoke existing agents
  • text_completion: Text completion. This is a legacy value and has been deprecated by OpenAI and many other providers.
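The full examples later on this page cover chat and execute_tool. As a rough sketch, a span for one of the other operations, such as embeddings, follows the same pattern (the model and attribute values are illustrative):

import { trace, SpanKind } from '@opentelemetry/api';

const tracer = trace.getTracer('my-ai-app');

// Span name follows the "embeddings {gen_ai.request.model}" convention described under Span naming
tracer.startActiveSpan('embeddings text-embedding-3-small', { kind: SpanKind.CLIENT }, (span) => {
  span.setAttributes({
    'gen_ai.operation.name': 'embeddings',
    'gen_ai.capability.name': 'semantic_search',       // illustrative capability name
    'gen_ai.step.name': 'embed_query',                 // illustrative step name
    'gen_ai.provider.name': 'openai',
    'gen_ai.request.model': 'text-embedding-3-small',
    'gen_ai.usage.input_tokens': 12,                   // illustrative token count
  });
  span.end();
});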
For more information, see the OpenTelemetry documentation on Gen AI Attributes.

To get the most out of Axiom’s AI telemetry features, Axiom also recommends the following attributes:

axiom.gen_ai attributes

  • axiom.gen_ai.schema_url: Schema URL for the Axiom AI conventions. For example: https://axiom.co/ai/schemas/0.0.2
  • axiom.gen_ai.sdk.name: Name of the SDK. For example: my-ai-instrumentation-sdk
  • axiom.gen_ai.sdk.version: Version of the SDK. For example: 1.2.3
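A minimal sketch of setting these on an AI span alongside the gen_ai.* attributes; whether you set them per span or in shared instrumentation code is up to you, and the values below are placeholders for your own instrumentation’s name, version, and schema:

import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('my-ai-instrumentation-sdk', '1.2.3');

tracer.startActiveSpan('chat gpt-4', (span) => {
  span.setAttributes({
    'axiom.gen_ai.schema_url': 'https://axiom.co/ai/schemas/0.0.2',
    'axiom.gen_ai.sdk.name': 'my-ai-instrumentation-sdk', // placeholder name for your instrumentation
    'axiom.gen_ai.sdk.version': '1.2.3',                  // placeholder version
  });
  span.end();
});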

Chat spans

| Attribute | Type | Required | Description |
|---|---|---|---|
| gen_ai.provider.name | string | Required | Provider (openai, anthropic, aws.bedrock, etc.) |
| gen_ai.request.model | string | When available | Model requested (gpt-4, claude-3, etc.) |
| gen_ai.response.model | string | When available | Model that fulfilled the request |
| gen_ai.input.messages | Messages[] (stringified) | Recommended | Input conversation history |
| gen_ai.output.messages | Messages[] (stringified) | Recommended | Model response messages |
| gen_ai.usage.input_tokens | integer | Recommended | Input token count |
| gen_ai.usage.output_tokens | integer | Recommended | Output token count |
| gen_ai.request.choice_count | integer | When >1 | Number of completion choices requested |
| gen_ai.response.id | string | Recommended | Unique response identifier |
| gen_ai.response.finish_reasons | string[] | Recommended | Why generation stopped |
| gen_ai.conversation.id | string | When available | Conversation/session identifier |
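The chat completion example under Example trace structure below sets most of these. As a rough sketch, the response-side attributes it omits could be added like this, assuming the code runs inside that active chat span (the IDs and values are illustrative):

import { trace } from '@opentelemetry/api';

// Assumes an active chat span, such as the one created in the chat completion example below
const span = trace.getActiveSpan();
span?.setAttributes({
  'gen_ai.request.choice_count': 2,                     // only needed when more than one choice is requested
  'gen_ai.response.id': 'chatcmpl-abc123',              // illustrative response identifier from the provider
  'gen_ai.response.finish_reasons': ['stop', 'length'], // one entry per returned choice
  'gen_ai.conversation.id': 'conv_42',                  // illustrative conversation/session identifier
});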

Tool spans

For tool operations (execute_tool), include these additional attributes:
| Attribute | Type | Required | Description |
|---|---|---|---|
| gen_ai.tool.name | string | Required | Name of the executed tool |
| gen_ai.tool.call.id | string | When available | Tool call identifier |
| gen_ai.tool.type | string | When available | Tool type (function, extension, datastore) |
| gen_ai.tool.description | string | When available | Tool description |
| gen_ai.tool.arguments | string | When available | Tool arguments |
| gen_ai.tool.message | string | When available | Tool message |
For more information, see Gen AI Attributes.

Agent spans

For agent operations (create_agent, invoke_agent), include these additional attributes:
| Attribute | Type | Required | Description |
|---|---|---|---|
| gen_ai.agent.id | string | When available | Unique agent identifier |
| gen_ai.agent.name | string | When available | Human-readable agent name |
| gen_ai.agent.description | string | When available | Agent description/purpose |
| gen_ai.conversation.id | string | When available | Conversation/session identifier |
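Neither of the full examples later on this page covers agent operations. Here is a rough sketch of an invoke_agent span following the same pattern; the agent name, IDs, and capability/step values are illustrative:

import { trace, SpanKind } from '@opentelemetry/api';

const tracer = trace.getTracer('my-agent-app');

// Span name follows the "invoke_agent {gen_ai.agent.name}" convention described under Span naming
tracer.startActiveSpan('invoke_agent support_agent', { kind: SpanKind.CLIENT }, (span) => {
  span.setAttributes({
    'gen_ai.operation.name': 'invoke_agent',
    'gen_ai.capability.name': 'customer_support',                   // illustrative capability name
    'gen_ai.step.name': 'handle_ticket',                            // illustrative step name
    'gen_ai.agent.id': 'agent_123',                                 // illustrative agent identifier
    'gen_ai.agent.name': 'support_agent',
    'gen_ai.agent.description': 'Handles customer support tickets',
    'gen_ai.conversation.id': 'conv_42',                            // illustrative conversation/session identifier
  });
  // (Your agent invocation logic here...)
  span.end();
});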

Span naming

Ensure span names follow the OpenTelemetry conventions for generative AI spans. For example, the suggested span names for common values of gen_ai.operation.name are the following:
  • chat {gen_ai.request.model}
  • execute_tool {gen_ai.tool.name}
  • embeddings {gen_ai.request.model}
  • generate_content {gen_ai.request.model}
  • text_completion {gen_ai.request.model}
  • create_agent {gen_ai.agent.name}
  • invoke_agent {gen_ai.agent.name}
For more information, see the OpenTelemetry documentation on span naming.

Messages

Messages support four different roles, each with specific content formats. They follow OpenTelemetry’s structured format:

System messages

System messages set the behavior of the assistant. They typically contain instructions or context for the AI model.
{
  "role": "system",
  "parts": [
    {"type": "text", "content": "You are a helpful assistant"}
  ]
}

User messages

User messages carry input from the user to the AI model, such as questions or commands.
{
  "role": "user",
  "parts": [
    {"type": "text", "content": "Weather in Paris?"}
  ]
}

Assistant messages

Assistant messages are the AI model’s responses to the user, such as answers or tool calls.
{
  "role": "assistant", 
  "parts": [
    {"type": "text", "content": "Hi there!"},
    {"type": "tool_call", "id": "call_123", "name": "get_weather", "arguments": {"location": "Paris"}}
  ],
  "finish_reason": "stop"
}

Tool messages

Tool messages contain the results of tool calls made by the AI model, typically the tool’s output or response.
{
  "role": "tool",
  "parts": [
    {"type": "tool_call_response", "id": "call_123", "response": "rainy, 57°F"}
  ]
}

Content part types

  • text: Text content with content field
  • tool_call: Tool invocation with id, name, arguments
  • tool_call_response: Tool result with id, response
For more information, see the OpenTelemetry documentation.
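As a rough sketch, the message shape can be approximated with the following TypeScript types; the type names are hypothetical and only mirror the structure described above:

// Hypothetical types approximating the message structure described above
type TextPart = { type: 'text'; content: string };
type ToolCallPart = { type: 'tool_call'; id: string; name: string; arguments: Record<string, unknown> };
type ToolCallResponsePart = { type: 'tool_call_response'; id: string; response: string };
type MessagePart = TextPart | ToolCallPart | ToolCallResponsePart;

type Message = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  parts: MessagePart[];
  finish_reason?: string; // typically set on assistant messages
};

// Messages are JSON-stringified before being set as span attributes, for example:
const inputMessages: Message[] = [
  { role: 'user', parts: [{ type: 'text', content: 'Weather in Paris?' }] },
];
// span.setAttribute('gen_ai.input.messages', JSON.stringify(inputMessages));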

Example trace structure

Chat completion

Example of a properly structured chat completion trace:
import { trace, SpanKind, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('my-ai-app');

// Create a span for the AI operation
const result = tracer.startActiveSpan('chat gpt-4', {
  kind: SpanKind.CLIENT
}, (span) => {
  try {
    // (Your AI operation logic here...)

    span.setAttributes({
      // Set operation name
      'gen_ai.operation.name': 'chat',
      // Set capability and step
      'gen_ai.capability.name': 'customer_support',
      'gen_ai.step.name': 'respond_to_greeting',
      // Set other attributes
      'gen_ai.provider.name': 'openai',
      'gen_ai.request.model': 'gpt-4',
      'gen_ai.response.model': 'gpt-4',
      'gen_ai.usage.input_tokens': 150,
      'gen_ai.usage.output_tokens': 75,
      'gen_ai.input.messages': JSON.stringify([
        { role: 'user', parts: [{ type: 'text', content: 'Hello, how are you?' }] }
      ]),
      'gen_ai.output.messages': JSON.stringify([
        { role: 'assistant', parts: [{ type: 'text', content: 'I\'m doing well, thank you!' }], finish_reason: 'stop' }
      ])
    });
    
    return /* your result */;
  } catch (error) {
    span.recordException(error);
    span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
    throw error; // rethrow if you want upstream to see it
  } finally {
    span.end();
  }
});

Tool execution

Example of a tool execution within an agent:
import { trace, SpanKind, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('my-agent-app');

// Create a span for tool execution
const result = tracer.startActiveSpan('execute_tool get_weather', {
  kind: SpanKind.CLIENT
}, (span) => {
  try {
    // (Your tool call logic here...)

    span.setAttributes({
      // Set operation name
      'gen_ai.operation.name': 'execute_tool',
      // Set capability and step
      'gen_ai.capability.name': 'weather_assistance',
      'gen_ai.step.name': 'fetch_current_weather',
      // Set other attributes
      'gen_ai.tool.name': 'get_weather',
      'gen_ai.tool.type': 'function',
      'gen_ai.tool.call.id': 'call_abc123',
      'gen_ai.tool.arguments': JSON.stringify({ location: 'New York', units: 'celsius' }),
      'gen_ai.tool.message': JSON.stringify({ temperature: 22, condition: 'sunny', humidity: 65 }),
    });
    
    return /* your result */;
  } catch (error) {
    span.recordException(error);
    span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
    throw error; // rethrow if you want upstream to see it
  } finally {
    span.end();
  }
});

What’s next?

After sending traces with the proper semantic conventions: