In the Observe stage of the AI engineering lifecycle, the focus is on understanding how your deployed generative AI capabilities perform in the real world. After creating and evaluating a capability, observing its production behavior is crucial for identifying unexpected issues, tracking costs, and gathering the data needed for future improvements.

Instrument your app

Axiom offers the following approaches to capture generative AI telemetry:
Instrumentation approach | Language support | Characteristics
Axiom AI SDK             | TypeScript       | Quick setup. Minimal code changes.
Manual                   | Any              | More involved setup. Full control over instrumentation.
Axiom AI SDK is the right choice if you have a TypeScript app and want the SDK to capture and send traces with the correct semantic conventions for you. Manual instrumentation is the right choice if you prefer your own tooling or if you use a language other than TypeScript; in that case, instrument your app so that it emits traces compatible with Axiom’s AI engineering features. Both approaches emit identical attributes, so all telemetry analysis features work the same way regardless of which one you choose.
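For manual instrumentation, the core task is emitting spans whose attributes follow the OpenTelemetry GenAI semantic conventions (the `gen_ai.*` attribute namespace). As a minimal sketch, a helper that assembles such an attribute record for one LLM call might look like this. `buildGenAiAttributes` and `LlmCallInfo` are hypothetical names for illustration, and the exact attribute set Axiom expects may differ from the subset shown:

```typescript
// Sketch: build the attribute record for a manually instrumented LLM span.
// Attribute names follow the OpenTelemetry GenAI semantic conventions;
// attach the record to a span using the OTel SDK for your language.

interface LlmCallInfo {
  system: string;       // provider name, e.g. "openai"
  model: string;        // requested model name
  inputTokens: number;  // prompt token count
  outputTokens: number; // completion token count
}

// Hypothetical helper: maps call metadata to gen_ai.* span attributes.
function buildGenAiAttributes(
  call: LlmCallInfo
): Record<string, string | number> {
  return {
    "gen_ai.system": call.system,
    "gen_ai.request.model": call.model,
    "gen_ai.usage.input_tokens": call.inputTokens,
    "gen_ai.usage.output_tokens": call.outputTokens,
  };
}

// Example: attributes for a single chat completion call.
const attrs = buildGenAiAttributes({
  system: "openai",
  model: "gpt-4o-mini",
  inputTokens: 120,
  outputTokens: 48,
});
```

With the OpenTelemetry JS SDK, a record like this can be attached to an active span via `span.setAttributes(attrs)` before ending the span.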

Visualize traces in Console

Visualizing and making sense of this telemetry data is a core part of the Axiom Console experience:
  • A dedicated AI Trace Waterfall view visualizes single and multi-step LLM workflows, with clear input/output inspection at each stage.
  • A pre-built Gen AI OTel Dashboard automatically appears for any dataset receiving AI telemetry. It includes charts for tracking cost per invocation, time-to-first-token, call counts by model, and error rates.

What’s next?

After capturing and analyzing production telemetry, use these insights to improve your capability. Learn more in Iterate.