Introducing GenAI functions: Analyze AI conversations with purpose-built APL functions

Neil Jagdish Patel, CEO / Co-founder
December 12, 2025

GenAI functions are now available in APL.

When support for OpenTelemetry GenAI spans was added, a different kind of data emerged. Unlike traditional observability data with its simple scalar values, GenAI traces contain complex structures: arrays of messages, nested tool calls, and conversation metadata. These aren’t just strings or integers. They’re JSON objects that require parsing, filtering, and extraction to unlock their value.

You could analyze this data with raw APL, but you’d be jumping through hoops. That’s why GenAI functions were built: a suite of purpose-built APL functions that understand the shape of GenAI conversation data and make it easy to extract insights.

  • Purpose-built for GenAI data: Functions designed specifically for analyzing AI conversation structures, messages, tool calls, and metadata.
  • Extract conversation insights: Easily pull out user prompts, assistant responses, system prompts, and tool calls without complex JSON parsing.
  • Calculate costs and usage: Built-in functions for token estimation, cost calculation, and usage analysis across different models and customer segments.
  • Analyze conversation flow: Understand conversation turns, stop reasons, truncation, and message roles to debug and optimize AI interactions.
  • Vertical stack advantage: By owning the database, ingest, and query language, Axiom can optimize for GenAI workloads in ways others can’t.

Why GenAI functions exist

Traditional observability traces use scalar values: strings, integers, booleans. You query them with simple where clauses and basic aggregations. But GenAI traces are fundamentally different. They contain conversations: structured sequences of messages with roles, content, tool calls, and metadata.

Consider what teams need to understand about their AI applications:

  • Cost analysis: How much are enterprise customers spending? What’s the cost for customers using MCP-enabled features?
  • Conversation patterns: How many conversations stop after a tool result? How often do users abandon conversations after a system message?
  • Content analysis: What are users actually asking? Can you search user messages specifically, not just everything?
  • Debugging: Why did a conversation stop? Was the response truncated? What tool calls were made?

Without specialized functions, answering these questions means parsing JSON arrays, filtering by role, extracting nested fields, and calculating costs manually. It’s possible, but tedious. And when you’re analyzing thousands of conversations, tedious becomes impractical.

GenAI functions solve this by understanding the structure of GenAI data. They know what a conversation looks like, what roles messages have, how tool calls are structured, and how to calculate costs. You get the insights you need without the parsing overhead.
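
To make that concrete, here’s a rough before-and-after sketch. The dataset and attribute names mirror the examples later in this post, and the manual version assumes APL’s KQL-style parse_json and mv-expand; treat both as illustrations rather than reference queries.

['genai-traces']
| extend messages = parse_json(['attributes.gen_ai.input.messages'])
| mv-expand message = messages
| where tostring(message.role) == 'user'
| project user_prompt = tostring(message.content)

With GenAI functions, the role filtering and content extraction collapse into a single call:

['genai-traces']
| extend user_prompt = genai_extract_user_prompt(['attributes.gen_ai.input.messages'])
| where isnotempty(user_prompt)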

What makes GenAI functions different

Built for conversation analysis

GenAI functions understand conversations as structured data. Functions like genai_extract_user_prompt and genai_extract_assistant_response know how to find messages by role and extract their content. genai_conversation_turns counts the back-and-forth exchanges. genai_extract_tool_calls pulls out function invocations from messages.

This isn’t just convenience. It enables analysis that’s difficult or impossible with generic query functions. For example, you can extract all user messages and search for specific patterns, or analyze stop reasons to understand when users abandon conversations.
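
As a sketch of the first of those in practice, you can combine extraction with ordinary APL filtering and aggregation. The 'refund' filter is an arbitrary illustration, and the dataset and attribute names follow the queries later in this post:

['genai-traces']
| extend user_prompt = genai_extract_user_prompt(['attributes.gen_ai.input.messages'])
| extend turns = genai_conversation_turns(['attributes.gen_ai.input.messages'])
| where user_prompt contains 'refund'
| summarize conversations = count(), avg_turns = avg(turns)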

Cost analysis at any granularity

While many tools show you overall costs, GenAI functions let you slice and dice cost data however you need. Calculate costs for specific customer segments, features, or conversation types using genai_cost, genai_input_cost, and genai_output_cost.

Want to know the cost for enterprise customers using MCP? Filter by customer attributes, extract the conversation data, and calculate costs right in your query. The functions handle model pricing automatically, so you get accurate cost calculations without maintaining lookup tables.
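
As a hedged sketch, here’s how that split might look per model. The genai_input_cost and genai_output_cost signatures below are inferred from genai_cost(model, input_tokens, output_tokens) shown later in this post, so check the documentation for the exact arguments:

['genai-traces']
| extend model = ['attributes.gen_ai.response.model']
| extend input_tokens = tolong(['attributes.gen_ai.usage.input_tokens'])
| extend output_tokens = tolong(['attributes.gen_ai.usage.output_tokens'])
| extend prompt_cost = genai_input_cost(model, input_tokens)
| extend completion_cost = genai_output_cost(model, output_tokens)
| summarize total_prompt_cost = sum(prompt_cost), total_completion_cost = sum(completion_cost) by model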

Vertical stack advantage

GenAI functions demonstrate the power of owning the entire stack. Support for GenAI spans was built into the database, the UI gained views for exploring conversations, and now the query language is being extended with functions optimized for this data.

This vertical integration means the entire pipeline can be optimized for GenAI workloads. The database stores conversation data efficiently, the query engine understands conversation structures, and the functions leverage that understanding to make analysis straightforward.

Other platforms can’t easily do this. They’d need to coordinate changes across database, query engine, and UI layers, often across different teams and products. Axiom can move fast because it owns it all.

Available GenAI functions

GenAI functions provide a comprehensive suite of capabilities for analyzing AI conversations. Here’s a complete overview of all available functions:

  • genai_concat_contents: Concatenates message contents from a conversation array
  • genai_conversation_turns: Counts the number of conversation turns
  • genai_cost: Calculates the total cost for input and output tokens
  • genai_estimate_tokens: Estimates the number of tokens in a text string
  • genai_extract_assistant_response: Extracts the assistant's response from a conversation
  • genai_extract_function_results: Extracts function call results from messages
  • genai_extract_system_prompt: Extracts the system prompt from a conversation
  • genai_extract_tool_calls: Extracts tool calls from messages
  • genai_extract_user_prompt: Extracts the user prompt from a conversation
  • genai_get_content_by_index: Gets message content by index position
  • genai_get_content_by_role: Gets message content by role
  • genai_get_pricing: Gets pricing information for a specific model
  • genai_get_role: Gets the role of a message at a specific index
  • genai_has_tool_calls: Checks if messages contain tool calls
  • genai_input_cost: Calculates the cost for input tokens
  • genai_is_truncated: Checks if a response was truncated
  • genai_message_roles: Extracts all message roles from a conversation
  • genai_output_cost: Calculates the cost for output tokens
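
The functions also compose with each other and with the rest of APL. As an illustrative sketch that leans on the descriptions above (a messages array in, a number or boolean out; the exact return types are assumptions), you can check whether tool use correlates with longer prompts:

['genai-traces']
| extend user_prompt = genai_extract_user_prompt(['attributes.gen_ai.input.messages'])
| extend estimated_tokens = genai_estimate_tokens(user_prompt)
| extend uses_tools = genai_has_tool_calls(['attributes.gen_ai.input.messages'])
| summarize avg_prompt_tokens = avg(estimated_tokens) by uses_tools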

Real-world use cases

Cost analysis by customer segment

Calculate costs for specific customer groups or features:

['genai-traces']
| where ['attributes.customer.tier'] == 'enterprise'
| where ['attributes.gen_ai.operation.name'] == 'execute_tool'
| extend model = ['attributes.gen_ai.response.model']
| extend input_tokens = tolong(['attributes.gen_ai.usage.input_tokens'])
| extend output_tokens = tolong(['attributes.gen_ai.usage.output_tokens'])
| extend cost = genai_cost(model, input_tokens, output_tokens)
| summarize total_cost = sum(cost), avg_cost = avg(cost)

Understanding conversation abandonment

Analyze stop reasons to understand when users abandon conversations:

['genai-traces']
| extend stop_reason = ['attributes.gen_ai.response.finish_reasons']
| extend last_tool_calls = genai_extract_tool_calls(['attributes.gen_ai.input.messages'])
| where isnotempty(last_tool_calls)
| summarize abandoned_after_tool = count() by stop_reason

Content analysis

Extract and analyze user prompts to understand what users are asking:

['genai-traces']
| extend user_prompt = genai_extract_user_prompt(['attributes.gen_ai.input.messages'])
| where isnotempty(user_prompt)
| summarize query_count = count() by user_prompt
| top 10 by query_count

Observability for AI engineering

Traditional observability traces use scalar values that work fine with standard queries. But AI conversations require analysis: understanding flow, extracting meaning, calculating costs, and debugging interactions. These are new problems that need new solutions.

GenAI functions rise to the occasion by providing the specialized capabilities needed to understand and optimize AI applications at scale.

Learn more

Ready to start analyzing GenAI conversations? Check out the documentation:

Start extracting insights from your AI conversations today.


Interested in learning more about Axiom?

Sign up for free or contact us at sales@axiom.co to talk with one of the team about our enterprise plans.
