# Contributor covenant code of conduct
Source: https://axiom.co/docs/.github/CODE_OF_CONDUCT
## Our pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[https://axiom.co/contact](https://axiom.co/contact).
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
[https://www.contributor-covenant.org/version/2/0/code\_of\_conduct.html][v2.0].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
at [https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
# Concepts
Source: https://axiom.co/docs/ai-engineering/concepts
Learn about the core concepts in Rudder: Capabilities, Collections, Graders, Evals, and more.
export const definitions = {
'Capability': 'A generative AI capability is a system that uses large language models to perform a specific task.',
'Collection': 'A curated set of reference records that are used for the development, testing, and evaluation of a capability.',
'Console': "Axiom’s intuitive web app built for exploration, visualization, and monitoring of your data.",
'Eval': 'The process of testing a capability against a collection of ground truth references using one or more graders.',
'Grader': 'A function that scores a capability’s output.',
'GroundTruth': 'The validated, expert-approved correct output for a given input.',
'EventDB': "Axiom’s robust, cost-effective, and scalable datastore specifically optimized for timestamped event data.",
'OnlineEval': 'The process of applying a grader to a capability’s live production traffic.'
};
This page defines the core terms used in the Rudder workflow. Understanding these concepts is the first step toward building robust and reliable generative AI capabilities.
## Rudder lifecycle
The concepts in Rudder are best understood within the context of the development lifecycle. While AI capabilities can become highly sophisticated, they typically start simple and evolve through a disciplined, iterative process:
1. Development starts by defining a task and prototyping a capability with a prompt to solve it.
2. The prototype is then tested against a collection of reference examples (so-called “ground truth”) to measure its quality and effectiveness using graders. This process is known as an eval.
3. Once a capability meets quality benchmarks, it's deployed. In production, graders can be applied to live traffic (online evals) to monitor performance and cost in real time.
4. Insights from production monitoring reveal edge cases and opportunities for improvement. These new examples are used to refine the capability, expand the ground truth collection, and begin the cycle anew.
## Rudder terms
### Capability
A generative AI capability is a system that uses large language models to perform a specific task by transforming inputs into desired outputs.
Capabilities exist on a spectrum of complexity. They can be a simple, single-step function (for example, classifying a support ticket's intent) or evolve into a sophisticated, multi-step agent that uses reasoning and tools to achieve a goal (for example, orchestrating a complete customer support resolution).
### Collection
A collection is a curated set of reference records used for development, testing, and evaluation of a capability. Collections serve as the test cases for prompt engineering.
### Record
Records are the individual input-output pairs within a collection. Each record consists of an input and its corresponding expected output (ground truth).
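For illustration, a record is simply an input paired with its expected output. The sketch below mirrors the `{ input, expected }` shape used by the `Eval` function in the Measure stage; the field values are made up:
```typescript
// An illustrative record: the input a capability receives and the
// expected (ground truth) output it should produce.
const record = {
  input: 'Customer message: "I cannot log in to my account."',
  expected: 'intent: account_access',
};

// A collection is a curated set of such records.
const collection = [record];
```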
### Reference
A reference is a historical example of a task completed successfully, serving as a benchmark for AI performance. References provide the input-output pairs that demonstrate the expected behavior and quality standards.
### Ground Truth
Ground truth is the validated, expert-approved correct output for a given input. It represents the gold standard that the AI capability should aspire to match.
### Annotation
Annotations are expert-provided labels, corrections, or outputs added to records to establish or refine ground truth.
### Grader
A grader is a function that scores a capability's output. It programmatically assesses quality by comparing the generated output against ground truth or other criteria, returning a score or judgment. Graders are the reusable, atomic scoring logic used in all forms of evaluation.
### Evaluator (Eval)
An evaluator, or eval, is the process of testing a capability against a collection of ground truth data using one or more graders. An eval runs the capability on every record in the collection and reports metrics like accuracy, pass-rate, and cost. Evals are typically run before deployment to benchmark performance.
### Online Eval
An online eval is the process of applying a grader to a capability's live production traffic. This provides real-time feedback on performance degradation, cost, and quality drift, enabling continuous monitoring and improvement.
### What's next?
Now that you understand the core concepts, see them in action in the Rudder [workflow](/ai-engineering/quickstart).
# Create
Source: https://axiom.co/docs/ai-engineering/create
Learn how to create and define AI capabilities using structured prompts and typed arguments with Axiom.
export const definitions = {
'Capability': 'A generative AI capability is a system that uses large language models to perform a specific task.',
'Collection': 'A curated set of reference records that are used for the development, testing, and evaluation of a capability.',
'Console': "Axiom’s intuitive web app built for exploration, visualization, and monitoring of your data.",
'Eval': 'The process of testing a capability against a collection of ground truth references using one or more graders.',
'Grader': 'A function that scores a capability’s output.',
'GroundTruth': 'The validated, expert-approved correct output for a given input.',
'EventDB': "Axiom’s robust, cost-effective, and scalable datastore specifically optimized for timestamped event data.",
'OnlineEval': 'The process of applying a grader to a capability’s live production traffic.'
};
export const Badge = ({children}) => {
  // Minimal inline badge; the exact wrapper element is illustrative.
  return <span>{children}</span>;
};
The **Create** stage is about defining a new AI capability as a structured, versionable asset in your codebase. The goal is to move away from scattered, hard-coded string prompts and toward a more disciplined and organized approach to prompt engineering.
### Defining a capability as a prompt object
In Rudder, every capability is represented by a `Prompt` object. This object serves as the single source of truth for the capability's logic, including its messages, metadata, and the schema for its arguments.
For now, these `Prompt` objects can be defined and managed as TypeScript files within your own project repository.
A typical `Prompt` object looks like this:
```typescript
// src/prompts/email-summarizer.prompt.ts
import { Type, type Prompt } from '@axiomhq/ai';
export const emailSummarizerPrompt = {
id: 'prompt_123',
name: 'Email Summarizer',
slug: 'email-summarizer',
version: '1.0.0',
environment: 'production',
messages: [
{
role: 'system',
content: 'Summarize emails concisely, highlighting action items. The user is named {{ username }}.',
},
{
role: 'user',
content: 'Please summarize this email: {{ email_content }}',
},
],
arguments: {
username: Type.String(),
email_content: Type.String(),
},
} satisfies Prompt;
```
### Strongly-typed arguments with `Template`
To ensure that prompts are used correctly, the `@axiomhq/ai` package includes a `Template` type system (exported as `Type`) for defining the schema of a prompt's `arguments`. This provides type safety, autocompletion, and a clear, self-documenting definition of what data the prompt expects.
The `arguments` object uses `Template` helpers to define the shape of the context:
```typescript
// src/prompts/report-generator.prompt.ts
import { Type, type Prompt } from '@axiomhq/ai';
export const reportGeneratorPrompt = {
// ... other properties
arguments: {
company: Type.Object({
name: Type.String(),
isActive: Type.Boolean(),
departments: Type.Array(
Type.Object({
name: Type.String(),
budget: Type.Number(),
}),
),
}),
priority: Type.Union([
Type.Literal('high'),
Type.Literal('medium'),
Type.Literal('low')
]),
},
} satisfies Prompt;
```
You can even infer the exact TypeScript type for a prompt's context using the `InferContext` utility.
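A minimal sketch of how this might look, assuming `InferContext` is exported from `@axiomhq/ai` and takes the prompt's type as its generic parameter:
```typescript
import { type InferContext } from '@axiomhq/ai';
import { reportGeneratorPrompt } from './prompts/report-generator.prompt';

// Derive the context type from the prompt's `arguments` schema.
type ReportContext = InferContext<typeof reportGeneratorPrompt>;

// The compiler now enforces the expected shape, including the literal union for `priority`.
const context: ReportContext = {
  company: {
    name: 'Axiom',
    isActive: true,
    departments: [{ name: 'Engineering', budget: 500000 }],
  },
  priority: 'high',
};
```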
### Prototyping and local testing
Before using a prompt in your application, you can test it locally using the `parse` function. This function takes a `Prompt` object and a `context` object, rendering the templated messages to verify the output. This is a quick way to ensure your templating logic is correct.
```typescript
import { parse } from '@axiomhq/ai';
import { reportGeneratorPrompt } from './prompts/report-generator.prompt';
const context = {
company: {
name: 'Axiom',
isActive: true,
departments: [
{ name: 'Engineering', budget: 500000 },
{ name: 'Marketing', budget: 150000 },
],
},
priority: 'high' as const,
};
// Render the prompt with the given context
const parsedPrompt = await parse(reportGeneratorPrompt, { context });
console.log(parsedPrompt.messages);
// [
// {
// role: 'system',
// content: 'Generate a report for Axiom.\nCompany Status: Active...'
// }
// ]
```
### Managing prompts with Axiom
To enable more advanced workflows and collaboration, Axiom is building tools to manage your prompt assets centrally.
* **Coming soon:** The `axiom` CLI will allow you to `push`, `pull`, and `list` prompt versions directly from your terminal, synchronizing your local files with the Axiom platform.
* **Coming soon:** The SDK will include methods like `axiom.prompts.create()` and `axiom.prompts.load()` for programmatic access to your managed prompts. This will be the foundation for A/B testing, version comparison, and deploying new prompts without changing your application code.
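As a rough sketch of how programmatic access might look once available (the method names come from the list above, while the exact signatures and arguments are assumptions):
```typescript
import { axiom } from './axiom-client';
import { emailSummarizerPrompt } from './prompts/email-summarizer.prompt';

// Push a locally defined prompt to Axiom (hypothetical call shape).
await axiom.prompts.create(emailSummarizerPrompt);

// Load a managed prompt by slug and version at runtime (hypothetical call shape).
const emailSummarizer = await axiom.prompts.load('email-summarizer', { version: '1.0.0' });
```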
### What's next?
Now that you've created and structured your capability, the next step is to measure its quality against a set of known good examples.
Learn more about this step of the Rudder workflow in the [Measure](/ai-engineering/measure) docs.
# Iterate
Source: https://axiom.co/docs/ai-engineering/iterate
Learn how to iterate on your AI capabilities by using production data and evaluation scores to drive improvements.
export const definitions = {
'Capability': 'A generative AI capability is a system that uses large language models to perform a specific task.',
'Collection': 'A curated set of reference records that are used for the development, testing, and evaluation of a capability.',
'Console': "Axiom’s intuitive web app built for exploration, visualization, and monitoring of your data.",
'Eval': 'The process of testing a capability against a collection of ground truth references using one or more graders.',
'Grader': 'A function that scores a capability’s output.',
'GroundTruth': 'The validated, expert-approved correct output for a given input.',
'EventDB': "Axiom’s robust, cost-effective, and scalable datastore specifically optimized for timestamped event data.",
'OnlineEval': 'The process of applying a grader to a capability’s live production traffic.'
};
export const Badge = ({children}) => {
  // Minimal inline badge; the exact wrapper element is illustrative.
  return <span>{children}</span>;
};
The iteration workflow described here is in active development. Axiom is working with design partners to shape what’s built. [Contact Axiom](https://www.axiom.co/contact) to get early access and join a small group of teams shaping these tools.
The **Iterate** stage is where the Rudder workflow comes full circle. It's the process of taking the real-world performance data from the [Observe](/ai-engineering/observe) stage and the quality benchmarks from the [Measure](/ai-engineering/measure) stage, and using them to make concrete improvements to your AI capability. This creates a cycle of continuous, data-driven enhancement.
## Identifying opportunities for improvement
Iteration begins with insight. The telemetry you gather while observing your capability in production is a goldmine for finding areas to improve. By analyzing traces in the Axiom Console, you can:
* Find real-world user inputs that caused your capability to fail or produce low-quality output.
* Identify high-cost or high-latency interactions that could be optimized.
* Discover common themes in user feedback that point to systemic weaknesses.
These examples can be used to create a new, more robust collection of ground truth data for offline testing.
## Testing changes against ground truth
**Coming soon:** Once you've created a new version of your `Prompt` object, you need to verify that it's actually an improvement. The best way to do this is to run an "offline evaluation"—testing your new version against the same ground truth collection you used in the **Measure** stage.
The Axiom Console will provide views to compare these evaluation runs side-by-side:
* **A/B Comparison Views:** See the outputs of two different prompt versions for the same input, making it easy to spot regressions or improvements.
* **Leaderboards:** Track evaluation scores across all versions of a capability to see a clear history of its quality over time.
This ensures you can validate changes with data before they ever reach your users.
## Deploying with confidence
**Coming soon:** After a new version of your capability has proven its superiority in offline tests, you can deploy it with confidence. The Rudder workflow will support a champion/challenger pattern, where you can deploy a new "challenger" version to run in shadow mode against a portion of production traffic. This allows for a final validation on real-world data without impacting the user experience.
Once you're satisfied with the challenger's performance, you can promote it to become the new "champion" using the SDK's `deploy` function.
```typescript
import { axiom } from './axiom-client';
// Promote a new version of a prompt to the production environment
await axiom.prompts.deploy('prompt_123', {
environment: 'production',
version: '1.1.0',
});
```
## What's next?
By completing the Iterate stage, you have closed the loop. Your improved capability is now in production, and you can return to the **Observe** stage to monitor its performance and identify the next opportunity for improvement.
This cycle of creating, measuring, observing, and iterating is the core of the Rudder workflow, enabling you to build better AI systems, backed by data.
# Measure
Source: https://axiom.co/docs/ai-engineering/measure
Learn how to measure the quality of your AI capabilities by running evaluations against ground truth data.
export const definitions = {
'Capability': 'A generative AI capability is a system that uses large language models to perform a specific task.',
'Collection': 'A curated set of reference records that are used for the development, testing, and evaluation of a capability.',
'Console': "Axiom’s intuitive web app built for exploration, visualization, and monitoring of your data.",
'Eval': 'The process of testing a capability against a collection of ground truth references using one or more graders.',
'Grader': 'A function that scores a capability’s output.',
'GroundTruth': 'The validated, expert-approved correct output for a given input.',
'EventDB': "Axiom’s robust, cost-effective, and scalable datastore specifically optimized for timestamped event data.",
'OnlineEval': 'The process of applying a grader to a capability’s live production traffic.'
};
export const Badge = ({children}) => {
  // Minimal inline badge; the exact wrapper element is illustrative.
  return <span>{children}</span>;
};
The evaluation framework described here is in active development. Axiom is working with design partners to shape what’s built. [Contact Axiom](https://www.axiom.co/contact) to get early access and join a small group of teams shaping these tools.
The **Measure** stage is where you quantify the quality and effectiveness of your AI capability. Instead of relying on anecdotal checks, this stage uses a systematic process called an eval to score your capability's performance against a known set of correct examples (ground truth). This provides a data-driven benchmark to ensure a capability is ready for production and to track its quality over time.
## The `Eval` function
**Coming soon:** The primary tool for the Measure stage is the `Eval` function, which will be available in the `@axiomhq/ai/evals` package. It provides a simple, declarative way to define a test suite for your capability directly in your codebase.
An `Eval` is structured around a few key parameters:
* `data`: An async function that returns your `collection` of `{ input, expected }` pairs, which serve as your ground truth.
* `task`: The function that executes your AI capability, taking an `input` and producing an `output`.
* `scorers`: An array of `grader` functions that score the `output` against the `expected` value.
* `threshold`: A score between 0 and 1 that determines the pass/fail condition for the evaluation.
Here is an example of a complete evaluation suite:
```typescript
// evals/text-match.eval.ts
import { Levenshtein } from 'autoevals';
import { Eval } from '@axiomhq/ai';
Eval('text-match-eval', {
// 1. Your ground truth dataset
data: async () => {
return [
{
input: 'test',
expected: 'hi, test!',
},
{
input: 'foobar',
expected: 'hello, foobar!',
},
];
},
// 2. The task that runs your capability
task: async (input: string) => {
return `hi, ${input}!`;
},
// 3. The scorers that grade the output
scorers: [Levenshtein],
// 4. The pass/fail threshold for the scores
threshold: 1,
});
```
## Grading with scorers
**Coming soon:** A grader is a function that scores a capability's output. Axiom will provide a library of built-in scorers for common tasks (e.g., checking for semantic similarity, factual correctness, or JSON validity). You can also provide your own custom functions to measure domain-specific logic. Each scorer receives the `input`, the generated `output`, and the `expected` value, and must return a score.
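For example, a custom scorer might look like the following minimal sketch. The exact signature is an assumption based on the description above: scorers in the style of `autoevals` receive the `input`, `output`, and `expected` values and return an object with a `score` between 0 and 1.
```typescript
// A hypothetical domain-specific scorer: checks that the output is valid JSON
// and has the same top-level keys as the expected value.
type ScorerArgs = { input: string; output: string; expected: string };

export function JsonKeysMatch({ output, expected }: ScorerArgs) {
  try {
    const got = Object.keys(JSON.parse(output)).sort();
    const want = Object.keys(JSON.parse(expected)).sort();
    const matches = got.length === want.length && got.every((key, i) => key === want[i]);
    return { name: 'JsonKeysMatch', score: matches ? 1 : 0 };
  } catch {
    // Either side was not valid JSON, so the structures cannot match.
    return { name: 'JsonKeysMatch', score: 0 };
  }
}
```
A scorer like this could then be passed alongside built-in scorers, for example `scorers: [Levenshtein, JsonKeysMatch]`.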
## Running evaluations
**Coming soon:** You will run your evaluation suites from your terminal using the `axiom` CLI.
```bash
axiom run evals/text-match.eval.ts
```
This command will execute the specified test file using `vitest` in the background. Note that `vitest` will be a peer dependency for this functionality.
## Analyzing results in the console
**Coming soon:** When you run an eval, the Axiom SDK captures a detailed OpenTelemetry trace for the entire run. This includes parent spans for the evaluation suite and child spans for each individual test case, task execution, and scorer result. These traces are enriched with `eval.*` attributes, allowing you to deeply analyze results in the Axiom Console.
The Console will feature leaderboards and comparison views to track score progression across different versions of a capability, helping you verify that your changes are leading to measurable improvements.
## What's next?
Once your capability meets your quality benchmarks in the Measure stage, it's ready to be deployed. The next step is to monitor its performance with real-world traffic.
Learn more about this step of the Rudder workflow in the [Observe](/ai-engineering/observe) docs.
# Observe
Source: https://axiom.co/docs/ai-engineering/observe
Learn how to observe your deployed AI capabilities in production using Axiom's AI SDK to capture telemetry.
export const Badge = ({children}) => {
  // Minimal inline badge; the exact wrapper element is illustrative.
  return <span>{children}</span>;
};
The **Observe** stage is about understanding how your deployed generative AI capabilities perform in the real world. After creating and evaluating a capability, observing its production behavior is crucial for identifying unexpected issues, tracking costs, and gathering the data needed for future improvements.
## Capturing telemetry with the `@axiomhq/ai` SDK
The foundation of the Observe stage is Axiom's SDK, which integrates with your app to capture detailed OpenTelemetry traces for every AI interaction.
The initial release of `@axiomhq/ai` is focused on providing deep integration with TypeScript applications, particularly those using Vercel's AI SDK to interact with frontier models.
### Instrumenting AI SDK calls
The easiest way to get started is by wrapping your existing AI model client. The `@axiomhq/ai` package provides helper functions for popular libraries like Vercel's AI SDK.
The `wrapAISDKModel` function takes an existing AI model object and returns an instrumented version that will automatically generate trace data for every call.
```typescript
// src/shared/gemini.ts
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { wrapAISDKModel } from '@axiomhq/ai';
const geminiProvider = createGoogleGenerativeAI({
apiKey: process.env.GEMINI_API_KEY,
});
// Wrap the model to enable automatic tracing
export const geminiFlash = wrapAISDKModel(geminiProvider('gemini-2.5-flash-preview-04-17'));
```
### Adding context with `withSpan`
While `wrapAISDKModel` handles the automatic instrumentation, the `withSpan` function allows you to add crucial business context to your traces. It creates a parent span around your LLM call and attaches metadata about the `capability` and `step` being executed.
```typescript
// src/app/page.tsx
import { withSpan } from '@axiomhq/ai';
import { generateText } from 'ai';
import { geminiFlash } from '@/shared/gemini';
export default async function Page() {
const userId = 123;
// Use withSpan to define the capability and step
const res = await withSpan({ capability: 'get_capital', step: 'generate_answer' }, (span) => {
// You have access to the OTel span to add custom attributes
span.setAttribute('user_id', userId);
return generateText({
model: geminiFlash, // Use the wrapped model
messages: [
{
role: 'user',
content: 'What is the capital of Spain?',
},
],
});
});
// Render the response text (the wrapping element is illustrative).
return <p>{res.text}</p>;
}
```
## Setting up instrumentation
The Axiom AI SDK is built on the OpenTelemetry standard. To send traces, you need to configure a Node.js or edge-compatible tracer that exports data to Axiom.
### Configuring the tracer
You must configure an OTLP trace exporter pointing to your Axiom instance. This is typically done in a dedicated instrumentation file that is loaded before your application starts.
```typescript
// src/instrumentation.node.ts
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { initAxiomAI, tracer } from '@axiomhq/ai';
// Configure the SDK to export traces to Axiom
const sdk = new NodeSDK({
resource: new Resource({
[ATTR_SERVICE_NAME]: 'nextjs-otel-example',
}),
spanProcessor: new SimpleSpanProcessor(
new OTLPTraceExporter({
url: `https://api.axiom.co/v1/traces`,
headers: {
Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
'X-Axiom-Dataset': process.env.AXIOM_DATASET!,
},
}),
),
});
sdk.start();
// Initialize the Axiom AI SDK with the tracer
initAxiomAI({ tracer });
```
Your Axiom credentials (`AXIOM_TOKEN` and `AXIOM_DATASET`) should be set as environment variables.
## Understanding your AI telemetry
Once instrumented, every LLM call will send a detailed span to your Axiom dataset. These spans are enriched with standardized `gen_ai.*` attributes that make your AI interactions easy to query and analyze.
Key attributes include:
* `gen_ai.capability.name`: The high-level capability name you defined in `withSpan`.
* `gen_ai.step.name`: The specific step within the capability.
* `gen_ai.request.model`: The model requested for the completion.
* `gen_ai.response.model`: The model that actually fulfilled the request.
* `gen_ai.usage.input_tokens`: The number of tokens in the prompt.
* `gen_ai.usage.output_tokens`: The number of tokens in the generated response.
* `gen_ai.prompt`: The full, rendered prompt or message history sent to the model (as a JSON string).
* `gen_ai.completion`: The full response from the model, including tool calls (as a JSON string).
* `gen_ai.response.finish_reasons`: The reason the model stopped generating tokens (e.g., `stop`, `tool-calls`).
## Visualizing traces in the console
Visualizing and making sense of this telemetry data is a core part of the Axiom Console experience.
* **Coming soon:** A dedicated **AI Trace Waterfall** view will visualize single and multi-step LLM workflows, with clear input/output inspection at each stage.
* **Coming soon:** A pre-built **Gen AI OTel Dashboard** will automatically appear for any dataset receiving AI telemetry. It will feature elements for tracking cost per invocation, time-to-first-token, call counts by model, and error rates.
## What's next?
Now that you are capturing and analyzing production telemetry, the next step is to use these insights to improve your capability.
Learn more in the [Iterate](/ai-engineering/iterate) page.
# Overview
Source: https://axiom.co/docs/ai-engineering/overview
Introduction to Rudder, Axiom's methodology for designing, evaluating, monitoring, and iterating generative-AI capabilities.
export const definitions = {
'Capability': 'A generative AI capability is a system that uses large language models to perform a specific task.',
'Collection': 'A curated set of reference records that are used for the development, testing, and evaluation of a capability.',
'Console': "Axiom’s intuitive web app built for exploration, visualization, and monitoring of your data.",
'Eval': 'The process of testing a capability against a collection of ground truth references using one or more graders.',
'Grader': 'A function that scores a capability’s output.',
'GroundTruth': 'The validated, expert-approved correct output for a given input.',
'EventDB': "Axiom’s robust, cost-effective, and scalable datastore specifically optimized for timestamped event data.",
'OnlineEval': 'The process of applying a grader to a capability’s live production traffic.'
};
Generative AI development is fundamentally different from traditional software engineering. Its outputs are probabilistic, not deterministic; the same input can produce different results. This variability makes it challenging to guarantee quality and predict failure modes without the right infrastructure.
Axiom’s data intelligence platform is ideally suited to address the unique challenges of AI engineering. Building on the foundational EventDB and Console components, Axiom provides an essential toolkit for the next generation of software builders.
This section of the documentation introduces the concepts and workflows for building production-ready AI capabilities with confidence. The goal is to help developers move from experimental "vibe coding" to building increasingly sophisticated systems with observable outcomes.
### Rudder workflow
Axiom provides a structured, iterative workflow—the Rudder method—for developing AI capabilities. The workflow is designed to build statistical confidence in systems that are not entirely predictable, and is grounded in systematic evaluation and continuous improvement, from initial prototype to production monitoring.
The core stages are:
* **Create**: Define a new AI capability, prototype it with various models, and gather reference examples to establish ground truth.
* **Measure**: Systematically evaluate the capability's performance against reference data using custom graders to score for accuracy, quality, and cost.
* **Observe**: Cultivate the capability in production by collecting rich telemetry on every execution. Use online evaluations to monitor for performance degradation and discover edge cases.
* **Iterate**: Use insights from production to refine prompts, augment reference datasets, and improve the capability over time.
### What's next?
* To understand the key terms used in Rudder, see the [Concepts](/ai-engineering/concepts) page.
* To start building, follow the [Quickstart](/ai-engineering/quickstart) page.
# Quickstart
Source: https://axiom.co/docs/ai-engineering/quickstart
Install and configure the Axiom AI SDK to begin capturing telemetry from your generative AI applications.
This guide provides the steps to install and configure the [`@axiomhq/ai`](https://github.com/axiomhq/ai) SDK. Once configured, you can follow the Rudder workflow to create, measure, observe, and iterate on your AI capabilities.
## Prerequisites
Before you begin, ensure you have the following:
* An Axiom **account**. Create one [here](https://www.axiom.co/register).
* An Axiom **dataset**. Create one [here](https://app.axiom.co/datasets).
* An Axiom **API token**. Create one [here](https://app.axiom.co/settings/api-tokens).
## Installation
Install the Axiom AI SDK into your TypeScript project using your preferred package manager.
```bash pnpm
pnpm i @axiomhq/ai
```
```bash npm
npm i @axiomhq/ai
```
```bash yarn
yarn add @axiomhq/ai
```
```bash bun
bun add @axiomhq/ai
```
The SDK is open source. You can view the source code and examples in the [axiomhq/ai GitHub repository](https://github.com/axiomhq/ai).
The `@axiomhq/ai` package also includes the `axiom` command-line interface (CLI) for managing your AI assets, which will be used in later stages of the Rudder workflow.
## Configuration
The Axiom AI SDK is built on the OpenTelemetry standard and requires a configured tracer to send data to Axiom. This is typically done in a dedicated instrumentation file that is loaded before the rest of your application.
Here is a standard configuration for a Node.js environment:
```typescript
// src/instrumentation.ts
import 'dotenv/config'; // Make sure to load environment variables
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { initAxiomAI, tracer } from '@axiomhq/ai';
// Configure the NodeSDK to export traces to your Axiom dataset
const sdk = new NodeSDK({
resource: new Resource({
[ATTR_SERVICE_NAME]: 'my-ai-app', // Replace with your service name
}),
spanProcessor: new SimpleSpanProcessor(
new OTLPTraceExporter({
url: `https://api.axiom.co/v1/traces`,
headers: {
Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
'X-Axiom-Dataset': process.env.AXIOM_DATASET!,
},
}),
),
});
// Start the SDK
sdk.start();
// Initialize the Axiom AI SDK with the configured tracer
initAxiomAI({ tracer });
```
## Environment variables
Your Axiom credentials and any frontier model API keys should be stored as environment variables. Create a `.env` file in the root of your project:
```bash
# Axiom Credentials
AXIOM_TOKEN=""
AXIOM_DATASET=""
# Frontier Model API Keys
OPENAI_API_KEY=""
GEMINI_API_KEY=""
```
## What's next?
Now that your application is configured to send telemetry to Axiom, the next step is to start instrumenting your AI model calls.
Learn more about that in the [Observe](/ai-engineering/observe) page of the Rudder workflow.
# arg_max
Source: https://axiom.co/docs/apl/aggregation-function/arg-max
This page explains how to use the arg_max aggregation in APL.
The `arg_max` aggregation in APL helps you identify the row with the maximum value for an expression and return additional fields from that record. Use `arg_max` when you want to determine key details associated with a row where the expression evaluates to the maximum value. If you group your data, `arg_max` finds the row within each group where a particular expression evaluates to the maximum value.
This aggregation is particularly useful in scenarios like the following:
* Pinpoint the slowest HTTP requests in log data and retrieve associated details (like URL, status code, and user agent) for the same row.
* Identify the longest span durations in OpenTelemetry traces with additional context (like span name, trace ID, and attributes) for the same row.
* Highlight the highest severity security alerts in logs along with relevant metadata (such as alert type, source, and timestamp) for the same row.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL doesn’t have an equivalent to `arg_max`. You can use `stats` with a combination of `max` and `by` clauses to evaluate the maximum value of a single numeric field. APL provides a dedicated `arg_max` aggregation that evaluates expressions.
```sql Splunk example
| stats max(req_duration_ms) as max_duration by id, uri
```
```kusto APL equivalent
['sample-http-logs']
| summarize arg_max(req_duration_ms, id, uri)
```
In ANSI SQL, you typically use a subquery to find the maximum value and then join it back to the original table to retrieve additional fields. APL’s `arg_max` provides a more concise and efficient alternative.
```sql SQL example
WITH MaxValues AS (
SELECT id, MAX(req_duration_ms) as max_duration
FROM sample_http_logs
GROUP BY id
)
SELECT logs.id, logs.uri, MaxValues.max_duration
FROM sample_http_logs logs
JOIN MaxValues
ON logs.id = MaxValues.id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize arg_max(req_duration_ms, id, uri)
```
## Usage
### Syntax
```kusto
| summarize arg_max(expression, field1[, field2, ...])
```
### Parameters
| Parameter | Description |
| ---------------- | --------------------------------------------------------------------------------- |
| `expression` | The expression whose maximum value determines the selected record. |
| `field1, field2` | The additional fields to retrieve from the record with the maximum numeric value. |
### Returns
Returns a row where the expression evaluates to the maximum value for each group (or the entire dataset if no grouping is specified), containing the fields specified in the query.
## Use case examples
Find the slowest path for each HTTP method in the `['sample-http-logs']` dataset.
**Query**
```kusto
['sample-http-logs']
| summarize arg_max(req_duration_ms, uri) by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20arg_max\(req_duration_ms%2C%20uri\)%20by%20method%22%7D)
**Output**
| uri | method | req\_duration\_ms |
| ------------- | ------ | ----------------- |
| /home | GET | 1200 |
| /api/products | POST | 2500 |
This query identifies the slowest path for each HTTP method.
Identify the span with the longest duration for each service in the `['otel-demo-traces']` dataset.
**Query**
```kusto
['otel-demo-traces']
| summarize arg_max(duration, span_id, trace_id) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20arg_max\(duration%2C%20span_id%2C%20trace_id\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | span\_id | trace\_id | duration |
| --------------- | -------- | --------- | -------- |
| frontend | span123 | trace456 | 3s |
| checkoutservice | span789 | trace012 | 5s |
This query identifies the span with the longest duration for each service, returning the `span_id`, `trace_id`, and `duration`.
Find the highest status code for each country in the `['sample-http-logs']` dataset.
**Query**
```kusto
['sample-http-logs']
| summarize arg_max(toint(status), uri) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20arg_max\(toint\(status\)%2C%20uri\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | uri | status |
| ----------- | ---------- | ------ |
| USA | /admin | 500 |
| Canada | /dashboard | 503 |
This query identifies the URI with the highest status code for each country.
## List of related aggregations
* [arg\_min](/apl/aggregation-function/arg-min): Retrieves the record with the minimum value for a numeric field.
* [max](/apl/aggregation-function/max): Retrieves the maximum value for a numeric field but does not return additional fields.
* [percentile](/apl/aggregation-function/percentile): Provides the value at a specific percentile of a numeric field.
# arg_min
Source: https://axiom.co/docs/apl/aggregation-function/arg-min
This page explains how to use the arg_min aggregation in APL.
The `arg_min` aggregation in APL allows you to identify the row in a dataset where an expression evaluates to the minimum value. You can use this to retrieve other associated fields in the same row, making it particularly useful for pinpointing details about the smallest value in large datasets. If you group your data, `arg_min` finds the row within each group where a particular expression evaluates to the minimum value.
This aggregation is particularly useful in scenarios like the following:
* Pinpoint the shortest HTTP requests in log data and retrieve associated details (like URL, status code, and user agent) for the same row.
* Identify the fastest span durations in OpenTelemetry traces with additional context (like span name, trace ID, and attributes) for the same row.
* Highlight the lowest severity security alerts in logs along with relevant metadata (such as alert type, source, and timestamp) for the same row.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL doesn’t have an equivalent to `arg_min`. You can combine `eventstats` and `where` to keep the rows where a single numeric field reaches its minimum. APL provides a dedicated `arg_min` aggregation that evaluates expressions.
```sql Splunk example
| eventstats min(req_duration_ms) as minDuration by id
| where req_duration_ms=minDuration
```
```kusto APL equivalent
['sample-http-logs']
| summarize arg_min(req_duration_ms, id, uri)
```
In ANSI SQL, achieving similar functionality often requires a combination of `MIN`, `GROUP BY`, and `JOIN` to retrieve the associated fields. APL's `arg_min` eliminates the need for multiple steps by directly returning the row with the minimum value.
```sql SQL example
SELECT id, uri
FROM sample_http_logs
WHERE req_duration_ms = (
SELECT MIN(req_duration_ms)
FROM sample_http_logs
);
```
```kusto APL equivalent
['sample-http-logs']
| summarize arg_min(req_duration_ms, id, uri)
```
## Usage
### Syntax
```kusto
| summarize arg_min(expression, field1, ..., fieldN)
```
### Parameters
* `expression`: The expression to evaluate for the minimum value.
* `field1, ..., fieldN`: Additional fields to return from the row with the minimum value.
### Returns
Returns a row where the expression evaluates to the minimum value for each group (or the entire dataset if no grouping is specified), containing the fields specified in the query.
## Use case examples
You can use `arg_min` to identify the path with the shortest duration and its associated details for each method.
**Query**
```kusto
['sample-http-logs']
| summarize arg_min(req_duration_ms, uri) by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20arg_min\(req_duration_ms%2C%20uri\)%20by%20method%22%7D)
**Output**
| req\_duration\_ms | uri | method |
| ----------------- | ---------- | ------ |
| 0.1 | /api/login | POST |
This query identifies the path with the shortest duration for each method and provides details about that path.
Use `arg_min` to find the span with the shortest duration for each service and retrieve its associated details.
**Query**
```kusto
['otel-demo-traces']
| summarize arg_min(duration, trace_id, span_id, kind) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20arg_min\(duration%2C%20trace_id%2C%20span_id%2C%20kind\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| duration | trace\_id | span\_id | service.name | kind |
| -------- | --------- | -------- | ------------ | ------ |
| 00:00:01 | abc123 | span456 | frontend | server |
This query identifies the span with the shortest duration for each service along with its metadata.
Find the lowest status code for each country in the `['sample-http-logs']` dataset.
**Query**
```kusto
['sample-http-logs']
| summarize arg_min(toint(status), uri) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20arg_min\(toint\(status\)%2C%20uri\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | uri | status |
| ----------- | ---------- | ------ |
| USA | /admin | 200 |
| Canada | /dashboard | 201 |
This query identifies the URI with the lowest status code for each country.
## List of related aggregations
* [arg\_max](/apl/aggregation-function/arg-max): Returns the row with the maximum value for a numeric field, useful for finding peak metrics.
* [min](/apl/aggregation-function/min): Returns only the minimum value of a numeric field without additional fields.
* [percentile](/apl/aggregation-function/percentile): Provides the value at a specific percentile of a numeric field.
# avg
Source: https://axiom.co/docs/apl/aggregation-function/avg
This page explains how to use the avg aggregation function in APL.
The `avg` aggregation in APL calculates the average value of a numeric field across a set of records. You can use this aggregation when you need to determine the mean value of numerical data, such as request durations, response times, or other performance metrics. It is useful in scenarios such as performance analysis, trend identification, and general statistical analysis.
When to use `avg`:
* When you want to analyze the average of numeric values over a specific time range or set of data.
* For comparing trends, like average request duration or latency across HTTP requests.
* To provide insight into system or user performance, such as the average duration of transactions in a service.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `avg` function works similarly, but the syntax differs slightly. Here’s how to write the equivalent query in APL.
```sql Splunk example
| stats avg(req_duration_ms) by status
```
```kusto APL equivalent
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
In ANSI SQL, the `avg` aggregation is used similarly, but APL has a different syntax for structuring the query.
```sql SQL example
SELECT status, AVG(req_duration_ms)
FROM sample_http_logs
GROUP BY status
```
```kusto APL equivalent
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
## Usage
### Syntax
```kusto
summarize avg(ColumnName) [by GroupingColumn]
```
### Parameters
* **ColumnName**: The numeric field you want to calculate the average of.
* **GroupingColumn** (optional): A column to group the results by. If not specified, the average is calculated over all records.
### Returns
* A table with the average value for the specified field, optionally grouped by another column.
## Use case examples
This example calculates the average request duration for HTTP requests, grouped by status.
**Query**
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | avg\_req\_duration\_ms |
| ------ | ---------------------- |
| 200 | 350.4 |
| 404 | 150.2 |
This query calculates the average request duration (in milliseconds) for each HTTP status code.
This example calculates the average span duration for each service to analyze performance across services.
**Query**
```kusto
['otel-demo-traces']
| summarize avg(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20avg\(duration\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | avg\_duration |
| ------------ | ------------- |
| frontend | 500ms |
| cartservice | 250ms |
This query calculates the average duration of spans for each service.
In security logs, you can calculate the average request duration by country to analyze regional performance trends.
**Query**
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | avg\_req\_duration\_ms |
| ----------- | ---------------------- |
| US | 400.5 |
| DE | 250.3 |
This query calculates the average request duration for each country from where the requests originated.
## List of related aggregations
* [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total of a numeric field. This is useful when you want the total of values rather than their average.
* [**count**](/apl/aggregation-function/count): The `count` function returns the total number of records. It’s useful when you want to count occurrences rather than averaging numerical values.
* [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field. Use this when you’re interested in the smallest value in your dataset.
* [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field. This is useful for finding the largest value in the data.
* [**stdev**](/apl/aggregation-function/stdev): This function calculates the standard deviation of a numeric field, providing insight into how spread out the data is around the mean.
# avgif
Source: https://axiom.co/docs/apl/aggregation-function/avgif
This page explains how to use the avgif aggregation function in APL.
The `avgif` aggregation function in APL allows you to calculate the average value of a field, but only for records that satisfy a given condition. This function is particularly useful when you need to perform a filtered aggregation, such as finding the average response time for requests that returned a specific status code or filtering by geographic regions. The `avgif` function is highly valuable in scenarios like log analysis, performance monitoring, and anomaly detection, where focusing on subsets of data can provide more accurate insights.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you achieve similar functionality using the combination of a `stats` function with conditional filtering. In APL, `avgif` provides this filtering inline as part of the aggregation function, which can simplify your queries.
```sql Splunk example
| where status="200"
| stats avg(req_duration_ms) by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by id
```
In ANSI SQL, you can use a `CASE` statement inside an `AVG` function to achieve similar behavior. APL simplifies this with `avgif`, allowing you to specify the condition directly.
```sql SQL example
SELECT id, AVG(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END)
FROM sample_http_logs
GROUP BY id
```
```kusto APL equivalent
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by id
```
## Usage
### Syntax
```kusto
summarize avgif(expr, predicate) by grouping_field
```
### Parameters
* **`expr`**: The field for which you want to calculate the average.
* **`predicate`**: A boolean condition that filters which records are included in the calculation.
* **`grouping_field`**: (Optional) A field by which you want to group the results.
### Returns
The function returns the average of the values from the `expr` field for the records that satisfy the `predicate`. If no records match the condition, the result is `null`.
## Use case examples
In this example, you calculate the average request duration for HTTP status 200 in different cities.
**Query**
```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20status%20%3D%3D%20%22200%22%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | avg\_req\_duration\_ms |
| -------- | ---------------------- |
| New York | 325 |
| London | 400 |
| Tokyo | 275 |
This query calculates the average request duration (`req_duration_ms`) for HTTP requests that returned a status of 200 (`status == "200"`), grouped by the city where the request originated (`geo.city`).
In this example, you calculate the average span duration for traces that ended with HTTP status 500.
**Query**
```kusto
['otel-demo-traces']
| summarize avgif(duration, status == "500") by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avgif%28duration%2C%20status%20%3D%3D%20%22500%22%29%20by%20%5B%27service.name%27%5D%22%7D)
**Output**
| service.name | avg\_duration |
| --------------- | ------------- |
| checkoutservice | 500ms |
| frontend | 600ms |
| cartservice | 475ms |
This query calculates the average span duration (`duration`) for traces where the status code is 500 (`status == "500"`), grouped by the service name (`service.name`).
In this example, you calculate the average request duration for failed HTTP requests (status code 400 or higher) by country.
**Query**
```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, toint(status) >= 400) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20toint%28status%29%20%3E%3D%20400%29%20by%20%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | avg\_req\_duration\_ms |
| ----------- | ---------------------- |
| USA | 450 |
| Canada | 500 |
| Germany | 425 |
This query calculates the average request duration (`req_duration_ms`) for failed HTTP requests (`status >= 400`), grouped by the country of origin (`geo.country`).
## List of related aggregations
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value of an expression, filtered by a predicate. Use when you want to find the smallest value for a subset of data.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of an expression, filtered by a predicate. Use when you are looking for the largest value within specific conditions.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that match a condition. Use when you want to know how many records meet a specific criterion.
* [**sumif**](/apl/aggregation-function/sumif): Sums the values of a field that match a given condition. Ideal for calculating the total of a subset of data.
# count
Source: https://axiom.co/docs/apl/aggregation-function/count
This page explains how to use the count aggregation function in APL.
The `count` aggregation in APL returns the total number of records in a dataset or the total number of records that match specific criteria. This function is useful when you need to quantify occurrences, such as counting log entries, user actions, or security events.
When to use `count`:
* To count the total number of events in log analysis, such as the number of HTTP requests or errors.
* To monitor system usage, such as the number of transactions or API calls.
* To identify security incidents by counting failed login attempts or suspicious activities.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `count` function works similarly to APL, but the syntax differs slightly.
```sql Splunk example
| stats count by status
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by status
```
In ANSI SQL, the `count` function works similarly, but APL uses different syntax for querying.
```sql SQL example
SELECT status, COUNT(*)
FROM sample_http_logs
GROUP BY status
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by status
```
## Usage
### Syntax
```kusto
summarize count() [by GroupingColumn]
```
### Parameters
* **GroupingColumn** (optional): A column to group the count results by. If not specified, the total number of records across the dataset is returned.
### Returns
* A table with the count of records for the entire dataset or grouped by the specified column.
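For example, a minimal query without a grouping column returns a single row with the total number of records in the dataset:

```kusto
['sample-http-logs']
| summarize count()
```

To turn the count into a time series, group by a time bucket instead, for example `by bin_auto(_time)`, as shown in other examples in this documentation.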
## Use case examples
In log analysis, you can count the number of HTTP requests by status to get a sense of how many requests result in different HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20status%22%7D)
**Output**
| status | count |
| ------ | ----- |
| 200 | 1500 |
| 404 | 200 |
This query counts the total number of HTTP requests for each status code in the logs.
For OpenTelemetry traces, you can count the total number of spans for each service, which helps you monitor the distribution of requests across services.
**Query**
```kusto
['otel-demo-traces']
| summarize count() by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | count |
| ------------ | ----- |
| frontend | 1000 |
| cartservice | 500 |
This query counts the number of spans for each service in the OpenTelemetry traces dataset.
In security logs, you can count the number of requests by country to identify where the majority of traffic or suspicious activity originates.
**Query**
```kusto
['sample-http-logs']
| summarize count() by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | count |
| ----------- | ----- |
| US | 3000 |
| DE | 500 |
This query counts the number of requests originating from each country.
## List of related aggregations
* [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total sum of a numeric field, as opposed to counting the number of records.
* [**avg**](/apl/aggregation-function/avg): The `avg` function calculates the average of a numeric field. Use it when you want to determine the mean value of data instead of the count.
* [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field, helping to identify the smallest value in a dataset.
* [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field, useful for identifying the largest value.
* [**countif**](/apl/aggregation-function/countif): The `countif` function allows you to count only records that meet specific conditions, giving you more flexibility in your count queries.
# countif
Source: https://axiom.co/docs/apl/aggregation-function/countif
This page explains how to use the countif aggregation function in APL.
The `countif` aggregation function in Axiom Processing Language (APL) counts the number of records that meet a specified condition. You can use this aggregation to filter records based on a specific condition and return a count of matching records. This is particularly useful for log analysis, security audits, and tracing events when you need to isolate and count specific data subsets.
Use `countif` when you want to count occurrences of certain conditions, such as HTTP status codes, errors, or actions in telemetry traces.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, conditional counting is typically done using the `eval` function combined with `stats`. APL provides a more streamlined approach with the `countif` function, which performs conditional counting directly.
```sql Splunk example
| stats count(eval(status="500")) AS error_count
```
```kusto APL equivalent
['sample-http-logs']
| summarize countif(status == '500')
```
In ANSI SQL, conditional counting is achieved by using the `COUNT` function with a `CASE` statement. In APL, `countif` simplifies this process by offering a direct approach to conditional counting.
```sql SQL example
SELECT COUNT(CASE WHEN status = '500' THEN 1 END) AS error_count
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize countif(status == '500')
```
## Usage
### Syntax
```kusto
countif(condition)
```
### Parameters
* **condition**: A boolean expression that filters the records based on a condition. Only records where the condition evaluates to `true` are counted.
### Returns
The function returns the number of records that match the specified condition.
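You can also combine `countif` with a grouping column and a custom result name. The sketch below is illustrative; the alias `error_requests` is an arbitrary name, not a required output column.

```kusto
['sample-http-logs']
| summarize error_requests=countif(status == '500') by ['geo.country']
```

This counts server errors separately for each country of origin.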
## Use case examples
In log analysis, you might want to count how many HTTP requests returned a 500 status code to detect server errors.
**Query**
```kusto
['sample-http-logs']
| summarize countif(status == '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(status%20%3D%3D%20'500'\)%22%7D)
**Output**
| count\_errors |
| ------------- |
| 72 |
This query counts the number of HTTP requests with a `500` status, helping you identify how many server errors occurred.
In OpenTelemetry traces, you might want to count how many requests were initiated by the client service kind.
**Query**
```kusto
['otel-demo-traces']
| summarize countif(kind == 'client')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20countif\(kind%20%3D%3D%20'client'\)%22%7D)
**Output**
| count\_client\_kind |
| ------------------- |
| 345 |
This query counts how many requests were initiated by the `client` service kind, providing insight into the volume of client-side traffic.
In security logs, you might want to count how many HTTP requests originated from a specific city, such as New York.
**Query**
```kusto
['sample-http-logs']
| summarize countif(['geo.city'] == 'New York')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(%5B'geo.city'%5D%20%3D%3D%20'New%20York'\)%22%7D)
**Output**
| count\_nyc\_requests |
| -------------------- |
| 87 |
This query counts how many HTTP requests originated from New York, which can help detect traffic from a particular location for security analysis.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts all records in a dataset without applying a condition. Use this when you need the total count of records, regardless of any specific condition.
* [**sumif**](/apl/aggregation-function/sumif): Adds up the values of a field for records that meet a specific condition. Use `sumif` when you want to sum values based on a filter.
* [**dcountif**](/apl/aggregation-function/dcountif): Counts distinct values of a field for records that meet a condition. This is helpful when you need to count unique occurrences.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a field for records that match a condition, useful for performance monitoring.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a field for records that meet a condition. Use this when you want to find the highest value in filtered data.
# dcount
Source: https://axiom.co/docs/apl/aggregation-function/dcount
This page explains how to use the dcount aggregation function in APL.
The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files.
Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in application logs, or unique trace IDs in OpenTelemetry traces.
The `dcount` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `dcount` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you can count distinct values using the `dc` function within the `stats` command. In APL, the `dcount` function offers similar functionality.
```sql Splunk example
| stats dc(user_id) AS distinct_users
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcount(id)
```
In ANSI SQL, distinct counting is typically done using `COUNT` with the `DISTINCT` keyword. In APL, `dcount` provides a direct and efficient way to count distinct values.
```sql SQL example
SELECT COUNT(DISTINCT user_id) AS distinct_users
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcount(id)
```
## Usage
### Syntax
```kusto
dcount(column_name)
```
### Parameters
* **column\_name**: The name of the column for which you want to count distinct values.
### Returns
The function returns the count of distinct values found in the specified column.
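As with other aggregations, you can group the distinct count. For instance, the following sketch counts unique users per time bucket; `bin_auto(_time)` and the `id` field come from other examples on this page.

```kusto
['sample-http-logs']
| summarize dcount(id) by bin_auto(_time)
```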
## Use case examples
In log analysis, you can count how many distinct users accessed the service.
**Query**
```kusto
['sample-http-logs']
| summarize dcount(id)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(id\)%22%7D)
**Output**
| distinct\_users |
| --------------- |
| 45 |
This query counts the distinct values in the `id` field, representing the number of unique users who accessed the system.
In OpenTelemetry traces, you can count how many unique trace IDs are recorded.
**Query**
```kusto
['otel-demo-traces']
| summarize dcount(trace_id)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcount\(trace_id\)%22%7D)
**Output**
| distinct\_traces |
| ---------------- |
| 321 |
This query counts the distinct trace IDs in the dataset, helping you determine how many unique traces are being captured.
In security logs, you can count how many distinct cities requests originated from.
**Query**
```kusto
['sample-http-logs']
| summarize dcount(['geo.city'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(%5B'geo.city'%5D\)%22%7D)
**Output**
| distinct\_cities |
| ---------------- |
| 35 |
This query counts the number of distinct cities recorded in the logs, which helps analyze the geographic distribution of traffic.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts the total number of records in the dataset, including duplicates. Use it when you need to know the overall number of records.
* [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition. Use `countif` when you want to count records based on a filter or condition.
* [**dcountif**](/apl/aggregation-function/dcountif): Counts the distinct values in a column but only for records that meet a condition. It’s useful when you need a filtered distinct count.
* [**sum**](/apl/aggregation-function/sum): Sums the values in a column. Use this when you need to add up values rather than counting distinct occurrences.
* [**avg**](/apl/aggregation-function/avg): Calculates the average value for a column. Use this when you want to find the average of a specific numeric field.
# dcountif
Source: https://axiom.co/docs/apl/aggregation-function/dcountif
This page explains how to use the dcountif aggregation function in APL.
The `dcountif` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column that meet a specific condition. This is useful when you want to filter records and count only the unique occurrences that satisfy a given criterion.
Use `dcountif` in scenarios where you need a distinct count but only for a subset of the data, such as counting unique users from a specific region, unique error codes for specific HTTP statuses, or distinct traces that match a particular service type.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, counting distinct values conditionally is typically achieved using a combination of `eval` and `dc` in the `stats` function. APL simplifies this with the `dcountif` function, which handles both filtering and distinct counting in a single step.
```sql Splunk example
| eval successful_user=if(status="200", user_id, null())
| stats dc(successful_user) AS distinct_successful_users
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
In ANSI SQL, conditional distinct counting can be done using a combination of `COUNT(DISTINCT)` and `CASE`. APL's `dcountif` function provides a more concise and readable way to handle conditional distinct counting.
```sql SQL example
SELECT COUNT(DISTINCT CASE WHEN status = '200' THEN user_id END) AS distinct_successful_users
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
## Usage
### Syntax
```kusto
dcountif(column_name, condition)
```
### Parameters
* **column\_name**: The name of the column for which you want to count distinct values.
* **condition**: A boolean expression that filters the records. Only records that meet the condition will be included in the distinct count.
### Returns
The function returns the count of distinct values that meet the specified condition.
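You can also group a conditional distinct count. The following sketch estimates how many unique users hit a server error in each time bucket; it reuses fields that appear in the examples below.

```kusto
['sample-http-logs']
| summarize dcountif(id, status == '500') by bin_auto(_time)
```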
## Use case examples
In log analysis, you might want to count how many distinct users accessed the service and received a successful response (HTTP status 200).
**Query**
```kusto
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(id%2C%20status%20%3D%3D%20'200'\)%22%7D)
**Output**
| distinct\_successful\_users |
| --------------------------- |
| 50 |
This query counts the distinct users (`id` field) who received a successful HTTP response (status 200), helping you understand how many unique users had successful requests.
In OpenTelemetry traces, you might want to count how many unique trace IDs are recorded for a specific service, such as `frontend`.
**Query**
```kusto
['otel-demo-traces']
| summarize dcountif(trace_id, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcountif\(trace_id%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%22%7D)
**Output**
| distinct\_frontend\_traces |
| -------------------------- |
| 123 |
This query counts the number of distinct trace IDs that belong to the `frontend` service, providing insight into the volume of unique traces for that service.
In security logs, you might want to count how many unique cities requests came from when they resulted in a 403 status (forbidden access).
**Query**
```kusto
['sample-http-logs']
| summarize dcountif(['geo.city'], status == '403')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(%5B'geo.city'%5D%2C%20status%20%3D%3D%20'403'\)%22%7D)
**Output**
| distinct\_cities\_forbidden |
| --------------------------- |
| 20 |
This query counts the number of distinct cities (`geo.city` field) where requests resulted in a `403` status, helping you identify potential unauthorized access attempts from different regions.
## List of related aggregations
* [**dcount**](/apl/aggregation-function/dcount): Counts distinct values without applying any condition. Use this when you need to count unique values across the entire dataset.
* [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition, without focusing on distinct values. Use this when you need to count records based on a filter.
* [**dcountif**](/apl/aggregation-function/dcountif): Use this function to get a distinct count for records that meet a condition. It combines both filtering and distinct counting.
* [**sumif**](/apl/aggregation-function/sumif): Sums values in a column for records that meet a condition. This is useful when you need to sum data points after filtering.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a column for records that match a condition. Use this when you need to find the average based on a filter.
# histogram
Source: https://axiom.co/docs/apl/aggregation-function/histogram
This page explains how to use the histogram aggregation function in APL.
The `histogram` aggregation in APL allows you to create a histogram that groups numeric values into intervals or “bins.” This is useful for visualizing the distribution of data, such as the frequency of response times, request durations, or other continuous numerical fields. You can use it to analyze patterns and trends in datasets like logs, traces, or metrics. It is especially helpful when you need to summarize a large volume of data into a digestible form, providing insights on the distribution of values.
The `histogram` aggregation is ideal for identifying peaks, valleys, and outliers in your data. For example, you can analyze the distribution of request durations in web server logs or span durations in OpenTelemetry traces to understand performance bottlenecks.
The `histogram` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `histogram` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically group numeric values into buckets with the `bin` (or `bucket`) command and then count events per bucket, or use `timechart` for time-based buckets. In APL, the `histogram` function focuses on numeric values and lets you control the number of bins directly.
```sql Splunk example
| bin req_duration_ms span=10
| stats count by req_duration_ms
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by histogram(req_duration_ms, 10)
```
In ANSI SQL, you can use the `GROUP BY` clause combined with range calculations to achieve a similar result to APL’s `histogram`. However, APL’s `histogram` function simplifies the process by automatically calculating bin intervals.
```sql SQL example
SELECT COUNT(*), FLOOR(req_duration_ms/10)*10 as duration_bin
FROM sample_http_logs
GROUP BY duration_bin
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by histogram(req_duration_ms, 10)
```
## Usage
### Syntax
```kusto
histogram(numeric_field, number_of_bins)
```
### Parameters
* `numeric_field`: The numeric field to create a histogram for. For example, request duration or span duration.
* `number_of_bins`: The number of bins (intervals) to use for grouping the numeric values.
### Returns
The `histogram` aggregation returns a table where each row represents a bin, along with the number of occurrences (counts) that fall within each bin.
## Use case examples
You can use the `histogram` aggregation to analyze the distribution of request durations in web server logs.
**Query**
```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 100) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%20100\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| req\_duration\_ms\_bin | count |
| ---------------------- | ----- |
| 0 | 50 |
| 100 | 200 |
| 200 | 120 |
This query creates a histogram of request durations using 100 bins and shows the count of requests that fall into each bin. It helps you visualize how frequently requests fall within certain duration ranges.
In OpenTelemetry traces, you can use the `histogram` aggregation to analyze the distribution of span durations.
**Query**
```kusto
['otel-demo-traces']
| summarize histogram(duration, 100) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20histogram\(duration%2C%20100\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| duration\_bin | count |
| ------------- | ----- |
| 0.1s | 30 |
| 0.2s | 120 |
| 0.3s | 50 |
This query groups the span durations into 100 bins, making it easier to spot latency issues in your traces.
In security logs, the `histogram` aggregation helps you understand the frequency distribution of request durations to detect anomalies or attacks.
**Query**
```kusto
['sample-http-logs']
| where status == '200'
| summarize histogram(req_duration_ms, 50) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20histogram\(req_duration_ms%2C%2050\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| req\_duration\_ms\_bin | count |
| ---------------------- | ----- |
| 0 | 150 |
| 50 | 400 |
| 100 | 100 |
This query analyzes the request durations for HTTP 200 (Success) responses, helping you identify patterns in security-related events.
## List of related aggregations
* [**percentile**](/apl/aggregation-function/percentile): Use `percentile` when you need to find the specific value below which a percentage of observations fall, which can provide more precise distribution analysis.
* [**avg**](/apl/aggregation-function/avg): Use `avg` for calculating the average value of a numeric field, useful when you are more interested in the central tendency rather than distribution.
* [**sum**](/apl/aggregation-function/sum): The `sum` function adds up the total values in a numeric field, helpful for determining overall totals.
* [**count**](/apl/aggregation-function/count): Use `count` when you need a simple tally of rows or events, often in conjunction with `histogram` for more basic summarization.
# make_list
Source: https://axiom.co/docs/apl/aggregation-function/make-list
This page explains how to use the make_list aggregation function in APL.
The `make_list` aggregation function in Axiom Processing Language (APL) collects all values from a specified column into a dynamic array for each group of rows in a dataset. This aggregation is particularly useful when you want to consolidate multiple values from distinct rows into a single grouped result.
For example, if you have multiple log entries for a particular user, you can use `make_list` to gather all request URIs accessed by that user into a single list. You can also apply `make_list` to various contexts, such as trace aggregation, log analysis, or security monitoring, where collating related events into a compact form is needed.
Key uses of `make_list`:
* Consolidating values from multiple rows into a list per group.
* Summarizing activity (e.g., list all HTTP requests by a user).
* Generating traces or timelines from distributed logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the closest equivalent is the `list` aggregation function, which gathers values from multiple events into a multivalue field, duplicates included. (The `values` function returns only unique values and is closer to APL's `make_set`.) In APL, `make_list` behaves similarly by collecting values from rows into a dynamic array.
```sql Splunk example
index=logs | stats list(uri) by user
```
```kusto APL equivalent
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
In ANSI SQL, the `make_list` function is similar to `ARRAY_AGG`, which aggregates column values into an array for each group. In APL, `make_list` performs the same role, grouping the column values into a dynamic array.
```sql SQL example
SELECT ARRAY_AGG(uri) AS uris FROM sample_http_logs GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
## Usage
### Syntax
```kusto
make_list(column)
```
### Parameters
* `column`: The name of the column to collect into a list.
### Returns
The `make_list` function returns a dynamic array that contains all values of the specified column for each group of rows.
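If you only need values from a subset of rows, you can filter before aggregating. The sketch below collects only the URIs of POST requests per user; the alias `post_uris` is illustrative.

```kusto
['sample-http-logs']
| where method == 'POST'
| summarize post_uris=make_list(uri) by id
```

To apply the condition inside the aggregation itself, see `make_list_if`.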
## Use case examples
In log analysis, `make_list` is useful for collecting all URIs a user has accessed in a session. This can help in identifying browsing patterns or tracking user activity.
**Query**
```kusto
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20uris%3Dmake_list%28uri%29%20by%20id%22%7D)
**Output**
| id | uris |
| ------- | --------------------------------- |
| user123 | \[‘/home’, ‘/profile’, ‘/cart’] |
| user456 | \[‘/search’, ‘/checkout’, ‘/pay’] |
This query collects all URIs accessed by each user, providing a compact view of user activity in the logs.
In OpenTelemetry traces, `make_list` can help in gathering the list of services involved in a trace by consolidating all service names related to a trace ID.
**Query**
```kusto
['otel-demo-traces']
| summarize services=make_list(['service.name']) by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20services%3Dmake_list%28%5B%27service.name%27%5D%29%20by%20trace_id%22%7D)
**Output**
| trace\_id | services |
| --------- | ----------------------------------------------- |
| trace\_a | \[‘frontend’, ‘cartservice’, ‘checkoutservice’] |
| trace\_b | \[‘productcatalogservice’, ‘loadgenerator’] |
This query aggregates all service names associated with a particular trace, helping trace spans across different services.
In security logs, `make_list` is useful for collecting all IPs or cities from where a user has initiated requests, aiding in detecting anomalies or patterns.
**Query**
```kusto
['sample-http-logs']
| summarize cities=make_list(['geo.city']) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20cities%3Dmake_list%28%5B%27geo.city%27%5D%29%20by%20id%22%7D)
**Output**
| id | cities |
| ------- | ---------------------------- |
| user123 | \[‘New York’, ‘Los Angeles’] |
| user456 | \[‘Berlin’, ‘London’] |
This query collects the cities from which each user has made HTTP requests, useful for geographical analysis or anomaly detection.
## List of related aggregations
* [**make\_set**](/apl/aggregation-function/make-set): Similar to `make_list`, but only unique values are collected in the set. Use `make_set` when duplicates aren’t relevant.
* [**count**](/apl/aggregation-function/count): Returns the count of rows in each group. Use this instead of `make_list` when you're interested in row totals rather than individual values.
* [**max**](/apl/aggregation-function/max): Aggregates values by returning the maximum value from each group. Useful for numeric comparison across rows.
* [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values for each group. Use this when you need unique value counts instead of listing them.
# make_list_if
Source: https://axiom.co/docs/apl/aggregation-function/make-list-if
This page explains how to use the make_list_if aggregation function in APL.
The `make_list_if` aggregation function in APL creates a list of values from a given field, conditioned on a Boolean expression. This function is useful when you need to gather values from a column that meet specific criteria into a single array. By using `make_list_if`, you can aggregate data based on dynamic conditions, making it easier to perform detailed analysis.
This aggregation is ideal in scenarios where filtering at the aggregation level is required, such as gathering only the successful requests or collecting trace spans of a specific service in OpenTelemetry data. It’s particularly useful when analyzing logs, tracing information, or security events, where conditional aggregation is essential for understanding trends or identifying issues.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you would typically use the `eval` and `stats` commands to create conditional lists. In APL, the `make_list_if` function serves a similar purpose by allowing you to aggregate data into a list based on a condition.
```sql Splunk example
| eval matching_value=if(condition, field, null())
| stats list(matching_value) as field_list
```
```kusto APL equivalent
summarize make_list_if(field, condition)
```
In ANSI SQL, conditional aggregation often involves the use of `CASE` statements combined with aggregation functions such as `ARRAY_AGG`. In APL, `make_list_if` directly applies a condition to the aggregation.
```sql SQL example
SELECT ARRAY_AGG(CASE WHEN condition THEN field END) FROM table
```
```kusto APL equivalent
summarize make_list_if(field, condition)
```
## Usage
### Syntax
```kusto
summarize make_list_if(expression, condition)
```
### Parameters
* `expression`: The field or expression whose values will be included in the list.
* `condition`: A Boolean condition that determines which values from `expression` are included in the result.
### Returns
The function returns an array containing all values from `expression` that meet the specified `condition`.
## Use case examples
In this example, you gather a list of request durations for successful HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize make_list_if(req_duration_ms, status == '200') by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28req_duration_ms%2C+status+%3D%3D+%27200%27%29+by+id%22%7D)
**Output**
| id | req\_duration\_ms\_list |
| --- | ----------------------- |
| 123 | \[100, 150, 200] |
| 456 | \[300, 350, 400] |
This query aggregates request durations for HTTP requests that returned a status of ‘200’ for each user ID.
Here, you aggregate the span durations for `cartservice` where the status code indicates success.
**Query**
```kusto
['otel-demo-traces']
| summarize make_list_if(duration, status_code == '200' and ['service.name'] == 'cartservice') by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_list_if%28duration%2C+status_code+%3D%3D+%27200%27+and+%5B%27service.name%27%5D+%3D%3D+%27cartservice%27%29+by+trace_id%22%7D)
**Output**
| trace\_id | duration\_list |
| --------- | --------------------- |
| abc123 | \[00:01:23, 00:01:45] |
| def456 | \[00:02:12, 00:03:15] |
This query collects span durations for successful requests to the `cartservice` by `trace_id`.
In this case, you gather a list of URIs from security logs where the HTTP status is `403` (Forbidden) and group them by the country of origin.
**Query**
```kusto
['sample-http-logs']
| summarize make_list_if(uri, status == '403') by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28uri%2C+status+%3D%3D+%27403%27%29+by+%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | uri\_list |
| ----------- | ---------------------- |
| USA | \['/login', '/admin'] |
| Canada | \['/admin', '/secure'] |
This query collects a list of URIs that resulted in a `403` error, grouped by the country where the request originated.
## List of related aggregations
* [**make\_list**](/apl/aggregation-function/make-list): Aggregates all values into a list without any conditions. Use `make_list` when you don’t need to filter the values based on a condition.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a specific condition. Use `countif` when you need a count of occurrences rather than a list of values.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average of values that meet a specified condition. Use `avgif` for numerical aggregations where you want a conditional average instead of a list.
# make_set
Source: https://axiom.co/docs/apl/aggregation-function/make-set
This page explains how to use the make_set aggregation function in APL.
The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It is useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis.
You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `values` function is similar to `make_set` in APL: both return the unique, non-null values of a field for each group. The main difference is that `make_set` returns the result as a dynamic array and lets you cap its size with an optional limit.
```sql Splunk example
| stats values(method) by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize make_set(method) by id
```
In ANSI SQL, the `GROUP_CONCAT` or `ARRAY_AGG(DISTINCT)` functions are commonly used to aggregate unique values in a column. `make_set` in APL works similarly by aggregating distinct values from a specific column into an array, but it offers better performance for large datasets.
```sql SQL example
SELECT id, ARRAY_AGG(DISTINCT method)
FROM sample_http_logs
GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize make_set(method) by id
```
## Usage
### Syntax
```kusto
make_set(column, [limit])
```
### Parameters
* `column`: The column from which unique values are aggregated.
* `limit`: (Optional) The maximum number of unique values to return. Defaults to 128 if not specified.
### Returns
An array of unique values from the specified column.
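To demonstrate the optional limit, the following sketch caps the set at 10 unique URIs per user, reusing fields from the sample dataset used on this page.

```kusto
['sample-http-logs']
| summarize make_set(uri, 10) by id
```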
## Use case examples
In this use case, you want to collect all unique HTTP methods used by each user in the log data.
**Query**
```kusto
['sample-http-logs']
| summarize make_set(method) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28method%29+by+id%22%7D)
**Output**
| id | make\_set\_method |
| ------- | ----------------- |
| user123 | \['GET', 'POST'] |
| user456 | \['GET'] |
This query groups the log entries by `id` and returns all unique HTTP methods used by each user.
In this use case, you want to gather the unique service names involved in a trace.
**Query**
```kusto
['otel-demo-traces']
| summarize make_set(['service.name']) by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_set%28%5B%27service.name%27%5D%29+by+trace_id%22%7D)
**Output**
| trace\_id | make\_set\_service.name |
| --------- | -------------------------------- |
| traceA | \['frontend', 'checkoutservice'] |
| traceB | \['cartservice'] |
This query groups the telemetry data by `trace_id` and collects the unique services involved in each trace.
In this use case, you want to collect all unique HTTP status codes for each country where the requests originated.
**Query**
```kusto
['sample-http-logs']
| summarize make_set(status) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28status%29+by+%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | make\_set\_status |
| ----------- | ----------------- |
| USA | \['200', '404'] |
| UK | \['200'] |
This query collects all unique HTTP status codes returned for each country from which requests were made.
## List of related aggregations
* [**make\_list**](/apl/aggregation-function/make-list): Similar to `make_set`, but returns all values, including duplicates, in a list. Use `make_list` if you want to preserve duplicates.
* [**count**](/apl/aggregation-function/count): Counts the number of records in each group. Use `count` when you need the total count rather than the unique values.
* [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values in a column. Use `dcount` when you need the number of unique values, rather than an array of them.
* [**max**](/apl/aggregation-function/max): Finds the maximum value in a group. Use `max` when you are interested in the largest value rather than collecting values.
# make_set_if
Source: https://axiom.co/docs/apl/aggregation-function/make-set-if
This page explains how to use the make_set_if aggregation function in APL.
The `make_set_if` aggregation function in APL allows you to create a set of distinct values from a column based on a condition. You can use this function to aggregate values that meet specific criteria, helping you filter and reduce data to unique entries while applying a conditional filter. This is especially useful when analyzing large datasets to extract relevant, distinct information without duplicates.
You can use `make_set_if` in scenarios where you need to aggregate conditional data points, such as log analysis, tracing information, or security logs, to summarize distinct occurrences based on particular conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you may use `values` with a `where` condition to achieve similar functionality to `make_set_if`. However, in APL, the `make_set_if` function is explicitly designed to create a distinct set of values based on a conditional filter within the aggregation step itself.
```sql Splunk example
| where condition
| stats values(field) by another_field
```
```kusto APL equivalent
summarize make_set_if(field, condition) by another_field
```
In ANSI SQL, you would typically use `GROUP BY` in combination with conditional aggregation, such as using `CASE WHEN` inside aggregate functions. In APL, the `make_set_if` function directly aggregates distinct values conditionally without requiring a `CASE WHEN`.
```sql SQL example
SELECT DISTINCT CASE WHEN condition THEN field END
FROM table
GROUP BY another_field
```
```kusto APL equivalent
summarize make_set_if(field, condition) by another_field
```
## Usage
### Syntax
```kusto
make_set_if(column, predicate, [max_size])
```
### Parameters
* `column`: The column from which distinct values will be aggregated.
* `predicate`: A condition that filters the values to be aggregated.
* `[max_size]`: (Optional) Specifies the maximum number of elements in the resulting set. If omitted, the default is 1048576.
### Returns
The `make_set_if` function returns a dynamic array of distinct values from the specified column that satisfy the given condition.
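For example, the following sketch collects at most 10 distinct URIs per user, considering only requests that returned a `404` status; the third argument is the optional `max_size` parameter described above.

```kusto
['sample-http-logs']
| summarize make_set_if(uri, status == '404', 10) by id
```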
## Use case examples
In this use case, you're analyzing HTTP logs and want to get the distinct cities from which requests originated, but only for requests that took longer than 500 ms.
**Query**
```kusto
['sample-http-logs']
| summarize make_set_if(['geo.city'], req_duration_ms > 500) by ['method']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28%5B%27geo.city%27%5D%2C%20req_duration_ms%20%3E%20500%29%20by%20%5B%27method%27%5D%22%7D)
**Output**
| method | make\_set\_if\_geo.city |
| ------ | ------------------------------ |
| GET | \[‘New York’, ‘San Francisco’] |
| POST | \[‘Berlin’, ‘Tokyo’] |
This query returns the distinct cities from which requests took more than 500 ms, grouped by HTTP request method.
Here, you're analyzing OpenTelemetry traces and want to identify the distinct services that processed spans with a duration greater than 1 second, grouped by trace ID.
**Query**
```kusto
['otel-demo-traces']
| summarize make_set_if(['service.name'], duration > 1s) by ['trace_id']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20make_set_if%28%5B%27service.name%27%5D%2C%20duration%20%3E%201s%29%20by%20%5B%27trace_id%27%5D%22%7D)
**Output**
| trace\_id | make\_set\_if\_service.name |
| --------- | ------------------------------------- |
| abc123 | \[‘frontend’, ‘cartservice’] |
| def456 | \[‘checkoutservice’, ‘loadgenerator’] |
This query extracts distinct services that have processed spans longer than 1 second for each trace.
In security log analysis, you may want to find out which HTTP status codes were encountered for each city, but only for POST requests.
**Query**
```kusto
['sample-http-logs']
| summarize make_set_if(status, method == 'POST') by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28status%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | make\_set\_if\_status |
| -------- | --------------------- |
| Berlin | \[‘200’, ‘404’] |
| Tokyo | \[‘500’, ‘403’] |
This query identifies the distinct HTTP status codes for POST requests grouped by the originating city.
## List of related aggregations
* [**make\_list\_if**](/apl/aggregation-function/make-list-if): Similar to `make_set_if`, but returns a list that can include duplicates instead of a distinct set.
* [**make\_set**](/apl/aggregation-function/make-set): Aggregates distinct values without a conditional filter.
* [**countif**](/apl/aggregation-function/countif): Counts rows that satisfy a specific condition, useful for when you need to count rather than aggregate distinct values.
# max
Source: https://axiom.co/docs/apl/aggregation-function/max
This page explains how to use the max aggregation function in APL.
The `max` aggregation in APL allows you to find the highest value in a specific column of your dataset. This is useful when you need to identify the maximum value of numerical data, such as the longest request duration, highest sales figures, or the latest timestamp in logs. The `max` function is ideal when you are working with large datasets and need to quickly retrieve the largest value, ensuring you're focusing on the most critical or recent data point.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `max` function works similarly, used to find the maximum value in a given field. The syntax in APL, however, requires you to specify the column to aggregate within a query and make use of APL's structured flow.
```sql Splunk example
| stats max(req_duration_ms)
```
```kusto APL equivalent
['sample-http-logs']
| summarize max(req_duration_ms)
```
In ANSI SQL, `MAX` works similarly to APL’s `max`. In SQL, you aggregate over a column using the `MAX` function in a `SELECT` statement. In APL, you achieve the same result using the `summarize` operator followed by the `max` function.
```sql SQL example
SELECT MAX(req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize max(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize max(ColumnName)
```
### Parameters
* `ColumnName`: The column or field from which you want to retrieve the maximum value. The column should contain numerical data, timespans, or dates.
### Returns
The maximum value from the specified column.
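You can also compute the maximum per group. The sketch below finds the slowest request for each country, reusing fields from the examples on this page.

```kusto
['sample-http-logs']
| summarize max(req_duration_ms) by ['geo.country']
```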
## Use case examples
In log analysis, you might want to find the longest request duration to diagnose performance issues.
**Query**
```kusto
['sample-http-logs']
| summarize max(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(req_duration_ms\)%22%7D)
**Output**
| max\_req\_duration\_ms |
| ---------------------- |
| 5400 |
This query returns the highest request duration from the `req_duration_ms` field, which helps you identify the slowest requests.
When analyzing OpenTelemetry traces, you can find the longest span duration to determine performance bottlenecks in distributed services.
**Query**
```kusto
['otel-demo-traces']
| summarize max(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20max\(duration\)%22%7D)
**Output**
| max\_duration |
| ------------- |
| 00:00:07.234 |
This query returns the longest trace span from the `duration` field, helping you pinpoint the most time-consuming operations.
In security log analysis, you may want to identify the most recent event for monitoring threats or auditing activities.
**Query**
```kusto
['sample-http-logs']
| summarize max(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(_time\)%22%7D)
**Output**
| max\_time |
| ------------------- |
| 2024-09-25 12:45:01 |
This query returns the most recent timestamp from your logs, allowing you to monitor the latest security events.
## List of related aggregations
* [**min**](/apl/aggregation-function/min): Retrieves the minimum value from a column, which is useful when you need to find the smallest or earliest value, such as the lowest request duration or first event in a log.
* [**avg**](/apl/aggregation-function/avg): Calculates the average value of a column. This function helps when you want to understand the central tendency, such as the average response time for requests.
* [**sum**](/apl/aggregation-function/sum): Sums all values in a column, making it useful when calculating totals, such as total sales or total number of requests over a period.
* [**count**](/apl/aggregation-function/count): Counts the number of records or non-null values in a column. It’s useful for finding the total number of log entries or transactions.
* [**percentile**](/apl/aggregation-function/percentile): Finds a value below which a specified percentage of data falls. This aggregation is helpful when you need to analyze performance metrics like latency at the 95th percentile.
# maxif
Source: https://axiom.co/docs/apl/aggregation-function/maxif
This page explains how to use the maxif aggregation function in APL.
## Introduction
The `maxif` aggregation function in APL is useful when you want to return the maximum value from a dataset based on a conditional expression. This allows you to filter the dataset dynamically and only return the maximum for rows that satisfy the given condition. It’s particularly helpful for scenarios where you want to find the highest value of a specific metric, like response time or duration, but only for a subset of the data (e.g., successful responses, specific users, or requests from a particular geographic location).
You can use the `maxif` function when analyzing logs, monitoring system traces, or inspecting security-related data to get insights into the maximum value under certain conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use the `stats max()` function alongside a conditional filtering step to achieve a similar result. APL’s `maxif` function combines both operations into one, streamlining the query.
```sql Splunk example
| where status="200"
| stats max(req_duration_ms) as max_duration
```
```kusto APL equivalent
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
In ANSI SQL, you typically use the `MAX` function in conjunction with a `WHERE` clause. APL’s `maxif` allows you to perform the same operation with a single aggregation function.
```sql SQL example
SELECT MAX(req_duration_ms)
FROM logs
WHERE status = '200';
```
```kusto APL equivalent
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
## Usage
### Syntax
```kusto
summarize maxif(column, condition)
```
### Parameters
* `column`: The column containing the values to aggregate.
* `condition`: The condition that must be true for the values to be considered in the aggregation.
### Returns
The maximum value from `column` for rows that meet the `condition`. If no rows match the condition, it returns `null`.
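As with other aggregations, you can group a conditional maximum. In the sketch below, each city gets its own maximum duration for server errors; groups with no matching rows return `null`, as described above.

```kusto
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "500") by ['geo.city']
```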
## Use case examples
In log analysis, you might want to find the maximum request duration, but only for successful requests.
**Query**
```kusto
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%22%7D)
**Output**
| max\_req\_duration |
| ------------------ |
| 1250 |
This query returns the maximum request duration (`req_duration_ms`) for HTTP requests with a `200` status.
In OpenTelemetry traces, you might want to find the longest span duration for a specific service type.
**Query**
```kusto
['otel-demo-traces']
| summarize maxif(duration, ['service.name'] == "checkoutservice" and kind == "server")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20maxif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'checkoutservice'%20and%20kind%20%3D%3D%20'server'\)%22%7D)
**Output**
| max\_duration |
| ------------- |
| 2.05s |
This query returns the maximum span duration (`duration`) for server spans in the `checkoutservice`.
For security logs, you might want to identify the longest request duration for any requests originating from a specific country, such as the United States.
**Query**
```kusto
['sample-http-logs']
| summarize maxif(req_duration_ms, ['geo.country'] == "United States")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'\)%22%7D)
**Output**
| max\_req\_duration |
| ------------------ |
| 980 |
This query returns the maximum request duration for requests coming from the United States (`geo.country`).
## List of related aggregations
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value from a column for rows that satisfy a condition. Use `minif` when you're interested in the lowest value under specific conditions.
* [**max**](/apl/aggregation-function/max): Returns the maximum value from a column without filtering. Use `max` when you want the highest value across the entire dataset without conditions.
* [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values for rows that satisfy a condition. Use `sumif` when you want the total value of a column under specific conditions.
* [**avgif**](/apl/aggregation-function/avgif): Returns the average of values for rows that satisfy a condition. Use `avgif` when you want to calculate the mean value based on a filter.
* [**countif**](/apl/aggregation-function/countif): Returns the count of rows that satisfy a condition. Use `countif` when you want to count occurrences that meet certain criteria.
# min
Source: https://axiom.co/docs/apl/aggregation-function/min
This page explains how to use the min aggregation function in APL.
The `min` aggregation function in APL returns the minimum value from a set of input values. You can use this function to identify the smallest numeric or comparable value in a column of data. This is useful when you want to find the quickest response time, the lowest transaction amount, or the earliest date in log data. It’s ideal for analyzing performance metrics, filtering out abnormal low points in your data, or discovering outliers.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `min` function works similarly to APL's `min` aggregation, allowing you to find the minimum value in a field across your dataset. The main difference is in the query structure and syntax between the two.
```sql Splunk example
| stats min(duration) by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
In ANSI SQL, the `MIN` function works almost identically to the APL `min` aggregation. You use it to return the smallest value in a column of data, grouped by one or more fields.
```sql SQL example
SELECT MIN(duration), id FROM sample_http_logs GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
## Usage
### Syntax
```kusto
summarize min(Expression)
```
### Parameters
* `Expression`: The expression from which to calculate the minimum value. Typically, this is a numeric or date/time field.
### Returns
The function returns the smallest value found in the specified column or expression.
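Because `min` also works on date/time fields, you can use it to find the earliest event in a dataset. A minimal sketch:

```kusto
['sample-http-logs']
| summarize min(_time)
```

This is the counterpart to using `max(_time)` to find the most recent event.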
## Use case examples
In this use case, you analyze HTTP logs to find the minimum request duration for each unique user.
**Query**
```kusto
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20id%22%7D)
**Output**
| id | min\_req\_duration\_ms |
| --------- | ---------------------- |
| user\_123 | 32 |
| user\_456 | 45 |
This query returns the minimum request duration for each user, helping you identify the fastest responses.
Here, you analyze OpenTelemetry trace data to find the minimum span duration per service.
**Query**
```kusto
['otel-demo-traces']
| summarize min(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20min\(duration\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | min\_duration |
| --------------- | ------------- |
| frontend | 2ms |
| checkoutservice | 5ms |
This query returns the minimum span duration for each service in the trace logs.
In this example, you analyze security logs to find the minimum request duration for each HTTP status code.
**Query**
```kusto
['sample-http-logs']
| summarize min(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | min\_req\_duration\_ms |
| ------ | ---------------------- |
| 200 | 10 |
| 404 | 40 |
This query returns the minimum request duration for each HTTP status code, helping you identify if certain statuses are associated with faster or slower response times.
## List of related aggregations
* [**max**](/apl/aggregation-function/max): Returns the maximum value from a set of values. Use `max` when you need to find the highest value instead of the lowest.
* [**avg**](/apl/aggregation-function/avg): Calculates the average of a set of values. Use `avg` to find the mean value instead of the minimum.
* [**count**](/apl/aggregation-function/count): Counts the number of records or distinct values. Use `count` when you need to know how many records or unique values exist, rather than calculating the minimum.
* [**sum**](/apl/aggregation-function/sum): Adds all values together. Use `sum` when you need the total of a set of values rather than the minimum.
* [**percentile**](/apl/aggregation-function/percentile): Returns the value at a specified percentile. Use `percentile` if you need a value that falls at a certain point in the distribution of your data, rather than the minimum.
# minif
Source: https://axiom.co/docs/apl/aggregation-function/minif
This page explains how to use the minif aggregation function in APL.
## Introduction
The `minif` aggregation in Axiom Processing Language (APL) allows you to calculate the minimum value of a numeric expression, but only for records that meet a specific condition. This aggregation is useful when you want to find the smallest value in a subset of data that satisfies a given predicate. For example, you can use `minif` to find the shortest request duration for successful HTTP requests, or the minimum span duration for a specific service in your OpenTelemetry traces.
The `minif` aggregation is especially useful in scenarios where you need conditional aggregations, such as log analysis, monitoring distributed systems, or examining security-related events.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you might use the `min` function in combination with `where` to filter results. In APL, the `minif` function combines both the filtering condition and the minimum calculation into one step.
```sql Splunk example
| stats min(req_duration_ms) as min_duration where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize minif(req_duration_ms, status == "200") by id
```
In ANSI SQL, you would typically use a `CASE` statement with `MIN` to apply conditional logic for aggregation. In APL, the `minif` function simplifies this by combining both the condition and the aggregation.
```sql SQL example
SELECT MIN(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END) as min_duration
FROM sample_http_logs
GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize minif(req_duration_ms, status == "200") by id
```
## Usage
### Syntax
```kusto
summarize minif(Expression, Predicate)
```
### Parameters
| Parameter | Description |
| ------------ | ------------------------------------------------------------ |
| `Expression` | The numeric expression whose minimum value you want to find. |
| `Predicate` | The condition that determines which records to include. |
### Returns
The `minif` aggregation returns the minimum value of the specified `Expression` for the records that satisfy the `Predicate`.
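You can combine `minif` with other conditional aggregations in a single `summarize` statement. The following sketch pairs it with `maxif` (documented on its own page) to show the fastest and slowest successful requests per country; the column aliases are illustrative:
```kusto
['sample-http-logs']
| summarize fastest = minif(req_duration_ms, status == '200'), slowest = maxif(req_duration_ms, status == '200') by ['geo.country']
```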
## Use case examples
In log analysis, you might want to find the minimum request duration for successful HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize minif(req_duration_ms, status == '200') by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%20by%20%5B'geo.city'%5D%22%7D)
**Output**
| geo.city | min\_duration |
| --------- | ------------- |
| San Diego | 120 |
| New York | 95 |
This query finds the minimum request duration for HTTP requests with a `200` status code, grouped by city.
For distributed tracing, you can use `minif` to find the minimum span duration for a specific service.
**Query**
```kusto
['otel-demo-traces']
| summarize minif(duration, ['service.name'] == 'frontend') by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20minif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%20by%20trace_id%22%7D)
**Output**
| trace\_id | min\_duration |
| --------- | ------------- |
| abc123 | 50ms |
| def456 | 40ms |
This query returns the minimum span duration for traces from the `frontend` service, grouped by `trace_id`.
In security logs, you can use `minif` to find the minimum request duration for HTTP requests from a specific country.
**Query**
```kusto
['sample-http-logs']
| summarize minif(req_duration_ms, ['geo.country'] == 'US') by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'US'\)%20by%20status%22%7D)
**Output**
| status | min\_duration |
| ------ | ------------- |
| 200 | 95 |
| 404 | 120 |
This query returns the minimum request duration for HTTP requests originating from the United States, grouped by HTTP status code.
## List of related aggregations
* [**maxif**](/apl/aggregation-function/maxif): Finds the maximum value of an expression that satisfies a condition. Use `maxif` when you need the maximum value under a condition, rather than the minimum.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of an expression that meets a specified condition. Useful when you want an average instead of a minimum.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a given condition. Use this for counting records rather than calculating a minimum.
* [**sumif**](/apl/aggregation-function/sumif): Sums the values of an expression for records that meet a condition. Helpful when you're interested in the total rather than the minimum.
# percentile
Source: https://axiom.co/docs/apl/aggregation-function/percentile
This page explains how to use the percentile aggregation function in APL.
The `percentile` aggregation function in Axiom Processing Language (APL) allows you to calculate the value below which a given percentage of data points fall. It is particularly useful when you need to analyze distributions and want to summarize the data using specific thresholds, such as the 90th or 95th percentile. This function can be valuable in performance analysis, trend detection, or identifying outliers across large datasets.
You can apply the `percentile` function to various use cases, such as analyzing log data for request durations, OpenTelemetry traces for service latencies, or security logs to assess risk patterns.
The `percentile` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `percentile` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `percentile` function is referred to as `perc` or `percentile`. APL's `percentile` function works similarly, but the syntax is different. The main difference is that APL requires you to explicitly define the column on which you want to apply the percentile and the target percentile value.
```sql Splunk example
| stats perc95(req_duration_ms)
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
In ANSI SQL, you might use the `PERCENTILE_CONT` or `PERCENTILE_DISC` functions to compute percentiles. In APL, the `percentile` function provides a simpler syntax while offering similar functionality.
```sql SQL example
SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
## Usage
### Syntax
```kusto
percentile(column, percentile)
```
### Parameters
* **column**: The name of the column to calculate the percentile on. This must be a numeric field.
* **percentile**: The target percentile value (between 0 and 100).
### Returns
The function returns the value from the specified column that corresponds to the given percentile.
## Use case examples
In log analysis, you can use the `percentile` function to identify the 95th percentile of request durations, which gives you an idea of the tail-end latencies of requests in your system.
**Query**
```kusto
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentile%28req_duration_ms%2C%2095%29%22%7D)
**Output**
| percentile\_req\_duration\_ms |
| ----------------------------- |
| 1200 |
This query calculates the 95th percentile of request durations, showing that 95% of requests take less than or equal to 1200ms.
For OpenTelemetry traces, you can use the `percentile` function to identify the 90th percentile of span durations for specific services, which helps to understand the performance of different services.
**Query**
```kusto
['otel-demo-traces']
| where ['service.name'] == 'checkoutservice'
| summarize percentile(duration, 90)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20summarize%20percentile%28duration%2C%2090%29%22%7D)
**Output**
| percentile\_duration |
| -------------------- |
| 300ms |
This query calculates the 90th percentile of span durations for the `checkoutservice`, helping to assess high-latency spans.
In security logs, you can use the `percentile` function to calculate the 99th percentile of response times for a specific set of status codes, helping you focus on outliers.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| summarize percentile(req_duration_ms, 99)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20summarize%20percentile%28req_duration_ms%2C%2099%29%22%7D)
**Output**
| percentile\_req\_duration\_ms |
| ----------------------------- |
| 2500 |
This query identifies that 99% of requests resulting in HTTP 500 errors take less than or equal to 2500ms.
## List of related aggregations
* [**avg**](/apl/aggregation-function/avg): Use `avg` to calculate the average of a column, which gives you the central tendency of your data. In contrast, `percentile` provides more insight into the distribution and tail values.
* [**min**](/apl/aggregation-function/min): The `min` function returns the smallest value in a column. Use this when you need the absolute lowest value instead of a specific percentile.
* [**max**](/apl/aggregation-function/max): The `max` function returns the highest value in a column. It’s useful for finding the upper bound, while `percentile` allows you to focus on a specific point in the data distribution.
* [**stdev**](/apl/aggregation-function/stdev): `stdev` calculates the standard deviation of a column, which helps measure data variability. While `stdev` provides insight into overall data spread, `percentile` focuses on specific distribution points.
# percentileif
Source: https://axiom.co/docs/apl/aggregation-function/percentileif
This page explains how to use the percentileif aggregation function in APL.
The `percentileif` aggregation function calculates the percentile of a numeric column, conditional on a specified boolean predicate. This function is useful for filtering data dynamically and determining percentile values based only on relevant subsets of data.
You can use `percentileif` to gain insights in various scenarios, such as:
* Identifying response time percentiles for HTTP requests from specific regions.
* Calculating percentiles of span durations for specific service types in OpenTelemetry traces.
* Analyzing security events by percentile within defined risk categories.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `percentileif` aggregation in APL works similarly to `percentile` combined with conditional filtering in SPL. However, APL integrates the condition directly into the aggregation for simplicity.
```sql Splunk example
stats perc95(req_duration_ms) as p95 where geo.country="US"
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentileif(req_duration_ms, 95, ['geo.country'] == 'US')
```
In SQL, you typically calculate percentiles using window functions or aggregate functions combined with a `WHERE` clause. APL simplifies this by embedding the condition directly in the `percentileif` aggregation.
```sql SQL example
SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY req_duration_ms)
FROM sample_http_logs
WHERE geo_country = 'US'
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentileif(req_duration_ms, 95, ['geo.country'] == 'US')
```
## Usage
### Syntax
```kusto
summarize percentileif(Field, Percentile, Predicate)
```
### Parameters
| Parameter | Description |
| ------------ | ---------------------------------------------------------------------- |
| `Field` | The numeric field from which to calculate the percentile. |
| `Percentile` | A number between 0 and 100 that specifies the percentile to calculate. |
| `Predicate` | A Boolean expression that filters rows to include in the calculation. |
### Returns
The function returns a single numeric value representing the specified percentile of the `Field` for rows where the `Predicate` evaluates to `true`.
## Use case examples
You can use `percentileif` to analyze request durations for specific HTTP methods.
**Query**
```kusto
['sample-http-logs']
| summarize post_p90 = percentileif(req_duration_ms, 90, method == "POST"), get_p90 = percentileif(req_duration_ms, 90, method == "GET") by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20post_p90%20%3D%20percentileif\(req_duration_ms%2C%2090%2C%20method%20%3D%3D%20'POST'\)%2C%20get_p90%20%3D%20percentileif\(req_duration_ms%2C%2090%2C%20method%20%3D%3D%20'GET'\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| post\_p90 | get\_p90 |
| --------- | -------- |
| 1.691 ms | 1.453 ms |
This query calculates the 90th percentile of request durations for HTTP POST and GET methods.
You can use `percentileif` to measure span durations for specific services and operation kinds.
**Query**
```kusto
['otel-demo-traces']
| summarize percentileif(duration, 95, ['service.name'] == 'frontend' and kind == 'server')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20percentileif%28duration%2C%2095%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%20and%20kind%20%3D%3D%20%27server%27%29%22%7D)
**Output**
| Percentile95 |
| ------------ |
| 1.2s |
This query calculates the 95th percentile of span durations for server spans in the `frontend` service.
You can use `percentileif` to calculate response time percentiles for specific HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize percentileif(req_duration_ms, 75, status == '404')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentileif%28req_duration_ms%2C%2075%2C%20status%20%3D%3D%20%27404%27%29%22%7D)
**Output**
| Percentile75 |
| ------------ |
| 350 |
This query calculates the 75th percentile of request durations for HTTP 404 errors.
## List of related aggregations
* [percentile](/apl/aggregation-function/percentile): Calculates the percentile for all rows without any filtering. Use `percentile` when you don’t need conditional filtering.
* [avgif](/apl/aggregation-function/avgif): Calculates the average of a numeric column based on a condition. Use `avgif` for mean calculations instead of percentiles.
* [minif](/apl/aggregation-function/minif): Returns the minimum value of a numeric column where a condition is true. Use `minif` for identifying the lowest values within subsets.
* [maxif](/apl/aggregation-function/maxif): Returns the maximum value of a numeric column where a condition is true. Use `maxif` for identifying the highest values within subsets.
* [sumif](/apl/aggregation-function/sumif): Sums a numeric column based on a condition. Use `sumif` for conditional total calculations.
# percentiles_array
Source: https://axiom.co/docs/apl/aggregation-function/percentiles-array
This page explains how to use the percentiles_array function in APL.
Use the `percentiles_array` aggregation function in APL to calculate multiple percentile values over a numeric expression in one pass. This function is useful when you want to understand the distribution of numeric data points, such as response times or durations, by summarizing them at several key percentiles like the 25th, 50th, and 95th.
You can use `percentiles_array` to:
* Analyze latency or duration metrics across requests or operations.
* Identify performance outliers.
* Visualize percentile distributions in dashboards.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you typically calculate percentiles one at a time using the `perc` function. To get multiple percentiles, you repeat the function with different percentile values. In APL, `percentiles_array` lets you specify multiple percentiles in a single function call and returns them as an array.
```sql Splunk example
... | stats perc95(duration), perc50(duration), perc25(duration) by service
```
```kusto APL equivalent
['otel-demo-traces']
| summarize percentiles_array(duration, 25, 50, 95) by ['service.name']
```
Standard SQL typically lacks a built-in function to calculate multiple percentiles in a single operation. Instead, you use `PERCENTILE_CONT` or `PERCENTILE_DISC` with `WITHIN GROUP`, repeated for each desired percentile. In APL, `percentiles_array` simplifies this with a single function call that returns all requested percentiles as an array.
```sql SQL example
SELECT
service,
PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY duration) AS p25,
PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY duration) AS p50,
PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY duration) AS p95
FROM traces
GROUP BY service
```
```kusto APL equivalent
['otel-demo-traces']
| summarize percentiles_array(duration, 25, 50, 95) by ['service.name']
```
## Usage
### Syntax
```kusto
percentiles_array(Field, Percentile1, Percentile2, ...)
```
### Parameters
* `Field` is the name of the field for which you want to compute percentile values.
* `Percentile1`, `Percentile2`, ... are numeric percentile values between 0 and 100.
### Returns
An array of numbers where each element is the value at the corresponding percentile.
## Use case examples
Use `percentiles_array` to understand the spread of request durations per HTTP method, highlighting performance variability.
**Query**
```kusto
['sample-http-logs']
| summarize percentiles_array(req_duration_ms, 25, 50, 95) by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2025%2C%2050%2C%2095\)%20by%20method%22%7D)
**Output**
| method | P25 | P50 | P95 |
| ------ | --------- | --------- | -------- |
| GET | 0.3981 ms | 0.7352 ms | 1.981 ms |
| POST | 0.3261 ms | 0.7162 ms | 2.341 ms |
| PUT | 0.3324 ms | 0.7772 ms | 1.341 ms |
| DELETE | 0.2332 ms | 0.4652 ms | 1.121 ms |
This query calculates the 25th, 50th, and 95th percentiles of request durations for each HTTP method. It helps identify performance differences between different methods.
Use `percentiles_array` to analyze the distribution of span durations by service to detect potential bottlenecks.
**Query**
```kusto
['otel-demo-traces']
| summarize percentiles_array(duration, 50, 90, 99) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20percentiles_array\(duration%2C%2050%2C%2090%2C%2099\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | P50 | P90 | P99 |
| --------------------- | -------- | --------- | --------- |
| recommendationservice | 1.96 ms | 2.965 ms | 3.477 ms |
| frontendproxy | 3.767 ms | 13.101 ms | 39.735 ms |
| shippingservice | 2.119 ms | 3.085 ms | 9.739 ms |
| checkoutservice | 1.454 ms | 12.342 ms | 29.542 ms |
This query shows latency patterns across services by computing the median, 90th, and 99th percentile of span durations.
Use `percentiles_array` to assess outlier response times per status code, which can reveal abnormal activity or service issues.
**Query**
```kusto
['sample-http-logs']
| summarize percentiles_array(req_duration_ms, 50, 95, 99) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2050%2C%2095%2C%2099\)%20by%20status%22%7D)
**Output**
| status | P50 | P95 | P99 |
| ------ | --------- | -------- | -------- |
| 200 | 0.7352 ms | 1.981 ms | 2.612 ms |
| 201 | 0.7856 ms | 1.356 ms | 2.234 ms |
| 301 | 0.8956 ms | 1.547 ms | 2.546 ms |
| 500 | 0.6587 ms | 1.856 ms | 2.856 ms |
This query helps identify whether requests resulting in errors (like 500) are significantly slower than successful ones.
## List of related functions
* [avg](/apl/aggregation-function/avg): Returns the average value. Use it when a single central tendency is sufficient.
* [percentile](/apl/aggregation-function/percentile): Returns a single percentile value. Use it when you only need one percentile.
* [percentile\_if](/apl/aggregation-function/percentileif): Returns a single percentile value for the records that satisfy a condition.
* [percentiles\_arrayif](/apl/aggregation-function/percentiles-arrayif): Returns an array of percentile values for the records that satisfy a condition.
* [sum](/apl/aggregation-function/sum): Returns the sum of a numeric column.
# percentiles_arrayif
Source: https://axiom.co/docs/apl/aggregation-function/percentiles-arrayif
This page explains how to use the percentiles_arrayif function in APL.
Use `percentiles_arrayif` to calculate approximate percentile values for a numeric expression when a certain condition evaluates to true. This function is useful when you want an array of percentiles instead of a single percentile. You can use it to understand data distributions in scenarios such as request durations, event processing times, or security alert severities, while filtering on specific criteria.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you often use statistical functions such as `perc` or `percN()` to compute percentile estimates. In APL, you use `percentiles_arrayif` and provide a predicate to define which records to include in the computation.
```sql Splunk example
index=main sourcetype=access_combined
| stats perc90(req_duration_ms) AS p90, perc99(req_duration_ms) AS p99
```
```kusto APL equivalent
['sample-http-logs']
| summarize Dist=percentiles_arrayif(req_duration_ms, dynamic([90, 99]), status == '200')
```
In ANSI SQL, you often use window functions like `PERCENTILE_DISC` or `PERCENTILE_CONT` or write multiple `CASE` expressions for conditional aggregation. In APL, you can achieve similar functionality with `percentiles_arrayif` by passing the numeric field and condition to the function.
```sql SQL example
SELECT
PERCENTILE_DISC(0.90) WITHIN GROUP (ORDER BY req_duration_ms) AS p90,
PERCENTILE_DISC(0.99) WITHIN GROUP (ORDER BY req_duration_ms) AS p99
FROM sample_http_logs
WHERE status = '200';
```
```kusto APL equivalent
['sample-http-logs']
| summarize Dist=percentiles_arrayif(req_duration_ms, dynamic([90, 99]), status == '200')
```
## Usage
### Syntax
```kusto
percentiles_arrayif(Field, Array, Condition)
```
### Parameters
* `Field` is the name of the field for which you want to compute percentile values.
* `Array` is a dynamic array of one or more numeric percentile values (between 0 and 100).
* `Condition` is a Boolean expression that indicates which records to include in the calculation.
### Returns
The function returns an array of percentile values for the records that satisfy the condition. The position of each returned percentile in the array matches the order in which it appears in the function call.
## Use case examples
You can use `percentiles_arrayif` to analyze request durations in HTTP logs while filtering for specific criteria, such as certain HTTP statuses or geographic locations.
**Query**
```kusto
['sample-http-logs']
| summarize percentiles_arrayif(req_duration_ms, dynamic([50, 90, 95, 99]), status == '200') by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20percentiles_arrayif\(req_duration_ms%2C%20dynamic\(%5B50%2C%2090%2C%2095%2C%2099%5D\)%2C%20status%20%3D%3D%20'200'\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| percentiles\_req\_duration\_ms |
| ------------------------------ |
| 0.7352 ms |
| 1.691 ms |
| 1.981 ms |
| 2.612 ms |
This query filters records to those with a status of 200 and returns the percentile values for the request durations.
Use `percentiles_arrayif` to track the performance of spans while filtering on a specific request method. This lets you quickly gauge how span durations differ for incoming traffic.
**Query**
```kusto
['otel-demo-traces']
| summarize percentiles_arrayif(duration, dynamic([50, 90, 99]), ['method'] == "POST") by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20percentiles_arrayif\(duration%2C%20dynamic\(%5B50%2C%2090%2C%2099%2C%2099%5D\)%2C%20%5B'method'%5D%20%3D%3D%20'POST'\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| percentiles\_duration |
| --------------------- |
| 5.166 ms |
| 25.18 ms |
| 71.996 ms |
This query returns the percentile values for span durations for requests with the POST method.
You can focus on server issues by filtering for specific status codes, then see how request durations are distributed in those scenarios.
**Query**
```kusto
['sample-http-logs']
| summarize percentiles_arrayif(req_duration_ms, dynamic([50, 90, 95, 99]), status startswith '5') by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20percentiles_arrayif\(req_duration_ms%2C%20dynamic\(%5B50%2C%2090%2C%2095%2C%2099%5D\)%2C%20status%20startswith%20'5'\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| percentiles\_req\_duration\_ms |
| ------------------------------ |
| 0.7352 ms |
| 1.691 ms |
| 1.981 ms |
| 2.612 ms |
This query calculates percentile values for the durations of requests that return a status code starting with 5, which indicates a server error.
## List of related functions
* [avg](/apl/aggregation-function/avg): Returns the average of a numeric column.
* [percentile](/apl/aggregation-function/percentile): Returns a single percentile value.
* [percentile\_if](/apl/aggregation-function/percentileif): Returns a single percentile value for the records that satisfy a condition.
* [percentiles\_array](/apl/aggregation-function/percentiles-array): Returns an array of percentile values for all rows.
* [sum](/apl/aggregation-function/sum): Returns the sum of a numeric column.
# rate
Source: https://axiom.co/docs/apl/aggregation-function/rate
This page explains how to use the rate aggregation function in APL.
The `rate` aggregation function in APL (Axiom Processing Language) helps you calculate the rate of change over a specific time interval. This is especially useful for scenarios where you need to monitor how frequently an event occurs or how a value changes over time. For example, you can use the `rate` function to track request rates in web logs or changes in metrics like CPU usage or memory consumption.
The `rate` function is useful for analyzing trends in time series data and identifying unusual spikes or drops in activity. It can help you understand patterns in logs, metrics, and traces over specific intervals, such as per minute, per second, or per hour.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of the `rate` function can be achieved using the `timechart` command with a `per_second` option or by calculating the difference between successive values over time. In APL, the `rate` function simplifies this process by directly calculating the rate over a specified time interval.
```splunk Splunk example
| timechart per_second(resp_body_size_bytes)
```
```kusto APL equivalent
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
In ANSI SQL, calculating rates typically involves using window functions like `LAG` or `LEAD` to calculate the difference between successive rows in a time series. In APL, the `rate` function abstracts this complexity by allowing you to directly compute the rate over time without needing window functions.
```sql SQL example
SELECT SUM(resp_body_size_bytes) / TIMESTAMPDIFF(SECOND, MIN(_time), MAX(_time)) AS rate
FROM http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
## Usage
### Syntax
```kusto
rate(field)
```
### Parameters
* `field`: The numeric field for which you want to calculate the rate.
### Returns
Returns the rate of change or occurrence of the specified `field` over the time interval specified in the query.
Specify the time interval in the query in the following way:
* `| summarize rate(field)` calculates the rate value of the field over the entire query window.
* `| summarize rate(field) by bin(_time, 1h)` calculates the rate value of the field over a one-hour time window.
* `| summarize rate(field) by bin_auto(_time)` calculates the rate value of the field bucketed by an automatic time window computed by `bin_auto()`.
To visualize the average per-minute rate for each hour, use two `summarize` statements. For example:
```kusto
['sample-http-logs']
| summarize respBodyRate = rate(resp_body_size_bytes) by bin(_time, 1m)
| summarize avg(respBodyRate) by bin(_time, 1h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20respBodyRate%20%3D%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201m\)%20%7C%20summarize%20avg\(respBodyRate\)%20by%20bin\(_time%2C%201h\)%22%2C%20%22queryOptions%22%3A%7B%22quickRange%22%3A%226h%22%7D%7D)
## Use case examples
In this example, the `rate` aggregation calculates the rate of HTTP response sizes per second.
**Query**
```kusto
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ------ | ------------------- |
| 854 kB | 2024-01-01 12:00:00 |
| 635 kB | 2024-01-01 12:00:01 |
This query calculates the rate of HTTP response sizes per second.
This example calculates the rate of span duration per second.
**Query**
```kusto
['otel-demo-traces']
| summarize rate(toint(duration)) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20rate\(toint\(duration\)\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ---------- | ------------------- |
| 26,393,768 | 2024-01-01 12:00:00 |
| 19,303,456 | 2024-01-01 12:00:01 |
This query calculates the rate of span duration per second.
In this example, the `rate` aggregation calculates the rate of HTTP request duration per second, which can be useful for detecting an increase in malicious requests.
**Query**
```kusto
['sample-http-logs']
| summarize rate(req_duration_ms) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(req_duration_ms\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ---------- | ------------------- |
| 240.668 ms | 2024-01-01 12:00:00 |
| 264.17 ms | 2024-01-01 12:00:01 |
This query calculates the rate of HTTP request duration per second.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Returns the total number of records. Use `count` when you want an absolute total instead of a rate over time.
* [**sum**](/apl/aggregation-function/sum): Returns the sum of values in a field. Use `sum` when you want to aggregate the total value, not its rate of change.
* [**avg**](/apl/aggregation-function/avg): Returns the average value of a field. Use `avg` when you want to know the mean value rather than how it changes over time.
* [**max**](/apl/aggregation-function/max): Returns the maximum value of a field. Use `max` when you need to find the peak value instead of how often or quickly something occurs.
* [**min**](/apl/aggregation-function/min): Returns the minimum value of a field. Use `min` when you’re looking for the lowest value rather than a rate.
# Aggregation functions
Source: https://axiom.co/docs/apl/aggregation-function/statistical-functions
This section explains how to use and combine different aggregation functions in APL.
The table summarizes the aggregation functions available in APL. Use all these aggregation functions in the context of the [summarize operator](/apl/tabular-operators/summarize-operator).
| Function | Description |
| --------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| [arg\_min](/apl/aggregation-function/arg-min) | Returns the row where an expression evaluates to the minimum value. |
| [arg\_max](/apl/aggregation-function/arg-max) | Returns the row where an expression evaluates to the maximum value. |
| [avg](/apl/aggregation-function/avg) | Returns an average value across the group. |
| [avgif](/apl/aggregation-function/avgif) | Calculates the average value of an expression in records for which the predicate evaluates to true. |
| [count](/apl/aggregation-function/count) | Returns a count of the group without/with a predicate. |
| [countif](/apl/aggregation-function/countif) | Returns a count of rows for which the predicate evaluates to true. |
| [dcount](/apl/aggregation-function/dcount) | Returns an estimate for the number of distinct values that are taken by a scalar expression in the summary group. |
| [dcountif](/apl/aggregation-function/dcountif) | Returns an estimate of the number of distinct values of an expression of rows for which the predicate evaluates to true. |
| [histogram](/apl/aggregation-function/histogram) | Returns a timeseries heatmap chart across the group. |
| [make\_list](/apl/aggregation-function/make-list) | Creates a dynamic JSON object (array) of all the values of an expression in the group. |
| [make\_list\_if](/apl/aggregation-function/make-list-if) | Creates a dynamic JSON object (array) of an expression values in the group for which the predicate evaluates to true. |
| [make\_set](/apl/aggregation-function/make-set) | Creates a dynamic JSON array of the set of distinct values that an expression takes in the group. |
| [make\_set\_if](/apl/aggregation-function/make-set-if) | Creates a dynamic JSON object (array) of the set of distinct values that an expression takes in records for which the predicate evaluates to true. |
| [max](/apl/aggregation-function/max) | Returns the maximum value across the group. |
| [maxif](/apl/aggregation-function/maxif) | Calculates the maximum value of an expression in records for which the predicate evaluates to true. |
| [min](/apl/aggregation-function/min) | Returns the minimum value across the group. |
| [minif](/apl/aggregation-function/minif) | Returns the minimum of an expression in records for which the predicate evaluates to true. |
| [percentile](/apl/aggregation-function/percentile) | Calculates the requested percentiles of the group and produces a timeseries chart. |
| [percentileif](/apl/aggregation-function/percentileif) | Calculates the requested percentiles of the field for the rows where the predicate evaluates to true. |
| [percentiles\_array](/apl/aggregation-function/percentiles-array) | Returns an array of numbers where each element is the value at the corresponding percentile. |
| [percentiles\_arrayif](/apl/aggregation-function/percentiles-arrayif) | Returns an array of percentile values for the records that satisfy the condition. |
| [rate](/apl/aggregation-function/rate) | Calculates the rate of values in a group per second. |
| [stdev](/apl/aggregation-function/stdev) | Calculates the standard deviation of an expression across the group. |
| [stdevif](/apl/aggregation-function/stdevif) | Calculates the standard deviation of an expression in records for which the predicate evaluates to true. |
| [sum](/apl/aggregation-function/sum) | Calculates the sum of an expression across the group. |
| [sumif](/apl/aggregation-function/sumif) | Calculates the sum of an expression in records for which the predicate evaluates to true. |
| [topk](/apl/aggregation-function/topk) | Calculates the top values of an expression across the group in a dataset. |
| [topkif](/apl/aggregation-function/topkif) | Calculates the top values of an expression in records for which the predicate evaluates to true. |
| [variance](/apl/aggregation-function/variance) | Calculates the variance of an expression across the group. |
| [varianceif](/apl/aggregation-function/varianceif) | Calculates the variance of an expression in records for which the predicate evaluates to true. |
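You can combine several of these aggregation functions in a single `summarize` statement. The following minimal sketch uses the `sample-http-logs` dataset and fields referenced throughout this section; the column aliases are illustrative:
```kusto
['sample-http-logs']
| summarize request_count = count(), avg_duration = avg(req_duration_ms), max_duration = max(req_duration_ms), p95_duration = percentile(req_duration_ms, 95) by status
```
This computes the request count, average, maximum, and 95th percentile of request durations for each HTTP status code in one pass.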
# stdev
Source: https://axiom.co/docs/apl/aggregation-function/stdev
This page explains how to use the stdev aggregation function in APL.
The `stdev` aggregation in APL computes the standard deviation of a numeric field within a dataset. This is useful for understanding the variability or dispersion of data points around the mean. You can apply this aggregation to various use cases, such as performance monitoring, anomaly detection, and statistical analysis of logs and traces.
Use the `stdev` function to determine how spread out values like request duration, span duration, or response times are. This is particularly helpful when analyzing data trends and identifying inconsistencies, outliers, or abnormal behavior.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use the `stdev` function with the `stats` command. The APL `stdev` aggregation works similarly, with only minor differences in syntax.
```sql Splunk example
| stats stdev(duration) as duration_std
```
```kusto APL equivalent
['dataset']
| summarize duration_std = stdev(duration)
```
In ANSI SQL, the standard deviation is computed using the `STDDEV` function. APL's `stdev` function is the direct equivalent of SQL’s `STDDEV`, although APL uses pipes (`|`) for chaining operations and different keyword formatting.
```sql SQL example
SELECT STDDEV(duration) AS duration_std FROM dataset;
```
```kusto APL equivalent
['dataset']
| summarize duration_std = stdev(duration)
```
## Usage
### Syntax
```kusto
stdev(numeric_field)
```
### Parameters
* **`numeric_field`**: The field containing numeric values for which the standard deviation is calculated.
### Returns
The `stdev` aggregation returns a single numeric value representing the standard deviation of the specified numeric field in the dataset.
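Standard deviation is most informative when you read it alongside the mean. The following sketch, based on the fields used in the examples below, returns both so you can compare variability with the central tendency for each HTTP method; the column aliases are illustrative:
```kusto
['sample-http-logs']
| summarize avg_duration = avg(req_duration_ms), stdev_duration = stdev(req_duration_ms) by method
```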
## Use case examples
You can use the `stdev` aggregation to analyze HTTP request durations and identify performance variations across different requests. For instance, you can calculate the standard deviation of request durations to identify potential anomalies.
**Query**
```kusto
['sample-http-logs']
| summarize req_duration_std = stdev(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20req_duration_std%20%3D%20stdev\(req_duration_ms\)%22%7D)
**Output**
| req\_duration\_std |
| ------------------ |
| 345.67 |
This query calculates the standard deviation of the `req_duration_ms` field in the `sample-http-logs` dataset, helping to understand how much variability there is in request durations.
In distributed tracing, calculating the standard deviation of span durations can help identify inconsistent spans that might indicate performance issues or bottlenecks.
**Query**
```kusto
['otel-demo-traces']
| summarize span_duration_std = stdev(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20span_duration_std%20%3D%20stdev\(duration\)%22%7D)
**Output**
| span\_duration\_std |
| ------------------- |
| 0:00:02.456 |
This query computes the standard deviation of span durations in the `otel-demo-traces` dataset, providing insight into how much variation exists between trace spans.
In security logs, the `stdev` function can help analyze the response times of various HTTP requests, potentially identifying patterns that might be related to security incidents or abnormal behavior.
**Query**
```kusto
['sample-http-logs']
| summarize resp_time_std = stdev(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20resp_time_std%20%3D%20stdev\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | resp\_time\_std |
| ------ | --------------- |
| 200 | 123.45 |
| 500 | 567.89 |
This query calculates the standard deviation of request durations grouped by the HTTP status code, providing insight into the performance of different status codes.
## List of related aggregations
* [**avg**](/apl/aggregation-function/avg): Calculates the average value of a numeric field. Use `avg` to understand the central tendency of the data.
* [**min**](/apl/aggregation-function/min): Returns the smallest value in a numeric field. Use `min` when you need to find the minimum value.
* [**max**](/apl/aggregation-function/max): Returns the largest value in a numeric field. Use `max` to identify the peak value in a dataset.
* [**sum**](/apl/aggregation-function/sum): Adds up all the values in a numeric field. Use `sum` to get a total across records.
* [**count**](/apl/aggregation-function/count): Returns the number of records in a dataset. Use `count` when you need the number of occurrences or entries.
# stdevif
Source: https://axiom.co/docs/apl/aggregation-function/stdevif
This page explains how to use the stdevif aggregation function in APL.
The `stdevif` aggregation function in APL computes the standard deviation of values in a group based on a specified condition. This is useful when you want to calculate variability in data, but only for rows that meet a particular condition. For example, you can use `stdevif` to find the standard deviation of response times in an HTTP log, but only for requests that resulted in a 200 status code.
The `stdevif` function is useful when you want to analyze the spread of data values filtered by specific criteria, such as analyzing request durations in successful transactions or monitoring trace durations of specific services in OpenTelemetry data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `stdev` function is used to calculate the standard deviation, but you need to use an `if` function or a `where` clause to filter data. APL simplifies this by combining both operations in `stdevif`.
```sql Splunk example
| stats stdev(req_duration_ms) as stdev_req where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == "200") by ['geo.country']
```
In ANSI SQL, the `STDDEV` function is used to compute the standard deviation, but it requires the use of a `CASE WHEN` expression to apply a conditional filter. APL integrates the condition directly into the `stdevif` function.
```sql SQL example
SELECT STDDEV(CASE WHEN status = '200' THEN req_duration_ms END)
FROM sample_http_logs
GROUP BY geo.country;
```
```kusto APL equivalent
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == "200") by ['geo.country']
```
## Usage
### Syntax
```kusto
summarize stdevif(column, condition)
```
### Parameters
* **column**: The column that contains the numeric values for which you want to calculate the standard deviation.
* **condition**: The condition that must be true for the values to be included in the standard deviation calculation.
### Returns
The `stdevif` function returns a floating-point number representing the standard deviation of the specified column for the rows that satisfy the condition.
## Use case examples
In this example, you calculate the standard deviation of request durations (`req_duration_ms`), but only for successful HTTP requests (status code 200).
**Query**
```kusto
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == '200') by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%20by%20%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | stdev\_req\_duration\_ms |
| ----------- | ------------------------ |
| US | 120.45 |
| Canada | 98.77 |
| Germany | 134.92 |
This query calculates the standard deviation of request durations for HTTP 200 responses, grouped by country.
In this example, you calculate the standard deviation of span durations, but only for traces from the `frontend` service.
**Query**
```kusto
['otel-demo-traces']
| summarize stdevif(duration, ['service.name'] == "frontend") by kind
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20stdevif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%20by%20kind%22%7D)
**Output**
| kind | stdev\_duration |
| ------ | --------------- |
| server | 45.78 |
| client | 23.54 |
This query computes the standard deviation of span durations for the `frontend` service, grouped by span type (`kind`).
In this example, you calculate the standard deviation of request durations for security events from specific HTTP methods, filtered by `POST` requests.
**Query**
```kusto
['sample-http-logs']
| summarize stdevif(req_duration_ms, method == "POST") by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | stdev\_req\_duration\_ms |
| -------- | ------------------------ |
| New York | 150.12 |
| Berlin | 130.33 |
This query calculates the standard deviation of request durations for `POST` HTTP requests, grouped by the originating city.
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Similar to `stdevif`, but instead of calculating the standard deviation, `avgif` computes the average of values that meet the condition.
* [**sumif**](/apl/aggregation-function/sumif): Computes the sum of values that meet the condition. Use `sumif` when you want to aggregate total values instead of analyzing data spread.
* [**varianceif**](/apl/aggregation-function/varianceif): Returns the variance of values that meet the condition, which is a measure of how spread out the data points are.
* [**countif**](/apl/aggregation-function/countif): Counts the number of rows that satisfy the specified condition.
* [**minif**](/apl/aggregation-function/minif): Retrieves the minimum value that satisfies the given condition, useful when finding the smallest value in filtered data.
# sum
Source: https://axiom.co/docs/apl/aggregation-function/sum
This page explains how to use the sum aggregation function in APL.
The `sum` aggregation in APL is used to compute the total sum of a specific numeric field in a dataset. This aggregation is useful when you want to find the cumulative value for a certain metric, such as the total duration of requests, total sales revenue, or any other numeric field that can be summed.
You can use the `sum` aggregation in a wide range of scenarios, such as analyzing log data, monitoring traces, or examining security logs. It is particularly helpful when you want to get a quick overview of your data in terms of totals or cumulative statistics.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you use the `sum` function in combination with the `stats` command to aggregate data. In APL, the `sum` aggregation works similarly but is structured differently in terms of syntax.
```splunk Splunk example
| stats sum(req_duration_ms) as total_duration
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
In ANSI SQL, the `SUM` function is commonly used with the `GROUP BY` clause to aggregate data by a specific field. In APL, the `sum` function works similarly but can be used without requiring a `GROUP BY` clause for simple summations.
```sql SQL example
SELECT SUM(req_duration_ms) AS total_duration
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize [ColumnName =] sum(Expression)
```
### Parameters
* `ColumnName`: (Optional) The name you want to assign to the resulting column that contains the sum.
* `Expression`: The field in your dataset that contains the numeric values you want to sum.
### Returns
The `sum` aggregation returns a single row with the sum of the specified numeric field. If used with a `by` clause, it returns multiple rows with the sum per group.
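For example, the following minimal sketch groups the sum by a field to return one total per group; it uses the fields from the examples below and an illustrative column alias:
```kusto
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms) by method
```
This returns the total request duration for each HTTP method.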
## Use case examples
The `sum` aggregation can be used to calculate the total request duration in an HTTP log dataset.
**Query**
```kusto
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(req_duration_ms\)%22%7D)
**Output**
| total\_duration |
| --------------- |
| 123456 |
This query calculates the total request duration across all HTTP requests in the dataset.
The `sum` aggregation can be applied to OpenTelemetry traces to calculate the total span duration.
**Query**
```kusto
['otel-demo-traces']
| summarize total_duration = sum(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(duration\)%22%7D)
**Output**
| total\_duration |
| --------------- |
| 7890 |
This query calculates the total duration of all spans in the dataset.
You can use the `sum` aggregation to calculate the total number of requests based on a specific HTTP status in security logs.
**Query**
```kusto
['sample-http-logs']
| where status == '200'
| summarize request_count = sum(1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20request_count%20%3D%20sum\(1\)%22%7D)
**Output**
| request\_count |
| -------------- |
| 500 |
This query counts the total number of successful requests (status 200) in the dataset.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts the number of records in a dataset. Use `count` when you want to count the number of rows, not aggregate numeric values.
* [**avg**](/apl/aggregation-function/avg): Computes the average value of a numeric field. Use `avg` when you need to find the mean instead of the total sum.
* [**min**](/apl/aggregation-function/min): Returns the minimum value of a numeric field. Use `min` when you're interested in the lowest value.
* [**max**](/apl/aggregation-function/max): Returns the maximum value of a numeric field. Use `max` when you're interested in the highest value.
* [**sumif**](/apl/aggregation-function/sumif): Sums a numeric field conditionally. Use `sumif` when you only want to sum values that meet a specific condition.
# sumif
Source: https://axiom.co/docs/apl/aggregation-function/sumif
This page explains how to use the sumif aggregation function in APL.
The `sumif` aggregation function in Axiom Processing Language (APL) computes the sum of a numeric expression for records that meet a specified condition. This function is useful when you want to filter data based on specific criteria and aggregate the numeric values that match the condition. Use `sumif` when you need to apply conditional logic to sums, such as calculating the total request duration for successful HTTP requests or summing the span durations in OpenTelemetry traces for a specific service.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `sumif` equivalent functionality requires using a `stats` command with a `where` clause to filter the data. In APL, you can use `sumif` to simplify this operation by combining both the condition and the summing logic into one function.
```sql Splunk example
| stats sum(duration) as total_duration where status="200"
```
```kusto APL equivalent
summarize total_duration = sumif(duration, status == '200')
```
In ANSI SQL, achieving a similar result typically involves using a `CASE` statement inside the `SUM` function to conditionally sum values based on a specified condition. In APL, `sumif` provides a more concise approach by allowing you to filter and sum in a single function.
```sql SQL example
SELECT SUM(CASE WHEN status = '200' THEN duration ELSE 0 END) AS total_duration
FROM http_logs
```
```kusto APL equivalent
summarize total_duration = sumif(duration, status == '200')
```
## Usage
### Syntax
```kusto
sumif(numeric_expression, condition)
```
### Parameters
* `numeric_expression`: The numeric field or expression you want to sum.
* `condition`: A boolean expression that determines which records contribute to the sum. Only the records that satisfy the condition are considered.
### Returns
`sumif` returns the sum of the values in `numeric_expression` for records where the `condition` is true. If no records meet the condition, the result is 0.
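You can also compute several conditional sums side by side in one `summarize` statement. The following sketch, using the fields from the examples below, compares the total duration of successful and failed requests over time; the column aliases are illustrative:
```kusto
['sample-http-logs']
| summarize ok_duration = sumif(req_duration_ms, status == '200'), failed_duration = sumif(req_duration_ms, status != '200') by bin_auto(_time)
```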
## Use case examples
In this use case, we calculate the total request duration for HTTP requests that returned a `200` status code.
**Query**
```kusto
['sample-http-logs']
| summarize total_req_duration = sumif(req_duration_ms, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration%20%3D%20sumif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%22%7D)
**Output**
| total\_req\_duration |
| -------------------- |
| 145000 |
This query computes the total request duration (in milliseconds) for all successful HTTP requests (those with a status code of `200`).
In this example, we sum the span durations for the `frontend` service in OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| summarize total_duration = sumif(duration, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20total_duration%20%3D%20sumif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%22%7D)
**Output**
| total\_duration |
| --------------- |
| 32000 |
This query sums the span durations for traces related to the `frontend` service, providing insight into the total time spent in this service’s spans.
Here, we calculate the total request duration for failed HTTP requests (those with status codes other than `200`).
**Query**
```kusto
['sample-http-logs']
| summarize total_req_duration_failed = sumif(req_duration_ms, status != '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration_failed%20%3D%20sumif%28req_duration_ms%2C%20status%20%21%3D%20%27200%27%29%22%7D)
**Output**
| total\_req\_duration\_failed |
| ---------------------------- |
| 64000 |
This query computes the total request duration for all failed HTTP requests (where the status code is not `200`), which can be useful for security log analysis.
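You can also combine `sumif` with an unconditional `sum` in the same `summarize` and derive a ratio with `extend`. The sketch below estimates what share of the total request time is spent on failed requests; the column names are illustrative.
```kusto
['sample-http-logs']
| summarize failed_duration = sumif(req_duration_ms, status != '200'),
            total_duration = sum(req_duration_ms)
| extend failed_share_pct = 100.0 * failed_duration / total_duration
```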
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Computes the average of a numeric expression for records that meet a specified condition. Use `avgif` when you're interested in the average value, not the total sum.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a condition. Use `countif` when you need to know how many records match a specific criterion.
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value of a numeric expression for records that meet a condition. Useful when you need the smallest value under certain criteria.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a numeric expression for records that meet a condition. Use `maxif` to identify the highest values under certain conditions.
# topk
Source: https://axiom.co/docs/apl/aggregation-function/topk
This page explains how to use the topk aggregation function in APL.
The `topk` aggregation in Axiom Processing Language (APL) allows you to identify the top `k` results based on a specified field. This is especially useful when you want to quickly analyze large datasets and extract the most significant values, such as the top-performing queries, most frequent errors, or highest latency requests.
Use `topk` to find the most common or relevant entries in datasets, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you focus on the most important data points, filtering out the noise.
The `topk` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `topk` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
For completely accurate results, use the [`top` operator](/apl/tabular-operators/top-operator).
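For example, if exact counts matter more than speed, the ranking in the first use case below can be computed precisely by counting per value and then applying the `top` operator. This is a sketch that reuses the `sample-http-logs` fields; the intermediate column name `n` is illustrative.
```kusto
['sample-http-logs']
| summarize n = count() by status
| top 5 by n
```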
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL doesn’t have an equivalent of the `topk` function. You can achieve similar results with SPL’s `top` command, which is equivalent to APL’s `top` operator. The `topk` function in APL behaves similarly by returning the top `k` values of a specified field, but its syntax is unique to APL.
The main difference between `top` (supported by both SPL and APL) and `topk` (supported only by APL) is that `topk` is estimated. This means that APL’s `topk` is faster and less resource intensive, but less accurate than SPL’s `top`.
```sql Splunk example
| top limit=5 status by method
```
```kusto APL equivalent
['sample-http-logs']
| summarize topk(status, 5) by method
```
In ANSI SQL, identifying the top `k` rows often involves using the `ORDER BY` and `LIMIT` clauses. While the logic remains similar, APL’s `topk` simplifies this process by directly returning the top `k` values of a field in an aggregation.
The main difference between SQL’s solution and APL’s `topk` is that `topk` is estimated. This means that APL’s `topk` is faster and less resource intensive, but less accurate than SQL’s combination of `ORDER BY` and `LIMIT` clauses.
```sql SQL example
SELECT status, COUNT(*)
FROM sample_http_logs
GROUP BY status
ORDER BY COUNT(*) DESC
LIMIT 5;
```
```kusto APL equivalent
['sample-http-logs']
| summarize topk(status, 5)
```
## Usage
### Syntax
```kusto
topk(Field, k)
```
### Parameters
* `Field`: The field or expression to rank the results by.
* `k`: The number of top results to return.
### Returns
A subset of the original dataset with the top `k` values based on the specified field.
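Because `topk` is an aggregation, you can also scope it with a `by` clause, for example to track the most frequent values per time bucket. The sketch below assumes the dataset’s standard `_time` field and an illustrative one-hour bin size.
```kusto
['sample-http-logs']
| summarize topk(status, 3) by bin(_time, 1h)
```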
## Use case examples
When analyzing HTTP logs, you can use the `topk` function to find the top 5 most frequent HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize topk(status, 5)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(status%2C%205\)%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 1500 |
| 404 | 400 |
| 500 | 200 |
| 301 | 150 |
| 302 | 100 |
This query groups the logs by HTTP status and returns the 5 most frequent statuses.
In OpenTelemetry traces, you can use `topk` to find the top five status codes by service.
**Query**
```kusto
['otel-demo-traces']
| summarize topk(['attributes.http.status_code'], 5) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20topk\(%5B'attributes.http.status_code'%5D%2C%205\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | attributes.http.status\_code | \_count |
| ------------- | ---------------------------- | ---------- |
| frontendproxy | 200 | 34,862,088 |
| | 203 | 3,095,223 |
| | 404 | 154,417 |
| | 500 | 153,823 |
| | 504 | 3,497 |
This query shows the top five status codes by service.
You can use `topk` in security log analysis to find the top 5 cities generating the most HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize topk(['geo.city'], 5)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(%5B'geo.city'%5D%2C%205\)%22%7D)
**Output**
| geo.city | count\_ |
| -------- | ------- |
| New York | 500 |
| London | 400 |
| Paris | 350 |
| Tokyo | 300 |
| Berlin | 250 |
This query returns the top 5 cities based on the number of HTTP requests.
## List of related aggregations
* [top](/apl/tabular-operators/top-operator): Returns the top values based on a field with accurate (non-estimated) results. Use `top` when precision matters more than speed.
* [topkif](/apl/aggregation-function/topkif): Returns the top `k` results for records that meet a condition. Use `topkif` when you want to restrict the ranking to a filtered subset of your data.
* [sort](/apl/tabular-operators/sort-operator): Orders the dataset based on one or more fields, which is useful if you need a complete ordered list rather than the top `k` values.
* [extend](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, which can be useful in combination with `topk` to create custom rankings.
* [count](/apl/aggregation-function/count): Aggregates the dataset by counting occurrences, often used in conjunction with `topk` to find the most common values.
# topkif
Source: https://axiom.co/docs/apl/aggregation-function/topkif
This page explains how to use the topkif aggregation in APL.
The `topkif` aggregation in Axiom Processing Language (APL) allows you to identify the top `k` values based on a specified field, while also applying a filter on another field. Use `topkif` when you want to find the most significant entries that meet specific criteria, such as the top-performing queries from a particular service, the most frequent errors for a specific HTTP method, or the highest latency requests from a specific country.
Use `topkif` when you need to focus on the most important filtered subsets of data, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you quickly zoom in on significant values without scanning the entire dataset.
The `topkif` aggregation in APL is a statistical aggregation that returns estimated results. The estimation provides the benefit of speed at the expense of precision. This means that `topkif` is fast and light on resources even on large or high-cardinality datasets but does not provide completely accurate results.
For completely accurate results, use the [`top` operator](/apl/tabular-operators/top-operator) together with a filter.
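For example, an exact version of the first use case below filters with `where`, counts per value, and then applies the `top` operator. This is a sketch reusing the `sample-http-logs` fields; the intermediate column name `n` is illustrative.
```kusto
['sample-http-logs']
| where method == 'GET'
| summarize n = count() by status
| top 5 by n
```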
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL does not have a direct equivalent to the `topkif` function. You can achieve similar results by using the `top` command combined with a `where` clause, which is closer to using APL’s `top` operator with a filter. However, APL’s `topkif` provides a more optimized, estimated solution when you want speed and efficiency.
```sql Splunk example
| where method="GET" | top limit=5 status
```
```kusto APL equivalent
['sample-http-logs']
| summarize topkif(status, 5, method == 'GET')
```
In ANSI SQL, identifying the top `k` rows filtered by a condition often involves a `WHERE` clause followed by `ORDER BY` and `LIMIT`. APL’s `topkif` simplifies this by combining the filtering and top-k selection in one function.
```sql SQL example
SELECT status, COUNT(*)
FROM sample_http_logs
WHERE method = 'GET'
GROUP BY status
ORDER BY COUNT(*) DESC
LIMIT 5;
```
```kusto APL equivalent
['sample-http-logs']
| summarize topkif(status, 5, method == 'GET')
```
## Usage
### Syntax
```kusto
topkif(Field, k, Condition)
```
### Parameters
* `Field`: The field or expression to rank the results by.
* `k`: The number of top results to return.
* `Condition`: A logical expression that specifies the filtering condition.
### Returns
A subset of the original dataset containing the top `k` values based on the specified field, after applying the filter condition.
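Like `topk`, `topkif` can be combined with a `by` clause to compute a filtered top-k per group. The following sketch reuses fields from the use case examples below; grouping by `['geo.country']` is illustrative.
```kusto
['sample-http-logs']
| summarize topkif(status, 3, method == 'GET') by ['geo.country']
```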
## Use case examples
Use `topkif` when analyzing HTTP logs to find the top 5 most frequent HTTP status codes for GET requests.
**Query**
```kusto
['sample-http-logs']
| summarize topkif(status, 5, method == 'GET')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20topkif\(status%2C%205%2C%20method%20%3D%3D%20'GET'\)%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 900 |
| 404 | 250 |
| 500 | 100 |
| 301 | 90 |
| 302 | 60 |
This query groups GET requests by HTTP status and returns the 5 most frequent statuses.
Use `topkif` in OpenTelemetry traces to find the top five services among spans of kind `server`.
**Query**
```kusto
['otel-demo-traces']
| summarize topkif(['service.name'], 5, kind == 'server')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20topkif\(%5B'service.name'%5D%2C%205%2C%20kind%20%3D%3D%20'server'\)%22%7D)
**Output**
| service.name | count\_ |
| --------------- | ------- |
| frontend-proxy | 99,573 |
| frontend | 91,800 |
| product-catalog | 29,696 |
| image-provider | 25,223 |
| flagd | 10,336 |
This query shows the top five services among spans of kind `server`.
Use `topkif` in security log analysis to find the top 5 cities generating GET HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize topkif(['geo.city'], 5, method == 'GET')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20topkif\(%5B'geo.city'%5D%2C%205%2C%20method%20%3D%3D%20'GET'\)%22%7D)
**Output**
| geo.city | count\_ |
| -------- | ------- |
| New York | 300 |
| London | 250 |
| Paris | 200 |
| Tokyo | 180 |
| Berlin | 160 |
This query returns the top 5 cities generating the most GET HTTP requests.
## List of related aggregations
* [topk](/apl/aggregation-function/topk): Returns the top `k` results without filtering. Use `topk` when you do not need to restrict your analysis to a subset.
* [top](/apl/tabular-operators/top-operator): Returns the top results based on a field with accurate results. Use `top` when precision is important.
* [sort](/apl/tabular-operators/sort-operator): Sorts the dataset based on one or more fields. Use `sort` if you need fully ordered results.
* [extend](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, useful before applying `topkif` to create new fields to rank.
* [count](/apl/aggregation-function/count): Counts occurrences in the dataset. Use `count` when you only need counts without focusing on the top entries.
# variance
Source: https://axiom.co/docs/apl/aggregation-function/variance
This page explains how to use the variance aggregation function in APL.
The `variance` aggregation function in APL calculates the variance of a numeric expression across a set of records. Variance is a statistical measurement that represents the spread of data points in a dataset. It's useful for understanding how much variation exists in your data. In scenarios such as performance analysis, network traffic monitoring, or anomaly detection, `variance` helps identify outliers and patterns by showing how data points deviate from the mean.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In SPL, variance is computed using the `stats` command with the `var` function, whereas in APL, you can use `variance` for the same functionality.
```sql Splunk example
| stats var(req_duration_ms) as variance
```
```kusto APL equivalent
['sample-http-logs']
| summarize variance(req_duration_ms)
```
In ANSI SQL, variance is typically calculated using `VAR_POP` or `VAR_SAMP`. APL provides a simpler approach using the `variance` function without needing to specify population or sample.
```sql SQL example
SELECT VAR_POP(req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize variance(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize variance(Expression)
```
### Parameters
* `Expression`: A numeric expression or field for which you want to compute the variance. The expression should evaluate to a numeric data type.
### Returns
The function returns the variance (a numeric value) of the specified expression across the records.
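Variance is easiest to interpret alongside the mean, and you can compute both in a single `summarize`. The sketch below returns the average and the variance of request durations in one pass; the result names are illustrative.
```kusto
['sample-http-logs']
| summarize avg_duration = avg(req_duration_ms), variance_duration = variance(req_duration_ms)
```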
## Use case examples
You can use the `variance` function to measure the variability of request durations, which helps in identifying performance bottlenecks or anomalies in web services.
**Query**
```kusto
['sample-http-logs']
| summarize variance(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%22%7D)
**Output**
| variance\_req\_duration\_ms |
| --------------------------- |
| 1024.5 |
This query calculates the variance of request durations from a dataset of HTTP logs. A high variance indicates greater variability in request durations, potentially signaling performance issues.
For OpenTelemetry traces, `variance` can be used to measure how much span durations differ across service invocations, helping in performance optimization and anomaly detection.
**Query**
```kusto
['otel-demo-traces']
| summarize variance(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20variance\(duration\)%22%7D)
**Output**
| variance\_duration |
| ------------------ |
| 1287.3 |
This query computes the variance of span durations across traces, which helps in understanding how consistent the service performance is. A higher variance might indicate unstable or inconsistent performance.
You can use the `variance` function on security logs to detect abnormal patterns in request behavior, such as unusual fluctuations in response times, which may point to potential security threats.
**Query**
```kusto
['sample-http-logs']
| summarize variance(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | variance\_req\_duration\_ms |
| ------ | --------------------------- |
| 200 | 1534.8 |
| 404 | 2103.4 |
This query calculates the variance of request durations grouped by HTTP status codes. High variance in certain status codes (e.g., 404 errors) can indicate network or application issues.
## List of related aggregations
* [**stdev**](/apl/aggregation-function/stdev): Computes the standard deviation, which is the square root of the variance. Use `stdev` when you need the spread of data in the same units as the original dataset.
* [**avg**](/apl/aggregation-function/avg): Computes the average of a numeric field. Combine `avg` with `variance` to analyze both the central tendency and the spread of data.
* [**count**](/apl/aggregation-function/count): Counts the number of records. Use `count` alongside `variance` to get a sense of data size relative to variance.
* [**percentile**](/apl/aggregation-function/percentile): Returns a value below which a given percentage of observations fall. Use `percentile` for a more detailed distribution analysis.
* [**max**](/apl/aggregation-function/max): Returns the maximum value. Use `max` when you are looking for extreme values in addition to variance to detect anomalies.
# varianceif
Source: https://axiom.co/docs/apl/aggregation-function/varianceif
This page explains how to use the varianceif aggregation function in APL.
The `varianceif` aggregation in APL calculates the variance of values that meet a specified condition. This is useful when you want to understand the variability of a subset of data without considering all data points. For example, you can use `varianceif` to compute the variance of request durations for HTTP requests that resulted in a specific status code or to track anomalies in trace durations for a particular service.
You can use the `varianceif` aggregation when analyzing logs, telemetry data, or security events where conditions on subsets of the data are critical to your analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you use `eval` to null out values that don’t meet the condition and then calculate the variance with `stats`. In APL, `varianceif` combines the filtering and aggregation into a single function, making your queries more concise.
```sql Splunk example
| eval filtered_var=if(status=="200",req_duration_ms,null())
| stats var(filtered_var)
```
```kusto APL equivalent
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
In ANSI SQL, you typically use a `CASE` statement to apply conditional logic and then compute the variance. In APL, `varianceif` simplifies this by combining both the condition and the aggregation.
```sql SQL example
SELECT VARIANCE(CASE WHEN status = '200' THEN req_duration_ms END)
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
## Usage
### Syntax
```kusto
summarize varianceif(Expr, Predicate)
```
### Parameters
* `Expr`: The expression (numeric) for which you want to calculate the variance.
* `Predicate`: A boolean condition that determines which records to include in the calculation.
### Returns
Returns the variance of `Expr` for the records where the `Predicate` is true. If no records match the condition, it returns `null`.
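You can evaluate several `varianceif` expressions in the same `summarize` to compare the spread of different subsets side by side, for example successful versus failed requests. This is a sketch; the result names are illustrative.
```kusto
['sample-http-logs']
| summarize variance_success = varianceif(req_duration_ms, status == '200'),
            variance_failed = varianceif(req_duration_ms, status != '200')
```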
## Use case examples
You can use the `varianceif` function to calculate the variance of HTTP request durations for requests that succeeded (`status == '200'`).
**Query**
```kusto
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20status%20%3D%3D%20'200'%29%22%7D)
**Output**
| varianceif\_req\_duration\_ms |
| ----------------------------- |
| 15.6 |
This query calculates the variance of request durations for all HTTP requests that returned a status code of 200 (successful requests).
You can use the `varianceif` function to monitor the variance in span durations for a specific service, such as the `frontend` service.
**Query**
```kusto
['otel-demo-traces']
| summarize varianceif(duration, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20varianceif%28duration%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'%29%22%7D)
**Output**
| varianceif\_duration |
| -------------------- |
| 32.7 |
This query calculates the variance in the duration of spans generated by the `frontend` service.
The `varianceif` function can also be used to track the variance in request durations for requests from a specific geographic region, such as those where `['geo.country'] == 'United States'`.
**Query**
```kusto
['sample-http-logs']
| summarize varianceif(req_duration_ms, ['geo.country'] == 'United States')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'%29%22%7D)
**Output**
| varianceif\_req\_duration\_ms |
| ----------------------------- |
| 22.9 |
This query calculates the variance in request durations for requests originating from the United States.
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Computes the average value of an expression for records that match a given condition. Use `avgif` when you want the average instead of variance.
* [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values that meet a specified condition. Use `sumif` when you're interested in totals, not variance.
* [**stdevif**](/apl/aggregation-function/stdevif): Returns the standard deviation of values based on a condition. Use `stdevif` when you want to measure dispersion using standard deviation instead of variance.
# All features of Axiom Processing Language (APL)
Source: https://axiom.co/docs/apl/apl-features
This page gives an overview about all the features of Axiom Processing Language (APL).
| Category | Feature | Description |
| :-------------------- | :-------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Aggregation function | [arg\_max](/apl/aggregation-function/arg-max) | Returns the row where an expression evaluates to the maximum value. |
| Aggregation function | [arg\_min](/apl/aggregation-function/arg-min) | Returns the row where an expression evaluates to the minimum value. |
| Aggregation function | [avg](/apl/aggregation-function/avg) | Returns an average value across the group. |
| Aggregation function | [avgif](/apl/aggregation-function/avgif) | Calculates the average value of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [count](/apl/aggregation-function/count) | Returns a count of the group without/with a predicate. |
| Aggregation function | [countif](/apl/aggregation-function/countif) | Returns a count of rows for which the predicate evaluates to true. |
| Aggregation function | [dcount](/apl/aggregation-function/dcount) | Returns an estimate for the number of distinct values that are taken by a scalar expression in the summary group. |
| Aggregation function | [dcountif](/apl/aggregation-function/dcountif) | Returns an estimate of the number of distinct values of an expression of rows for which the predicate evaluates to true. |
| Aggregation function | [histogram](/apl/aggregation-function/histogram) | Returns a timeseries heatmap chart across the group. |
| Aggregation function | [make\_list\_if](/apl/aggregation-function/make-list-if) | Creates a dynamic JSON object (array) of an expression values in the group for which the predicate evaluates to true. |
| Aggregation function | [make\_list](/apl/aggregation-function/make-list) | Creates a dynamic JSON object (array) of all the values of an expression in the group. |
| Aggregation function | [make\_set\_if](/apl/aggregation-function/make-set-if) | Creates a dynamic JSON object (array) of the set of distinct values that an expression takes in records for which the predicate evaluates to true. |
| Aggregation function | [make\_set](/apl/aggregation-function/make-set) | Creates a dynamic JSON array of the set of distinct values that an expression takes in the group. |
| Aggregation function | [max](/apl/aggregation-function/max) | Returns the maximum value across the group. |
| Aggregation function | [maxif](/apl/aggregation-function/maxif) | Calculates the maximum value of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [min](/apl/aggregation-function/min) | Returns the minimum value across the group. |
| Aggregation function | [minif](/apl/aggregation-function/minif) | Returns the minimum of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [percentile](/apl/aggregation-function/percentile) | Calculates the requested percentiles of the group and produces a timeseries chart. |
| Aggregation function | [percentileif](/apl/aggregation-function/percentileif) | Calculates the requested percentiles of the field for the rows where the predicate evaluates to true. |
| Aggregation function | [percentiles\_array](/apl/aggregation-function/percentiles-array) | Returns an array of numbers where each element is the value at the corresponding percentile. |
| Aggregation function | [percentiles\_arrayif](/apl/aggregation-function/percentiles-arrayif) | Returns an array of percentile values for the records that satisfy the condition. |
| Aggregation function | [rate](/apl/aggregation-function/rate) | Calculates the rate of values in a group per second. |
| Aggregation function | [stdev](/apl/aggregation-function/stdev) | Calculates the standard deviation of an expression across the group. |
| Aggregation function | [stdevif](/apl/aggregation-function/stdevif) | Calculates the standard deviation of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [sum](/apl/aggregation-function/sum) | Calculates the sum of an expression across the group. |
| Aggregation function | [sumif](/apl/aggregation-function/sumif) | Calculates the sum of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [topk](/apl/aggregation-function/topk) | Calculates the top values of an expression across the group in a dataset. |
| Aggregation function | [topkif](/apl/aggregation-function/topkif) | Calculates the top values of an expression in records for which the predicate evaluates to true. |
| Aggregation function | [variance](/apl/aggregation-function/variance) | Calculates the variance of an expression across the group. |
| Aggregation function | [varianceif](/apl/aggregation-function/varianceif) | Calculates the variance of an expression in records for which the predicate evaluates to true. |
| Array function | [array\_concat](/apl/scalar-functions/array-functions/array-concat) | Concatenates arrays into one. |
| Array function | [array\_extract](/apl/scalar-functions/array-functions/array-extract) | Extracts values from a nested array. |
| Array function | [array\_iff](/apl/scalar-functions/array-functions/array-iff) | Filters array by condition. |
| Array function | [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of) | Returns index of item in array. |
| Array function | [array\_length](/apl/scalar-functions/array-functions/array-length) | Returns length of array. |
| Array function | [array\_reverse](/apl/scalar-functions/array-functions/array-reverse) | Reverses array elements. |
| Array function | [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left) | Rotates array values to the left. |
| Array function | [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right) | Rotates array values to the right. |
| Array function | [array\_select\_dict](/apl/scalar-functions/array-functions/array-select-dict) | Selects dictionary from array of dictionaries. |
| Array function | [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left) | Shifts array values to the left. |
| Array function | [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right) | Shifts array values to the right. |
| Array function | [array\_slice](/apl/scalar-functions/array-functions/array-slice) | Returns slice of an array. |
| Array function | [array\_split](/apl/scalar-functions/array-functions/array-split) | Splits array by indices. |
| Array function | [array\_sum](/apl/scalar-functions/array-functions/array-sum) | Sums array elements. |
| Array function | [bag\_has\_key](/apl/scalar-functions/array-functions/bag-has-key) | Checks if dynamic object has a specific key. |
| Array function | [bag\_keys](/apl/scalar-functions/array-functions/bag-keys) | Returns keys of a dynamic property bag. |
| Array function | [bag\_pack](/apl/scalar-functions/array-functions/bag-pack) | Creates a dynamic property bag from key-value pairs. |
| Array function | [isarray](/apl/scalar-functions/array-functions/isarray) | Checks if value is an array. |
| Array function | [len](/apl/scalar-functions/array-functions/len) | Returns array or string length. |
| Array function | [pack\_array](/apl/scalar-functions/array-functions/pack-array) | Packs input into a dynamic array. |
| Array function | [pack\_dictionary](/apl/scalar-functions/array-functions/pack-dictionary) | Returns a dictionary from key-value mappings. |
| Array function | [strcat\_array](/apl/scalar-functions/array-functions/strcat-array) | Joins array elements into a string using a delimiter. |
| Conditional function | [case](/apl/scalar-functions/conditional-function#case) | Evaluates conditions and returns the first matched result. |
| Conditional function | [iff](/apl/scalar-functions/conditional-function#iff) | Returns one of two values based on predicate. |
| Conversion function | [dynamic\_to\_json](/apl/scalar-functions/conversion-functions#dynamic-to-json) | Converts dynamic value to JSON string. |
| Conversion function | [ensure\_field](/apl/scalar-functions/conversion-functions#ensure-field) | Returns value of field or typed null. |
| Conversion function | [isbool](/apl/scalar-functions/conversion-functions#isbool) | Checks if expression evaluates to boolean. |
| Conversion function | [toarray](/apl/scalar-functions/conversion-functions/toarray) | Converts to array. |
| Conversion function | [tobool](/apl/scalar-functions/conversion-functions#tobool) | Converts to boolean. |
| Conversion function | [todatetime](/apl/scalar-functions/conversion-functions#todatetime) | Converts to datetime. |
| Conversion function | [todouble](/apl/scalar-functions/conversion-functions#todouble%2C-toreal) | Converts to real. |
| Conversion function | [todynamic](/apl/scalar-functions/conversion-functions/todynamic) | Converts to dynamic. |
| Conversion function | [tohex](/apl/scalar-functions/conversion-functions#tohex) | Converts to hexadecimal string. |
| Conversion function | [toint](/apl/scalar-functions/conversion-functions#toint) | Converts to integer. |
| Conversion function | [tolong](/apl/scalar-functions/conversion-functions#tolong) | Converts to signed 64-bit long. |
| Conversion function | [toreal](/apl/scalar-functions/conversion-functions#todouble%2C-toreal) | Converts to real. |
| Conversion function | [tostring](/apl/scalar-functions/conversion-functions#tostring) | Converts to string. |
| Conversion function | [totimespan](/apl/scalar-functions/conversion-functions#totimespan) | Converts to timespan. |
| Datetime function | [ago](/apl/scalar-functions/datetime-functions#ago) | Subtracts timespan from current time. |
| Datetime function | [datetime\_add](/apl/scalar-functions/datetime-functions#datetime-add) | Adds amount to datetime. |
| Datetime function | [datetime\_diff](/apl/scalar-functions/datetime-functions#datetime-diff) | Difference between two datetimes. |
| Datetime function | [datetime\_part](/apl/scalar-functions/datetime-functions#datetime-part) | Extracts part of a datetime. |
| Datetime function | [dayofmonth](/apl/scalar-functions/datetime-functions#dayofmonth) | Day number in month. |
| Datetime function | [dayofweek](/apl/scalar-functions/datetime-functions#dayofweek) | Days since previous Sunday. |
| Datetime function | [dayofyear](/apl/scalar-functions/datetime-functions#dayofyear) | Day number in year. |
| Datetime function | [endofday](/apl/scalar-functions/datetime-functions#endofday) | Returns end of day. |
| Datetime function | [endofmonth](/apl/scalar-functions/datetime-functions#endofmonth) | Returns end of month. |
| Datetime function | [endofweek](/apl/scalar-functions/datetime-functions#endofweek) | Returns end of week. |
| Datetime function | [endofyear](/apl/scalar-functions/datetime-functions#endofyear) | Returns end of year. |
| Datetime function | [getmonth](/apl/scalar-functions/datetime-functions#getmonth) | Month of a datetime. |
| Datetime function | [getyear](/apl/scalar-functions/datetime-functions#getyear) | Year of a datetime. |
| Datetime function | [hourofday](/apl/scalar-functions/datetime-functions#hourofday) | Hour number of the day. |
| Datetime function | [monthofyear](/apl/scalar-functions/datetime-functions#monthofyear) | Month number of year. |
| Datetime function | [now](/apl/scalar-functions/datetime-functions#now) | Returns current UTC time. |
| Datetime function | [startofday](/apl/scalar-functions/datetime-functions#startofday) | Returns start of day. |
| Datetime function | [startofmonth](/apl/scalar-functions/datetime-functions#startofmonth) | Returns start of month. |
| Datetime function | [startofweek](/apl/scalar-functions/datetime-functions#startofweek) | Returns start of week. |
| Datetime function | [startofyear](/apl/scalar-functions/datetime-functions#startofyear) | Returns start of year. |
| Datetime function | [unixtime\_microseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime) | Converts microsecond Unix timestamp to datetime. |
| Datetime function | [unixtime\_milliseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime) | Converts millisecond Unix timestamp to datetime. |
| Datetime function | [unixtime\_nanoseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime) | Converts nanosecond Unix timestamp to datetime. |
| Datetime function | [unixtime\_seconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime) | Converts second Unix timestamp to datetime. |
| Hash function | [hash\_md5](/apl/scalar-functions/hash-functions#hash-md5) | Returns MD5 hash. |
| Hash function | [hash\_sha1](/apl/scalar-functions/hash-functions#hash-sha1) | Returns SHA1 hash. |
| Hash function | [hash\_sha256](/apl/scalar-functions/hash-functions#hash-sha256) | Returns SHA256 hash. |
| Hash function | [hash\_sha512](/apl/scalar-functions/hash-functions#hash-sha512) | Returns SHA512 hash. |
| Hash function | [hash](/apl/scalar-functions/hash-functions/hash) | Returns integer hash of input. |
| IP function | [format\_ipv4\_mask](/apl/scalar-functions/ip-functions/format-ipv4-mask) | Formats IPv4 and mask to CIDR. |
| IP function | [format\_ipv4](/apl/scalar-functions/ip-functions/format-ipv4) | Formats netmask into IPv4 string. |
| IP function | [geo\_info\_from\_ip\_address](/apl/scalar-functions/ip-functions/geo-info-from-ip-address) | Extracts geolocation from IP address. |
| IP function | [has\_any\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-any-ipv4-prefix) | Checks if IPv4 starts with any prefix. |
| IP function | [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4) | Checks if any of given IPv4s exist in column. |
| IP function | [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix) | Checks if IPv4 starts with specified prefix. |
| IP function | [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4) | Checks if IPv4 is valid and in source text. |
| IP function | [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare) | Compares two IPv4 addresses. |
| IP function | [ipv4\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-any-range) | Checks if IPv4 is in any specified range. |
| IP function | [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range) | Checks if IPv4 is in a given range. |
| IP function | [ipv4\_is\_match](/apl/scalar-functions/ip-functions/ipv4-is-match) | Matches IPv4 against a pattern. |
| IP function | [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private) | Checks if IPv4 is private. |
| IP function | [ipv4\_netmask\_suffix](/apl/scalar-functions/ip-functions/ipv4-netmask-suffix) | Extracts netmask suffix. |
| IP function | [ipv6\_compare](/apl/scalar-functions/ip-functions/ipv6-compare) | Compares two IPv6 addresses. |
| IP function | [ipv6\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv6-is-in-any-range) | Checks if IPv6 is in any range. |
| IP function | [ipv6\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv6-is-in-range) | Checks if IPv6 is in range. |
| IP function | [ipv6\_is\_match](/apl/scalar-functions/ip-functions/ipv6-is-match) | Checks if IPv6 matches pattern. |
| IP function | [parse\_ipv4\_mask](/apl/scalar-functions/ip-functions/parse-ipv4-mask) | Converts IPv4 and mask to long integer. |
| IP function | [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4) | Converts IPv4 to long integer. |
| Logical operator | [!=](/apl/scalar-operators/logical-operators) | Returns `true` if either one (or both) of the operands are null, or they are not equal to each other. Otherwise, `false`. |
| Logical operator | [==](/apl/scalar-operators/logical-operators) | Returns `true` if both operands are non-null and equal to each other. Otherwise, `false`. |
| Logical operator | [and](/apl/scalar-operators/logical-operators) | Returns `true` if both operands are `true`. |
| Logical operator | [or](/apl/scalar-operators/logical-operators) | Returns `true` if one of the operands is `true`, regardless of the other operand. |
| Mathematical function | [abs](/apl/scalar-functions/mathematical-functions#abs) | Returns absolute value. |
| Mathematical function | [acos](/apl/scalar-functions/mathematical-functions#acos) | Returns arccosine of a number. |
| Mathematical function | [asin](/apl/scalar-functions/mathematical-functions#asin) | Returns arcsine of a number. |
| Mathematical function | [atan](/apl/scalar-functions/mathematical-functions#atan) | Returns arctangent of a number. |
| Mathematical function | [atan2](/apl/scalar-functions/mathematical-functions#atan2) | Returns angle between x-axis and point (y, x). |
| Mathematical function | [cos](/apl/scalar-functions/mathematical-functions#cos) | Returns cosine of a number. |
| Mathematical function | [degrees](/apl/scalar-functions/mathematical-functions#degrees) | Converts radians to degrees. |
| Mathematical function | [exp](/apl/scalar-functions/mathematical-functions#exp) | Returns e^x. |
| Mathematical function | [exp10](/apl/scalar-functions/mathematical-functions#exp10) | Returns 10^x. |
| Mathematical function | [exp2](/apl/scalar-functions/mathematical-functions#exp2) | Returns 2^x. |
| Mathematical function | [gamma](/apl/scalar-functions/mathematical-functions#gamma) | Returns gamma function of x. |
| Mathematical function | [isinf](/apl/scalar-functions/mathematical-functions#isinf) | Returns `true` if x is infinite. |
| Mathematical function | [isint](/apl/scalar-functions/mathematical-functions#isint) | Returns `true` if x is an integer. |
| Mathematical function | [isnan](/apl/scalar-functions/mathematical-functions#isnan) | Returns `true` if x is NaN. |
| Mathematical function | [log](/apl/scalar-functions/mathematical-functions#log) | Returns natural logarithm of x. |
| Mathematical function | [log10](/apl/scalar-functions/mathematical-functions#log10) | Returns base-10 logarithm. |
| Mathematical function | [log2](/apl/scalar-functions/mathematical-functions#log2) | Returns base-2 logarithm. |
| Mathematical function | [loggamma](/apl/scalar-functions/mathematical-functions#loggamma) | Returns log of absolute gamma function. |
| Mathematical function | [max\_of](/apl/scalar-functions/mathematical-functions/max-of) | Returns largest value among arguments. |
| Mathematical function | [min\_of](/apl/scalar-functions/mathematical-functions/min-of) | Returns smallest value among arguments. |
| Mathematical function | [not](/apl/scalar-functions/mathematical-functions#not) | Reverses boolean value. |
| Mathematical function | [pi](/apl/scalar-functions/mathematical-functions#pi) | Returns value of Pi. |
| Mathematical function | [pow](/apl/scalar-functions/mathematical-functions#pow) | Returns value raised to a power. |
| Mathematical function | [radians](/apl/scalar-functions/mathematical-functions#radians) | Converts degrees to radians. |
| Mathematical function | [round](/apl/scalar-functions/mathematical-functions#round) | Rounds value to given precision. |
| Mathematical function | [set\_difference](/apl/scalar-functions/mathematical-functions/set-difference) | Returns array difference. |
| Mathematical function | [set\_has\_element](/apl/scalar-functions/mathematical-functions/set-has-element) | Returns `true` if set contains an element. |
| Mathematical function | [set\_intersect](/apl/scalar-functions/mathematical-functions/set-intersect) | Returns array intersection. |
| Mathematical function | [set\_union](/apl/scalar-functions/mathematical-functions/set-union) | Returns array union. |
| Mathematical function | [sign](/apl/scalar-functions/mathematical-functions#sign) | Returns sign of number. |
| Mathematical function | [sin](/apl/scalar-functions/mathematical-functions#sin) | Returns sine of a number. |
| Mathematical function | [sqrt](/apl/scalar-functions/mathematical-functions#sqrt) | Returns square root of a number. |
| Mathematical function | [tan](/apl/scalar-functions/mathematical-functions#tan) | Returns tangent of a number. |
| Numerical operator | [-](/apl/scalar-operators/numerical-operators) | Subtract. Example: `0.26 - 0.23` |
| Numerical operator | [!=](/apl/scalar-operators/numerical-operators) | Not equals. Example: `2 != 1` |
| Numerical operator | [!in](/apl/scalar-operators/numerical-operators) | Not equals to any of the elements. Example: `"bca" !in ("123", "345", "abc")` |
| Numerical operator | [\*](/apl/scalar-operators/numerical-operators) | Multiply. Example: `1s * 5`, `5 * 5` |
| Numerical operator | [/](/apl/scalar-operators/numerical-operators) | Divide. Example: `10m / 1s`, `4 / 2` |
| Numerical operator | [\<](/apl/scalar-operators/numerical-operators) | Less. Example: `1 < 2`, `1 <= 1` |
| Numerical operator | [\<=](/apl/scalar-operators/numerical-operators) | Less or Equal. Example: `5 <= 6` |
| Numerical operator | [%](/apl/scalar-operators/numerical-operators) | Modulo. Example: `10 % 3`, `5 % 2` |
| Numerical operator | [+](/apl/scalar-operators/numerical-operators) | Add. Example: `3.19 + 3.19`, `ago(10m) + 10m` |
| Numerical operator | [==](/apl/scalar-operators/numerical-operators) | Equals. Example: `3 == 3` |
| Numerical operator | [>](/apl/scalar-operators/numerical-operators) | Greater. Example: `0.23 > 0.22`, `now() > ago(1d)` |
| Numerical operator | [>=](/apl/scalar-operators/numerical-operators) | Greater or Equal. Example: `7 >= 6` |
| Numerical operator | [in](/apl/scalar-operators/numerical-operators) | Equals to one of the elements. Example: `"abc" in ("123", "345", "abc")` |
| Rounding function | [bin\_auto](/apl/scalar-functions/rounding-functions#bin-auto) | Rounds values down to a bin based on query-provided size and alignment. |
| Rounding function | [bin](/apl/scalar-functions/rounding-functions#bin) | Rounds values down to a bin size. |
| Rounding function | [ceiling](/apl/scalar-functions/rounding-functions#ceiling) | Returns the smallest integer greater than or equal to the specified number. |
| Rounding function | [floor](/apl/scalar-functions/rounding-functions#floor) | Returns the largest integer less than or equal to the specified number. |
| SQL function | [format\_sql](/apl/scalar-functions/sql-functions#format-sql) | Converts parsed SQL data model back into SQL statement. |
| SQL function | [parse\_sql](/apl/scalar-functions/sql-functions#parse-sql) | Parses and analyzes SQL queries. |
| String function | [base64\_decode\_tostring](/apl/scalar-functions/string-functions#base64-decode-tostring) | Decodes a base64 string to a UTF-8 string. |
| String function | [base64\_encode\_tostring](/apl/scalar-functions/string-functions#base64-encode-tostring) | Encodes a string as base64 string. |
| String function | [coalesce](/apl/scalar-functions/string-functions#coalesce) | Returns the first non-null/non-empty value from a list. |
| String function | [countof\_regex](/apl/scalar-functions/string-functions#countof-regex) | Counts occurrences of a regex in a string. |
| String function | [countof](/apl/scalar-functions/string-functions#countof) | Counts occurrences of a substring in a string. |
| String function | [extract\_all](/apl/scalar-functions/string-functions#extract-all) | Gets all matches for a regular expression from a text string. |
| String function | [extract](/apl/scalar-functions/string-functions#extract) | Gets a match for a regular expression from a text string. |
| String function | [format\_bytes](/apl/scalar-functions/string-functions#format-bytes) | Formats a number of bytes as a string including units. |
| String function | [format\_url](/apl/scalar-functions/string-functions#format-url) | Formats a string into a valid URL. |
| String function | [gettype](/apl/scalar-functions/string-functions#gettype) | Returns the runtime type of an argument. |
| String function | [indexof](/apl/scalar-functions/string-functions#indexof) | Returns index of the first occurrence of a substring. |
| String function | [isempty](/apl/scalar-functions/string-functions#isempty) | Returns `true` if the argument is empty or null. |
| String function | [isnotempty](/apl/scalar-functions/string-functions#isnotempty) | Returns `true` if the argument is not empty or null. |
| String function | [isnotnull](/apl/scalar-functions/string-functions#isnotnull) | Returns `true` if the argument is not null. |
| String function | [isnull](/apl/scalar-functions/string-functions#isnull) | Returns `true` if the argument is null. |
| String function | [parse\_bytes](/apl/scalar-functions/string-functions#parse-bytes) | Parses byte-size string to number of bytes. |
| String function | [parse\_csv](/apl/scalar-functions/string-functions#parse-csv) | Splits a CSV-formatted string into an array. |
| String function | [parse\_json](/apl/scalar-functions/string-functions#parse-json) | Parses a string as a JSON value. |
| String function | [parse\_url](/apl/scalar-functions/string-functions#parse-url) | Parses a URL string and returns parts in a dynamic object. |
| String function | [parse\_urlquery](/apl/scalar-functions/string-functions#parse-urlquery) | Parses a URL query string into key-value pairs. |
| String function | [replace\_regex](/apl/scalar-functions/string-functions#replace-regex) | Replaces regex matches with another string. |
| String function | [replace\_string](/apl/scalar-functions/string-functions#replace-string) | Replaces string matches with another string. |
| String function | [replace](/apl/scalar-functions/string-functions#replace) | Replaces all regex matches with another string. |
| String function | [reverse](/apl/scalar-functions/string-functions#reverse) | Reverses a string. |
| String function | [split](/apl/scalar-functions/string-functions#split) | Splits a string into an array using a delimiter. |
| String function | [strcat\_delim](/apl/scalar-functions/string-functions#strcat-delim) | Concatenates 2–64 arguments with a delimiter. |
| String function | [strcat](/apl/scalar-functions/string-functions#strcat) | Concatenates 1–64 arguments. |
| String function | [strcmp](/apl/scalar-functions/string-functions#strcmp) | Compares two strings. |
| String function | [strlen](/apl/scalar-functions/string-functions#strlen) | Returns the length of a string. |
| String function | [strrep](/apl/scalar-functions/string-functions#strrep) | Repeats a string a given number of times. |
| String function | [substring](/apl/scalar-functions/string-functions#substring) | Extracts a substring. |
| String function | [tolower](/apl/scalar-functions/string-functions#tolower) | Converts string to lowercase. |
| String function | [toupper](/apl/scalar-functions/string-functions#toupper) | Converts string to uppercase. |
| String function | [trim\_end\_regex](/apl/scalar-functions/string-functions#trim-end-regex) | Trims trailing characters using regex. |
| String function | [trim\_end](/apl/scalar-functions/string-functions#trim-end) | Trims trailing characters. |
| String function | [trim\_regex](/apl/scalar-functions/string-functions#trim-regex) | Trims characters matching a regex. |
| String function | [trim\_start\_regex](/apl/scalar-functions/string-functions#trim-start-regex) | Trims leading characters using regex. |
| String function | [trim\_start](/apl/scalar-functions/string-functions#trim-start) | Trims leading characters. |
| String function | [trim](/apl/scalar-functions/string-functions#trim) | Trims leading/trailing characters. |
| String function | [url\_decode](/apl/scalar-functions/string-functions#url-decode) | Decodes a URL-encoded string. |
| String function | [url\_encode](/apl/scalar-functions/string-functions#url-encode) | Encodes characters into a URL-friendly format. |
| String operator | [!=](/apl/scalar-operators/string-operators) | Not equals (case-sensitive). Example: `"abc" != "ABC"` |
| String operator | [!\~](/apl/scalar-operators/string-operators) | Not equals (case-insensitive). Example: `"aBc" !~ "xyz"` |
| String operator | [!contains\_cs](/apl/scalar-operators/string-operators) | RHS doesn’t occur in LHS (case-sensitive). Example: `"parentSpanId" !contains_cs "Id"` |
| String operator | [!contains](/apl/scalar-operators/string-operators) | RHS doesn’t occur in LHS (case-insensitive). Example: `"parentSpanId" !contains "abc"` |
| String operator | [!endswith\_cs](/apl/scalar-operators/string-operators) | RHS isn’t a closing subsequence of LHS (case-sensitive). Example: `"parentSpanId" !endswith_cs "Span"` |
| String operator | [!endswith](/apl/scalar-operators/string-operators) | RHS isn’t a closing subsequence of LHS (case-insensitive). Example: `"parentSpanId" !endswith "Span"` |
| String operator | [!has\_cs](/apl/scalar-operators/string-operators) | RHS isn’t a whole term in LHS (case-sensitive). Example: `"North America" !has_cs "America"` |
| String operator | [!has](/apl/scalar-operators/string-operators) | RHS isn’t a whole term in LHS (case-insensitive). Example: `"North America" !has "america"` |
| String operator | [!hasprefix\_cs](/apl/scalar-operators/string-operators) | LHS string doesn’t start with the RHS string (case-sensitive). Example: `"DOCS_file" !hasprefix_cs "DOCS"` |
| String operator | [!hasprefix](/apl/scalar-operators/string-operators) | LHS string doesn’t start with the RHS string (case-insensitive). Example: `"Admin_User" !hasprefix "Admin"` |
| String operator | [!hassuffix\_cs](/apl/scalar-operators/string-operators) | LHS string doesn’t end with the RHS string (case-sensitive). Example: `"Document.HTML" !hassuffix_cs ".HTML"` |
| String operator | [!hassuffix](/apl/scalar-operators/string-operators) | LHS string doesn’t end with the RHS string (case-insensitive). Example: `"documentation.docx" !hassuffix ".docx"` |
| String operator | [!in](/apl/scalar-operators/string-operators) | Not equals to any of the elements (case-sensitive). Example: `"bca" !in ("123", "345", "abc")` |
| String operator | [!in\~](/apl/scalar-operators/string-operators) | Not equals to any of the elements (case-insensitive). Example: `"bca" !in~ ("123", "345", "ABC")` |
| String operator | [!matches regex](/apl/scalar-operators/string-operators) | LHS doesn’t contain a match for RHS. Example: `"parentSpanId" !matches regex "g.*r"` |
| String operator | [!startswith\_cs](/apl/scalar-operators/string-operators) | RHS isn’t an initial subsequence of LHS (case-sensitive). Example: `"parentSpanId" !startswith_cs "parent"` |
| String operator | [!startswith](/apl/scalar-operators/string-operators) | RHS isn’t an initial subsequence of LHS (case-insensitive). Example: `"parentSpanId" !startswith "Id"` |
| String operator | [==](/apl/scalar-operators/string-operators) | Equals (case-sensitive). Example: `"aBc" == "aBc"` |
| String operator | [=\~](/apl/scalar-operators/string-operators) | Equals (case-insensitive). Example: `"abc" =~ "ABC"` |
| String operator | [contains\_cs](/apl/scalar-operators/string-operators) | RHS occurs as a subsequence of LHS (case-sensitive). Example: `"parentSpanId" contains_cs "Id"` |
| String operator | [contains](/apl/scalar-operators/string-operators) | RHS occurs as a subsequence of LHS (case-insensitive). Example: `"parentSpanId" contains "Span"` |
| String operator | [endswith\_cs](/apl/scalar-operators/string-operators) | RHS is a closing subsequence of LHS (case-sensitive). Example: `"parentSpanId" endswith_cs "Id"` |
| String operator | [endswith](/apl/scalar-operators/string-operators) | RHS is a closing subsequence of LHS (case-insensitive). Example: `"parentSpanId" endswith "Id"` |
| String operator | [has\_cs](/apl/scalar-operators/string-operators) | RHS is a whole term in LHS (case-sensitive). Example: `"North America" has_cs "America"` |
| String operator | [has](/apl/scalar-operators/string-operators) | RHS is a whole term in LHS (case-insensitive). Example: `"North America" has "america"` |
| String operator | [hasprefix\_cs](/apl/scalar-operators/string-operators) | LHS string starts with the RHS string (case-sensitive). Example: `"DOCS_file" hasprefix_cs "DOCS"` |
| String operator | [hasprefix](/apl/scalar-operators/string-operators) | LHS string starts with the RHS string (case-insensitive). Example: `"Admin_User" hasprefix "Admin"` |
| String operator | [hassuffix\_cs](/apl/scalar-operators/string-operators) | LHS string ends with the RHS string (case-sensitive). Example: `"Document.HTML" hassuffix_cs ".HTML"` |
| String operator | [hassuffix](/apl/scalar-operators/string-operators) | LHS string ends with the RHS string (case-insensitive). Example: `"documentation.docx" hassuffix ".docx"` |
| String operator | [in](/apl/scalar-operators/string-operators) | Equals to one of the elements (case-sensitive). Example: `"abc" in ("123", "345", "abc")` |
| String operator | [in\~](/apl/scalar-operators/string-operators) | Equals to one of the elements (case-insensitive). Example: `"abc" in~ ("123", "345", "ABC")` |
| String operator | [matches regex](/apl/scalar-operators/string-operators) | LHS contains a match for RHS. Example: `"parentSpanId" matches regex "g.*r"` |
| String operator | [startswith\_cs](/apl/scalar-operators/string-operators) | RHS is an initial subsequence of LHS (case-sensitive). Example: `"parentSpanId" startswith_cs "parent"` |
| String operator | [startswith](/apl/scalar-operators/string-operators) | RHS is an initial subsequence of LHS (case-insensitive). Example: `"parentSpanId" startswith "parent"` |
| Tabular operator | [count](/apl/tabular-operators/count-operator) | Returns an integer representing the total number of records in the dataset. |
| Tabular operator | [distinct](/apl/tabular-operators/distinct-operator) | Returns a dataset with unique values from the specified fields, removing any duplicate entries. |
| Tabular operator | [extend-valid](/apl/tabular-operators/extend-valid-operator) | Returns a table where the specified fields are extended with new values based on the given expression for valid rows. |
| Tabular operator | [extend](/apl/tabular-operators/extend-operator) | Returns the original dataset with one or more new fields appended, based on the defined expressions. |
| Tabular operator | [externaldata](/apl/tabular-operators/externaldata-operator) | Returns a table with the specified schema, containing data retrieved from an external source. |
| Tabular operator | [getschema](/apl/tabular-operators/getschema-operator) | Returns the schema of a dataset, including field names and their data types. |
| Tabular operator | [join](/apl/tabular-operators/join-operator) | Returns a dataset containing rows from two different tables based on conditions. |
| Tabular operator | [limit](/apl/tabular-operators/limit-operator) | Returns the top N rows from the input dataset. |
| Tabular operator | [lookup](/apl/tabular-operators/lookup-operator) | Returns a dataset where rows from one dataset are enriched with matching columns from a lookup table based on conditions. |
| Tabular operator | [order](/apl/tabular-operators/order-operator) | Returns the input dataset, sorted according to the specified fields and order. |
| Tabular operator | [parse](/apl/tabular-operators/parse-operator) | Returns the input dataset with new fields added based on the specified parsing pattern. |
| Tabular operator | [project-away](/apl/tabular-operators/project-away-operator) | Returns the input dataset excluding the specified fields. |
| Tabular operator | [project-keep](/apl/tabular-operators/project-keep-operator) | Returns a dataset with only the specified fields. |
| Tabular operator | [project-reorder](/apl/tabular-operators/project-reorder-operator) | Returns a table with the specified fields reordered as requested, followed by any unspecified fields in their original order. |
| Tabular operator | [project](/apl/tabular-operators/project-operator) | Returns a dataset containing only the specified fields. |
| Tabular operator | [redact](/apl/tabular-operators/redact-operator) | Returns the input dataset with sensitive data replaced or hashed. |
| Tabular operator | [sample](/apl/tabular-operators/sample-operator) | Returns a table containing the specified number of rows, selected randomly from the input dataset. |
| Tabular operator | [search](/apl/tabular-operators/search-operator) | Returns all rows where the specified keyword appears in any field. |
| Tabular operator | [sort](/apl/tabular-operators/sort-operator) | Returns a table with rows ordered based on the specified fields. |
| Tabular operator | [summarize](/apl/tabular-operators/summarize-operator) | Returns a table where each row represents a unique combination of values from the by fields, with the aggregated results calculated for the other fields. |
| Tabular operator | [take](/apl/tabular-operators/take-operator) | Returns the specified number of rows from the dataset. |
| Tabular operator | [top](/apl/tabular-operators/top-operator) | Returns the top N rows from the dataset based on the specified sorting criteria. |
| Tabular operator | [union](/apl/tabular-operators/union-operator) | Returns all rows from the specified tables or queries. |
| Tabular operator | [where](/apl/tabular-operators/where-operator) | Returns a filtered dataset containing only the rows where the condition evaluates to true. |
| Type function | [ismap](/apl/scalar-functions/type-functions/ismap) | Checks whether a value is of the `dynamic` type and represents a mapping. |
| Type function | [isreal](/apl/scalar-functions/type-functions/isreal) | Checks whether a value is a real number. |
| Type function | [isstring](/apl/scalar-functions/type-functions/isstring) | Checks whether a value is a string. |
# Map fields
Source: https://axiom.co/docs/apl/data-types/map-fields
This page explains what map fields are and how to query them.
Map fields are a special type of field that can hold a collection of nested key-value pairs within a single field. You can think of the content of a map field as a JSON object.
Axiom automatically creates map fields in datasets that use [OpenTelemetry](/send-data/opentelemetry) and you can create map fields yourself in any dataset.
## Benefits and drawbacks of map fields
Map fields help you manage high-cardinality data by storing multiple key-value pairs within a single field. One of the benefits of map fields is that you can store additional attributes without adding more fields. This is particularly useful when the shape of your data is unpredictable (for example, additional attributes added by OpenTelemetry instrumentation). Using map fields means that you can avoid reaching the field limit of a dataset.
Use map fields in the following cases:
* You approach the dataset field limit.
* The shape of your data is unpredictable. For example, an OpenTelemetry instrumentation or another SDK creates objects with many keys.
* You work with feature flags or custom attributes that generate many fields.
Map fields reduce impact on field limits, but involve trade-offs in query efficiency and compression. The drawbacks of map fields are the following:
* Querying map fields uses more query-hours than querying conventional fields.
* Map fields don’t compress as well as conventional fields. This means datasets with map fields use more storage.
* You don’t have visibility into map fields from the schema. For example, autocomplete doesn’t know the properties inside the map field.
## Custom attributes in tracing datasets
If you use [OpenTelemetry](/send-data/opentelemetry) to send data to Axiom, you find some attributes in the `attributes.custom` map field. The reason is that instrumentation libraries can add hundreds or even thousands of arbitrary attributes to spans. Storing each custom attribute in a separate field would significantly increase the number of fields in your dataset. To keep the number of fields in your dataset under control, Axiom places all custom attributes in the single `attributes.custom` map field.
## Use map fields in queries
The example query below uses the `http.protocol` property inside the `attributes.custom` map field to filter results:
```kusto
['otel-demo-traces']
| where ['attributes.custom']['http.protocol'] == 'HTTP/1.1'
```
[Run in playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7b%22apl%22%3a%22%5b%27otel-demo-traces%27%5d%5cn%7c%20where%20%5b%27attributes.custom%27%5d%5b%27http.protocol%27%5d%20%3d%3d%20%27HTTP%2f1.1%27%22%2c%22queryOptions%22%3a%7b%22quickRange%22%3a%2230d%22%7d%7d)
## Create map fields using UI
To create a map field using the UI:
1. Go to the Datasets tab.
2. Select the dataset where you want to create the map field.
3. In the top right of the fields list, click **More > Create map field**.
4. In **Field name**, enter the full name of the field, including parent fields, if any. For example, `map_field_name`. For more information on syntax, see [Access properties of nested maps](#access-properties-of-nested-maps).
5. Click **Create map field**.
## Create map fields using API
To create a map field using the Axiom API, send a request to the [Create map field](/restapi/endpoints/createMapField) endpoint. For example:
```bash
curl --request POST \
--url https://AXIOM_DOMAIN/v2/datasets/{DATASET_NAME}/mapfields \
--header 'Authorization: Bearer API_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"name": "MAP_FIELD"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `MAP_FIELD` with the name of the field that you want to change to a map field.
## View map fields
To view map fields:
1. Go to the Datasets tab.
2. Select a dataset where you want to view map fields.
3. Map fields are labelled in the following way:
* **MAPPED** means that the field was previously an ordinary field but at some point its parent was changed to a map field. Axiom adds new events to the field as an attribute of the parent map field. Events you ingested before the change retain the ordinary structure.
* **UNUSED** means that the field is configured as a map field but you haven’t yet ingested data into it. Once ingested, data within this field won’t count toward your field limit.
* **REMOVED** means that the field was configured as a map field but at some point it was changed to an ordinary field. Axiom adds new events to the field as usual. Events you ingested before the change retain the map structure. To fully remove this field, first [trim your dataset](/reference/datasets#trim-dataset) to remove the time period when map data was ingested, and then [vacuum the fields](/reference/datasets#vacuum-fields).
## Access properties of nested maps
To access the properties of nested maps, use index notation, dot notation, or a mix of the two. If you use index notation for an entity, enclose the entity name in quotation marks (`'` or `"`) and square brackets (`[]`). For example:
* `where ['map_field']['property1']['property2'] == 14`
* `where map_field.property1.property2 == 14`
* `where ['map_field'].property1.property2 == 14`
If an entity name has spaces (` `), dots (`.`), or dashes (`-`), you can only use index notation for that entity. You can use dot notation for the other entities. For example:
* `where ['map.field']['property.name1']['property.name2'] == 14`
* `where ['map.field'].property1.property2 == 14`
In OTel traces, custom attributes are located in the `attributes.custom` map field. You can access them as `['attributes.custom']['header.Accept']`, for example. In this case, you don’t access the `Accept` field nested within the `header` field. What actually happens is that you access the field named `header.Accept` within the `attributes.custom` map field.
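For example, the following sketch (assuming a span in the `otel-demo-traces` dataset carries a `header.Accept` custom attribute with the value shown) filters on that attribute:
```kusto
['otel-demo-traces']
// 'application/json' is an example attribute value
| where ['attributes.custom']['header.Accept'] == 'application/json'
```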
For more information on quoting field names, see [Entity names](/apl/entities/entity-names#quote-identifiers).
## Map fields and flattened fields
Within a dataset, the same fields can exist as flattened fields and as subfields of a map field.
For example, consider the following:
1. `geo` is initially not a map field.
2. You ingest the following:
```json
{
"geo": {
"city": "Paris",
"country": "France"
}
}
```
This adds two flattened fields to the dataset that you can access as `['geo.city']` or `['geo.country']`.
3. You change `geo` to a map field through the UI or the API.
4. You ingest the following:
```json
{
"geo": {
"city": "Paris",
"country": "France"
}
}
```
You use the same ingest JSON as before, but this adds the new subfields to the `geo` parent map field. You can access the subfields as `['geo']['city']` and `['geo']['country']`.
Axiom treats the flattened fields (`['geo.city']` and `['geo.country']`) and the subfields of the map field (`['geo']['city']` and `['geo']['country']`) as separate fields and doesn’t maintain a relationship between them.
Queries using `['geo.city']` access a field literally named `geo.city`, while `['geo']['city']` accesses the `city` key inside a `geo` map. These references are not equivalent.
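As a sketch, assuming a hypothetical dataset `['my-dataset']` that went through the steps above, the following query returns both values side by side to make the distinction visible:
```kusto
// ['my-dataset'] is a placeholder dataset name
['my-dataset']
// 'geo.city' is the flattened field; ['geo']['city'] is the subfield of the map field
| project flattened_city = ['geo.city'], map_city = ['geo']['city']
```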
To avoid confusion:
* Choose either a flattened or map-based structure when designing your schema.
* Be explicit in queries about which fields to include or exclude.
# Null values
Source: https://axiom.co/docs/apl/data-types/null-values
This page explains how APL represents missing values.
All scalar data types in APL have a special value that represents a missing value. This value is called the null value, or null.
## Null literals
The null value of a scalar type D is represented in the query language by the null literal D(null). The following query returns a single row full of null values:
```kusto
print bool(null), datetime(null), dynamic(null), int(null), long(null), real(null), double(null), time(null)
```
## Predicates on null values
The scalar function [isnull()](/apl/scalar-functions/string-functions#isnull\(\)) can be used to determine if a scalar value is the null value. The corresponding function [isnotnull()](/apl/scalar-functions/string-functions#isnotnull\(\)) can be used to determine if a scalar value isn’t the null value.
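As a minimal sketch, the following query applies both functions to literal values:
```kusto
print isnull(int(null)), isnotnull(42)
```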
## Equality and inequality of null values
* Equality (`==`): Applying the equality operator to two null values yields `bool(null)`. Applying the equality operator to a null value and a non-null value yields `bool(false)`.
* Inequality (`!=`): Applying the inequality operator to two null values yields `bool(null)`. Applying the inequality operator to a null value and a non-null value yields `bool(true)`.
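For example, the following query illustrates these rules on literal values. The first two expressions yield `bool(null)`, and the last two yield `false` and `true`:
```kusto
print bool(null) == bool(null), bool(null) != bool(null), bool(null) == true, bool(null) != true
```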
# Scalar data types
Source: https://axiom.co/docs/apl/data-types/scalar-data-types
This page explains the data types in APL.
Axiom Processing Language supplies a set of system data types that define all the types of data that can be used with APL.
The following table lists the data types supported by APL, alongside additional aliases you can use to refer to them.
| **Type** | **Additional name(s)** | **gettype()** |
| ------------------------------------- | ----------------------------- | ------------------------------------------------------------ |
| [bool()](#the-bool-data-type) | **boolean** | **int8** |
| [datetime()](#the-datetime-data-type) | **date** | **datetime** |
| [dynamic()](#the-dynamic-data-type)   |                               | **array** or **dictionary** or any of the other values       |
| [int()](#the-int-data-type) | **int** has an alias **long** | **int** |
| [long()](#the-long-data-type) | | **long** |
| [real()](#the-real-data-type) | **double** | **real** |
| [string()](#the-string-data-type) | | **string** |
| [timespan()](#the-timespan-data-type) | **time** | **timespan** |
## The bool data type
The bool (boolean) data type can have one of two states: `true` or `false` (internally encoded as 1 and 0, respectively), as well as the null value.
### bool literals
The bool data type has the following literals:
* true and bool(true): Representing trueness
* false and bool(false): Representing falsehood
* null and bool(null): Representing the null value
### bool operators
The `bool` data type supports the following operators: equality (`==`), inequality (`!=`), logical-and (`and`), and logical-or (`or`).
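For example, the following minimal query combines these operators on bool literals:
```kusto
print true and false, true or false, true == false, true != false
```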
## The datetime data type
The datetime (date) data type represents an instant in time, typically expressed as a date and time of day. Values range from 00:00:00 (midnight), January 1, 0001 Anno Domini (Common Era) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) in the Gregorian calendar.
### datetime literals
Literals of type **datetime** have the syntax **datetime** (`value`), where a number of formats are supported for value, as indicated by the following table:
| **Example** | **Value** |
| ------------------------------------------------------------ | -------------------------------------------------------------- |
| **datetime(2019-11-30 23:59:59.9)** **datetime(2015-12-31)** | Times are always in UTC. Omitting the date gives a time today. |
| **datetime(null)** | Check out our [null values](/apl/data-types/null-values) |
| **now()** | The current time. |
| **now(-timespan)** | now()-timespan |
| **ago(timespan)** | now()-timespan |
**now()** and **ago()** indicate a `datetime` value compared with the moment in time when APL started to execute the query.
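For example, the following query (a minimal sketch) prints a fixed datetime literal alongside the relative forms:
```kusto
print datetime(2019-11-30 23:59:59.9), now(), ago(1h), now(-1h)
```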
### Supported formats
We support the **ISO 8601** format, which is the standard format for representing dates and times in the Gregorian calendar.
### [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html)
| **Format** | **Example** |
| ------------------- | --------------------------- |
| %Y-%m-%dT%H:%M:%s%z | 2016-06-26T08:20:03.123456Z |
| %Y-%m-%dT%H:%M:%s | 2016-06-26T08:20:03.123456 |
| %Y-%m-%dT%H:%M | 2016-06-26T08:20 |
| %Y-%m-%d %H:%M:%s%z | 2016-10-06 15:55:55.123456Z |
| %Y-%m-%d %H:%M:%s | 2016-10-06 15:55:55 |
| %Y-%m-%d %H:%M | 2016-10-06 15:55 |
| %Y-%m-%d | 2014-11-08 |
## The dynamic data type
The **dynamic** scalar data type is special in that it can take on any value of other scalar data types from the list below, as well as arrays and property bags. Specifically, a **dynamic** value can be:
* null
* A value of any of the primitive scalar data types: **bool**, **datetime**, **int**, **long**, **real**, **string**, and **timespan**.
* An array of **dynamic** values, holding zero or more values with zero-based indexing.
* A property bag, holding zero or more key-value pairs.
### Dynamic literals
A literal of type dynamic looks like this:
dynamic (`Value`)
Value can be:
* null, in which case the literal represents the null dynamic value: **dynamic(null)**.
* Another scalar data type literal, in which case the literal represents the **dynamic** literal of the "inner" type. For example, **dynamic(6)** is a dynamic value holding the value 6 of the long scalar data type.
* An array of dynamic or other literals: \[`ListOfValues`]. For example, `dynamic([3, 4, "bye"])` is a dynamic array of three elements, two **long** values and one **string** value.
* A property bag: \{`Name`=`Value ...`}. For example, `dynamic({"a":1, "b":{"a":2}})` is a property bag with two slots, `a` and `b`, with the second slot being another property bag.
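For example, the following query (a minimal sketch) prints one dynamic value of each form listed above:
```kusto
print dynamic(null), dynamic(6), dynamic([3, 4, "bye"]), dynamic({"a":1, "b":{"a":2}})
```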
## The int data type
The **int** data type represents a signed, 64-bit wide, integer.
The special form **int(null)** represents the [null value.](/apl/data-types/null-values)
**int** has an alias **[long](/apl/data-types/scalar-data-types#the-long-data-type)**
## The long data type
The **long** data type represents a signed, 64-bit wide, integer.
### long literals
Literals of the long data type can be specified in the following syntax:
long(`Value`)
Where Value can take the following forms:
* One or more digits, in which case the literal value is the decimal representation of these digits. For example, **long(11)** is the number eleven of type long.
* A minus (`-`) sign followed by one or more digits. For example, **long(-3)** is the number minus three of type **long**.
* null, in which case this is the [null value](/apl/data-types/null-values) of the **long** data type. Thus, the null value of type **long** is **long(null)**.
## The real data type
The **real** data type represents a 64-bit wide, double-precision, floating-point number.
## The string data type
The **string** data type represents a sequence of zero or more [Unicode](https://home.unicode.org/) characters.
### String literals
There are several ways to encode literals of the **string** data type in a query text:
* Enclose the string in double-quotes (`"`): "This is a string literal. Single quote characters (') don’t require escaping. Double quote characters (") are escaped by a backslash (\\)"
* Enclose the string in single-quotes (`'`): Another string literal. Single quote characters (') require escaping by a backslash (\\). Double quote characters (") do not require escaping.
In the two representations above, the backslash (`\`) character indicates escaping. The backslash is used to escape the enclosing quote characters, tab characters (`\t`), newline characters (`\n`), and itself (`\\`).
### Raw string literals
Raw string literals are also supported. In this form, the backslash character (`\`) stands for itself, and does not denote an escape sequence.
* Enclose in double-quotes (`"`): `@"This is a raw string literal"`
* Enclose in single-quotes (`'`): `@'This is a raw string literal'`
Raw strings are particularly useful for regexes where you can use `@"^[\d]+$"` instead of `"^[\\d]+$"`.
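As a quick check (a minimal sketch), the following query shows that the raw and escaped forms represent the same string:
```kusto
print @"^[\d]+$" == "^[\\d]+$"
```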
## The timespan data type
The **timespan** (time) data type represents a time interval.
### timespan literals
Literals of type **timespan** have the syntax **timespan(value)**, where a number of formats are supported for value, as indicated by the following table:
| **Value** | **length of time** |
| ----------------- | ------------------ |
| **2d** | 2 days |
| **1.5h**          | 1.5 hours          |
| **30m** | 30 minutes |
| **10s** | 10 seconds |
| **timespan(15s)** | 15 seconds |
| **0.1s** | 0.1 second |
| **timespan(2d)** | 2 days |
## Type conversions
APL provides a set of functions to convert values between different scalar data types. These conversion functions allow you to convert a value from one type to another.
Some of the commonly used conversion functions include:
* `tobool()`: Converts input to boolean representation.
* `todatetime()`: Converts input to datetime scalar.
* `todouble()` or `toreal()`: Converts input to a value of type real.
* `tostring()`: Converts input to a string representation.
* `totimespan()`: Converts input to timespan scalar.
* `tolong()`: Converts input to long (signed 64-bit) number representation.
* `toint()`: Converts input to an integer (signed 64-bit) number representation.
For a complete list of conversion functions and their detailed descriptions and examples, refer to the [Conversion functions](/apl/scalar-functions/conversion-functions) documentation.
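As a minimal sketch, the following query applies some of these conversion functions to literal inputs:
```kusto
print tobool("true"), toint("42"), toreal("3.14"), todatetime("2016-06-26T08:20:03Z"), tostring(123)
```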
# Entity names
Source: https://axiom.co/docs/apl/entities/entity-names
This page explains how to use entity names in your APL query.
APL entities (datasets, tables, columns, and operators) are named. For example, two fields or columns in the same dataset can have the same name if the casing is different, and a table and a dataset may have the same name because they aren’t in the same scope.
## Columns
* Column names are case-sensitive for resolving purposes and they have a specific position in the dataset’s collection of columns.
* Column names are unique within a dataset and table.
* In queries, columns are generally referenced by name only. They can only appear in expressions, and the query operator under which the expression appears determines the table or tabular data stream.
## Identifier naming rules
Axiom uses identifiers to name various entities. Valid identifier names follow these rules:
* Between 1 and 1024 characters long.
* Allowed characters:
* Alphanumeric characters (letters and digits)
* Underscore (`_`)
* Space (` `)
* Dot (`.`)
* Dash (`-`)
Identifier names are case-sensitive.
## Quote identifiers
Quote an identifier in your APL query if any of the following is true:
* The identifier name contains at least one of the following special characters:
* Space (` `)
* Dot (`.`)
* Dash (`-`)
* The identifier name is identical to one of the reserved keywords of the APL query language. For example, `project` or `where`.
If any of the above is true, you must quote the identifier by enclosing it in quotation marks (`'` or `"`) and square brackets (`[]`). For example, `['my-field']`.
If none of the above is true, you don’t need to quote the identifier in your APL query. For example, `myfield`. In this case, quoting the identifier name is optional.
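For example, the following sketch (assuming the `['sample-http-logs']` dataset used elsewhere in these docs) quotes `geo.country` because its name contains a dot, while `method` needs no quoting:
```kusto
['sample-http-logs']
// 'United States' is an example value
| where ['geo.country'] == 'United States'
| project method, ['geo.country']
```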
# Migrate from SQL to APL
Source: https://axiom.co/docs/apl/guides/migrating-from-sql-to-apl
This guide helps you migrate from SQL to APL, explaining key differences and providing query examples.
## Introduction
As data grows exponentially, organizations are continuously seeking more efficient and powerful tools to manage and analyze their data. The Query tab, which utilizes the Axiom Processing Language (APL), is one such service that offers fast, scalable, and interactive data exploration capabilities. If you are an SQL user looking to migrate to APL, this guide will provide a gentle introduction to help you make the transition smoothly.
**This tutorial guides you through migrating from SQL to APL, helping you understand key differences and providing query examples.**
## Introduction to Axiom Processing Language (APL)
Axiom Processing Language (APL) is the language used by the Query tab, a fast and highly scalable data exploration service. APL is optimized for real-time and historical data analytics, making it a suitable choice for various data analysis tasks.
**Tabular operators**: In APL, there are several tabular operators that help you manipulate and filter data, similar to SQL’s SELECT, FROM, WHERE, GROUP BY, and ORDER BY clauses. Some of the commonly used tabular operators are:
* `extend`: Adds new columns to the result set.
* `project`: Selects specific columns from the result set.
* `where`: Filters rows based on a condition.
* `summarize`: Groups and aggregates data similar to the GROUP BY clause in SQL.
* `sort`: Sorts the result set based on one or more columns, similar to ORDER BY in SQL.
## Key differences between SQL and APL
While SQL and APL are query languages, there are some key differences to consider:
* APL is designed for querying large volumes of structured, semi-structured, and unstructured data.
* APL is a pipe-based language, meaning you can chain multiple operations using the pipe operator (`|`) to create a data transformation flow.
* APL doesn’t use SELECT and FROM clauses like SQL. Instead, it uses keywords such as summarize, extend, where, and project.
* APL is case-sensitive, whereas SQL isn’t.
## Benefits of migrating from SQL to APL
* **Time Series Analysis:** APL is particularly strong when it comes to analyzing time-series data (logs, telemetry data, etc.). It has a rich set of operators designed specifically for such scenarios, making it much easier to handle time-based analysis.
* **Pipelining:** APL uses a pipelining model, much like the UNIX command line. You can chain commands together using the pipe (`|`) symbol, with each command operating on the results of the previous command. This makes it very easy to write complex queries.
* **Easy to Learn:** APL is designed to be simple and easy to learn, especially for those already familiar with SQL. It does not require any knowledge of database schemas or the need to specify joins.
* **Scalability:** APL is built to query large volumes of event data efficiently, so query performance holds up as your data grows.
* **Flexibility:** APL works across structured, semi-structured, and unstructured data, so you can analyze different types of data with the same language.
* **Features:** APL offers capabilities beyond standard SQL, such as real-time analytics and built-in time-based analysis.
## Basic APL Syntax
A basic APL query follows this structure:
```kusto
['dataset-name']
| where ...
| summarize ...
| project ...
| sort by ...
```
## Query Examples
Let’s see some examples of how to convert SQL queries to APL.
## SELECT with a simple filter
**SQL:**
```sql
SELECT *
FROM [Sample-http-logs]
WHERE method = 'GET';
```
**APL:**
```kusto
['sample-http-logs']
| where method == 'GET'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## COUNT with GROUP BY
**SQL:**
```sql
SELECT method, COUNT(*)
FROM [Sample-http-logs]
GROUP BY method;
```
**APL:**
```kusto
['sample-http-logs']
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Top N results
**SQL:**
```sql
SELECT TOP 10 Status, Method
FROM [Sample-http-logs]
ORDER BY Method DESC;
```
**APL:**
```kusto
['sample-http-logs']
| top 10 by method desc
| project status, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|top%2010%20by%20method%20desc%20\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Simple filtering and projection
**SQL:**
```sql
SELECT method, status, geo.country
FROM [Sample-http-logs]
WHERE resp_header_size_bytes >= 18;
```
**APL:**
```kusto
['sample-http-logs']
| where resp_header_size_bytes >= 18
| project method, status, ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|where%20resp_header_size_bytes%20%3E=18%20\n|%20project%20method,%20status,%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## COUNT with a HAVING clause
**SQL:**
```sql
SELECT geo.country
FROM [Sample-http-logs]
GROUP BY geo.country
HAVING COUNT(*) > 100;
```
**APL:**
```kusto
['sample-http-logs']
| summarize count() by ['geo.country']
| where count_ > 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20\[%27geo.country%27]\n|%20where%20count_%20%3E%20100%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Multiple Aggregations
**SQL:**
```sql
SELECT geo.country,
COUNT(*) AS TotalRequests,
AVG(req_duration_ms) AS AverageRequest,
MIN(req_duration_ms) AS MinRequest,
MAX(req_duration_ms) AS MaxRequest
FROM [Sample-http-logs]
GROUP BY geo.country;
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalRequests = count(),
AverageRequest = avg(req_duration_ms),
MinRequest = min(req_duration_ms),
MaxRequest = max(req_duration_ms) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20totalRequests%20=%20count\(\),%20Averagerequest%20=%20avg\(req_duration_ms\),%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%20by%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
### Sum of a column
**SQL:**
```sql
SELECT SUM(resp_body_size_bytes) AS TotalBytes
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalBytes = sum(resp_body_size_bytes)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalBytes%20=%20sum\(resp_body_size_bytes\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
### Average of a column
**SQL:**
```sql
SELECT AVG(req_duration_ms) AS AverageRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize AverageRequest = avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20AverageRequest%20=%20avg\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Minimum and Maximum Values of a column
**SQL:**
```sql
SELECT MIN(req_duration_ms) AS MinRequest, MAX(req_duration_ms) AS MaxRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize MinRequest = min(req_duration_ms), MaxRequest = max(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Count distinct values
**SQL:**
```sql
SELECT COUNT(DISTINCT method) AS UniqueMethods
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize UniqueMethods = dcount(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|summarize%20UniqueMethods%20=%20dcount\(method\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Standard deviation of a column
**SQL:**
```sql
SELECT STDDEV(req_duration_ms) AS StdDevRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize StdDevRequest = stdev(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20stdDEVRequest%20=%20stdev\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Variance of a column
**SQL:**
```sql
SELECT VAR(req_duration_ms) AS VarRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize VarRequest = variance(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20VarRequest%20=%20variance\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Multiple aggregation functions
**SQL:**
```sql
SELECT COUNT(*) AS TotalOrders, SUM(req_duration_ms) AS TotalDuration, AVG(req_duration_ms) AS AverageDuration
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalOrders = count(), TotalDuration = sum(req_duration_ms), AverageDuration = avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalOrders%20=%20count\(\),%20TotalDuration%20=%20sum\(req_duration_ms\),%20AverageDuration%20=%20avg\(req_duration_ms\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Aggregation with GROUP BY and ORDER BY
**SQL:**
```sql
SELECT status, COUNT(*) AS TotalStatus, SUM(resp_header_size_bytes) AS TotalRequest
FROM [Sample-http-logs]
GROUP BY status
ORDER BY TotalRequest DESC;
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalStatus = count(), TotalRequest = sum(resp_header_size_bytes) by status
| order by TotalRequest desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalStatus%20=%20count\(\),%20TotalRequest%20=%20sum\(resp_header_size_bytes\)%20by%20status\n|%20order%20by%20TotalRequest%20desc%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Count with a condition
**SQL:**
```sql
SELECT COUNT(*) AS HighContentStatus
FROM [Sample-http-logs]
WHERE resp_header_size_bytes > 1;
```
**APL:**
```kusto
['sample-http-logs']
| where resp_header_size_bytes > 1
| summarize HighContentStatus = count()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20resp_header_size_bytes%20%3E%201\n|%20summarize%20HighContentStatus%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Aggregation with HAVING
**SQL:**
```sql
SELECT Status
FROM [Sample-http-logs]
GROUP BY Status
HAVING COUNT(*) > 10;
```
**APL:**
```kusto
['sample-http-logs']
| summarize OrderCount = count() by status
| where OrderCount > 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20OrderCount%20=%20count\(\)%20by%20status\n|%20where%20OrderCount%20%3E%2010%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Count occurrences of a value in a field
**SQL:**
```sql
SELECT COUNT(*) AS RequestCount
FROM [Sample-http-logs]
WHERE content_type = 'text/csv';
```
**APL:**
```kusto
['sample-http-logs']
| where content_type == 'text/csv'
| summarize RequestCount = count()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20==%20%27text/csv%27%20\n|%20summarize%20RequestCount%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## String Functions
## Length of a string
**SQL:**
```sql
SELECT LEN(Status) AS NameLength
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend NameLength = strlen(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20NameLength%20=%20strlen\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Concatenation
**SQL:**
```sql
SELECT CONCAT(content_type, ' ', method) AS FullLength
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend FullLength = strcat(content_type, ' ', method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20FullLength%20=%20strcat\(content_type,%20%27%20%27,%20method\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Substring
**SQL:**
```sql
SELECT SUBSTRING(content_type, 1, 10) AS ShortDescription
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend ShortDescription = substring(content_type, 0, 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ShortDescription%20=%20substring\(content_type,%200,%2010\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Left and Right
**SQL:**
```sql
SELECT LEFT(content_type, 3) AS LeftTitle, RIGHT(content_type, 3) AS RightTitle
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend LeftTitle = substring(content_type, 0, 3), RightTitle = substring(content_type, strlen(content_type) - 3, 3)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20LeftTitle%20=%20substring\(content_type,%200,%203\),%20RightTitle%20=%20substring\(content_type,%20strlen\(content_type\)%20-%203,%203\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Replace
**SQL:**
```sql
SELECT REPLACE(status, 'old', 'new') AS UpdatedStatus
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend UpdatedStatus = replace('old', 'new', status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20UpdatedStatus%20=%20replace\(%27old%27,%20%27new%27,%20status\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Upper and Lower
**SQL:**
```sql
SELECT UPPER(FirstName) AS UpperFirstName, LOWER(LastName) AS LowerLastName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| project UpperFirstName = toupper(content_type), LowerLastName = tolower(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20upperFirstName%20=%20toupper\(content_type\),%20LowerLastNmae%20=%20tolower\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## LTrim and RTrim
**SQL:**
```sql
SELECT LTRIM(content_type) AS LeftTrimmedFirstName, RTRIM(content_type) AS RightTrimmedLastName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend LeftTrimmedFirstName = trim_start(' ', content_type), RightTrimmedLastName = trim_end(' ', content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20LeftTrimmedFirstName%20=%20trim_start\(%27%27,%20content_type\),%20RightTrimmedLastName%20=%20trim_end\(%27%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Trim
**SQL:**
```sql
SELECT TRIM(content_type) AS TrimmedFirstName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend TrimmedFirstName = trim(' ', content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20TrimmedFirstName%20=%20trim\(%27%20%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Reverse
**SQL:**
```sql
SELECT REVERSE(Method) AS ReversedFirstName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend ReversedFirstName = reverse(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20ReservedFirstnName%20=%20reverse\(method\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Case-insensitive search
**SQL:**
```sql
SELECT Status, Method
FROM [Sample-http-logs]
WHERE LOWER(Method) LIKE '%get%';
```
**APL:**
```kusto
['sample-http-logs']
| where tolower(method) contains 'get'
| project status, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20tolower\(method\)%20contains%20%27GET%27\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Take the First Step Today: Dive into APL
The journey from SQL to APL might seem daunting at first, but with the right approach, it can become an empowering transition. It is about expanding your data query capabilities to leverage the advanced, versatile, and fast querying infrastructure that APL provides. In the end, the goal is to enable you to draw more value from your data, make faster decisions, and ultimately propel your business forward.
Try converting some of your existing SQL queries to APL and observe the performance difference. Explore the Axiom Processing Language and start experimenting with its unique features.
**Happy querying!**
# Migrate from Sumo Logic Query Language to APL
Source: https://axiom.co/docs/apl/guides/migrating-from-sumologic-to-apl
This guide dives into why APL could be a superior choice for your data needs, and the differences between Sumo Logic and APL.
## Introduction
In the sphere of data analytics and log management, being able to query data efficiently and effectively is of paramount importance.
This guide dives into why APL could be a superior choice for your data needs, the differences between Sumo Logic and APL, and the potential benefits you could reap from migrating from Sumo Logic to APL. Let’s explore the compelling case for APL as a robust, powerful tool for handling your complex data querying requirements.
APL is powerful and flexible and uses a pipe (`|`) operator for chaining commands, and it provides a richer set of functions and operators for more complex queries.
## Benefits of migrating from Sumo Logic to APL
* **Scalability and Performance:** APL was built with scalability in mind. It handles very large volumes of data more efficiently and provides quicker query execution compared to Sumo Logic, making it a suitable choice for organizations with extensive data requirements. APL is designed for high-speed data ingestion, real-time analytics, and providing insights across structured, semi-structured data. It’s also optimized for time-series data analysis, making it highly efficient for log and telemetry data.
* **Advanced Analytics Capabilities:** With APL’s support for aggregation and conversion functions and more advanced statistical visualization, organizations can derive more sophisticated insights from their data.
## Query Examples
Let’s see some examples of how to convert Sumo Logic queries to APL.
## Parse and Extract Operators
Extract `from` and `to` fields. For example, if a raw event contains `From: Jane To: John`, then `from=Jane` and `to=John`.
**Sumo Logic:**
```bash
* | parse "From: * To: *" as (from, to)
```
**APL:**
```kusto
['sample-http-logs']
| extend ['from'] = extract("From: (.*?) To: (.*)", 1, method)
| extend ['to'] = extract("From: (.*?) To: (.*)", 2, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20\(method\)%20==%20extract\(%22From:%20\(.*?\)%20To:%20\(.*\)%22,%201,%20method\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Source IP with Regex
In this section, we will utilize a regular expression to identify the four octets of an IP address. This will help us efficiently extract the source IP addresses from the data.
**Sumo Logic:**
```bash
*| parse regex "(\\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
```
**APL:**
```kusto
['sample-http-logs']
| extend ip = extract("(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})", 1, "23.45.67.90")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ip%20=%20extract\(%22\(\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\)%22,%201,%20%2223.45.67.90%22\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Visited URLs
This section focuses on identifying all URL addresses visited and extracting them to populate the "url" field. This method provides an organized way to track user activity using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| where method == "GET"
| project url = extract(@"(\w+)", 1, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%5Cn%7C%20project%20url%20%3D%20extract\(%40%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Extract Data from Source Category Traffic
This section aims to identify and analyze traffic originating from the Source Category. We will extract critical information including the source addresses, the sizes of messages transmitted, and the URLs visited, providing valuable insights into the nature of the traffic using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| extend src_IP = extract("^(\\S+)", 0, uri)
| extend size = extract("^(\\S+)", 1, status)
| extend url = extract("^(\\S+)", 1, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%200%2C%20uri\)%5Cn%7C%20extend%20size%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20status\)%5Cn%7C%20extend%20url%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Calculate Bytes Transferred per Source IP
In this part, we will compute the total number of bytes transferred to each source IP address. This will allow us to gauge the data volume associated with each source using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size
| count, sum(size) by src_IP
```
**APL:**
```kusto
['sample-http-logs']
| extend src_IP = extract("^(\\S+)", 1, uri)
| extend size = toint(extract("200", 0, status))
| summarize count(), sum(size) by src_IP
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20toint\(extract\(%22200%22,%200,%20status\)\)\n|%20summarize%20count\(\),%20sum\(size\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Compute Average HTTP Response Size
In this section, we will calculate the average size of all successful HTTP responses. This metric helps us to understand the typical data load associated with successful server responses.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse " 200 * " as size
| avg(size)
```
**APL:**
Get the average value from a string:
```kusto
['sample-http-logs']
| extend number = todouble(extract("\\d+(\\.\\d+)?", 0, status))
| summarize Average = avg(number)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20number%20=%20todouble\(status\)\n|%20summarize%20Average%20=%20avg\(number\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Data with Missing Size Field (NoDrop)
This section focuses on extracting key parameters like `src`, `size`, and `URL`, even when the `size` field may be absent from the log message.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size nodrop
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| where content_type == "text/css"
| extend src_IP = extract("^(\\S+)", 1, ['id'])
| extend size = toint(extract("(\\w+)", 1, status))
| extend url = extract("GET", 0, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20content_type%20%3D%3D%20%5C%22text%2Fcss%5C%22%20%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%5C%5CS%2B\)%5C%22%2C%201%2C%20%5B%27id%27%5D\)%20%7C%20extend%20size%20%3D%20toint\(extract\(%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20status\)\)%20%7C%20extend%20url%20%3D%20extract\(%5C%22GET%5C%22%2C%200%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Count URL Visits
This section is dedicated to identifying the frequency of visits to a specific URL. By counting these occurrences, we can gain insights into website popularity and user behavior.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "GET * " as url
| count by url
```
**APL:**
```kusto
['sample-http-logs']
| extend url = extract("^(\\S+)", 1, method)
| summarize Count = count() by url
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?qid=RsnK4jahgNC-rviz3s)
## Page Count by Source IP
In this section, we will identify the total number of pages associated with each source IP address. This analysis will allow us to understand the volume of content generated or hosted by each source.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* -" as src_ip
| count by src_ip
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract(".*", 0, ['id'])
| summarize Count = count() by src_ip
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20%20\[%27id%27]\)\n|%20summarize%20Count%20=%20count\(\)%20by%20src_ip%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Reorder Pages by Load Frequency
We aim to identify the total number of pages per source IP address in this section. Following this, the pages will be reordered based on the frequency of loads, which will provide insights into the most accessed content.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_ip
| parse "GET * " as url
| count by url
| sort by _count
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract(".*", 0, ['id'])
| extend url = extract("(GET)", 1, method)
| where isnotnull(url)
| summarize _count = count() by url, src_ip
| order by _count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20\[%27id%27]\)\n|%20extend%20url%20=%20extract\(%22\(GET\)%22,%201,%20method\)\n|%20where%20isnotnull\(url\)\n|%20summarize%20_count%20=%20count\(\)%20by%20url,%20src_ip\n|%20order%20by%20_count%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Identify the top 10 requested pages.
**Sumo Logic:**
```bash
* | parse "GET * " as url
| count by url
| top 10 url by _count
```
**APL:**
```kusto
['sample-http-logs']
| where method == "GET"
| top 10 by method desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22GET%22\n|%20top%2010%20by%20method%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Top 10 IPs by Bandwidth Usage
In this section, we aim to identify the top 10 source IP addresses based on their bandwidth consumption.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse " 200 * " as size
| parse "* -" as src_ip
| sum(size) as total_bytes by src_ip
| top 10 src_ip by total_bytes
```
**APL:**
```kusto
['sample-http-logs']
| extend size = req_duration_ms
| summarize total_bytes = sum(size) by ['id']
| top 10 by total_bytes desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20req_duration_ms\n|%20summarize%20total_bytes%20=%20sum\(size\)%20by%20\[%27id%27]\n|%20top%2010%20by%20total_bytes%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Top 6 IPs by Number of Hits
This section focuses on identifying the top six source IP addresses according to the number of hits they generate. This will provide insight into the most frequently accessed or active sources in the network.
**Sumo Logic**
```bash
_sourceCategory=apache
| parse "* -" as src_ip
| count by src_ip
| top 100 src_ip by _count
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract("^(\\S+)", 1, user_agent)
| summarize _count = count() by src_ip
| top 6 by _count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20_count%20=%20count\(\)%20by%20user_agent\n|%20order%20by%20_count%20desc\n|%20limit%206%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Timeslice and Transpose
For the Source Category "apache", count by status\_code and timeslice of 1 hour.
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 1h
| count by _timeslice, status_code
```
**APL:**
```kusto
['sample-http-logs']
| extend status_code = extract("^(\\S+)", 1, method)
| where status_code == "POST"
| summarize count() by status_code, bin(_time, 1h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22POST%22\n|%20summarize%20count\(\)%20by%20method,%20bin\(_time,%201h\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Hourly Status Code Count for "Text" Source
In this section, we aim to count instances by `status_code`, grouped into one-hour timeslices, and then transpose `status_code` to column format. This will help us understand the frequency and timing of different status codes.
**Sumo Logic:**
```bash
_sourceCategory=text*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 1h
| count by _timeslice, status_code
| transpose row _timeslice column status_code
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend status_code= status
| summarize count() by bin(_time, 1h), content_type, status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%201h\),%20content_type,%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Status Code Count in 5 Time Buckets
In this example, we will perform a count by 'status\_code', sliced into five time buckets across the search results. This will help analyze the distribution and frequency of status codes over specific time intervals.
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 5 buckets
| count by _timeslice, status_code
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend p=("HTTP/1.1\" * * \""), tostring( is_tls)
| extend status_code= status
| summarize count() by bin(_time, 12m), status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p=\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\),%20tostring\(is_tls\)\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%2012m\),%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Grouped Status Code Count
In this example, we count messages by status code category. We group all messages with status codes in the `200s`, `300s`, `400s`, and `500s` together, and we also group the method requests by their `GET`, `POST`, `PUT`, and `DELETE` attributes. This provides an overview of the response status distribution.
**Sumo Logic:**
```bash
_sourceCategory=Apache/Access
| timeslice 15m
| if (status_code matches "20*",1,0) as resp_200
| if (status_code matches "30*",1,0) as resp_300
| if (status_code matches "40*",1,0) as resp_400
| if (status_code matches "50*",1,0) as resp_500
| if (!(status_code matches "20*" or status_code matches "30*" or status_code matches "40*" or status_code matches "50*"),1,0) as resp_others
| count(*), sum(resp_200) as tot_200, sum(resp_300) as tot_300, sum(resp_400) as tot_400, sum(resp_500) as tot_500, sum(resp_others) as tot_others by _timeslice
```
**APL:**
```kusto
['sample-http-logs']
| extend MethodCategory = case(
method == "GET", "GET Requests",
method == "POST", "POST Requests",
method == "PUT", "PUT Requests",
method == "DELETE", "DELETE Requests",
"Other Methods")
| extend StatusCodeCategory = case(
status startswith "2", "Success",
status startswith "3", "Redirection",
status startswith "4", "Client Error",
status startswith "5", "Server Error",
"Unknown Status")
| extend ContentTypeCategory = case(
content_type == "text/csv", "CSV",
content_type == "application/json", "JSON",
content_type == "text/html", "HTML",
"Other Types")
| summarize Count=count() by bin_auto(_time), StatusCodeCategory, MethodCategory, ContentTypeCategory
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20MethodCategory%20=%20case\(\n%20%20%20method%20==%20%22GET%22,%20%22GET%20Requests%22,\n%20%20%20method%20==%20%22POST%22,%20%22POST%20Requests%22,\n%20%20%20method%20==%20%22PUT%22,%20%22PUT%20Requests%22,\n%20%20%20method%20==%20%22DELETE%22,%20%22DELETE%20Requests%22,\n%20%20%20%22Other%20Methods%22\)\n|%20extend%20StatusCodeCategory%20=%20case\(\n%20%20%20status%20startswith%20%222%22,%20%22Success%22,\n%20%20%20status%20startswith%20%223%22,%20%22Redirection%22,\n%20%20%20status%20startswith%20%224%22,%20%22Client%20Error%22,\n%20%20%20status%20startswith%20%225%22,%20%22Server%20Error%22,\n%20%20%20%22Unknown%20Status%22\)\n|%20extend%20ContentTypeCategory%20=%20case\(\n%20%20%20content_type%20==%20%22text/csv%22,%20%22CSV%22,\n%20%20%20content_type%20==%20%22application/json%22,%20%22JSON%22,\n%20%20%20content_type%20==%20%22text/html%22,%20%22HTML%22,\n%20%20%20%22Other%20Types%22\)\n|%20summarize%20Count=count\(\)%20by%20bin_auto\(_time\),%20StatusCodeCategory,%20MethodCategory,%20ContentTypeCategory%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Conditional Operators
For the Source Category "apache", find all messages with a client error status code (40\*):
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| where status_code matches "40*"
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend p = ("HTTP/1.1\" * * \"")
| where status == "200"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p%20=%20\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\)\n|%20where%20status%20==%20%22200%22%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Browser-based Hit Count
In this query example, we aim to count the number of hits by browser. This analysis will provide insights into the different browsers used to access the source and their respective frequencies.
**Sumo Logic:**
```bash
_sourceCategory=Apache/Access
| extract "\"[A-Z]+ \S+ HTTP/[\d\.]+\" \S+ \S+ \S+ \"(?[^\"]+?)\""
| if (agent matches "*MSIE*",1,0) as ie
| if (agent matches "*Firefox*",1,0) as firefox
| if (agent matches "*Safari*",1,0) as safari
| if (agent matches "*Chrome*",1,0) as chrome
| sum(ie) as ie, sum(firefox) as firefox, sum(safari) as safari, sum(chrome) as chrome
```
**APL:**
```kusto
['sample-http-logs']
| extend ie = case(tolower(user_agent) contains "msie", 1, 0)
| extend firefox = case(tolower(user_agent) contains "firefox", 1, 0)
| extend safari = case(tolower(user_agent) contains "safari", 1, 0)
| extend chrome = case(tolower(user_agent) contains "chrome", 1, 0)
| summarize data = sum(ie), lima = sum(firefox), lo = sum(safari), ce = sum(chrome)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ie%20=%20case\(tolower\(user_agent\)%20contains%20%22msie%22,%201,%200\)\n|%20extend%20firefox%20=%20case\(tolower\(user_agent\)%20contains%20%22firefox%22,%201,%200\)\n|%20extend%20safari%20=%20case\(tolower\(user_agent\)%20contains%20%22safari%22,%201,%200\)\n|%20extend%20chrome%20=%20case\(tolower\(user_agent\)%20contains%20%22chrome%22,%201,%200\)\n|%20summarize%20data%20=%20sum\(ie\),%20lima%20=%20sum\(firefox\),%20lo%20=%20sum\(safari\),%20ce%20=%20sum\(chrome\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Use the where Operator to Match Only Weekend Days
**Sumo Logic:**
```bash
* | parse "day=*:" as day_of_week
| where day_of_week in ("Saturday","Sunday")
```
**APL:**
```kusto
['sample-http-logs']
| extend day_of_week = dayofweek(_time)
| where day_of_week == 1 or day_of_week == 0
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20day_of_week%20=%20dayofweek\(_time\)\n|%20where%20day_of_week%20==%201%20or%20day_of_week%20==%200%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Numeric Version Numbers
In this section, we identify version numbers that match the numeric values 2, 3, or 6. We use the `num` operator to convert these strings into numeric format, facilitating easier analysis and comparison.
**Sumo Logic:**
```bash
* | parse "Version=*." as number | num(number)
| where number in (2,3,6)
```
**APL:**
```kusto
['sample-http-logs']
| extend p= (req_duration_ms)
| extend number=toint(p)
| where number in (2,3,6)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20p=%20\(req_duration_ms\)\n|%20extend%20number=toint\(p\)\n|%20where%20number%20in%20\(2,3,6\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Making the Leap: Transform Your Data Analytics with APL
As we've navigated the process of migrating from Sumo Logic to APL, we hope you've found the insights valuable. The powerful capabilities of the Axiom Processing Language are now within your reach, ready to empower your data analytics journey.
Ready to take the next step in your data analytics journey? Dive deeper into APL and discover how it can unlock even more potential in your data. Check out our APL [learning resources](/apl/guides/migrating-from-sql-to-apl) and [tutorials](/apl/tutorial) to become proficient in APL, and join our [community forums](http://axiom.co/discord) to engage with other APL users. Together, we can redefine what’s possible in data analytics. Remember, the migration to APL is not just a change; it’s an upgrade. Embrace the change, because better data analytics await you.
Begin your APL journey today!
# Migrate from Splunk SPL to APL
Source: https://axiom.co/docs/apl/guides/splunk-cheat-sheet
This step-by-step guide provides a high-level mapping from Splunk SPL to APL.
Splunk and Axiom are powerful tools for log analysis and data exploration. Axiom’s data explorer interface uses the Axiom Processing Language (APL), and there are some differences between the Splunk and Axiom query languages. When transitioning from Splunk to Axiom, you need to understand how to convert your Splunk SPL queries into APL.
**This guide provides a high-level mapping from Splunk to APL.**
## Basic Searching
Splunk uses a `search` command for basic searching, while in APL, simply specify the dataset name followed by a filter.
**Splunk:**
```bash
search index="myIndex" error
```
**APL:**
```kusto
['myDataset']
| where FieldName contains "error"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20contains%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Filtering
In Splunk, perform filtering using the `search` command, usually specifying field names and their desired values. In APL, perform filtering by using the `where` operator.
**Splunk:**
```bash
search index="myIndex" error
| stats count
```
**APL:**
```kusto
['myDataset']
| where fieldName contains "error"
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20contains%20%27text%27\n|%20count\n|%20limit%2010%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Aggregation
In Splunk, the `stats` command is used for aggregation. In APL, perform aggregation using the `summarize` operator.
**Splunk:**
```bash
search index="myIndex"
| stats count by status
```
**APL:**
```kusto
['myDataset']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Time Frames
In Splunk, select a time range for a search in the time picker on the search page. In APL, filter by a time range using the `where` operator on the `_time` field of the dataset.
**Splunk:**
```bash
search index="myIndex" earliest=-1d@d latest=now
```
**APL:**
```kusto
['myDataset']
| where _time >= ago(1d) and _time <= now()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20_time%20%3E=%20ago\(1d\)%20and%20_time%20%3C=%20now\(\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Sorting
In Splunk, the `sort` command is used to order the results of a search. In APL, perform sorting by using the `sort by` operator.
**Splunk:**
```bash
search index="myIndex"
| sort - content_type
```
**APL:**
```kusto
['myDataset']
| sort by content_type desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20sort%20by%20content_type%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Selecting Fields
In Splunk, use the `fields` command to specify which fields to include or exclude in the search results. In APL, use the `project` operator, `project-away` operator, or the `project-keep` operator to specify which fields to include in or exclude from the query results.
**Splunk:**
```bash
index=main sourcetype=mySourceType
| fields status, responseTime
```
**APL:**
```kusto
['myDataset']
| extend newName = oldName
| project-away oldName
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20newStatus%20=%20status%20\n|%20project-away%20status%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
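To keep only specific fields rather than renaming them, you can also use the `project` operator directly. A minimal sketch against the `sample-http-logs` dataset:

```kusto
['sample-http-logs']
| project status, req_duration_ms
```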
## Renaming Fields
In Splunk, rename fields using the `rename` command, while in APL you rename fields using the `extend` and `project` operators. Here is the general syntax:
**Splunk:**
```bash
index="myIndex" sourcetype="mySourceType"
| rename oldFieldName AS newFieldName
```
**APL:**
```kusto
['myDataset']
| where method == "GET"
| extend new_field_name = content_type
| project-away content_type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27\n|%20extend%20new_field_name%20=%20content_type\n|%20project-away%20content_type%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Calculated Fields
In Splunk, use the `eval` command to create calculated fields based on the values of other fields, while in APL use the `extend` operator to create calculated fields based on the values of other fields.
**Splunk**
```bash
search index="myIndex"
| eval newField=field1+field2
```
**APL:**
```kusto
['myDataset']
| extend newField = field1 + field2
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20calculatedFields%20=%20req_duration_ms%20%2b%20resp_body_size_bytes%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Structure and Concepts
The following table compares concepts and data structures between Splunk and APL logs.
| Concept | Splunk | APL | Comment |
| ------------------------- | -------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| data caches | buckets | caching and retention policies | Controls the period and caching level for the data. This setting directly affects the performance of queries. |
| logical partition of data | index | dataset | Allows logical separation of the data. |
| structured event metadata | N/A | dataset | Splunk doesn’t expose the concept of metadata to the search language. APL logs have the concept of a dataset, which has fields and columns. Each event instance is mapped to a row. |
| data record | event | row | Terminology change only. |
| types | datatype | datatype | APL data types are more explicit because they are set on the fields. Both have the ability to work dynamically with data types and roughly equivalent sets of data types. |
| query and search | search | query | Concepts are essentially the same between APL and Splunk. |
## Functions
The following table specifies functions in APL that are equivalent to Splunk Functions.
| Splunk | APL |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| strcat | strcat() |
| split | split() |
| if | iff() |
| tonumber | todouble(), tolong(), toint() |
| upper, lower | toupper(), tolower() |
| replace | replace\_string() or replace\_regex() |
| substr | substring() |
| tolower | tolower() |
| toupper | toupper() |
| match | matches regex |
| regex | matches regex **(In Splunk, `regex` is an operator. In APL, it’s a relational operator.)** |
| searchmatch | == **(In Splunk, `searchmatch` allows searching the exact string.)** |
| random | rand(), rand(n) **(Splunk’s function returns a number between zero and 2^31-1. APL returns a number between 0.0 and 1.0, or if a parameter is provided, between 0 and n-1.)** |
| now | now() |
In Splunk, you invoke these functions with the `eval` operator. In APL, you use them as part of the `extend`, `project`, or `where` operators.
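As a minimal sketch using the `sample-http-logs` dataset from the earlier examples, the following applies scalar functions inside `extend` and `where`:

```kusto
['sample-http-logs']
| extend method_upper = toupper(method)
| where tolower(content_type) contains 'text'
| project _time, method_upper, content_type
```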
## Filter
APL log queries start from a tabular result set in which a filter is applied. In Splunk, filtering is the default operation on the current index. You may also use the `where` operator in Splunk, but we don’t recommend it.
| Product | Operator | Example |
| :------ | :--------- | :------------------------------------------------------------------------- |
| Splunk | **search** | Sample.Logs="330009.2" method="GET" \_indextime>-24h |
| APL | **where** | \['sample-http-logs'] \| where method == "GET" and \_time > ago(24h) |
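Written out as a full query, the APL example in the table looks like this:

```kusto
['sample-http-logs']
| where method == "GET" and _time > ago(24h)
```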
## Get n events or rows for inspection
APL log queries also support `take` as an alias to `limit`. In Splunk, if the results are ordered, `head` returns the first n results. In APL, `limit` isn’t ordered, but it returns the first n rows that are found.
| Product | Operator | Example |
| ------- | -------- | ---------------------------------------- |
| Splunk | head | Sample.Logs=330009.2 \| head 100 |
| APL     | limit    | \['sample-http-logs'] \| limit 100       |
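Because `take` is an alias for `limit`, the following is equivalent to the APL example in the table:

```kusto
['sample-http-logs']
| take 100
```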
## Get the first *n* events or rows ordered by a field or column
To get the bottom results in Splunk, use `tail`. In APL, specify the ordering direction with `asc`, as shown in the sketch after the table.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------------ |
| Splunk | head | Sample.Logs="33009.2" \| sort Event.Sequence \| head 20 |
| APL | top | \['sample-http-logs'] \| top 20 by method |
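As a sketch against the `sample-http-logs` dataset, you can get the bottom 20 rows by request duration by sorting in ascending order and taking the first rows:

```kusto
['sample-http-logs']
| sort by req_duration_ms asc
| take 20
```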
## Extend the result set with new fields or columns
Splunk has an `eval` function, but it isn’t directly comparable to the `extend` operator in APL. Both Splunk’s `eval` operator and APL’s `extend` operator support only scalar functions and arithmetic operators.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------------------------------ |
| Splunk | eval | Sample.Logs=330009.2 \| eval state= if(Data.Exception = "0", "success", "error") |
| APL | extend | \['sample-http-logs'] \| extend Grade = iff(req\_duration\_ms >= 80, "A", "B") |
## Rename
APL uses the `project` operator to rename a field. In the `project` operator, a query can take advantage of any indexes that are prebuilt for a field. Splunk has a `rename` operator that does the same.
| Product | Operator | Example |
| :------ | :------- | :-------------------------------------------------------------- |
| Splunk  | rename   | Sample.Logs=330009.2 \| rename Date.Exception as exception      |
| APL | project | \['sample-http-logs'] \| project updated\_status = status |
## Format results and projection
Splunk uses the `table` command to select which columns to include in the results. APL has a `project` operator that does the same and [more](/apl/tabular-operators/project-operator).
| Product | Operator | Example |
| :------ | :------- | :--------------------------------------------------- |
| Splunk | table | Event.Rule=330009.2 \| table rule, state |
| APL | project | \['sample-http-logs'] \| project status, method |
Splunk uses the `fields -` command to select which columns to exclude from the results. APL has a `project-away` operator that does the same.
| Product | Operator | Example |
| :------ | :--------------- | :-------------------------------------------------------------- |
| Splunk  | **fields -**     | Sample.Logs=330009.2 \| fields - quota, highest\_seller          |
| APL | **project-away** | \['sample-http-logs'] \| project-away method, status |
## Aggregation
See the [list of summarize aggregations functions](/apl/aggregation-function/statistical-functions) that are available.
| Splunk operator | Splunk example | APL operator | APL example |
| :-------------- | :------------------------------------------------------------- | :----------- | :----------------------------------------------------------------------- |
| **stats** | search (Rule=120502.\*) \| stats count by OSEnv, Audience | summarize | \['sample-http-logs'] \| summarize count() by content\_type, status |
## Sort
In Splunk, to sort in ascending order, you must use the `reverse` operator. APL also supports defining where to put nulls, either at the beginning or at the end.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------- |
| Splunk | sort | Sample.logs=120103 \| sort Data.Hresult \| reverse |
| APL | order by | \['sample-http-logs'] \| order by status desc |
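As a sketch, and assuming APL follows the Kusto-style `nulls first` / `nulls last` syntax for null placement, an ordering that pushes null values to the end looks like this:

```kusto
['sample-http-logs']
| order by content_type desc nulls last
```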
Whether you’re just starting your transition or you’re in the thick of it, this guide can serve as a helpful roadmap to assist you in your journey from Splunk to Axiom Processing Language.
Dive into the Axiom Processing Language, start converting your Splunk queries to APL, and explore the rich capabilities of the Query tab. Embrace the learning curve, and remember, every complex query you master is another step forward in your data analytics journey.
# Axiom Processing Language (APL)
Source: https://axiom.co/docs/apl/introduction
This section explains how to use the Axiom Processing Language to get deeper insights from your data.
The Axiom Processing Language (APL) is a query language that’s perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
## Build an APL query
APL queries consist of the following:
* **Data source:** The most common data source is one of your Axiom datasets.
* **Operators:** Operators filter, manipulate, and summarize your data.
Delimit operators with the pipe character (`|`).
A typical APL query has the following structure:
```kusto
DatasetName
| Operator ...
| Operator ...
```
* `DatasetName` is the name of the dataset you want to query.
* `Operator` is an operation you apply to the data.
Apart from Axiom datasets, you can use other data sources:
* External data sources using the [externaldata](/apl/tabular-operators/externaldata-operator) operator.
* Specify a data table in the APL query itself using the `let` statement.
## Example query
```kusto
['github-issue-comment-event']
| extend isBot = actor contains '-bot' or actor contains '[bot]'
| where isBot == true
| summarize count() by bin_auto(_time), actor
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issue-comment-event'%5D%20%7C%20extend%20isBot%20%3D%20actor%20contains%20'-bot'%20or%20actor%20contains%20'%5Bbot%5D'%20%7C%20where%20isBot%20%3D%3D%20true%20%7C%20summarize%20count\(\)%20by%20bin_auto\(_time\)%2C%20actor%22%7D)
The query above uses a dataset called `github-issue-comment-event` as its data source. It uses the following operators:
* [extend](/apl/tabular-operators/extend-operator) adds a new field `isBot` to the query results. It sets the values of the new field to true if the values of the `actor` field in the original dataset contain `-bot` or `[bot]`.
* [where](/apl/tabular-operators/where-operator) filters for the values of the `isBot` field. It only returns rows where the value is true.
* [summarize](/apl/tabular-operators/summarize-operator) aggregates the data and produces a chart.
Each operator is separated using the pipe character (`|`).
## Example result
As a result, the query returns a chart and a table. The table counts the different values of the `actor` field where `isBot` is true, and the chart displays the distribution of these counts over time.
| actor | count\_ |
| -------------------- | ------- |
| github-actions\[bot] | 487 |
| sonarqubecloud\[bot] | 208 |
| dependabot\[bot] | 148 |
| vercel\[bot] | 91 |
| codecov\[bot] | 63 |
| openshift-ci\[bot] | 52 |
| coderabbitai\[bot] | 43 |
| netlify\[bot] | 37 |
The query results are a representation of your data based on your request. The query doesn’t change the original dataset.
## Quote dataset and field names
If the name of a dataset or field contains at least one of the following special characters, quote the name in your APL query:
* Space (` `)
* Dot (`.`)
* Dash (`-`)
To quote the dataset or field in your APL query, enclose its name with quotation marks (`'` or `"`) and square brackets (`[]`). For example, `['my-field']`.
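For example, the following query quotes a dataset name that contains a dash and a field name that contains a dot:

```kusto
['sample-http-logs']
| project ['geo.city'], status
| take 10
```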
For more information on rules about naming and quoting entities, see [Entity names](/apl/entities/entity-names).
## What's next
Check out the [list of sample queries](/apl/tutorial) or explore the supported operators and functions:
* [Scalar functions](/apl/scalar-functions/)
* [Aggregation functions](/apl/aggregation-function/)
* [Tabular operators](/apl/tabular-operators/)
* [Scalar operators](/apl/scalar-operators/)
# Set statement
Source: https://axiom.co/docs/apl/query-statement/set-statement
The set statement is used to set a query option in your APL query.
The `set` statement is used to set a query option. Options enabled with the `set` statement only have effect for the duration of the query.
The specified `set` statement affects how your query is processed and the results that are returned.
## Syntax
```kusto
set OptionName=OptionValue
```
## Strict types
The `stricttypes` query option lets you specify only the exact data type declared in your query. Otherwise, the query returns a **QueryFailed** error.
## Example
```kusto
set stricttypes;
['Dataset']
| where number == 5
```
# Special field attributes
Source: https://axiom.co/docs/apl/reference/special-field-attributes
This page explains how to implement special fields within APL queries to enhance the functionality and interactivity of datasets. Use these fields in APL queries to add unique behaviors to the Axiom user interface.
## Add link to table
* Name: `_row_url`
* Type: string
* Description: Define the URL to which the entire table links.
* APL query example: `extend _row_url = 'https://axiom.co/'`
* Expected behavior: Make rows clickable. When clicked, go to the specified URL.
If you specify a static string as the URL, all rows link to that page. To specify a different URL for each row, use a dynamic expression like `extend _row_url = strcat('https://axiom.co/', uri)` where `uri` is a field in your data.
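As a sketch against the `sample-http-logs` dataset, the following builds a different link for each row from the `uri` field; the base URL `https://example.com/requests` is illustrative only:

```kusto
['sample-http-logs']
// Each row links to a hypothetical details page derived from its URI
| extend _row_url = strcat('https://example.com/requests', uri)
| project _time, method, uri, status
```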
## Add link to values in a field
* Name: `_FIELDNAME_url`
* Type: string
* Description: Define a URL to which values in a field link.
* APL query example: `extend _website_url = 'https://axiom.co/'`
* Expected behavior: Make values in the `website` field clickable. When clicked, go to the specified URL.
Replace `FIELDNAME` with the actual name of the field.
## Add tooltip to values in a field
* Name: `_FIELDNAME_tooltip`
* Type: string
* Description: Define text to be displayed when hovering over values in a field.
* Example Usage: `extend _errors_tooltip = 'Number of errors'`
* Expected behavior: Display a tooltip with the specified text when the user hovers over values in a field.
Replace `FIELDNAME` with the actual name of the field.
## Add description to values in a field
* Name: `_FIELDNAME_description`
* Type: string
* Description: Define additional information to be displayed under the values in a field.
* Example Usage: `extend _diskusage_description = 'Current disk usage'`
* Expected behavior: Display additional text under the values in a field for more context.
Replace `FIELDNAME` with the actual name of the field.
## Add unit of measurement
* Name: `_FIELDNAME_unit`
* Type: string
* Description: Specify the unit of measurement for another field’s value, allowing for proper formatting and display.
* APL query example: `extend _size_unit = "gbytes"`
* Expected behavior: Format the value in the `size` field according to the unit specified in the `_size_unit` field.
Replace `FIELDNAME` with the actual name of the field you want to format. For example, for a field named `size`, use `_size_unit = "gbytes"` to display its values in gigabytes in the query results.
The supported units are the following:
**Percentage**
| Unit name         | APL syntax |
| ----------------- | ---------- |
| percent (0-100) | percent100 |
| percent (0.0-1.0) | percent |
**Currency**
| Unit name    | APL syntax |
| ------------ | --------- |
| Dollars (\$) | curusd |
| Pounds (£) | curgbp |
| Euro (€) | cureur |
| Bitcoin (฿) | curbtc |
**Data (IEC)**
| Unit name  | APL syntax |
| ---------- | --------- |
| bits(IEC) | bits |
| bytes(IEC) | bytes |
| kibibytes | kbytes |
| mebibytes | mbytes |
| gibibytes | gbytes |
| tebibytes | tbytes |
| pebibytes | pbytes |
**Data (metric)**
| Unit name     | APL syntax |
| ------------- | --------- |
| bits(Metric) | decbits |
| bytes(Metric) | decbytes |
| kilobytes | deckbytes |
| megabytes | decmbytes |
| gigabytes | decgbytes |
| terabytes | dectbytes |
| petabytes | decpbytes |
**Data rate**
| Unit name     | APL syntax |
| ------------- | --------- |
| packets/sec | pps |
| bits/sec | bps |
| bytes/sec | Bps |
| kilobytes/sec | KBs |
| kilobits/sec | Kbits |
| megabytes/sec | MBs |
| megabits/sec | Mbits |
| gigabytes/sec | GBs |
| gigabits/sec | Gbits |
| terabytes/sec | TBs |
| terabits/sec | Tbits |
| petabytes/sec | PBs |
| petabits/sec | Pbits |
**Datetime**
| Unit name         | APL syntax |
| ----------------- | --------- |
| Hertz (1/s) | hertz |
| nanoseconds (ns) | ns |
| microseconds (µs) | µs |
| milliseconds (ms) | ms |
| seconds (s) | secs |
| minutes (m) | mins |
| hours (h) | hours |
| days (d) | days |
| ago | ago |
**Throughput**
| Unit name          | APL syntax |
| ------------------ | --------- |
| counts/sec (cps) | cps |
| ops/sec (ops) | ops |
| requests/sec (rps) | reqps |
| reads/sec (rps) | rps |
| writes/sec (wps) | wps |
| I/O ops/sec (iops) | iops |
| counts/min (cpm) | cpm |
| ops/min (opm) | opm |
| requests/min (rpm) | reqpm     |
| reads/min (rpm) | rpm |
| writes/min (wpm) | wpm |
## Example
The example APL query below adds a tooltip and a description to the values of the `status` field. Clicking one of the values in this field leads to a page about status codes. The query also adds a new field `resp_body_size_bits` that displays the size of the response body in bits.
```apl
['sample-http-logs']
| extend _status_tooltip = 'The status of the HTTP request is the response code from the server. It shows if an HTTP request has been successfully completed.'
| extend _status_description = 'This is the status of the HTTP request.'
| extend _status_url = 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status'
| extend resp_body_size_bits = resp_body_size_bytes * 8
| extend _resp_body_size_bits_unit = 'bits'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20_status_tooltip%20%3D%20'The%20status%20of%20the%20HTTP%20request%20is%20the%20response%20code%20from%20the%20server.%20It%20shows%20if%20an%20HTTP%20request%20has%20been%20successfully%20completed.'%20%7C%20extend%20_status_description%20%3D%20'This%20is%20the%20status%20of%20the%20HTTP%20request.'%20%7C%20extend%20_status_url%20%3D%20'https%3A%2F%2Fdeveloper.mozilla.org%2Fen-US%2Fdocs%2FWeb%2FHTTP%2FStatus'%20%7C%20extend%20resp_body_size_bits%20%3D%20resp_body_size_bytes%20*%208%20%7C%20extend%20_resp_body_size_bits_unit%20%3D%20'bits'%22%7D)
# Array functions
Source: https://axiom.co/docs/apl/scalar-functions/array-functions
This section explains how to use array functions in APL.
The table summarizes the array functions available in APL.
| Function | Description |
| -------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| [array\_concat](/apl/scalar-functions/array-functions/array-concat) | Concatenates a number of dynamic arrays to a single array. |
| [array\_extract](/apl/scalar-functions/array-functions/array-extract) | Returns a dynamic array containing the extracted elements. |
| [array\_iff](/apl/scalar-functions/array-functions/array-iff) | Returns a new array containing elements from the input array that satisfy the condition. |
| [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of) | Searches the array for the specified item, and returns its position. |
| [array\_length](/apl/scalar-functions/array-functions/array-length) | Calculates the number of elements in a dynamic array. |
| [array\_reverse](/apl/scalar-functions/array-functions/array-reverse) | Reverses the order of the elements in a dynamic array. |
| [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left) | Rotates values inside a dynamic array to the left. |
| [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right) | Rotates values inside a dynamic array to the right. |
| [array\_select\_dict](/apl/scalar-functions/array-functions/array-select-dict) | Selects a dictionary from an array of dictionaries. |
| [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left) | Shifts the values inside a dynamic array to the left. |
| [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right) | Shifts values inside an array to the right. |
| [array\_slice](/apl/scalar-functions/array-functions/array-slice) | Extracts a slice of a dynamic array. |
| [array\_split](/apl/scalar-functions/array-functions/array-split) | Splits an array to multiple arrays according to the split indices and packs the generated array in a dynamic array. |
| [array\_sum](/apl/scalar-functions/array-functions/array-sum) | Calculates the sum of elements in a dynamic array. |
| [bag\_has\_key](/apl/scalar-functions/array-functions/bag-has-key) | Checks whether a dynamic property bag contains a specific key. |
| [bag\_keys](/apl/scalar-functions/array-functions/bag-keys) | Returns all keys in a dynamic property bag. |
| [bag\_pack](/apl/scalar-functions/array-functions/bag-pack) | Converts a list of key-value pairs to a dynamic property bag. |
| [isarray](/apl/scalar-functions/array-functions/isarray) | Checks whether a value is an array. |
| [len](/apl/scalar-functions/array-functions/len) | Returns the length of a string or the number of elements in an array. |
| [pack\_array](/apl/scalar-functions/array-functions/pack-array) | Packs all input values into a dynamic array. |
| [pack\_dictionary](/apl/scalar-functions/array-functions/pack-dictionary) | Returns a dynamic object that represents a dictionary where each key maps to its associated value. |
| [strcat\_array](/apl/scalar-functions/array-functions/strcat-array) | Takes an array and returns a single concatenated string with the array’s elements separated by the specified delimiter. |
## Dynamic arrays
Most array functions accept a dynamic array as their parameter. Dynamic arrays allow you to add or remove elements. You can change a dynamic array with an array function.
A dynamic array expands as you add more elements. This means that you don’t need to determine the size in advance.
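As a minimal sketch, the following builds a dynamic array inline and applies two of the functions from the table above:

```kusto
let arr = dynamic([1, 2, 3, 4]);
// Print the array together with its length and the sum of its elements
print items = arr, arr_length = array_length(arr), arr_sum = array_sum(arr)
```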
# array_concat
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-concat
This page explains how to use the array_concat function in APL.
The `array_concat` function in APL (Axiom Processing Language) concatenates two or more arrays into a single array. Use this function when you need to merge multiple arrays into a single array structure. It’s particularly useful for situations where you need to handle and combine collections of elements across different fields or sources, such as log entries, OpenTelemetry trace data, or security logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In SPL, you typically use the `mvappend` function to concatenate multiple fields or arrays into a single array. In APL, the equivalent is `array_concat`, which also combines arrays but requires you to specify each array as a parameter.
```sql Splunk example
| eval combined_array = mvappend(array1, array2, array3)
```
```kusto APL equivalent
| extend combined_array = array_concat(array1, array2, array3)
```
ANSI SQL doesn’t natively support an array concatenation function across different arrays. Instead, you typically use `UNION` to combine results from multiple arrays or collections. In APL, `array_concat` allows you to directly concatenate multiple arrays, providing a more straightforward approach.
```sql SQL example
SELECT array1 UNION ALL array2 UNION ALL array3
```
```kusto APL equivalent
| extend combined_array = array_concat(array1, array2, array3)
```
## Usage
### Syntax
```kusto
array_concat(array1, array2, ...)
```
### Parameters
* `array1`: The first array to concatenate.
* `array2`: The second array to concatenate.
* `...`: Additional arrays to concatenate.
### Returns
An array containing all elements from the input arrays in the order they are provided.
## Use case examples
In log analysis, you can use `array_concat` to merge collections of user requests into a single array to analyze request patterns across different endpoints.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize combined_requests = array_concat(pack_array(uri), pack_array(method))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20combined_requests%20%3D%20array_concat\(pack_array\(uri\)%2C%20pack_array\(method\)\)%22%7D)
**Output**
| \_time | uri | method | combined\_requests |
| ------------------- | ----------------------- | ------ | ------------------------------------ |
| 2024-10-28T12:30:00 | /api/v1/textdata/cnfigs | POST | \["/api/v1/textdata/cnfigs", "POST"] |
This example concatenates the `uri` and `method` values into a single array for each log entry, allowing for combined analysis of access patterns and request methods in log data.
In OpenTelemetry traces, use `array_concat` to join span IDs and trace IDs for a comprehensive view of trace behavior across services.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize combined_ids = array_concat(pack_array(span_id), pack_array(trace_id))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20summarize%20combined_ids%20%3D%20array_concat\(pack_array\(span_id\)%2C%20pack_array\(trace_id\)\)%22%7D)
**Output**
| combined\_ids |
| ---------------------------------- |
| \["span1", "trace1", "span2", ...] |
| \_time | trace\_id | span\_id | combined\_ids |
| ------------------- | ------------- | --------- | ------------------------------- |
| 2024-10-28T12:30:00 | trace\_abc123 | span\_001 | \["trace\_abc123", "span\_001"] |
This example creates an array containing both `span_id` and `trace_id` values, offering a unified view of the trace journey across services.
In security logs, `array_concat` can consolidate multiple IP addresses or user IDs to detect potential attack patterns involving different locations or users.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| take 50
| summarize failed_attempts = array_concat(pack_array(id), pack_array(['geo.city']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'500'%20%7C%20take%2050%20%7C%20summarize%20failed_attempts%20%3D%20array_concat\(pack_array\(id\)%2C%20pack_array\(%5B'geo.city'%5D\)\)%22%7D)
**Output**
| \_time | id | geo.city | failed\_attempts |
| ------------------- | ------------------------------------ | -------- | --------------------------------------------------- |
| 2024-10-28T12:30:00 | fc1407f5-04ca-4f4e-ad01-f72063736e08 | Avenal | \["fc1407f5-04ca-4f4e-ad01-f72063736e08", "Avenal"] |
This query combines failed user IDs and cities where the request originated, allowing security analysts to detect suspicious patterns or brute force attempts from different regions.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
# array_extract
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-extract
This page explains how to use the array_extract function in APL.
Use the `array_extract` function to extract specific values from a dynamic array using a JSON path expression. You can use this function to transform structured array data, such as arrays of objects, into simpler arrays of scalars. This is useful when working with nested JSON-like structures where you need to extract only selected fields for analysis, visualization, or filtering.
Use `array_extract` when:
* You need to pull scalar values from arrays of objects.
* You want to simplify a nested data structure before further analysis.
* You are working with structured logs or metrics where key values are nested inside arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically use `spath` with a wildcard or field extraction logic to navigate nested structures. APL’s `array_extract` uses JSON path syntax to extract array elements that match a given pattern.
```sql Splunk example
| eval arr=mvappend("{\"id\":1,\"value\":true}", "{\"id\":2,\"value\":false}")
| spath input=arr path="{}.value" output=extracted_value
```
```kusto APL equivalent
['sample-http-logs']
| extend extracted_value = array_extract(dynamic([{'id': 1, 'value': true}, {'id': 2, 'value': false}]), @'$[*].value')
| project _time, extracted_value
```
ANSI SQL doesn’t offer native support for JSON path queries on arrays in standard syntax. While some engines support functions like `JSON_VALUE` or `JSON_TABLE`, they operate on single objects. APL’s `array_extract` provides a concise and expressive way to query arrays using JSON path.
```sql SQL example
SELECT JSON_EXTRACT(data, '$[*].value') AS extracted_value
FROM my_table;
```
```kusto APL equivalent
['sample-http-logs']
| extend extracted_value = array_extract(dynamic([{'id': 1, 'value': true}, {'id': 2, 'value': false}]), @'$[*].value')
| project _time, extracted_value
```
## Usage
### Syntax
```kusto
array_extract(sourceArray, jsonPath)
```
### Parameters
| Name | Type | Description |
| ------------- | --------- | ------------------------------------------------------- |
| `sourceArray` | `dynamic` | A JSON-like dynamic array to extract values from. |
| `jsonPath` | `string` | A JSON path expression to select values from the array. |
### Returns
A dynamic array of values that match the JSON path expression. The function always returns an array, even when the path matches only one element or no elements.
## Use case examples
Use `array_extract` to retrieve specific fields from structured arrays, such as arrays of request metadata.
**Query**
```kusto
['sample-http-logs']
| extend extracted_value = array_extract(dynamic([{'id': 1, 'value': true}, {'id': 2, 'value': false}]), @'$[*].value')
| project _time, extracted_value
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20extracted_value%20%3D%20array_extract%28dynamic%28\[%7B'id'%3A%201%2C%20'value'%3A%20true%7D%2C%20%7B'id'%3A%202%2C%20'value'%3A%20false%7D]%29%2C%20%40'%24%5B*%5D.value'%29%20%7C%20project%20_time%2C%20extracted_value%22%7D)
**Output**
| \_time | extracted\_value |
| ---------------- | ------------------ |
| Jun 24, 09:28:10 | \["true", "false"] |
| Jun 24, 09:28:10 | \["true", "false"] |
| Jun 24, 09:28:10 | \["true", "false"] |
This query extracts the `value` field from an array of objects, returning a flat array of booleans in string form.
Use `array_extract` to extract service names from a nested structure—for example, collecting `service.name` from span records in a trace bundle.
**Query**
```kusto
['otel-demo-traces']
| summarize traces=make_list(pack('trace_id', trace_id, 'service', ['service.name'])) by span_id
| extend services=array_extract(traces, @'$[*].service')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20traces%3Dmake_list%28pack%28'trace_id'%2C%20trace_id%2C%20'service'%2C%20%5B'service.name'%5D%29%29%20by%20span_id%20%7C%20extend%20services%3Darray_extract%28traces%2C%20%40'%24%5B*%5D.service'%29%22%7D)
**Output**
| span\_id | services |
| ---------------- | ----------------- |
| 24157518330f7967 | \[frontend-proxy] |
| 209a0815d291d88a | \[currency] |
| aca763479149f1d0 | \[frontend-web] |
This query collects and extracts the `service.name` fields from a constructed nested structure of spans.
Use `array_extract` to extract HTTP status codes from structured log entries grouped into sessions.
**Query**
```kusto
['sample-http-logs']
| summarize events=make_list(pack('uri', uri, 'status', status)) by id
| extend status_codes=array_extract(events, @'$[*].status')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20events%3Dmake_list%28pack%28'uri'%2C%20uri%2C%20'status'%2C%20status%29%29%20by%20id%20%7C%20extend%20status_codes%3Darray_extract%28events%2C%20%40'%24%5B*%5D.status'%29%22%7D)
**Output**
| id | status\_codes |
| ----- | ------------- |
| user1 | \[200] |
| user2 | \[201] |
| user3 | \[200] |
This query extracts all HTTP status codes per user session, helping to identify patterns like repeated failures or suspicious behavior.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Returns a subarray like `array_extract`, but supports negative indexing.
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array. Useful before applying `array_extract`.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Joins arrays end-to-end. Use before or after slicing arrays with `array_extract`.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the position of an element in an array, which can help set the `startIndex` for `array_extract`.
# array_iff
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-iff
This page explains how to use the array_iff function in APL.
The `array_iff` function in Axiom Processing Language (APL) allows you to create arrays based on a condition. It returns an array with elements from two specified arrays, choosing each element from the first array when a condition is met and from the second array otherwise. This function is useful for scenarios where you need to evaluate a series of conditions across multiple datasets, especially in log analysis, trace data, and other applications requiring conditional element selection within arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, array manipulation based on conditions typically requires using conditional functions or eval expressions. APL’s `array_iff` function lets you directly select elements from one array or another based on a condition, offering more streamlined array manipulation.
```sql Splunk example
eval selected_array=if(condition, array1, array2)
```
```kusto APL equivalent
array_iff(condition_array, array1, array2)
```
In ANSI SQL, conditionally selecting elements from arrays often requires complex `CASE` statements or functions. With APL’s `array_iff` function, you can directly compare arrays and conditionally populate them, simplifying array-based operations.
```sql SQL example
CASE WHEN condition THEN array1 ELSE array2 END
```
```kusto APL equivalent
array_iff(condition_array, array1, array2)
```
## Usage
### Syntax
```kusto
array_iff(condition_array, array1, array2)
```
### Parameters
* `condition_array`: An array of boolean values, where each element determines whether to choose the corresponding element from `array1` or `array2`.
* `array1`: The array to select elements from when the corresponding `condition_array` element is `true`.
* `array2`: The array to select elements from when the corresponding `condition_array` element is `false`.
### Returns
An array where each element is selected from `array1` if the corresponding `condition_array` element is `true`, and from `array2` otherwise.
## Use case examples
The `array_iff` function can help filter log data conditionally, such as choosing specific durations based on HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| order by _time desc
| limit 1000
| summarize is_ok = make_list(status == '200'), request_duration = make_list(req_duration_ms)
| project ok_request_duration = array_iff(is_ok, request_duration, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20order%20by%20_time%20desc%20%7C%20limit%201000%20%7C%20summarize%20is_ok%20%3D%20make_list\(status%20%3D%3D%20'200'\)%2C%20request_duration%20%3D%20make_list\(req_duration_ms\)%20%7C%20project%20ok_request_duration%20%3D%20array_iff\(is_ok%2C%20request_duration%2C%200\)%22%7D)
**Output**
| ok\_request\_duration |
| -------------------------------------------------------------------- |
| \[0.3150485097707766, 0, 0.21691408087847264, 0, 0.2757618582190533] |
This example filters the `req_duration_ms` field to include only durations for the most recent 1,000 requests with status `200`, replacing others with `0`.
With OpenTelemetry trace data, you can use `array_iff` to filter spans based on the service type, such as selecting durations for `server` spans and setting others to zero.
**Query**
```kusto
['otel-demo-traces']
| order by _time desc
| limit 1000
| summarize is_server = make_list(kind == 'server'), duration_list = make_list(duration)
| project server_durations = array_iff(is_server, duration_list, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20order%20by%20_time%20desc%20%7C%20limit%201000%20%7C%20summarize%20is_server%20%3D%20make_list\(kind%20%3D%3D%20'server'\)%2C%20duration_list%20%3D%20make_list\(duration\)%20%7C%20project%20%20server_durations%20%3D%20array_iff\(is_server%2C%20duration_list%2C%200\)%22%7D)
**Output**
| server\_durations |
| ---------------------------------------- |
| \["45.632µs", "54.622µs", 0, "34.051µs"] |
In this example, `array_iff` selects durations only for `server` spans, setting non-server spans to `0`.
In security logs, `array_iff` can be used to focus on specific cities in which HTTP requests originated, such as showing response durations for certain cities and excluding others.
**Query**
```kusto
['sample-http-logs']
| limit 1000
| summarize is_london = make_list(['geo.city'] == "London"), request_duration = make_list(req_duration_ms)
| project london_duration = array_iff(is_london, request_duration, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20limit%201000%20%7C%20summarize%20is_london%20%3D%20make_list\(%5B'geo.city'%5D%20%3D%3D%20'London'\)%2C%20request_duration%20%3D%20make_list\(req_duration_ms\)%20%7C%20project%20london_duration%20%3D%20array_iff\(is_london%2C%20request_duration%2C%200\)%22%7D)
**Output**
| london\_duration |
| ---------------- |
| \[100, 0, 250] |
This example filters the `req_duration_ms` array to show durations for requests from London, with non-matching cities having `0` as duration.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
# array_index_of
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-index-of
This page explains how to use the array_index_of function in APL.
The `array_index_of` function in APL returns the zero-based index of the first occurrence of a specified value within an array. If the value isn’t found, the function returns `-1`. Use this function when you need to identify the position of a specific item within an array, such as finding the location of an error code in a sequence of logs or pinpointing a particular value within telemetry data arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `mvfind` function retrieves the position of an element within an array, similar to how `array_index_of` operates in APL. However, note that APL uses a zero-based index for results, while SPL is one-based.
```splunk Splunk example
| eval index=mvfind(array, "value")
```
```kusto APL equivalent
let index = array_index_of(array, 'value')
```
ANSI SQL doesn’t have a direct equivalent for finding the index of an element within an array. Typically, you would use a combination of array and search functions if supported by your SQL variant.
```sql SQL example
SELECT POSITION('value' IN ARRAY[...])
```
```kusto APL equivalent
let index = array_index_of(array, 'value')
```
## Usage
### Syntax
```kusto
array_index_of(array, lookup_value, [start], [length], [occurrence])
```
### Parameters
| Name | Type | Required | Description |
| ------------- | ------ | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| array | array | Yes | Input array to search. |
| lookup\_value | scalar | Yes | Scalar value to search for in the array. Accepted data types: long, integer, double, datetime, timespan, or string. |
| start\_index | number | No | The index where to start the search. A negative value offsets the starting search value from the end of the array by `abs(start_index)` steps. |
| length | number | No | Number of values to examine. A value of `-1` means unlimited length. |
| occurrence | number | No | The number of the occurrence. By default `1`. |
### Returns
`array_index_of` returns the zero-based index of the first occurrence of the specified `lookup_value` in `array`. If `lookup_value` doesn’t exist in the array, it returns `-1`.
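Building on the optional parameters above, here is a minimal sketch that searches for the second occurrence of a value; with zero-based indexing, `first_500` is expected to be `1` and `second_500` to be `3`:

```kusto
print arr = dynamic(['200', '500', '404', '500'])
// Search the whole array (start 0, unlimited length) for the second occurrence of '500'
| extend first_500 = array_index_of(arr, '500'),
         second_500 = array_index_of(arr, '500', 0, -1, 2)
```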
## Use case examples
You can use `array_index_of` to find the position of a specific HTTP status code within an array of codes in your log analysis.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize status_array = make_list(status)
| extend index_500 = array_index_of(status_array, '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20status_array%20%3D%20make_list\(status\)%20%7C%20extend%20index_500%20%3D%20array_index_of\(status_array%2C%20'500'\)%22%7D)
**Output**
| status\_array | index\_500 |
| ---------------------- | ---------- |
| \["200", "404", "500"] | 2 |
This query creates an array of `status` codes and identifies the position of the first occurrence of the `500` status.
In OpenTelemetry traces, you can find the position of a specific `service.name` within an array of service names to detect when a particular service appears.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize service_array = make_list(['service.name'])
| extend frontend_index = array_index_of(service_array, 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20%20service_array%20%3D%20make_list\(%5B'service.name'%5D\)%20%7C%20extend%20frontend_index%20%3D%20array_index_of\(service_array%2C%20'frontend'\)%22%7D)
**Output**
| service\_array | frontend\_index |
| ---------------------------- | --------------- |
| \["frontend", "cartservice"] | 0 |
This query collects the array of services and determines where the `frontend` service first appears.
When working with security logs, `array_index_of` can help identify the index of a particular error or status code, such as `500`, within an array of `status` codes.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize status_array = make_list(status)
| extend index_500 = array_index_of(status_array, '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20status_array%20%3D%20make_list\(status\)%20%7C%20extend%20index_500%20%3D%20array_index_of\(status_array%2C%20'500'\)%22%7D)
**Output**
| status\_array | index\_500 |
| ---------------------- | ---------- |
| \["200", "404", "500"] | 2 |
This query helps identify at what index the `500` status code appears.
## List of related functions
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
# array_length
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-length
This page explains how to use the array_length function in APL.
The `array_length` function in APL (Axiom Processing Language) returns the length of an array. You can use this function to analyze and filter data by array size, such as identifying log entries with specific numbers of entries or events with multiple tags. This function is useful for analyzing structured data fields that contain arrays, such as lists of error codes, tags, or IP addresses.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use the `mvcount` function to determine the length of a multivalue field. In APL, `array_length` serves the same purpose by returning the size of an array within a column.
```sql Splunk example
| eval array_size = mvcount(array_field)
```
```kusto APL equivalent
['sample-http-logs']
| extend array_size = array_length(array_field)
```
In ANSI SQL, you would use functions such as `CARDINALITY` or `ARRAY_LENGTH` (in databases that support arrays) to get the length of an array. In APL, the `array_length` function is straightforward and works directly with array fields in any dataset.
```sql SQL example
SELECT CARDINALITY(array_field) AS array_size
FROM sample_table
```
```kusto APL equivalent
['sample-http-logs']
| extend array_size = array_length(array_field)
```
## Usage
### Syntax
```kusto
array_length(array_expression)
```
### Parameters
* array\_expression: An expression representing the array to measure.
### Returns
The function returns an integer representing the number of elements in the specified array.
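To see the behavior in isolation, you can run a minimal sketch like the following against a literal array. The column name `n` is arbitrary, and the expected value in the comment follows the description above.

```kusto
print n = array_length(dynamic([1, 2, 3]))
// Expected: n == 3
```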
## Use case example
In OpenTelemetry traces, `array_length` can reveal the number of events associated with a span.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend event_count = array_length(events)
| where event_count > 2
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20event_count%20%3D%20array_length\(events\)%20%7C%20where%20event_count%20%3E%202%22%7D)
**Output**
| \_time | trace\_id | span\_id | service.name | event\_count |
| ------------------- | ------------- | --------- | ------------ | ------------ |
| 2024-10-28T12:30:00 | trace\_abc123 | span\_001 | frontend | 3 |
This query finds spans associated with at least three events.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# array_reverse
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-reverse
This page explains how to use the array_reverse function in APL.
Use the `array_reverse` function in APL to reverse the order of elements in an array. This function is useful when you need to transform data where the sequence matters, such as reversing a list of events for chronological analysis or processing lists in descending order.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, reversing an array is not a built-in function, so you typically manipulate the data manually or use workarounds. In APL, `array_reverse` simplifies this process by reversing the array directly.
```sql Splunk example
# SPL does not have a direct array_reverse equivalent.
```
```kusto APL equivalent
let arr = dynamic([1, 2, 3, 4, 5]);
print reversed_arr = array_reverse(arr)
```
Standard ANSI SQL lacks an explicit function to reverse an array; you generally need to create a custom solution. APL’s `array_reverse` makes reversing an array straightforward.
```sql SQL example
-- ANSI SQL lacks a built-in array reverse function.
```
```kusto APL equivalent
let arr = dynamic([1, 2, 3, 4, 5]);
print reversed_arr = array_reverse(arr)
```
## Usage
### Syntax
```kusto
array_reverse(array_expression)
```
### Parameters
* `array_expression`: The array you want to reverse. This array must be of a dynamic type.
### Returns
Returns the input array with its elements in reverse order.
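As a quick sanity check, the following minimal sketch reverses a literal array; the expected result in the comment follows the behavior described above.

```kusto
print reversed = array_reverse(dynamic(['a', 'b', 'c']))
// Expected: reversed == ['c', 'b', 'a']
```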
## Use case examples
Use `array_reverse` to inspect the sequence of actions in log entries, reversing the order to understand the initial steps of a user's session.
**Query**
```kusto
['sample-http-logs']
| summarize paths = make_list(uri) by id
| project id, reversed_paths = array_reverse(paths)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20paths%20%3D%20make_list\(uri\)%20by%20id%20%7C%20project%20id%2C%20reversed_paths%20%3D%20array_reverse\(paths\)%22%7D)
**Output**
| id | reversed\_paths |
| ----- | ------------------------------------ |
| U1234 | \['/home', '/cart', '/product', '/'] |
| U5678 | \['/login', '/search', '/'] |
This example shows each user’s navigation sequence in reverse, making it easier to trace back to their entry point into the system.
Use `array_reverse` to analyze trace data by reversing the sequence of span events for each trace, allowing you to trace back the sequence of service calls.
**Query**
```kusto
['otel-demo-traces']
| summarize spans = make_list(span_id) by trace_id
| project trace_id, reversed_spans = array_reverse(spans)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20spans%20%3D%20make_list\(span_id\)%20by%20trace_id%20%7C%20project%20trace_id%2C%20reversed_spans%20%3D%20array_reverse\(spans\)%22%7D)
**Output**
| trace\_id | reversed\_spans |
| --------- | ------------------------- |
| T12345 | \['S4', 'S3', 'S2', 'S1'] |
| T67890 | \['S7', 'S6', 'S5'] |
This example reveals the order in which service calls were made in a trace, but in reverse, aiding in backtracking issues.
Apply `array_reverse` to examine security events, like login attempts or permission checks, in reverse order to identify unusual access patterns or last actions.
**Query**
```kusto
['sample-http-logs']
| where status == '403'
| summarize blocked_uris = make_list(uri) by id
| project id, reversed_blocked_uris = array_reverse(blocked_uris)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'403'%20%7C%20summarize%20blocked_uris%20%3D%20make_list\(uri\)%20by%20id%20%7C%20project%20id%2C%20reversed_blocked_uris%20%3D%20array_reverse\(blocked_uris\)%22%7D)
**Output**
| id | reversed\_blocked\_uris |
| ----- | ------------------------------------- |
| U1234 | \['/admin', '/settings', '/login'] |
| U5678 | \['/account', '/dashboard', '/login'] |
This example helps identify the sequence of unauthorized access attempts by each user.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# array_rotate_left
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-rotate-left
This page explains how to use the array_rotate_left function in APL.
The `array_rotate_left` function in Axiom Processing Language (APL) rotates the elements of an array to the left by a specified number of positions. It’s useful when you want to reorder elements in a fixed-length array, shifting elements to the left while moving the leftmost elements to the end. For instance, this function can help analyze sequences where relative order matters but the starting position doesn’t, such as rotating network logs, error codes, or numeric arrays in data for pattern identification.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In APL, `array_rotate_left` allows for direct rotation within the array. Splunk SPL does not have a direct equivalent, so you may need to combine multiple SPL functions to achieve a similar rotation effect.
```sql Splunk example
| eval rotated_array = mvindex(array, 1) . "," . mvindex(array, 0)
```
```kusto APL equivalent
print rotated_array = array_rotate_left(dynamic([1,2,3,4]), 1)
```
ANSI SQL lacks a direct equivalent for array rotation within arrays. A similar transformation can be achieved using array functions if available or by restructuring the array through custom logic.
```sql SQL example
SELECT array_column[2], array_column[3], array_column[0], array_column[1] FROM table
```
```kusto APL equivalent
print rotated_array = array_rotate_left(dynamic([1,2,3,4]), 2)
```
## Usage
### Syntax
```kusto
array_rotate_left(array, positions)
```
### Parameters
* `array`: The array to be rotated. Use a dynamic data type.
* `positions`: An integer specifying the number of positions to rotate the array to the left.
### Returns
A new array where the elements have been rotated to the left by the specified number of positions.
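The following minimal sketch illustrates the wraparound on a literal array. The expected result in the comment follows the rotation semantics described above: the first two elements move to the end.

```kusto
print rotated = array_rotate_left(dynamic([1, 2, 3, 4, 5]), 2)
// Expected: rotated == [3, 4, 5, 1, 2]
```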
## Use case example
Analyze traces by rotating the order of span events for visualization or pattern matching.
**Query**
```kusto
['otel-demo-traces']
| extend rotated_sequence = array_rotate_left(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20rotated_sequence%20%3D%20array_rotate_left\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1733997117722909000
},
{
"timestamp": 1733997117722911700,
"name": "Sent"
},
{
"name": "ResponseReceived",
"timestamp": 1733997117723591400
}
]
```
```json rotated_sequence
[
{
"timestamp": 1733997117722911700,
"name": "Sent"
},
{
"name": "ResponseReceived",
"timestamp": 1733997117723591400
},
{
"timestamp": 1733997117722909000,
"name": "Enqueued"
}
]
```
This example rotates each span’s events by one position, which can help you spot variations in trace data when visualized differently.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
# array_rotate_right
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-rotate-right
This page explains how to use the array_rotate_right function in APL.
The `array_rotate_right` function in APL allows you to rotate the elements of an array to the right by a specified number of positions. This function is useful when you need to reorder data within arrays, either to shift recent events to the beginning, reorder log entries, or realign elements based on specific processing logic.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In APL, the `array_rotate_right` function provides functionality similar to the use of `mvindex` or specific SPL commands for reordering arrays. The rotation here shifts all elements by a set count to the right, maintaining their original order within the new positions.
```sql Splunk example
| eval rotated_array=mvindex(array, -3)
```
```kusto APL equivalent
| extend rotated_array = array_rotate_right(array, 3)
```
ANSI SQL lacks a direct function for rotating elements within arrays. In APL, the `array_rotate_right` function offers a straightforward way to accomplish this by specifying a rotation count, while SQL users typically require a more complex use of `CASE` statements or custom functions to achieve the same.
```sql SQL example
-- No direct ANSI SQL equivalent for array rotation
```
```kusto APL equivalent
| extend rotated_array = array_rotate_right(array_column, 3)
```
## Usage
### Syntax
```kusto
array_rotate_right(array, count)
```
### Parameters
* `array`: An array to rotate.
* `count`: An integer specifying the number of positions to rotate the array to the right.
### Returns
An array where the elements are rotated to the right by the specified `count`.
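As a quick illustration, the following minimal sketch rotates a literal array to the right. The expected result in the comment follows the rotation semantics described above: the last two elements move to the front.

```kusto
print rotated = array_rotate_right(dynamic([1, 2, 3, 4, 5]), 2)
// Expected: rotated == [4, 5, 1, 2, 3]
```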
## Use case example
In OpenTelemetry traces, rotating an array of span details can help you reorder trace information for performance tracking or troubleshooting.
**Query**
```kusto
['otel-demo-traces']
| extend rotated_sequence = array_rotate_right(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20rotated_sequence%20%3D%20array_rotate_right\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1733997421220380700
},
{
"name": "Sent",
"timestamp": 1733997421220390400,
"attributes": null
},
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1733997421221118500
}
]
```
```json rotated_sequence
[
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1733997421221118500
},
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1733997421220380700
},
{
"name": "Sent",
"timestamp": 1733997421220390400,
"attributes": null
}
]
```
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
# array_select_dict
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-select-dict
This page explains how to use the array_select_dict function in APL.
The `array_select_dict` function in APL allows you to retrieve a dictionary from an array of dictionaries based on a specified key-value pair. This function is useful when you need to filter arrays and extract specific dictionaries for further processing. If no match exists, it returns `null`. Non-dictionary values in the input array are ignored.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `array_select_dict` function in APL is similar to filtering objects in an array based on conditions in Splunk SPL. However, unlike Splunk, where filtering often applies directly to JSON structures, `array_select_dict` specifically targets arrays of dictionaries.
```sql Splunk example
| eval selected = mvfilter(array, 'key' == 5)
```
```kusto APL equivalent
| project selected = array_select_dict(array, "key", 5)
```
In ANSI SQL, filtering typically involves table rows rather than nested arrays. The APL `array_select_dict` function applies a similar concept to array elements, allowing you to extract dictionaries from arrays using a condition.
```sql SQL example
SELECT *
FROM my_table
WHERE JSON_CONTAINS(array_column, '{"key": 5}')
```
```kusto APL equivalent
| project selected = array_select_dict(array_column, "key", 5)
```
## Usage
### Syntax
```kusto
array_select_dict(array, key, value)
```
### Parameters
| Name | Type | Description |
| ----- | ------- | ------------------------------------- |
| array | dynamic | Input array of dictionaries. |
| key | string | Key to match in each dictionary. |
| value | scalar | Value to match for the specified key. |
### Returns
The function returns the first dictionary in the array that matches the specified key-value pair. If no match exists, it returns `null`. Non-dictionary elements in the array are ignored.
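The following minimal sketch, using literal values, shows the selection and the handling of non-dictionary elements as described above.

```kusto
print selected = array_select_dict(dynamic([{'key': 1}, {'key': 5, 'value': 'a'}, 'not-a-dict']), 'key', 5)
// Expected: selected == {'key': 5, 'value': 'a'}; the string element is ignored
```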
## Use case example
This example demonstrates how to use `array_select_dict` to extract a dictionary where the key `service.name` has the value `frontend`.
**Query**
```kusto
['sample-http-logs']
| extend array = dynamic([{"service.name": "frontend", "status_code": "200"}, {"service.name": "backend", "status_code": "500"}])
| project selected = array_select_dict(array, "service.name", "frontend")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20array%20%3D%20dynamic\(%5B%7B'service.name'%3A%20'frontend'%2C%20'status_code'%3A%20'200'%7D%2C%20%7B'service.name'%3A%20'backend'%2C%20'status_code'%3A%20'500'%7D%5D\)%20%7C%20project%20selected%20%3D%20array_select_dict\(array%2C%20'service.name'%2C%20'frontend'\)%22%7D)
**Output**
`{"service.name": "frontend", "status_code": "200"}`
This query selects the first dictionary in the array where `service.name` equals `frontend` and returns it.
## List of related functions
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
# array_shift_left
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-shift-left
This page explains how to use the array_shift_left function in APL.
The `array_shift_left` function in APL shifts the elements of an array to the left by a specified number of positions. Elements shifted past the start of the array are dropped, and the vacated positions at the end are filled with `null`. This function is useful when you need to realign or reorder elements for pattern analysis, comparisons, or other array transformations.
For example, you can use `array_shift_left` to:
* Align time-series data for comparative analysis.
* Rotate log entries for cyclic pattern detection.
* Reorganize multi-dimensional datasets in your queries.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, there is no direct equivalent to `array_shift_left`, but you can achieve similar results using custom code or by manipulating arrays manually. In APL, `array_shift_left` simplifies this operation by providing a built-in, efficient implementation.
```sql Splunk example
| eval rotated_array = mvindex(array, 1) . mvindex(array, 0)
```
```kusto APL equivalent
| extend shifted_array = array_shift_left(array, 1)
```
ANSI SQL does not have a native function equivalent to `array_shift_left`. Typically, you would use procedural SQL to write custom logic for this transformation. In APL, the `array_shift_left` function provides an elegant, concise solution.
```sql SQL example
-- Pseudo code in SQL
SELECT ARRAY_SHIFT_LEFT(array_column, shift_amount)
```
```kusto APL equivalent
| extend shifted_array = array_shift_left(array_column, shift_amount)
```
## Usage
### Syntax
```kusto
array_shift_left(array, shift_amount)
```
### Parameters
| Parameter | Type | Description |
| -------------- | ------- | ------------------------------------------------------ |
| `array` | Array | The array to shift. |
| `shift_amount` | Integer | The number of positions to shift elements to the left. |
### Returns
An array with elements shifted to the left by the specified `shift_amount`. Elements shifted past the start of the array are dropped, and the vacated positions at the end are filled with `null`.
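The following minimal sketch shows the shift on a literal array; the expected result in the comment follows the null-filling behavior described above and shown in the use case below.

```kusto
print shifted = array_shift_left(dynamic([1, 2, 3, 4]), 1)
// Expected: shifted == [2, 3, 4, null]
```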
## Use case example
Reorganize span events to analyze dependencies in a different sequence.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend shifted_events = array_shift_left(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20shifted_events%20%3D%20array_shift_left\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1734001111273917000,
"attributes": null
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001111273925400
},
{
"name": "ResponseReceived",
"timestamp": 1734001111274167300,
"attributes": null
}
]
```
```json shifted_events
[
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001111273925400
},
{
"name": "ResponseReceived",
"timestamp": 1734001111274167300,
"attributes": null
},
null
]
```
This query shifts each span’s events one position to the left to analyze the adjusted sequence.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
# array_shift_right
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-shift-right
This page explains how to use the array_shift_right function in APL.
The `array_shift_right` function in Axiom Processing Language (APL) shifts the elements of an array to the right by a specified number of positions. Elements shifted past the end of the array are dropped, and the vacated positions at the start are filled with `null`. You can use this function to reorder elements, realign time-series data, or preprocess arrays for specific analytical needs.
### When to use the function
* To manage and rotate data within arrays.
* To implement cyclic operations or transformations.
* To manipulate array data structures in log analysis or telemetry contexts.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality might be achieved using custom code to rotate array elements, as there is no direct equivalent to `array_shift_right`. APL provides this functionality natively, making it easier to work with arrays directly.
```sql Splunk example
| eval shifted_array=mvappend(mvindex(array,-1),mvindex(array,0,len(array)-1))
```
```kusto APL equivalent
['dataset.name']
| extend shifted_array = array_shift_right(array, 1)
```
ANSI SQL does not have a built-in function for shifting arrays. In SQL, achieving this would involve user-defined functions or complex subqueries. In APL, `array_shift_right` simplifies this operation significantly.
```sql SQL example
WITH shifted AS (
SELECT
array_column[ARRAY_LENGTH(array_column)] AS first_element,
array_column[1:ARRAY_LENGTH(array_column)-1] AS rest_of_elements
FROM table
)
SELECT ARRAY_APPEND(first_element, rest_of_elements) AS shifted_array
FROM shifted
```
```kusto APL equivalent
['dataset.name']
| extend shifted_array = array_shift_right(array, 1)
```
## Usage
### Syntax
```kusto
array_shift_right(array, shift_amount)
```
### Parameters
| Parameter      | Type    | Description                                              |
| -------------- | ------- | -------------------------------------------------------- |
| `array`        | array   | The input array whose elements are shifted.               |
| `shift_amount` | integer | The number of positions to shift elements to the right.   |
### Returns
An array with its elements shifted to the right by the specified `shift_amount`. The vacated positions at the start of the array are filled with `null`.
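As a quick sanity check, the following minimal sketch shifts a literal array to the right; the expected result in the comment follows the null-filling behavior described above and shown in the use case below.

```kusto
print shifted = array_shift_right(dynamic([1, 2, 3, 4]), 1)
// Expected: shifted == [null, 1, 2, 3]
```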
## Use case example
Reorganize span events in telemetry data for visualization or debugging.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend shifted_events = array_shift_right(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20shifted_events%20%3D%20array_shift_right\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1734001215487927300,
"attributes": null
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001215487937000
},
{
"timestamp": 1734001215488191000,
"attributes": null,
"name": "ResponseReceived"
}
]
```
```json shifted_events
[
null,
{
"timestamp": 1734001215487927300,
"attributes": null,
"name": "Enqueued"
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001215487937000
}
]
```
The query shifts each span’s events one position to the right for easier trace debugging.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# array_slice
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-slice
This page explains how to use the array_slice function in APL.
The `array_slice` function in APL extracts a subset of elements from an array, based on specified start and end indices. This function is useful when you want to analyze or transform a portion of data within arrays, such as trimming logs, filtering specific events, or working with trace data in OpenTelemetry logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you can use `mvindex` to extract elements from an array. APL's `array_slice` is similar but more expressive, letting you specify explicit start and end indices, including negative indices counted from the end of the array.
```sql Splunk example
| eval sliced_array=mvindex(my_array, 1, 3)
```
```kusto APL equivalent
T | extend sliced_array = array_slice(my_array, 1, 3)
```
In ANSI SQL, arrays are often handled using JSON functions or window functions, requiring workarounds to slice arrays. In APL, `array_slice` directly handles arrays, making operations more concise.
```sql SQL example
SELECT JSON_EXTRACT(my_array, '$[1:3]') AS sliced_array FROM my_table
```
```kusto APL equivalent
T | extend sliced_array = array_slice(my_array, 1, 3)
```
## Usage
### Syntax
```kusto
array_slice(array, start, end)
```
### Parameters
| Parameter | Description |
| --------- | -------------------------------------------------------------------------------------------------- |
| `array` | The input array to slice. |
| `start` | The starting index of the slice (inclusive). If negative, it is counted from the end of the array. |
| `end` | The ending index of the slice (exclusive). If negative, it is counted from the end of the array. |
### Returns
An array containing the elements from the specified slice. If the indices are out of bounds, it adjusts to return valid elements without error.
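The following minimal sketch uses negative indices on a literal array; the expected result in the comment assumes the end-exclusive semantics described above.

```kusto
print sliced = array_slice(dynamic([1, 2, 3, 4, 5]), -3, -1)
// Expected: sliced == [3, 4] (from index -3 up to, but not including, -1)
```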
## Use case example
Filter spans from trace data to analyze a specific range of events.
**Query**
```kusto
['otel-demo-traces']
| where array_length(events) > 4
| extend sliced_events = array_slice(events, -3, -1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3E%204%20%7C%20extend%20sliced_events%20%3D%20array_slice\(events%2C%20-3%2C%20-1\)%22%7D)
**Output**
```json events
[
{
"timestamp": 1734001336443987200,
"attributes": null,
"name": "prepared"
},
{
"attributes": {
"feature_flag.provider_name": "flagd",
"feature_flag.variant": "off",
"feature_flag.key": "paymentServiceUnreachable"
},
"name": "feature_flag",
"timestamp": 1734001336444001800
},
{
"name": "charged",
"timestamp": 1734001336445970200,
"attributes": {
"custom": {
"app.payment.transaction.id": "49567406-21f4-41aa-bab2-69911c055753"
}
}
},
{
"name": "shipped",
"timestamp": 1734001336446488600,
"attributes": {
"custom": {
"app.shipping.tracking.id": "9a3b7a5c-aa41-4033-917f-50cb7360a2a4"
}
}
},
{
"attributes": {
"feature_flag.variant": "off",
"feature_flag.key": "kafkaQueueProblems",
"feature_flag.provider_name": "flagd"
},
"name": "feature_flag",
"timestamp": 1734001336461096700
}
]
```
```json sliced_events
[
{
"name": "charged",
"timestamp": 1734001336445970200,
"attributes": {
"custom": {
"app.payment.transaction.id": "49567406-21f4-41aa-bab2-69911c055753"
}
}
},
{
"name": "shipped",
"timestamp": 1734001336446488600,
"attributes": {
"custom": {
"app.shipping.tracking.id": "9a3b7a5c-aa41-4033-917f-50cb7360a2a4"
}
}
}
]
```
This query slices the `events` array from index `-3` up to, but not including, index `-1`, returning the third-to-last and second-to-last events.
## List of related functions
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
# array_split
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-split
This page explains how to use the array_split function in APL.
The `array_split` function in APL splits an array into smaller subarrays based on specified split indices and packs the generated subarrays into a dynamic array. This function is useful when you want to partition data for analysis, batch processing, or distributing workloads across smaller units.
You can use `array_split` to:
* Divide large datasets into manageable chunks for processing.
* Create segments for detailed analysis or visualization.
* Handle nested data structures for targeted processing.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, array manipulation is achieved through functions like `mvzip` and `mvfilter`, but there is no direct equivalent to `array_split`. APL provides a more explicit approach for splitting arrays.
```sql Splunk example
| eval split_array = mvzip(array_field, "2")
```
```kusto APL equivalent
['otel-demo-traces']
| extend split_array = array_split(events, 2)
```
ANSI SQL does not have built-in functions for directly splitting arrays. APL provides this capability natively, making it easier to handle array operations within queries.
```sql SQL example
-- SQL typically requires custom functions or JSON manipulation.
SELECT * FROM dataset WHERE JSON_ARRAY_LENGTH(array_field) > 0;
```
```kusto APL equivalent
['otel-demo-traces']
| extend split_array = array_split(events, 2)
```
## Usage
### Syntax
```kusto
array_split(array, index)
```
### Parameters
| Parameter | Description | Type |
| --------- | -------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| `array` | The array to split. | Dynamic |
| `index` | An integer or dynamic array of integers. These zero-based split indices indicate the location at which to split the array. | Integer or Dynamic |
### Returns
Returns a dynamic array containing N+1 arrays where N is the number of input indices. The original array is split at the input indices.
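As a quick illustration, the following minimal sketch splits a literal array at two zero-based indices; the expected result in the comment follows the description above.

```kusto
print parts = array_split(dynamic([1, 2, 3, 4, 5]), dynamic([1, 3]))
// Expected: parts == [[1], [2, 3], [4, 5]]
```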
## Use case examples
### Single split index
Split large event arrays into manageable chunks for analysis.
```kusto
['otel-demo-traces']
| where array_length(events) == 3
| extend split_events = array_split(events, 2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3D%3D%203%20%7C%20extend%20span_chunks%20%3D%20array_split\(events%2C%202\)%22%7D)
**Output**
```json events
[
{
"timestamp": 1734033733465219300,
"name": "Enqueued"
},
{
"name": "Sent",
"timestamp": 1734033733465228500
},
{
"timestamp": 1734033733465455900,
"name": "ResponseReceived"
}
]
```
```json split_events
[
[
{
"timestamp": 1734033733465219300,
"name": "Enqueued"
},
{
"name": "Sent",
"timestamp": 1734033733465228500
}
],
[
{
"timestamp": 1734033733465455900,
"name": "ResponseReceived"
}
]
]
```
This query splits the `events` array at index `2` into two subarrays for further processing.
### Multiple split indices
Divide traces into fixed-size segments for better debugging.
**Query**
```kusto
['otel-demo-traces']
| where array_length(events) == 3
| extend split_events = array_split(events, dynamic([1,2]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3D%3D%203%20%7C%20extend%20span_chunks%20%3D%20array_split\(events%2C%20dynamic\(%5B1%2C2%5D\)\)%22%7D)
**Output**
```json events
[
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1734034755085206000
},
{
"name": "Sent",
"timestamp": 1734034755085215500,
"attributes": null
},
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1734034755085424000
}
]
```
```json split_events
[
[
{
"timestamp": 1734034755085206000,
"attributes": null,
"name": "Enqueued"
}
],
[
{
"timestamp": 1734034755085215500,
"attributes": null,
"name": "Sent"
}
],
[
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1734034755085424000
}
]
]
```
This query splits the `events` array into three subarrays based on the indices `[1,2]`.
## List of related functions
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# array_sum
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/array-sum
This page explains how to use the array_sum function in APL.
The `array_sum` function in APL computes the sum of all numerical elements in an array. This function is particularly useful when you want to aggregate numerical values stored in an array field, such as durations, counts, or measurements, across events or records. Use `array_sum` when your dataset includes array-type fields, and you need to quickly compute their total.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might need to use commands or functions such as `mvsum` for similar operations. In APL, `array_sum` provides a direct method to compute the sum of numerical arrays.
```sql Splunk example
| eval total_duration = mvsum(duration_array)
```
```kusto APL equivalent
['dataset.name']
| extend total_duration = array_sum(duration_array)
```
ANSI SQL does not natively support array operations like summing array elements. However, you can achieve similar results with `UNNEST` and `SUM`. In APL, `array_sum` simplifies this by handling array summation directly.
```sql SQL example
SELECT SUM(value) AS total_duration
FROM UNNEST(duration_array) AS value;
```
```kusto APL equivalent
['dataset.name']
| extend total_duration = array_sum(duration_array)
```
## Usage
### Syntax
```kusto
array_sum(array_expression)
```
### Parameters
| Parameter | Type | Description |
| ------------------ | ----- | ------------------------------------------ |
| `array_expression` | array | An array of numerical values to be summed. |
### Returns
The function returns the sum of all numerical values in the array. If the array is empty or contains no numerical values, the result is `null`.
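To see the behavior in isolation, the following minimal sketch sums a literal numeric array; the expected value in the comment follows the description above.

```kusto
print total = array_sum(dynamic([1, 2, 3.5]))
// Expected: total == 6.5
```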
## Use case example
Summing the durations of all spans collected into an array field.
**Query**
```kusto
['otel-demo-traces']
| summarize event_duration = make_list(duration) by ['service.name']
| extend total_event_duration = array_sum(event_duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20event_duration%20%3D%20make_list\(duration\)%20by%20%5B'service.name'%5D%20%7C%20extend%20total_event_duration%20%3D%20array_sum\(event_duration\)%22%7D)
**Output**
| service.name | total\_event\_duration |
| --------------- | ---------------------- |
| frontend | 1667269530000 |
| checkoutservice | 3801404276900 |
The query calculates the total duration of all spans for each service.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# bag_has_key
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/bag-has-key
This page explains how to use the bag_has_key function in APL.
Use the `bag_has_key` function in APL to check whether a dynamic property bag contains a specific key. This is helpful when your data includes semi-structured or nested fields encoded as dynamic objects, such as JSON-formatted logs or telemetry metadata.
You often encounter property bags in observability data where log entries, spans, or alerts carry key–value metadata. Use `bag_has_key` to filter, conditionally process, or join such records based on the existence of specific keys, without needing to extract the values themselves.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you often check whether a key exists in a JSON object using `spath` and conditional logic. APL simplifies this with `bag_has_key`, which returns a boolean directly and avoids explicit parsing.
```sql Splunk example
| eval hasKey=if(isnull(spath(data, "keyName")), false, true)
```
```kusto APL equivalent
['sample-http-logs']
| where bag_has_key(dynamic_field, 'keyName')
```
ANSI SQL doesn’t include native support for property bags or dynamic fields. You typically use JSON functions to access keys in JSON-formatted strings. In APL, dynamic fields are first-class, and `bag_has_key` provides direct support for key existence checks.
```sql SQL example
SELECT *
FROM logs
WHERE JSON_EXTRACT(json_column, '$.keyName') IS NOT NULL
```
```kusto APL equivalent
['sample-http-logs']
| where bag_has_key(dynamic_field, 'keyName')
```
## Usage
### Syntax
```kusto
bag_has_key(bag: dynamic, key: string)
```
### Parameters
| Name | Type | Description |
| ----- | --------- | ---------------------------------------------------------------- |
| `bag` | `dynamic` | A dynamic value representing a property bag (e.g., JSON object). |
| `key` | `string` | The key to check for within the property bag. |
### Returns
Returns a `bool` value:
* `true` if the specified key exists in the property bag
* `false` otherwise
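The following minimal sketch checks a literal property bag for a present and a missing key; the expected values in the comment follow the description above.

```kusto
print has_env = bag_has_key(dynamic({'env': 'prod', 'source': 'cdn'}), 'env'),
      missing = bag_has_key(dynamic({'env': 'prod'}), 'region')
// Expected: has_env == true, missing == false
```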
## Use case examples
Use `bag_has_key` to filter log entries that include a specific metadata key embedded in a dynamic object.
**Query**
```kusto
['sample-http-logs']
| extend metadata = bag_pack('source', 'cdn', 'env', 'prod')
| where bag_has_key(metadata, 'env')
| project _time, id, method, uri, status, metadata
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20metadata%20%3D%20bag_pack%28%27source%27%2C%20%27cdn%27%2C%20%27env%27%2C%20%27prod%27%29%20%7C%20where%20bag_has_key%28metadata%2C%20%27env%27%29%20%7C%20project%20_time%2C%20id%2C%20method%2C%20uri%2C%20status%2C%20metadata%22%7D)
**Output**
| \_time | id | method | uri | status | metadata |
| ----------------- | ---- | ------ | -------------- | ------ | ------------------------------ |
| 2025-05-27T12:30Z | u123 | GET | /login | 200 | \{'source':'cdn','env':'prod'} |
| 2025-05-27T12:31Z | u124 | POST | /cart/checkout | 500 | \{'source':'cdn','env':'prod'} |
The query filters logs where the synthetic `metadata` bag includes the key `'env'`.
Use `bag_has_key` to filter spans that include specific dynamic span attributes.
**Query**
```kusto
['otel-demo-traces']
| extend attributes = bag_pack('user', 'alice', 'feature_flag', 'beta')
| where bag_has_key(attributes, 'feature_flag')
| project _time, trace_id, span_id, ['service.name'], kind, attributes
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20extend%20attributes%20%3D%20bag_pack%28%27user%27%2C%20%27alice%27%2C%20%27feature_flag%27%2C%20%27beta%27%29%20%7C%20where%20bag_has_key%28attributes%2C%20%27feature_flag%27%29%20%7C%20project%20_time%2C%20trace_id%2C%20span_id%2C%20%5B%27service.name%27%5D%2C%20kind%2C%20attributes%22%7D)
**Output**
| \_time | trace\_id | span\_id | \['service.name'] | kind | attributes |
| ----------------- | --------- | -------- | ----------------- | ------ | ---------------------------------------- |
| 2025-05-27T10:02Z | abc123 | span567 | frontend | client | \{'user':'alice','feature\_flag':'beta'} |
The query selects spans with dynamic `attributes` bags containing the `'feature_flag'` key.
Use `bag_has_key` to identify HTTP logs where the request metadata contains sensitive audit-related keys.
**Query**
```kusto
['sample-http-logs']
| extend audit_info = bag_pack('action', 'delete', 'reason', 'admin_override')
| where bag_has_key(audit_info, 'reason')
| project _time, id, uri, status, audit_info
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20audit_info%20%3D%20bag_pack%28%27action%27%2C%20%27delete%27%2C%20%27reason%27%2C%20%27admin_override%27%29%20%7C%20where%20bag_has_key%28audit_info%2C%20%27reason%27%29%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20status%2C%20audit_info%22%7D)
**Output**
| \_time | id | uri | status | audit\_info |
| ----------------- | ---- | ------------- | ------ | ----------------------------------------------- |
| 2025-05-27T13:45Z | u999 | /admin/delete | 403 | \{'action':'delete','reason':'admin\_override'} |
The query returns only logs where the `audit_info` bag includes the `'reason'` key, indicating administrative override events.
## List of related functions
* [bag\_keys](/apl/scalar-functions/array-functions/bag-keys): Returns all keys in a dynamic property bag. Use it when you need to enumerate available keys.
* [bag\_pack](/apl/scalar-functions/array-functions/bag-pack): Converts a list of key-value pairs to a dynamic property bag. Use when you need to build a bag.
# bag_keys
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/bag-keys
This page explains how to use the bag_keys function in APL.
Use the `bag_keys` function in APL to extract the keys of a dynamic (bag) object as an array of strings. This is useful when you want to inspect or manipulate the structure of a dynamic field—such as JSON-like nested objects—without needing to know its exact schema in advance.
Use `bag_keys` when you’re working with semi-structured data and want to:
* Discover what properties are present in a dynamic object.
* Iterate over the keys programmatically using other array functions.
* Perform validation or debugging tasks to ensure all expected keys exist.
This function is especially helpful in log analytics, observability pipelines, and security auditing, where dynamic properties are often collected from various services or devices.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically interact with JSON-like fields using the `spath` command or use `keys(_raw)` to retrieve field names. In APL, `bag_keys` serves a similar purpose by returning an array of keys from a dynamic object.
```sql Splunk example
| eval key_list=keys(data_field)
```
```kusto APL equivalent
datatable(data: dynamic)
[
dynamic({ "ip": "127.0.0.1", "status": "200", "method": "GET" })
]
| extend keys = bag_keys(data)
```
ANSI SQL doesn’t have native support for dynamic objects or JSON key introspection in the same way. However, some SQL dialects (like PostgreSQL or BigQuery) provide JSON-specific functions for extracting keys. `bag_keys` is the APL equivalent for dynamically introspecting JSON objects.
```sql SQL example
SELECT JSON_OBJECT_KEYS(data) FROM logs;
```
```kusto APL equivalent
datatable(data: dynamic)
[
dynamic({ "ip": "127.0.0.1", "status": "200", "method": "GET" })
]
| extend keys = bag_keys(data)
```
## Usage
### Syntax
```kusto
bag_keys(bag)
```
### Parameters
| Name | Type | Description |
| ----- | --------- | -------------------------------------------------- |
| `bag` | `dynamic` | The dynamic object whose keys you want to extract. |
### Returns
An array of type `string[]` containing the names of the keys in the dynamic object. If the input is not a dynamic object, the function returns `null`.
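As a quick sanity check, the following minimal sketch extracts the keys of a literal property bag; the expected result in the comment follows the description above.

```kusto
print keys = bag_keys(dynamic({'status': '200', 'method': 'GET'}))
// Expected: keys == ['status', 'method']
```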
## Use case examples
Use `bag_keys` to audit dynamic metadata fields in HTTP logs where each record contains a nested object representing additional request attributes.
**Query**
```kusto
['sample-http-logs']
| extend metadata = dynamic({ 'os': 'Windows', 'browser': 'Firefox', 'device': 'Desktop' })
| extend key_list = bag_keys(metadata)
| project _time, uri, metadata, key_list
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20metadata%20%3D%20dynamic\(%7B%20'os'%3A%20'Windows'%2C%20'browser'%3A%20'Firefox'%2C%20'device'%3A%20'Desktop'%20%7D\)%20%7C%20extend%20key_list%20%3D%20bag_keys\(metadata\)%20%7C%20project%20_time%2C%20uri%2C%20metadata%2C%20key_list%22%7D)
**Output**
| \_time | uri | metadata | key\_list |
| ------------------- | ------ | ------------------------------------------------- | ---------------------------- |
| 2025-05-26 12:01:23 | /login | \{os: Windows, browser: Firefox, device: Desktop} | \[‘os’, ‘browser’, ‘device’] |
This query inspects a simulated metadata object and returns the list of its keys, helping you debug inconsistencies or missing fields.
Use `bag_keys` to examine custom span attributes encoded as dynamic fields within OpenTelemetry trace events.
**Query**
```kusto
['otel-demo-traces']
| extend attributes = dynamic({ 'user_id': 'abc123', 'feature_flag': 'enabled' })
| extend attribute_keys = bag_keys(attributes)
| project _time, ['service.name'], kind, attributes, attribute_keys
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20attributes%20%3D%20dynamic\(%7B%20'user_id'%3A%20'abc123'%2C%20'feature_flag'%3A%20'enabled'%20%7D\)%20%7C%20extend%20attribute_keys%20%3D%20bag_keys\(attributes\)%20%7C%20project%20_time%2C%20%5B'service.name'%5D%2C%20kind%2C%20attributes%2C%20attribute_keys%22%7D)
**Output**
| \_time | \['service.name'] | kind | attributes | attribute\_keys |
| ------------------- | ----------------- | ------ | ------------------------------------------- | ------------------------------ |
| 2025-05-26 13:14:01 | frontend | client | \{user\_id: abc123, feature\_flag: enabled} | \[‘user\_id’, ‘feature\_flag’] |
This query inspects the custom span-level attributes and extracts their keys to verify attribute coverage or completeness.
Use `bag_keys` to list all security-related fields captured dynamically during request monitoring for auditing or compliance.
**Query**
```kusto
['sample-http-logs']
| extend security_context = dynamic({ 'auth_status': 'success', 'role': 'admin', 'ip': '192.168.1.5' })
| extend fields = bag_keys(security_context)
| project _time, status, ['geo.country'], security_context, fields
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20security_context%20%3D%20dynamic\(%7B%20'auth_status'%3A%20'success'%2C%20'role'%3A%20'admin'%2C%20'ip'%3A%20'192.168.1.5'%20%7D\)%20%7C%20extend%20fields%20%3D%20bag_keys\(security_context\)%20%7C%20project%20_time%2C%20status%2C%20%5B'geo.country'%5D%2C%20security_context%2C%20fields%22%7D)
**Output**
| \_time | status | \['geo.country'] | security\_context | fields |
| ------------------- | ------ | ---------------- | ------------------------------------------------------ | ------------------------------- |
| 2025-05-26 15:32:10 | 200 | US | \{auth\_status: success, role: admin, ip: 192.168.1.5} | \[‘auth\_status’, ‘role’, ‘ip’] |
This helps you audit security metadata in requests and ensure key fields are present across records.
## List of related functions
* [bag\_pack](/apl/scalar-functions/array-functions/bag-pack): Converts a list of key-value pairs to a dynamic property bag. Use when you need to build a bag.
* [bag\_has\_key](/apl/scalar-functions/array-functions/bag-has-key): Checks whether a dynamic property bag contains a specific key.
# bag_pack
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/bag-pack
This page explains how to use the bag_pack and pack functions in APL.
Use the `bag_pack` function in APL to construct a dynamic property bag from a list of key-value pairs. A property bag is a flexible data structure where keys are strings and values are dynamic types. This function is useful when you want to combine multiple values into a single dynamic object, often to simplify downstream processing or export.
You typically use `bag_pack` in projection scenarios to consolidate structured data—for example, packing related request metadata into one field, or grouping trace data by contextual attributes. This makes it easier to output, filter, or transform nested information.
The `pack` and `bag_pack` functions are equivalent in APL.
A common pattern is `bag_pack(*)`, which packs all fields of your dataset into a single bag. This is useful when you want to work with the full set of values for a record as one object.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you can use `mvzip` and `eval` to create key-value mappings, or use `spath` to interpret JSON data. However, packing data into a true key-value structure for export or downstream use requires JSON manipulation. APL’s `bag_pack` provides a native and type-safe way to do this.
```sql Splunk example
| eval metadata=tojson({"status": status, "duration": req_duration_ms})
```
```kusto APL equivalent
project metadata = bag_pack('status', status, 'duration', req_duration_ms)
```
SQL doesn’t have a direct built-in function like `bag_pack`. To achieve similar behavior, you typically construct JSON objects using functions like `JSON_OBJECT` or use user-defined types. In APL, `bag_pack` is the idiomatic way to construct dynamic objects with labeled fields.
```sql SQL example
SELECT JSON_OBJECT('status' VALUE status, 'duration' VALUE req_duration_ms) AS metadata FROM logs;
```
```kusto APL equivalent
project metadata = bag_pack('status', status, 'duration', req_duration_ms)
```
## Usage
### Syntax
```kusto
bag_pack(key1, value1, key2, value2, ...)
```
### Parameters
| Name | Type | Description |
| --------------------- | -------- | ------------------------------------------------------------------------ |
| `key1, key2, ...` | `string` | The names of the fields to include in the property bag. |
| `value1, value2, ...` | `scalar` | The corresponding values for the keys. Values can be of any scalar type. |
The number of keys must equal the number of values. Keys must be string literals or string expressions.
### Returns
A `dynamic` value representing a property bag (dictionary) where keys are strings and values are the corresponding values.
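The following minimal sketch builds a property bag from literal key-value pairs; the expected result in the comment follows the description above.

```kusto
print bag = bag_pack('status', '200', 'duration_ms', 342)
// Expected: bag == {'status': '200', 'duration_ms': 342}
```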
## Use case examples
Use `bag_pack` to create a structured object that captures key request attributes for easier inspection or export.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| project _time, error_context = bag_pack('uri', uri, 'method', method, 'duration_ms', req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'500'%20%7C%20project%20_time%2C%20error_context%20%3D%20bag_pack%28'uri'%2C%20uri%2C%20'method'%2C%20method%2C%20'duration_ms'%2C%20req_duration_ms%29%22%7D)
**Output**
| \_time | error\_context |
| -------------------- | -------------------------------------------------------------- |
| 2025-05-27T10:00:00Z | `{ "uri": "/api/data", "method": "GET", "duration_ms": 342 }` |
| 2025-05-27T10:05:00Z | `{ "uri": "/api/auth", "method": "POST", "duration_ms": 879 }` |
The query filters HTTP logs to 500 errors and consolidates key request fields into a single dynamic column named `error_context`.
Use `bag_pack` to enrich trace summaries with service metadata for each span.
**Query**
```kusto
['otel-demo-traces']
| where ['service.name'] == 'checkout'
| project trace_id, span_id, span_info = bag_pack('kind', kind, 'duration', duration, 'status_code', status_code)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20%5B'service.name'%5D%20%3D%3D%20'checkout'%20%7C%20project%20trace_id%2C%20span_id%2C%20span_info%20%3D%20bag_pack%28'kind'%2C%20kind%2C%20'duration'%2C%20duration%2C%20'status_code'%2C%20status_code%29%22%7D)
**Output**
| trace\_id | span\_id | span\_info |
| --------- | -------- | ------------------------------------------------------------------------------ |
| a1b2... | f9c3... | `{ "kind": "server", "duration": "00:00:00.1240000", "status_code": "OK" }` |
| c3d4... | h7e2... | `{ "kind": "client", "duration": "00:00:00.0470000", "status_code": "ERROR" }` |
The query targets spans from the `checkout` service and combines their attributes into a single object per span.
Use `bag_pack` to create a compact event summary combining user ID and geographic info for anomaly detection.
**Query**
```kusto
['sample-http-logs']
| project _time, id, geo_summary = bag_pack('city', ['geo.city'], 'country', ['geo.country'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20id%2C%20geo_summary%20%3D%20bag_pack\('city'%2C%20%5B'geo.city'%5D%2C%20'country'%2C%20%5B'geo.country'%5D\)%22%7D)
**Output**
| \_time | id | geo\_summary |
| -------------------- | -------- | --------------------------------------- |
| 2025-05-27T12:00:00Z | user\_01 | `{ "city": "Berlin", "country": "DE" }` |
| 2025-05-27T12:01:00Z | user\_02 | `{ "city": "Paris", "country": "FR" }` |
The query summarizes location data per event, which helps you identify geographic patterns or anomalies in request activity.
## List of related functions
* [bag\_keys](/apl/scalar-functions/array-functions/bag-keys): Returns all keys in a dynamic property bag. Use it when you need to enumerate available keys.
* [bag\_has\_key](/apl/scalar-functions/array-functions/bag-has-key): Checks whether a dynamic property bag contains a specific key.
# isarray
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/isarray
This page explains how to use the isarray function in APL.
The `isarray` function in APL checks whether a specified value is an array. Use this function to validate input data, handle dynamic schemas, or filter for records where a field is explicitly an array. It is particularly useful when working with data that contains fields with mixed data types or optional nested arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality is achieved by analyzing the data structure manually, as SPL does not have a direct equivalent to `isarray`. APL simplifies this task by providing the `isarray` function to directly evaluate whether a value is an array.
```sql Splunk example
| eval is_array=if(isnotnull(mvcount(field)), "true", "false")
```
```kusto APL equivalent
['dataset.name']
| extend is_array=isarray(field)
```
In ANSI SQL, there is no built-in function for directly checking if a value is an array. You might need to rely on JSON functions or structural parsing. APL provides the `isarray` function as a more straightforward solution.
```sql SQL example
SELECT CASE
WHEN JSON_TYPE(field) = 'ARRAY' THEN TRUE
ELSE FALSE
END AS is_array
FROM dataset_name;
```
```kusto APL equivalent
['dataset.name']
| extend is_array=isarray(field)
```
## Usage
### Syntax
```kusto
isarray(value)
```
### Parameters
| Parameter | Description |
| --------- | ------------------------------------- |
| `value` | The value to check if it is an array. |
### Returns
A boolean value:
* `true` if the specified value is an array.
* `false` otherwise.
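The following minimal sketch checks an array and a scalar; the expected values in the comment follow the description above.

```kusto
print on_array = isarray(dynamic([1, 2, 3])), on_string = isarray('hello')
// Expected: on_array == true, on_string == false
```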
## Use case example
Check whether a value aggregated from the `events` field is an array.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize events_array = make_list(events)
| extend is_array = isarray(events_array)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20summarize%20events_array%20%3D%20make_list\(events\)%20%7C%20extend%20is_array%20%3D%20isarray\(events_array\)%22%7D)
**Output**
| is\_array |
| --------- |
| true |
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
# len
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/len
This page explains how to use the len function in APL.
Use the `len` function in APL (Axiom Processing Language) to determine the length of a string or the number of elements in an array. This function is useful when you want to filter, sort, or analyze data based on the size of a value—whether that’s the number of characters in a request URL or the number of cities associated with a user.
Use `len` when you need to:
* Measure string lengths (for example, long request URIs).
* Count elements in dynamic arrays (such as tags or multi-value fields).
* Create conditional expressions based on the length of values.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you often use the `len` function within `eval` or `where` expressions to determine string length or array size. In APL, `len` works similarly, but is used as a standalone scalar function.
```sql Splunk example
... | eval uri_length=len(uri)
```
```kusto APL equivalent
['sample-http-logs']
| extend uri_length = len(uri)
```
In ANSI SQL, you use `LENGTH()` for strings and `CARDINALITY()` for arrays. In APL, `len` handles both cases—string and array—depending on the input type.
```sql SQL example
SELECT LENGTH(uri) AS uri_length FROM http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend uri_length = len(uri)
```
## Usage
### Syntax
```kusto
len(value)
```
### Parameters
| Name | Type | Description |
| ----- | --------------- | ------------------------------------------------- |
| value | string or array | The input to measure—either a string or an array. |
### Returns
* If `value` is a string, returns the number of characters.
* If `value` is an array, returns the number of elements.
* Returns `null` if the input is `null`.
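As a quick illustration of both behaviors, the following minimal sketch computes the length of a string literal and of an array literal (the `take 1` is only there to produce a single output row):
```kusto
['sample-http-logs']
| take 1
// len counts characters for strings and elements for arrays
| project str_len = len('hello'), arr_len = len(dynamic(["a", "b", "c"]))
```
You can expect `str_len` to be 5 and `arr_len` to be 3.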
## Use case examples
Use `len` to find requests with long URIs, which might indicate poorly designed endpoints or potential abuse.
**Query**
```kusto
['sample-http-logs']
| extend uri_length = len(uri)
| where uri_length > 100
| project _time, id, uri, uri_length
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20uri_length%20%3D%20len\(uri\)%20%7C%20where%20uri_length%20%3E%20100%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20uri_length%22%7D)
**Output**
| \_time | id | uri | uri\_length |
| -------------------- | ------- | --------------------------------- | ----------- |
| 2025-06-18T12:34:00Z | user123 | /api/products/search?query=... | 132 |
| 2025-06-18T12:35:00Z | user456 | /download/file/very/long/path/... | 141 |
The query filters logs for URIs longer than 100 characters and displays their lengths.
Use `len` to identify traces with IDs of unexpected length, which might indicate instrumentation issues or data inconsistencies.
**Query**
```kusto
['otel-demo-traces']
| extend trace_id_length = len(trace_id)
| summarize count() by trace_id_length
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20trace_id_length%20%3D%20len\(trace_id\)%20%7C%20summarize%20count\(\)%20by%20trace_id_length%22%7D)
**Output**
| trace\_id\_length | count |
| ----------------- | ----- |
| 32 | 4987 |
| 16 | 12 |
The query summarizes trace IDs by their lengths to find unexpected values.
Use `len` to analyze request methods and flag unusually short ones (e.g., malformed logs or attack vectors).
**Query**
```kusto
['sample-http-logs']
| extend method_length = len(method)
| where method_length < 3
| project _time, id, method, method_length
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20method_length%20%3D%20len\(method\)%20%7C%20where%20method_length%20%3C%203%20%7C%20project%20_time%2C%20id%2C%20method%2C%20method_length%22%7D)
**Output**
| \_time | id | method | method\_length |
| -------------------- | ------- | ------ | -------------- |
| 2025-06-18T13:10:00Z | user789 | P | 1 |
| 2025-06-18T13:12:00Z | user222 | G | 1 |
The query finds suspicious or malformed request methods that are unusually short.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array. Use this when working specifically with arrays.
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Joins arrays end-to-end into a single array.
# pack_array
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/pack-array
This page explains how to use the pack_array function in APL.
The `pack_array` function in APL creates an array from individual values or expressions. You can use this function to group related data into a single field, which can simplify handling and querying of data collections. It is especially useful when working with nested data structures or aggregating data into arrays for further processing.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically use functions like `mvappend` to create multi-value fields. In APL, the `pack_array` function serves a similar purpose by combining values into an array.
```sql Splunk example
| eval array_field = mvappend(value1, value2, value3)
```
```kusto APL equivalent
| extend array_field = pack_array(value1, value2, value3)
```
In ANSI SQL, arrays are often constructed using functions like `ARRAY`. The `pack_array` function in APL performs a similar operation, creating an array from specified values.
```sql SQL example
SELECT ARRAY[value1, value2, value3] AS array_field;
```
```kusto APL equivalent
| extend array_field = pack_array(value1, value2, value3)
```
## Usage
### Syntax
```kusto
pack_array(value1, value2, ..., valueN)
```
### Parameters
| Parameter | Description |
| --------- | ------------------------------------------ |
| `value1` | The first value to include in the array. |
| `value2` | The second value to include in the array. |
| `...` | Additional values to include in the array. |
| `valueN` | The last value to include in the array. |
### Returns
An array containing the specified values in the order they are provided.
## Use case example
Use `pack_array` to consolidate span data into an array for a trace summary.
**Query**
```kusto
['otel-demo-traces']
| extend span_summary = pack_array(['service.name'], kind, duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20span_summary%20%3D%20pack_array\(%5B'service.name'%5D%2C%20kind%2C%20duration\)%22%7D)
**Output**
| service.name | kind | duration | span\_summary |
| ------------ | ------ | -------- | -------------------------------- |
| frontend | server | 123ms | \["frontend", "server", "123ms"] |
This query creates a concise representation of span details.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
# pack_dictionary
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/pack-dictionary
This page explains how to use the pack_dictionary function in APL.
Use the `pack_dictionary` function in APL to construct a dynamic property bag (dictionary) from a list of keys and values. The resulting dictionary maps each specified key to its corresponding value and allows you to store key-value pairs in a single column for downstream operations like serialization, custom grouping, or structured export.
`pack_dictionary` is especially useful when you want to:
* Create flexible data structures for export or transformation.
* Group dynamic sets of key-value metrics or attributes into a single column.
* Combine multiple scalar fields into a single dictionary for post-processing or output.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
While SPL doesn't have a direct equivalent of `pack_dictionary`, you can simulate similar behavior using the `eval` command and `mvzip` or `mvmap` to construct composite objects. In APL, `pack_dictionary` is a simpler and more declarative way to produce key-value structures inline.
```sql Splunk example
| eval dict=mvmap("key1", value1, "key2", value2)
```
```kusto APL equivalent
| extend dict = pack_dictionary('key1', value1, 'key2', value2)
```
ANSI SQL lacks built-in support for dynamic dictionaries. You typically achieve similar functionality by manually assembling JSON strings or using vendor-specific extensions (like PostgreSQL’s `jsonb_build_object`). In contrast, APL provides a native and type-safe way to construct dictionaries using `pack_dictionary`.
```sql SQL example
SELECT '{"key1":' || value1 || ',"key2":' || value2 || '}' AS dict FROM my_table;
```
```kusto APL equivalent
| extend dict = pack_dictionary('key1', value1, 'key2', value2)
```
## Usage
### Syntax
```kusto
pack_dictionary(key1, value1, key2, value2, ...)
```
### Parameters
| Name | Type | Description |
| ------ | -------- | ------------------------------------------------------- |
| keyN | `string` | A constant string that represents a dictionary key. |
| valueN | `scalar` | A scalar value to associate with the corresponding key. |
* The number of arguments must be even.
* Keys must be constant strings.
* Values can be any scalar type.
### Returns
A dynamic object that represents a dictionary where each key maps to its associated value.
## Use case examples
Use `pack_dictionary` to store request metadata in a compact format for structured inspection or export.
**Query**
```kusto
['sample-http-logs']
| extend request_info = pack_dictionary(
'method', method,
'uri', uri,
'status', status,
'duration', req_duration_ms
)
| project _time, id, request_info
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20request_info%20%3D%20pack_dictionary\(%20'method'%2C%20method%2C%20'uri'%2C%20uri%2C%20'status'%2C%20status%2C%20'duration'%2C%20req_duration_ms%20\)%20%7C%20project%20_time%2C%20id%2C%20request_info%22%7D)
**Output**
| \_time | id | request\_info |
| -------------------- | ------ | ---------------------------------------------------------------------- |
| 2025-06-18T14:35:00Z | user42 | `{ "method": "GET", "uri": "/home", "status": "200", "duration": 82 }` |
This example creates a single `request_info` column that contains key HTTP request data as a dictionary, simplifying downstream analysis or visualization.
Use `pack_dictionary` to consolidate trace metadata into a structured format for export or debugging.
**Query**
```kusto
['otel-demo-traces']
| extend trace_metadata = pack_dictionary(
'trace_id', trace_id,
'span_id', span_id,
'service', ['service.name'],
'kind', kind,
'status_code', status_code
)
| project _time, duration, trace_metadata
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20trace_metadata%20%3D%20pack_dictionary\(%20'trace_id'%2C%20trace_id%2C%20'span_id'%2C%20span_id%2C%20'service'%2C%20%5B'service.name'%5D%2C%20'kind'%2C%20kind%2C%20'status_code'%2C%20status_code%20\)%20%7C%20project%20_time%2C%20duration%2C%20trace_metadata%22%7D)
**Output**
| \_time | duration | trace\_metadata |
| -------------------- | -------- | -------------------------------------------------------------------------------------------------------------------- |
| 2025-06-18T14:40:00Z | 00:00:01 | `{ "trace_id": "abc123", "span_id": "def456", "service": "checkoutservice", "kind": "server", "status_code": "OK" }` |
This query generates a `trace_metadata` column that organizes important trace identifiers and status into a single dynamic field.
Use `pack_dictionary` to package request metadata along with geographic information for audit logging or incident forensics.
**Query**
```kusto
['sample-http-logs']
| extend geo_info = pack_dictionary(
'city', ['geo.city'],
'country', ['geo.country']
)
| extend request_info = pack_dictionary(
'method', method,
'uri', uri,
'status', status,
'geo', geo_info
)
| project _time, id, request_info
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20geo_info%20%3D%20pack_dictionary\(%20'city'%2C%20%5B'geo.city'%5D%2C%20'country'%2C%20%5B'geo.country'%5D%20\)%20%7C%20extend%20request_info%20%3D%20pack_dictionary\(%20'method'%2C%20method%2C%20'uri'%2C%20uri%2C%20'status'%2C%20status%2C%20'geo'%2C%20geo_info%20\)%20%7C%20project%20_time%2C%20id%2C%20request_info%22%7D)
**Output**
| \_time | id | request\_info |
| -------------------- | ------ | ------------------------------------------------------------------------------------------------------ |
| 2025-06-18T14:20:00Z | user88 | `{ "method": "POST", "uri": "/login", "status": "403", "geo": { "city": "Berlin", "country": "DE" } }` |
This example nests geographic context inside the main dictionary to create a structured log suitable for security investigations.
## List of related functions
* [pack\_array](/apl/scalar-functions/array-functions/pack-array): Use this to combine scalar values into an array. Use `pack_array` when you don’t need named keys and want positional data instead.
* [bag\_keys](/apl/scalar-functions/array-functions/bag-keys): Returns the list of keys in a dynamic dictionary. Use this to inspect or filter contents created by `pack_dictionary`.
* [bag\_pack](/apl/scalar-functions/array-functions/bag-pack): Creates a dynamic property bag from a list of keys and values, similar to `pack_dictionary`.
# strcat_array
Source: https://axiom.co/docs/apl/scalar-functions/array-functions/strcat-array
This page explains how to use the strcat_array function in APL.
The `strcat_array` function in Axiom Processing Language (APL) allows you to concatenate the elements of an array into a single string, with an optional delimiter separating each element. This function is useful when you need to transform a set of values into a readable or exportable format, such as combining multiple log entries, tracing IDs, or security alerts into a single output for further analysis or reporting.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, concatenation typically involves transforming fields into a string using the `eval` command with the `+` operator or `mvjoin()` for arrays. In APL, `strcat_array` simplifies array concatenation by natively supporting array input with a delimiter.
```sql Splunk example
| eval concatenated=mvjoin(array_field, ", ")
```
```kusto APL equivalent
dataset
| extend concatenated = strcat_array(array_field, ', ')
```
In ANSI SQL, concatenation involves functions like `STRING_AGG()` or manual string building using `CONCAT()`. APL’s `strcat_array` is similar to `STRING_AGG()`, but focuses on array input directly with a customizable delimiter.
```sql SQL example
SELECT STRING_AGG(column_name, ', ') AS concatenated FROM table;
```
```kusto APL equivalent
dataset
| summarize concatenated = strcat_array(column_name, ', ')
```
## Usage
### Syntax
```kusto
strcat_array(array, delimiter)
```
### Parameters
| Parameter | Type | Description |
| ----------- | ------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `array` | dynamic | The array of values to concatenate. |
| `delimiter` | string | The string used to separate each element in the concatenated result. Optional. Defaults to an empty string if not specified. |
### Returns
A single concatenated string with the array’s elements separated by the specified delimiter.
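For example, when you omit the delimiter, the elements are joined with nothing between them. The following is a minimal sketch (the array literal and `take 1` are only there to produce a single output row):
```kusto
['sample-http-logs']
| take 1
// No delimiter argument: elements are concatenated directly
| project joined = strcat_array(dynamic(["a", "b", "c"]))
```
Based on the default delimiter described above, `joined` evaluates to `abc`.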
## Use case example
You can use `strcat_array` to combine HTTP methods and URLs for a quick summary of unique request paths.
**Query**
```kusto
['sample-http-logs']
| take 50
| extend combined_requests = strcat_delim(' ', method, uri)
| summarize requests_list = make_list(combined_requests)
| extend paths = strcat_array(requests_list, ', ')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20extend%20combined_requests%20%3D%20strcat_delim\('%20'%2C%20method%2C%20uri\)%20%7C%20summarize%20requests_list%20%3D%20make_list\(combined_requests\)%20%7C%20extend%20paths%20%3D%20strcat_array\(requests_list%2C%20'%2C%20'\)%22%7D)
**Output**
| paths |
| ------------------------------------ |
| GET /index, POST /submit, GET /about |
This query summarizes unique HTTP method and URL combinations into a single, readable string.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
# Conditional functions
Source: https://axiom.co/docs/apl/scalar-functions/conditional-function
Learn how to use and combine different conditional functions in APL
## Conditional functions
| **Function Name** | **Description** |
| ----------------- | ----------------------------------------------------------------------------------------------------------- |
| [case()](#case) | Evaluates a list of conditions and returns the first result expression whose condition is satisfied. |
| [iff()](#iff) | Evaluates the first argument (the predicate), and returns the value of either the second or third arguments |
## case()
Evaluates a list of conditions and returns the first result whose condition is satisfied.
### Arguments
* condition: An expression that evaluates to a Boolean.
* result: An expression whose value Axiom returns if its condition is the first one that evaluates to true.
* nothingMatchedResult: An expression whose value Axiom returns if none of the conditions evaluates to true.
### Returns
Axiom returns the value of the first result whose condition evaluates to true. If none of the conditions is satisfied, Axiom returns the value of `nothingMatchedResult`.
### Example
```kusto
case(condition1, result1, condition2, result2, condition3, result3, ..., nothingMatchedResult)
```
```kusto
['sample-http-logs'] |
extend status_human_readable = case(
status_int == 200,
'OK',
status_int == 201,
'Created',
status_int == 301,
'Moved Permanently',
status_int == 500,
'Internal Server Error',
'Other'
)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20status_code%20%3D%20case\(status_int%20%3D%3D%20200%2C%20'OK'%2C%20status_int%20%3D%3D%20201%2C%20'Created'%2C%20status_int%20%3D%3D%20301%2C%20'Moved%20Permanently'%2C%20status_int%20%3D%3D%20500%2C%20'Internal%20Server%20Error'%2C%20'Other'\)%22%7D)
## iff()
Evaluates the first argument (the predicate), and returns the value of either the second or third arguments. The second and third arguments must be of the same type.
The `iif` function is equivalent to the `iff` function.
### Arguments
* predicate: An expression that evaluates to a boolean value.
* ifTrue: An expression that gets evaluated and its value returned from the function if predicate evaluates to `true`.
* ifFalse: An expression that gets evaluated and its value returned from the function if predicate evaluates to `false`.
### Returns
This function returns the value of ifTrue if predicate evaluates to true, or the value of ifFalse otherwise.
### Examples
```kusto
iff(predicate, ifTrue, ifFalse)
```
```kusto
['sample-http-logs']
| project Status = iff(req_duration_ms == 1, "numeric", "Inactive")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20Status%20%3D%20iff%28req_duration_ms%20%3D%3D%201%2C%20%5C%22numeric%5C%22%2C%20%5C%22Inactive%5C%22%29%22%7D)
# Conversion functions
Source: https://axiom.co/docs/apl/scalar-functions/conversion-functions
Learn how to use and combine different conversion functions in APL
## Conversion functions
| **Function Name** | **Description** |
| ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
| [dynamic\_to\_json](#dynamic-to-json) | Converts a scalar value of type dynamic to a canonical string representation. |
| [ensure\_field](#ensure-field) | Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist. |
| [isbool](#isbool)                                                 | Returns `true` if the expression value is a boolean, `false` otherwise.                     |
| [tobool](#tobool) | Converts input to boolean (signed 8-bit) representation. |
| [todatetime](#todatetime) | Converts input to datetime scalar. |
| [todouble, toreal](#todouble%2C-toreal) | Converts the input to a value of type `real`. `todouble` and `toreal` are synonyms. |
| [tohex](#tohex) | Converts input to a hexadecimal string. |
| [toint](#toint) | Converts the input to an integer value (signed 64-bit) number representation. |
| [tolong](#tolong) | Converts input to long (signed 64-bit) number representation. |
| [tostring](#tostring) | Converts input to a string representation. |
| [totimespan](#totimespan) | Converts input to timespan scalar. |
| [toarray](/apl/scalar-functions/conversion-functions/toarray) | Converts input to array. |
| [todynamic](/apl/scalar-functions/conversion-functions/todynamic) | Converts input to dynamic. |
## ensure\_field()
Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist.
### Arguments
| **name** | **type** | **description** |
| ----------- | -------- | ------------------------------------------------------------------------------------------------------ |
| field\_name | string | The name of the field to ensure exists. |
| field\_type | type | The type of the field. See [scalar data types](/apl/data-types/scalar-data-types) for supported types. |
### Returns
This function returns the value of the specified field if it exists, otherwise it returns a typed nil.
### Examples
```kusto
ensure_field(field_name, field_type)
```
### Handle missing fields
In this example, the value of `show_field` is nil because the `myfield` field doesn’t exist.
```kusto
['sample-http-logs']
| extend show_field = ensure_field("myfield", typeof(string))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20show_field%20%3D%20ensure_field%28%27myfield%27%2C%20typeof%28string%29%29%22%7D)
### Access existing fields
In this example, the value of `newstatus` is the value of `status` because the `status` field exists.
```kusto
['sample-http-logs']
| extend newstatus = ensure_field("status", typeof(string))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20newstatus%20%3D%20ensure_field%28%27status%27%2C%20typeof%28string%29%29%22%7D)
### Future-proof queries
In this example, the query is prepared for a field named `upcoming_field` that is expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available.
```kusto
['sample-http-logs']
| extend new_field = ensure_field("upcoming_field", typeof(int))
| where new_field > 100
```
## tobool()
Converts input to boolean (signed 8-bit) representation.
### Arguments
* Expr: Expression that will be converted to boolean.
### Returns
* If conversion is successful, result will be a boolean. If conversion isn’t successful, result will be `false`
### Examples
```kusto
tobool(Expr)
toboolean(Expr) (alias)
```
```kusto
tobool("true") == true
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tobool%28%5C%22true%5C%22%29%20%3D%3D%20true%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": true
}
```
## todatetime()
Converts input to datetime scalar.
### Arguments
* Expr: Expression that will be converted to datetime.
### Returns
If the conversion is successful, the result is a datetime value. Otherwise, the result is `false`.
### Examples
```kusto
todatetime(Expr)
```
```kusto
todatetime("2022-11-13")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20todatetime%28%5C%222022-11-13%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"boo": "2022-11-13T00:00:00Z"
}
```
## todouble, toreal
Converts the input to a value of type `real`. `todouble()` is an alias of `toreal()`.
### Arguments
* Expr: An expression whose value will be converted to a value of type `real.`
### Returns
If conversion is successful, the result is a value of type real. If conversion is not successful, the result returns false.
### Examples
```kusto
toreal(Expr)
todouble(Expr)
```
```kusto
toreal("1567") == 1567
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20toreal%28%5C%221567%5C%22%29%20%3D%3D%201567%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": true
}
```
## tostring()
Converts input to a string representation.
### Arguments
* `Expr:` Expression that will be converted to string.
### Returns
If the Expression value is non-null, the result will be a string representation of the Expression. If the Expression value is null, the result will be an empty string.
### Examples
```kusto
tostring(Expr)
```
```kusto
tostring("axiom") == "axiom"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tostring%28%5C%22axiom%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "axiom"
}
```
## totimespan
Converts input to timespan scalar.
### Arguments
* `Expr:` Expression that will be converted to timespan.
### Returns
If conversion is successful, result will be a timespan value. Else, result will be false.
### Examples
```kusto
totimespan(Expr)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20totimespan%282022-11-13%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"conversion_function": "1.998µs"
}
```
## tohex()
Converts input to a hexadecimal string.
### Arguments
* Expr: int or long value that will be converted to a hex string. Other types are not supported.
### Returns
If conversion is successful, result will be a string value. If conversion is not successful, result will be false.
### Examples
```kusto
tohex(value)
```
```kusto
tohex(-546) == 'fffffffffffffdde'
```
```kusto
tohex(546) == '222'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tohex%28-546%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "fffffffffffffdde"
}
```
## tolong()
Converts input to long (signed 64-bit) number representation.
### Arguments
* Expr: Expression that will be converted to long.
### Returns
If conversion is successful, result will be a long number. If conversion is not successful, result will be false.
### Examples
```kusto
tolong(Expr)
```
```kusto
tolong("241") == 241
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tolong%28%5C%22241%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": 241
}
```
## dynamic\_to\_json()
Converts a scalar value of type `dynamic` to a canonical `string` representation.
### Arguments
* Expr: The dynamic value to convert. The function accepts one argument.
### Returns
Returns a canonical representation of the input as a value of type `string`, according to the following rules:
* If the input is a scalar value of type other than `dynamic`, the output is the result of applying `tostring()` to that value.
* If the input is an array of values, the output is composed of the characters **\[, ,, and ]** interspersed with the canonical representation described here of each array element.
* If the input is a property bag, the output is composed of the characters **\{, ,, and }** interspersed with the colon (:)-delimited name/value pairs of the properties. The pairs are sorted by the names, and the values are in the canonical representation described here.
### Examples
```kusto
dynamic_to_json(dynamic)
```
```kusto
['sample-http-logs']
| project conversion_function = dynamic_to_json(dynamic([1,2,3]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20dynamic_to_json%28dynamic%28%5B1%2C2%2C3%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "[1,2,3]"
}
```
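You can check the key-sorting rule for property bags with a similar query. This sketch assumes you build the input bag with `parse_json`, which is used elsewhere in this documentation:
```kusto
['sample-http-logs']
// Keys are intentionally out of order to show the canonical sorting
| project conversion_function = dynamic_to_json(parse_json('{"b": 2, "a": 1}'))
```
Because the pairs are sorted by name, the expected output is `{"a":1,"b":2}`.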
## isbool()
Checks whether the expression value is a boolean and returns `true` or `false` accordingly.
### Arguments
* Expr: The expression to evaluate. The function accepts one argument.
### Returns
Returns `true` if expression value is a bool, `false` otherwise.
### Examples
```kusto
isbool(expression)
```
```kusto
isbool("pow") == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20isbool%28%5C%22pow%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": false
}
```
## toint()
Converts the input to an integer value (signed 64-bit) number representation.
### Arguments
* Value: The value to convert to an [integer](/apl/data-types/scalar-data-types#the-int-data-type).
### Returns
If the conversion is successful, the result will be an integer. Otherwise, the result will be `null`.
### Examples
```kusto
toint(value)
```
```kusto
['sample-http-logs']
| project toint("456") == 456
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20toint%28%5C%22456%5C%22%29%20%3D%3D%20456%22%7D)
# toarray
Source: https://axiom.co/docs/apl/scalar-functions/conversion-functions/toarray
This page explains how to use the toarray function in APL.
Use the `toarray` function in APL to convert a dynamic-typed input—such as a bag, property bag, or JSON array—into a regular array. This is helpful when you want to process the elements individually with array functions like `array_length`, `array_index_of`, or `mv-expand`.
You typically use `toarray` when working with semi-structured data, especially after parsing JSON from log fields or external sources. It lets you access and manipulate nested collections using standard array operations.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, multivalue fields are native, and many SPL commands like `mvexpand`, `mvindex`, and `mvcount` operate directly on them. In APL, dynamic fields can also contain multivalue data, but you need to explicitly convert them to arrays using `toarray` before applying array functions.
```sql Splunk example
... | eval methods=split("GET,POST,PUT", ",") | mvcount(methods)
```
```kusto APL equivalent
print methods = dynamic(["GET", "POST", "PUT"])
| extend method_count = array_length(toarray(methods))
```
ANSI SQL does not support arrays natively. You typically store lists as JSON and use JSON functions to manipulate them. In APL, you can parse JSON into dynamic values and use `toarray` to convert those into arrays for further processing.
```sql SQL example
SELECT JSON_ARRAY_LENGTH('["GET","POST","PUT"]')
```
```kusto APL equivalent
print methods = dynamic(["GET", "POST", "PUT"])
| extend method_count = array_length(toarray(methods))
```
## Usage
### Syntax
```kusto
toarray(value)
```
### Parameters
| Name | Type | Description |
| ----- | ------- | ---------------------------------------- |
| value | dynamic | A JSON array, property bag, or bag value |
### Returns
An array containing the elements of the dynamic input. If the input is already an array, the result is identical. If the input is a property bag, it returns an array of values. If the input is not coercible to an array, the result is an empty array.
## Example
You want to convert a string to an array because you want to pass the result to a function that accepts arrays, such as `array_concat`.
**Query**
```kusto
['otel-demo-traces']
| extend service_list = toarray('123')
| extend json_list = parse_json('["frontend", "cartservice", "checkoutservice"]')
| extend combined_list = array_concat(service_list, json_list)
| project _time, combined_list
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20service_list%20%3D%20toarray\('123'\)%20%7C%20extend%20json_list%20%3D%20parse_json\('%5B%5C"frontend%5C"%2C%20%5C"cartservice%5C"%2C%20%5C"checkoutservice%5C"%5D'\)%20%7C%20extend%20combined_list%20%3D%20array_concat\(service_list%2C%20json_list\)%20%7C%20project%20_time%2C%20combined_list%22%7D)
**Output**
| \_time | combined\_list |
| ---------------- | ------------------------------------------------------ |
| Jun 24, 09:28:10 | \["123", "frontend", "cartservice", "checkoutservice"] |
| Jun 24, 09:28:10 | \["123", "frontend", "cartservice", "checkoutservice"] |
| Jun 24, 09:28:10 | \["123", "frontend", "cartservice", "checkoutservice"] |
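To see the property-bag behavior described in the Returns section, you can apply `toarray` to a parsed JSON object. This is a minimal sketch that reuses `parse_json` from the example above:
```kusto
['sample-http-logs']
| take 1
// toarray on a property bag returns an array of the bag’s values
| extend bag = parse_json('{"a": 1, "b": 2}')
| project values = toarray(bag)
```
Per the Returns section, `values` contains the bag’s values, such as `[1, 2]`.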
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the position of an element in an array.
* [pack\_array](/apl/scalar-functions/array-functions/pack-array): Use this to combine scalar values into an array. Use `pack_array` when you don’t need named keys and want positional data instead.
* [bag\_keys](/apl/scalar-functions/array-functions/bag-keys): Returns the list of keys in a dynamic dictionary. Use this to inspect or filter contents created by `pack_dictionary`.
* [bag\_pack](/apl/scalar-functions/array-functions/bag-pack): Creates a dynamic property bag from a list of keys and values, similar to `pack_dictionary`.
# todynamic
Source: https://axiom.co/docs/apl/scalar-functions/conversion-functions/todynamic
This page explains how to use the todynamic function in APL.
Use the `todynamic` function to parse a string as a dynamic value, such as a JSON object or array. This function is especially useful when your dataset contains structured data in string format and you want to access nested elements, iterate over arrays, or use dynamic-aware functions.
You often find `todynamic` helpful when working with logs, telemetry, or security events that encode rich metadata or nested attributes in stringified JSON. By converting these strings into dynamic values, you can query, filter, and transform the nested fields using APL’s built-in support for dynamic types.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk automatically interprets structured JSON data and allows you to use dot notation directly on fields, without explicit conversion. In APL, you need to explicitly cast a JSON string into a dynamic value using `todynamic`.
```sql Splunk example
... | eval json_field = json_extract(raw_field, "$.key")
```
```kusto APL equivalent
... | extend json_field = todynamic(raw_field).key
```
In standard SQL, you typically use `JSON_VALUE`, `JSON_QUERY`, or `CAST(... AS JSON)` to access structured content in string format. In APL, use `todynamic` to convert a string to a dynamic value that supports dot notation and further manipulation.
```sql SQL example
SELECT JSON_VALUE(raw_column, '$.key') AS value FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend value = todynamic(raw_column).key
```
## Usage
### Syntax
```kusto
todynamic(value)
```
### Parameters
| Name | Type | Description |
| ----- | ------ | ----------------------------------------------------- |
| value | string | A string representing a JSON-encoded object or array. |
### Returns
A dynamic value. If the input is not a valid JSON string, the function returns `null`.
## Example
You want to find events that match certain criteria such as URI and status code. The criteria are stored in a stringified dictionary.
**Query**
```kusto
['sample-http-logs']
| extend criteria = '{"uri": "/api/v1/customer/services", "status": "200"}'
| extend metadata = todynamic(criteria)
| where uri == metadata.uri and status == metadata.status
| project _time, id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20criteria%20%3D%20'%7B%5C"uri%5C"%3A%20%5C"%2Fapi%2Fv1%2Fcustomer%2Fservices%5C"%2C%20%5C"status%5C"%3A%20%5C"200%5C"%7D'%20%7C%20extend%20metadata%20%3D%20todynamic\(criteria\)%20%7C%20where%20uri%20%3D%3D%20metadata.uri%20and%20status%20%3D%3D%20metadata.status%20%7C%20project%20_time%2C%20id%22%7D)
**Output**
| \_time | id |
| ---------------- | ------------------------------------ |
| Jun 24, 09:28:10 | 2f2e5c40-1094-4237-a124-ec50fab7e726 |
| Jun 24, 09:28:10 | 0f9724cb-fa9a-4a2f-bdf6-5c32b2f22efd |
| Jun 24, 09:28:10 | a516c4e9-2ed9-4fb9-a191-94e2844e9b2a |
## List of related functions
* [pack\_array](/apl/scalar-functions/array-functions/pack-array): Use this to combine scalar values into an array. Use `pack_array` when you don’t need named keys and want positional data instead.
* [bag\_keys](/apl/scalar-functions/array-functions/bag-keys): Returns the list of keys in a dynamic dictionary. Use this to inspect or filter contents created by `pack_dictionary`.
* [bag\_pack](/apl/scalar-functions/array-functions/bag-pack): Creates a dynamic property bag from a list of keys and values, similar to `pack_dictionary`.
# Datetime functions
Source: https://axiom.co/docs/apl/scalar-functions/datetime-functions
Learn how to use and combine different timespan functions in APL
The table summarizes the datetime functions available in APL.
| Name | Description |
| --------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| [ago](#ago) | Subtracts the given timespan from the current UTC clock time. |
| [datetime\_add](#datetime-add) | Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime. |
| [datetime\_part](#datetime-part) | Extracts the requested date part as an integer value. |
| [datetime\_diff](#datetime-diff) | Calculates calendarian difference between two datetime values. |
| [dayofmonth](#dayofmonth) | Returns the integer number representing the day number of the given month |
| [dayofweek](#dayofweek) | Returns the integer number of days since the preceding Sunday, as a timespan. |
| [dayofyear](#dayofyear)                                                                                          | Returns the integer number representing the day number of the given year.                                             |
| [endofyear](#endofyear) | Returns the end of the year containing the date |
| [getmonth](#getmonth) | Get the month number (1-12) from a datetime. |
| [getyear](#getyear) | Returns the year part of the `datetime` argument. |
| [hourofday](#hourofday) | Returns the integer number representing the hour number of the given date. |
| [endofday](#endofday) | Returns the end of the day containing the date. |
| [now](#now) | Returns the current UTC clock time, optionally offset by a given timespan. |
| [endofmonth](#endofmonth) | Returns the end of the month containing the date. |
| [endofweek](#endofweek) | Returns the end of the week containing the date. |
| [monthofyear](#monthofyear)                                                                                      | Returns the integer number representing the month number of the given year.                                           |
| [startofday](#startofday) | Returns the start of the day containing the date. |
| [startofmonth](#startofmonth) | Returns the start of the month containing the date. |
| [startofweek](#startofweek) | Returns the start of the week containing the date. |
| [startofyear](#startofyear) | Returns the start of the year containing the date. |
| [unixtime\_microseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime) | Converts a Unix timestamp expressed in whole microseconds to an APL `datetime` value. |
| [unixtime\_milliseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime) | Converts a Unix timestamp expressed in whole milliseconds to an APL `datetime` value. |
| [unixtime\_nanoseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime) | Converts a Unix timestamp expressed in whole nanoseconds to an APL `datetime` value. |
| [unixtime\_seconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime) | Converts a Unix timestamp expressed in whole seconds to an APL `datetime` value. |
We support the ISO 8601 format, which is the standard format for representing dates and times in the Gregorian calendar. [Check them out here](/apl/data-types/scalar-data-types#supported-formats)
## ago
Subtracts the given timespan from the current UTC clock time.
### Arguments
* Interval to subtract from the current UTC clock time
### Returns
now() - a\_timespan
### Example
```kusto
ago(6h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%286h%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"date_time_functions": "2023-09-11T20:12:39Z"
}
```
```kusto
ago(3d)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%283d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"date_time_functions": "2023-09-09T02:13:29Z"
}
```
## datetime\_add
Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime.
### Arguments
* period: string.
* amount: integer.
* datetime: datetime value.
### Returns
A date after a certain time/date interval has been added.
### Example
```kusto
datetime_add(period,amount,datetime)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_add( "month", 1, datetime(2016-10-06))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_add%28%20%5C%22month%5C%22%2C%201%2C%20datetime%282016-10-06%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": "2016-11-06T00:00:00Z"
}
```
## datetime\_part
Extracts the requested date part as an integer value.
### Arguments
* date: datetime
* part: string
### Returns
An integer representing the extracted part.
### Examples
```kusto
datetime_part(part,datetime)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_part("Day", datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_part%28%5C%22Day%5C%22%2C%20datetime%282016-06-26T08%3A20%3A03.123456Z%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 26
}
```
## datetime\_diff
Calculates calendarian difference between two datetime values.
### Arguments
* period: string.
* datetime\_1: datetime value.
* datetime\_2: datetime value.
### Returns
An integer, which represents amount of periods in the result of subtraction (datetime\_1 - datetime\_2).
### Example
```kusto
datetime_diff(period,datetime_1,datetime_2)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_diff("week", datetime(2019-06-26T08:20:03.123456Z), datetime(2014-06-26T08:19:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222019-06-26T08%3A20%3A03.123456Z%5C%22%29%2C%20datetime%28%5C%222014-06-26T08%3A19%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 260
}
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_diff("week", datetime(2015-11-08), datetime(2014-11-08))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222015-11-08%5C%22%29%2C%20datetime%28%5C%222014-11-08%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 52
}
```
## dayofmonth
Returns the integer number representing the day number of the given month
### Arguments
* `a_date`: A `datetime`
### Returns
day number of the given month.
### Example
```kusto
dayofmonth(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_month = dayofmonth(datetime(2017-11-30))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_month%20%3D%20dayofmonth%28datetime%28%5C%222017-11-30%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_month": 30
}
```
## dayofweek
Returns the integer number of days since the preceding Sunday, as a timespan.
### Arguments
* a\_date: A datetime.
### Returns
The `timespan` since midnight at the beginning of the preceding Sunday, rounded down to an integer number of days.
### Example
```kusto
dayofweek(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_week = dayofweek(datetime(2019-05-18))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_week%20%3D%20dayofweek%28datetime%28%5C%222019-05-18%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_week": 6
}
```
## dayofyear
Returns the integer number representing the day number of the given year.
### Arguments
* `a_date`: A `datetime.`
### Returns
`day number` of the given year.
### Example
```kusto
dayofyear(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_year = dayofyear(datetime(2020-07-20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_year%20%3D%20dayofyear%28datetime%28%5C%222020-07-20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_year": 202
}
```
## endofyear
Returns the end of the year containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the year for the given date value
### Example
```kusto
endofyear(date)
```
```kusto
['sample-http-logs']
| extend end_of_the_year = endofyear(datetime(2016-06-26T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20%20end_of_the_year%20%3D%20endofyear%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_year": "2016-12-31T23:59:59.999999999Z"
}
```
## getmonth
Get the month number (1-12) from a datetime.
```kusto
['sample-http-logs']
| extend get_specific_month = getmonth(datetime(2020-07-26T08:20))
```
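For the example above, July is month number 7, so the expected result is:
```json
{
  "get_specific_month": 7
}
```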
## getyear
Returns the year part of the `datetime` argument.
### Example
```kusto
getyear(datetime())
```
```kusto
['sample-http-logs']
| project get_specific_year = getyear(datetime(2020-07-26))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_year%20%3D%20getyear%28datetime%28%5C%222020-07-26%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"get_specific_year": 2020
}
```
## hourofday
Returns the integer number representing the hour number of the given date
### Arguments
* a\_date: A datetime.
### Returns
hour number of the day (0-23).
### Example
```kusto
hourofday(a_date)
```
```kusto
['sample-http-logs']
| project get_specific_hour = hourofday(datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_hour%20%3D%20hourofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"get_specific_hour": 8
}
```
## endofday
Returns the end of the day containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the day for the given date value.
### Example
```kusto
endofday(date)
```
```kusto
['sample-http-logs']
| project end_of_day_series = endofday(datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_day_series%20%3D%20endofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_day_series": "2016-06-26T23:59:59.999999999Z"
}
```
## now
Returns the current UTC clock time, optionally offset by a given timespan. This function can be used multiple times in a statement and the clock time being referenced will be the same for all instances.
### Arguments
* offset: A timespan, added to the current UTC clock time. Default: 0.
### Returns
The current UTC clock time as a datetime.
### Example
```kusto
now([offset])
```
```kusto
['sample-http-logs']
| project returns_clock_time = now(-5d)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20returns_clock_time%20%3D%20now%28-5d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"returns_clock_time": "2023-09-07T02:54:50Z"
}
```
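To illustrate that every `now()` call in a single statement refers to the same clock time, the following minimal sketch compares two calls:
```kusto
['sample-http-logs']
| take 1
// Both calls reference the same clock time within the statement
| extend t1 = now(), t2 = now()
| project same_clock_time = t1 == t2
```
Given the behavior described above, `same_clock_time` is `true`.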
## endofmonth
Returns the end of the month containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the month for the given date value.
### Example
```kusto
endofmonth(date)
```
```kusto
['sample-http-logs']
| project end_of_the_month = endofmonth(datetime(2016-06-26T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_month%20%3D%20endofmonth%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_month": "2016-06-30T23:59:59.999999999Z"
}
```
## endofweek
Returns the end of the week containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the week for the given date value
### Example
```kusto
endofweek(date)
```
```kusto
['sample-http-logs']
| project end_of_the_week = endofweek(datetime(2019-04-18T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_week%20%3D%20endofweek%28datetime%28%5C%222019-04-18T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_week": "2019-04-20T23:59:59.999999999Z"
}
```
## monthofyear
Returns the integer number representing the month number of the given year.
### Arguments
* `date`: A datetime.
### Returns
month number of the given year.
### Example
```kusto
monthofyear(datetime("2018-11-21"))
```
```kusto
['sample-http-logs']
| project month_of_the_year = monthofyear(datetime(2018-11-11))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20month_of_the_year%20%3D%20monthofyear%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"month_of_the_year": 11
}
```
## startofday
Returns the start of the day containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the day for the given date value
### Examples
```kusto
startofday(datetime(2020-08-31))
```
```kusto
['sample-http-logs']
| project start_of_the_day = startofday(datetime(2018-11-11))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20start_of_the_day%20%3D%20startofday%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_day": "2018-11-11T00:00:00Z"
}
```
## startofmonth
Returns the start of the month containing the date
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the month for the given date value
### Example
```kusto
['github-issues-event']
| project start_of_the_month = startofmonth(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_month": "2020-08-01T00:00:00Z"
}
```
```kusto
['hackernews']
| extend start_of_the_month = startofmonth(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_month": "2020-08-01T00:00:00Z"
}
```
## startofweek
Returns the start of the week containing the date
Start of the week is considered to be a Sunday.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the week for the given date value
### Examples
```kusto
['github-issues-event']
| extend start_of_the_week = startofweek(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_week": "2020-07-26T00:00:00Z"
}
```
```kusto
['hackernews']
| extend start_of_the_week = startofweek(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_week": "2020-07-26T00:00:00Z"
}
```
```kusto
['sample-http-logs']
| extend start_of_the_week = startofweek(datetime(2018-06-11T00:00:00Z))
```
## startofyear
Returns the start of the year containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the year for the given date value.
### Examples
```kusto
['sample-http-logs']
| project yearStart = startofyear(datetime(2019-04-03))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-04-03%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"yearStart": "2019-01-01T00:00:00Z"
}
```
```kusto
['sample-http-logs']
| project yearStart = startofyear(datetime(2019-10-09 01:00:00.0000000))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-10-09%2001%3A00%3A00.0000000%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"yearStart": "2019-01-01T00:00:00Z"
}
```
# unixtime_microseconds_todatetime
Source: https://axiom.co/docs/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime
This page explains how to use the unixtime_microseconds_todatetime function in APL.
`unixtime_microseconds_todatetime` converts a Unix timestamp that is expressed in whole microseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch microseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you often convert epoch values with `eval ts=strftime(_time,"%Y-%m-%dT%H:%M:%S.%6N")`. In APL, the conversion happens with a scalar function, so you can use it inline wherever a `datetime` literal is accepted.
```sql Splunk example
| eval eventTime=strftime( micro_ts/1000000 , "%Y-%m-%dT%H:%M:%S.%6N")
```
```kusto APL equivalent
| extend eventTime = unixtime_microseconds_todatetime(micro_ts)
```
Standard SQL engines rarely expose microsecond-epoch helpers. You usually cast or divide by 1,000,000 and add an interval. APL gives you a dedicated scalar function that returns a native `datetime`, which then supports the full date-time syntax.
```sql SQL example
SELECT TIMESTAMP '1970-01-01 00:00:00' + micro_ts / 1000000 * INTERVAL '1 second' FROM events;
```
```kusto APL equivalent
['events']
| extend eventTime = unixtime_microseconds_todatetime(micro_ts)
```
## Usage
### Syntax
```kusto
unixtime_microseconds_todatetime(microseconds)
```
### Parameters
| Name | Type | Description |
| -------------- | --------------- | ----------------------------------------------------------------------- |
| `microseconds` | `int` or `long` | Whole microseconds since the Unix epoch. Fractional input is truncated. |
### Returns
A UTC `datetime` value that represents the given epoch microseconds with microsecond precision.
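As a minimal sketch (assuming the `print` operator used in other examples in this documentation), you can also convert a literal value directly; `1700000000000000` microseconds after the epoch should evaluate to `2023-11-14T22:13:20Z`:
```kusto
print converted = unixtime_microseconds_todatetime(1700000000000000)
```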
## Use case example
The HTTP access logs keep the timestamp as epoch microseconds and you want to convert the values to datetime.
**Query**
```kusto
['sample-http-logs']
| extend epoch_microseconds = toint(datetime_diff('Microsecond', _time, datetime(1970-01-01)))
| extend datetime_standard = unixtime_microseconds_todatetime(epoch_microseconds)
| project _time, epoch_microseconds, datetime_standard
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20epoch_microseconds%20%3D%20toint\(datetime_diff\('Microsecond'%2C%20_time%2C%20datetime\(1970-01-01\)\)\)%20%7C%20extend%20datetime_standard%20%3D%20unixtime_microseconds_todatetime\(epoch_microseconds\)%20%7C%20project%20_time%2C%20epoch_microseconds%2C%20datetime_standard%22%7D)
**Output**
| \_time | epoch\_microseconds | datetime\_standard |
| ---------------- | ------------------- | -------------------- |
| May 15, 12:09:22 | 1,747,303,762 | 2025-05-15T10:09:22Z |
This query converts the timestamp to epoch microseconds and then back to datetime for demonstration purposes.
## List of related functions
* [unixtime\_milliseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime): Converts a Unix timestamp expressed in whole milliseconds to an APL `datetime` value.
* [unixtime\_nanoseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime): Converts a Unix timestamp expressed in whole nanoseconds to an APL `datetime` value.
* [unixtime\_seconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime): Converts a Unix timestamp expressed in whole seconds to an APL `datetime` value.
# unixtime_milliseconds_todatetime
Source: https://axiom.co/docs/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime
This page explains how to use the unixtime_milliseconds_todatetime function in APL.
`unixtime_milliseconds_todatetime` converts a Unix timestamp that is expressed in whole milliseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch milliseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
`unixtime_milliseconds_todatetime()` corresponds to an `eval` expression that divides the epoch value by 1000 and formats the result. You skip both steps in APL because the function takes milliseconds directly.
```sql Splunk example
| eval timestamp=strftime(epoch_ms/1000,"%Y-%m-%dT%H:%M:%SZ")
```
```kusto APL equivalent
| extend timestamp=unixtime_milliseconds_todatetime(epoch_ms)
```
The function plays the same role as `FROM_UNIXTIME()` or `TO_TIMESTAMP()` in SQL dialects. In APL, you don’t divide by 1,000 because the function expects milliseconds.
```sql SQL example
SELECT FROM_UNIXTIME(epoch_ms/1000) AS timestamp FROM requests;
```
```kusto APL equivalent
['sample-http-logs']
| extend timestamp=unixtime_milliseconds_todatetime(epoch_ms)
```
## Usage
### Syntax
```kusto
unixtime_milliseconds_todatetime(milliseconds)
```
### Parameters
| Name | Type | Description |
| -------------- | --------------- | ----------------------------------------------------------------------- |
| `milliseconds` | `int` or `long` | Whole milliseconds since the Unix epoch. Fractional input is truncated. |
### Returns
A UTC `datetime` value that represents the given epoch milliseconds with millisecond precision.
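As a minimal sketch (assuming the `print` operator used elsewhere in this documentation), converting the literal `1700000000000` milliseconds should evaluate to `2023-11-14T22:13:20Z`:
```kusto
print converted = unixtime_milliseconds_todatetime(1700000000000)
```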
## Use case example
The HTTP access logs keep the timestamp as epoch milliseconds and you want to convert the values to datetime.
**Query**
```kusto
['sample-http-logs']
| extend epoch_milliseconds = toint(datetime_diff('Millisecond', _time, datetime(1970-01-01)))
| extend datetime_standard = unixtime_milliseconds_todatetime(epoch_milliseconds)
| project _time, epoch_milliseconds, datetime_standard
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20epoch_milliseconds%20%3D%20toint\(datetime_diff\('Millisecond'%2C%20_time%2C%20datetime\(1970-01-01\)\)\)%20%7C%20extend%20datetime_standard%20%3D%20unixtime_milliseconds_todatetime\(epoch_milliseconds\)%20%7C%20project%20_time%2C%20epoch_milliseconds%2C%20datetime_standard%22%7D)
**Output**
| \_time | epoch\_milliseconds | datetime\_standard |
| ---------------- | ------------------- | -------------------- |
| May 15, 12:09:22 | 1,747,303,762 | 2025-05-15T10:09:22Z |
This query converts the timestamp to epoch milliseconds and then back to datetime for demonstration purposes.
## List of related functions
* [unixtime\_microseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime): Converts a Unix timestamp expressed in whole microseconds to an APL `datetime` value.
* [unixtime\_nanoseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime): Converts a Unix timestamp expressed in whole nanoseconds to an APL `datetime` value.
* [unixtime\_seconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime): Converts a Unix timestamp expressed in whole seconds to an APL `datetime` value.
# unixtime_nanoseconds_todatetime
Source: https://axiom.co/docs/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime
This page explains how to use the unixtime_nanoseconds_todatetime function in APL.
`unixtime_nanoseconds_todatetime` converts a Unix timestamp that is expressed in whole nanoseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch nanoseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL usually stores `_time` in seconds and uses functions such as `strftime` or `strptime` for conversion. In APL, you pass the nanosecond integer directly to `unixtime_nanoseconds_todatetime`, so you don’t divide by 1,000,000,000 first.
```sql Splunk example
| eval event_time = strftime(epoch_ns/1000000000, "%Y-%m-%dT%H:%M:%S.%N%z")
```
```kusto APL equivalent
| extend event_time = unixtime_nanoseconds_todatetime(epoch_ns)
```
Many SQL engines use `TO_TIMESTAMP_LTZ()` or similar functions that expect seconds or microseconds. In APL, you pass the nanosecond value directly, and the function returns a `datetime` (UTC).
```sql SQL example
SELECT TO_TIMESTAMP_LTZ(epoch_ns/1e9) AS event_time
FROM events;
```
```kusto APL equivalent
events
| extend event_time = unixtime_nanoseconds_todatetime(epoch_ns)
```
## Usage
### Syntax
```kusto
unixtime_nanoseconds_todatetime(nanoseconds)
```
### Parameters
| Name | Type | Description |
| ------------- | --------------- | ---------------------------------------------------------------------- |
| `nanoseconds` | `int` or `long` | Whole nanoseconds since the Unix epoch. Fractional input is truncated. |
### Returns
A UTC `datetime` value that represents the given epoch nanoseconds with nanosecond precision.
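As a minimal sketch (assuming the `print` operator used elsewhere in this documentation), converting the literal `1700000000000000000` nanoseconds should evaluate to `2023-11-14T22:13:20Z`:
```kusto
print converted = unixtime_nanoseconds_todatetime(1700000000000000000)
```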
## Use case example
The HTTP access logs keep the timestamp as epoch nanoseconds and you want to convert the values to datetime.
**Query**
```kusto
['sample-http-logs']
| extend epoch_nanoseconds = toint(datetime_diff('Nanosecond', _time, datetime(1970-01-01)))
| extend datetime_standard = unixtime_nanoseconds_todatetime(epoch_nanoseconds)
| project _time, epoch_nanoseconds, datetime_standard
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20epoch_nanoseconds%20%3D%20toint\(datetime_diff\('Nanosecond'%2C%20_time%2C%20datetime\(1970-01-01\)\)\)%20%7C%20extend%20datetime_standard%20%3D%20unixtime_nanoseconds_todatetime\(epoch_nanoseconds\)%20%7C%20project%20_time%2C%20epoch_nanoseconds%2C%20datetime_standard%22%7D)
**Output**
| \_time | epoch\_nanoseconds | datetime\_standard |
| ---------------- | ------------------ | -------------------- |
| May 15, 12:09:22 | 1,747,303,762 | 2025-05-15T10:09:22Z |
This query converts the timestamp to epoch nanoseconds and then back to datetime for demonstration purposes.
## List of related functions
* [unixtime\_microseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime): Converts a Unix timestamp expressed in whole microseconds to an APL `datetime` value.
* [unixtime\_milliseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime): Converts a Unix timestamp expressed in whole milliseconds to an APL `datetime` value.
* [unixtime\_seconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime): Converts a Unix timestamp expressed in whole seconds to an APL `datetime` value.
# unixtime_seconds_todatetime
Source: https://axiom.co/docs/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime
This page explains how to use the unixtime_seconds_todatetime function in APL.
`unixtime_seconds_todatetime` converts a Unix timestamp that is expressed in whole seconds since 1970-01-01 00:00:00 UTC to an APL [datetime value](/apl/data-types/scalar-data-types).
Use the function whenever you ingest data that stores time as epoch seconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
`unixtime_seconds_todatetime` replaces the combination of `eval strftime` / `strptime` that you normally use in Splunk. Pass the epoch value directly and APL returns a `datetime`.
```sql Splunk example
eval event_time = strftime(epoch, "%Y-%m-%dT%H:%M:%S")
```
```kusto APL equivalent
extend event_time = unixtime_seconds_todatetime(epoch)
```
Most ANSI SQL engines call this conversion with `FROM_UNIXTIME` or `TO_TIMESTAMP`. The APL version has the same single-argument signature, returns a full `datetime`, and automatically interprets the input as seconds (not milliseconds).
```sql SQL example
SELECT TO_TIMESTAMP(epoch_seconds) AS event_time FROM events;
```
```kusto APL equivalent
['events']
| extend event_time = unixtime_seconds_todatetime(epoch_seconds)
```
## Usage
### Syntax
```kusto
unixtime_seconds_todatetime(seconds)
```
### Parameters
| Name | Type | Description |
| --------- | --------------- | ------------------------------------------------------------------ |
| `seconds` | `int` or `long` | Whole seconds since the Unix epoch. Fractional input is truncated. |
### Returns
A UTC `datetime` value that represents the given epoch seconds with second precision.
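As a minimal sketch (assuming the `print` operator used elsewhere in this documentation), converting the literal `1700000000` seconds should evaluate to `2023-11-14T22:13:20Z`:
```kusto
print converted = unixtime_seconds_todatetime(1700000000)
```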
## Use case example
The HTTP access logs keep the timestamp as epoch seconds and you want to convert the values to datetime.
**Query**
```kusto
['sample-http-logs']
| extend epoch_seconds = toint(datetime_diff('Second', _time, datetime(1970-01-01)))
| extend datetime_standard = unixtime_seconds_todatetime(epoch_seconds)
| project _time, epoch_seconds, datetime_standard
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20epoch_seconds%20%3D%20toint\(datetime_diff\('Second'%2C%20_time%2C%20datetime\(1970-01-01\)\)\)%20%7C%20extend%20datetime_standard%20%3D%20unixtime_seconds_todatetime\(epoch_seconds\)%20%7C%20project%20_time%2C%20epoch_seconds%2C%20datetime_standard%22%7D)
**Output**
| \_time | epoch\_seconds | datetime\_standard |
| ---------------- | -------------- | -------------------- |
| May 15, 12:09:22 | 1,747,303,762 | 2025-05-15T10:09:22Z |
This query converts the timestamp to epoch seconds and then back to datetime for demonstration purposes.
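Because the result is a native `datetime`, it works with the rest of the datetime toolkit. The following sketch builds on the query above and buckets requests per hour of the converted timestamp (assuming the standard `bin` function):
```kusto
['sample-http-logs']
| extend epoch_seconds = toint(datetime_diff('Second', _time, datetime(1970-01-01)))
| extend datetime_standard = unixtime_seconds_todatetime(epoch_seconds)
| summarize requests = count() by bin(datetime_standard, 1h)
```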
## List of related functions
* [unixtime\_microseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime): Converts a Unix timestamp expressed in whole microseconds to an APL `datetime` value.
* [unixtime\_milliseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime): Converts a Unix timestamp expressed in whole milliseconds to an APL `datetime` value.
* [unixtime\_nanoseconds\_todatetime](/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime): Converts a Unix timestamp expressed in whole nanoseconds to an APL `datetime` value.
# Hash functions
Source: https://axiom.co/docs/apl/scalar-functions/hash-functions
Learn how to use and combine various hash functions in APL
The table summarizes the hash functions available in APL.
| Name | Description |
| ------------------------------------------------- | -------------------------------------------------- |
| [hash](/apl/scalar-functions/hash-functions/hash) | Returns a signed integer hash for the input value. |
| [hash\_md5](#hash-md5) | Returns an MD5 hash value for the input value. |
| [hash\_sha1](#hash-sha1) | Returns a SHA1 hash value for the input value. |
| [hash\_sha256](#hash-sha256) | Returns a SHA256 hash value for the input value. |
| [hash\_sha512](#hash-sha512) | Returns a SHA512 hash value for the input value. |
## hash\_md5
Returns an MD5 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The MD5 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single hex number between 0 and 255).
### Examples
```kusto
hash_md5(source)
```
```kusto
['sample-http-logs']
| project md5_hash_value = hash_md5(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20md5_hash_value%20%3D%20hash_md5%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"md5_hash_value": "b980a9c041dbd33d5893fad65d33284b"
}
```
## hash\_sha1
Returns a SHA1 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA1 hash value of the given scalar, encoded as a hex string.
### Examples
```kusto
hash_sha1(source)
```
```kusto
['sample-http-logs']
| project sha1_hash_value = hash_sha1(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha1_hash_value%20%3D%20hash_sha1%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha1_hash_value": "9f9af029585ba014e07cd3910ca976cf56160616"
}
```
## hash\_sha256
Returns a SHA256 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA256 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single hex number between 0 and 255).
### Examples
```kusto
hash_sha256(source)
```
```kusto
['sample-http-logs']
| project sha256_hash_value = hash_sha256(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha256_hash_value%20%3D%20hash_sha256%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha256_hash_value": "bb4770ff4ac5b7d2be41a088cb27d8bcaad53b574b6f27941e8e48e9e10fc25a"
}
```
## hash\_sha512
Returns a SHA512 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA512 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single hex number between 0 and 255).
### Examples
```kusto
hash_sha512(source)
```
```kusto
['sample-http-logs']
| project sha512_hash_value = hash_sha512(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28status%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha512_hash_value": "0878a61b503dd5a9fe9ea3545d6d3bd41c3b50a47f3594cb8bbab3e47558d68fc8fcc409cd0831e91afc4e609ef9da84e0696c50354ad86b25f2609efef6a834"
}
```
***
```kusto
['sample-http-logs']
| project sha512_hash_value = hash_sha512(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha512_hash_value": "95c6eacdd41170b129c3c287cfe088d4fafea34e371422b94eb78b9653a89d4132af33ef39dd6b3d80e18c33b21ae167ec9e9c2d820860689c647ffb725498c4"
}
```
# hash
Source: https://axiom.co/docs/apl/scalar-functions/hash-functions/hash
This page explains how to use the hash function in APL.
Use the `hash` scalar function to transform a value of any data type, treated as a string of bytes, into a signed integer. The result is deterministic, so the same input always produces the same value.
Use the `hash` function to:
* Anonymise personally identifiable information (PII) while preserving joinability.
* Create reproducible buckets for sampling, sharding, or load-balancing.
* Build low-cardinality keys for fast aggregation and look-ups.
* Derive consistent per-key surrogates or quickly distribute rows evenly.
Don’t use `hash` to generate values for long-term usage. `hash` is generic and the underlying hashing algorithm may change. For long-term stability, use the [other hash functions](/apl/scalar-functions/hash-functions) with a specific algorithm, such as `hash_sha1`.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk’s `hash` (or `md5`, `sha1`, etc.) returns a hexadecimal string and lets you pick an algorithm. In APL `hash` always returns a 64-bit integer that trades cryptographic strength for speed and compactness. Use `hash_sha256` if you need a cryptographically secure digest.
```sql Splunk example
... | eval anon_id = md5(id) | stats count by anon_id
```
```kusto APL equivalent
['sample-http-logs']
| extend anon_id = hash(id)
| summarize count() by anon_id
```
Standard SQL often exposes vendor-specific functions such as `HASH` (BigQuery), `HASH_BYTES` (SQL Server), or `MD5`. These return either bytes or hex strings. In APL `hash` always yields an `int64`. To emulate SQL’s modulo bucketing, pipe the result into the arithmetic operator that you need.
```sql SQL example
SELECT HASH(id) % 10 AS bucket, COUNT(*) AS requests
FROM sample_http_logs
GROUP BY bucket
```
```kusto APL equivalent
['sample-http-logs']
| extend bucket = abs(hash(id) % 10)
| summarize requests = count() by bucket
```
## Usage
### Syntax
```kusto
hash(source [, salt])
```
### Parameters
| Name | Type | Description |
| ----------- | ------ | ----------------------------------------------------------------------------------------- |
| `source` | scalar | Any scalar expression except `real`. |
| `salt`   | `int`  | (Optional) Salt that lets you derive a different 64-bit domain while keeping determinism. |
### Returns
The signed integer hash of `source` (and `salt` if supplied).
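The optional `salt` argument isn’t shown in the examples below, so here is a minimal sketch of how it derives a second, independent hash domain while staying deterministic. It assumes the `id` field of `['sample-http-logs']` used elsewhere on this page:
```kusto
['sample-http-logs']
| extend anon_id = hash(id), anon_id_salted = hash(id, 1)
| project _time, anon_id, anon_id_salted
```
Rows with the same `id` always produce the same pair of values, but the salted column can’t be joined against data hashed without the salt.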
## Use case examples
Hash requesters to see your busiest anonymous users.
**Query**
```kusto
['sample-http-logs']
| extend anon_id = hash(id)
| summarize requests = count() by anon_id
| top 5 by requests
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20anon_id%20%3D%20hash%28id%29%20%7C%20summarize%20requests%20%3D%20count%28%29%20by%20anon_id%20%7C%20top%205%20by%20requests%22%7D)
**Output**
| anon\_id | requests |
| -------------------- | -------- |
| -5872831405421830129 | 128 |
| 902175364502087611 | 97 |
| -354879610945237854 | 85 |
| 6423087105927348713 | 74 |
| -919087345721004317 | 69 |
The query replaces raw IDs with hashed surrogates, counts requests per surrogate, then lists the five most active requesters without exposing PII.
Hash trace IDs to see which anonymous trace has the most spans.
**Query**
```kusto
['otel-demo-traces']
| extend trace_bucket = hash(trace_id)
| summarize spans = count() by trace_bucket
| sort by spans desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20trace_bucket%20%3D%20hash\(trace_id\)%20%7C%20summarize%20spans%20%3D%20count\(\)%20by%20trace_bucket%20%7C%20sort%20by%20spans%20desc%22%7D)
**Output**
| trace\_bucket | spans |
| -------------------------- | ----- |
| 8,858,860,617,655,667,000 | 62 |
| 4,193,515,424,067,409,000 | 62 |
| 1,779,014,838,419,064,000 | 62 |
| 5,399,024,001,804,211,000 | 62 |
| -2,480,347,067,347,939,000 | 62 |
Group suspicious endpoints without leaking the exact URI.
**Query**
```kusto
['sample-http-logs']
| extend uri_hash = hash(uri)
| summarize requests = count() by uri_hash, status
| top 10 by requests
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20uri_hash%20%3D%20hash%28uri%29%20%7C%20summarize%20requests%20%3D%20count%28%29%20by%20uri_hash%2C%20status%20%7C%20top%2010%20by%20requests%22%7D)
**Output**
| uri\_hash | status | requests |
| ------------------- | ------ | -------- |
| -123640987553821047 | 404 | 230 |
| 4385902145098764321 | 403 | 145 |
| -85439034872109873 | 401 | 132 |
| 493820743209857311 | 404 | 129 |
| -90348122345872001 | 500 | 118 |
The query hides sensitive path information yet still lets you see which hashed endpoints return the most errors.
# IP functions
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions
This section explains how to use IP functions in APL.
The table summarizes the IP functions available in APL.
| Function | Description |
| ------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
| [format\_ipv4](/apl/scalar-functions/ip-functions/format-ipv4) | Parses input with a netmask and returns string representing IPv4 address. |
| [format\_ipv4\_mask](/apl/scalar-functions/ip-functions/format-ipv4-mask) | Formats an IPv4 address and a bitmask into CIDR notation. |
| [geo\_info\_from\_ip\_address](/apl/scalar-functions/ip-functions/geo-info-from-ip-address) | Extracts geographical, geolocation, and network information from IP addresses. |
| [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4) | Returns a Boolean value indicating whether the specified column contains any of the given IPv4 addresses. |
| [has\_any\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-any-ipv4-prefix) | Returns a Boolean value indicating whether the IPv4 address matches any of the specified prefixes. |
| [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4) | Returns a Boolean value indicating whether the given IPv4 address is valid and found in the source text. |
| [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix) | Returns a Boolean value indicating whether the given IPv4 address starts with a specified prefix. |
| [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare) | Compares two IPv4 addresses. |
| [ipv4\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-any-range) | Returns a Boolean value indicating whether the given IPv4 address is in any specified range. |
| [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range) | Checks if IPv4 string address is in IPv4-prefix notation range. |
| [ipv4\_is\_match](/apl/scalar-functions/ip-functions/ipv4-is-match) | Returns a Boolean value indicating whether the given IPv4 matches the specified pattern. |
| [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private) | Checks if IPv4 string address belongs to a set of private network IPs. |
| [ipv4\_netmask\_suffix](/apl/scalar-functions/ip-functions/ipv4-netmask-suffix) | Returns the value of the IPv4 netmask suffix from IPv4 string address. |
| [ipv6\_compare](/apl/scalar-functions/ip-functions/ipv6-compare) | Compares two IPv6 addresses. |
| [ipv6\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv6-is-in-any-range) | Returns a Boolean value indicating whether the given IPv6 address is in any specified range. |
| [ipv6\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv6-is-in-range) | Checks if IPv6 string address is in IPv6-prefix notation range. |
| [ipv6\_is\_match](/apl/scalar-functions/ip-functions/ipv6-is-match) | Returns a Boolean value indicating whether the given IPv6 matches the specified pattern. |
| [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4) | Converts input to long (signed 64-bit) number representation. |
| [parse\_ipv4\_mask](/apl/scalar-functions/ip-functions/parse-ipv4-mask) | Converts input string and IP-prefix mask to long (signed 64-bit) number representation. |
## IP-prefix notation
You can define IP addresses with IP-prefix notation using a slash (`/`) character. The IP address to the left of the slash is the base IP address. The number (1 to 32) to the right of the slash is the number of contiguous bits in the netmask. For example, `192.168.2.0/24` has an associated net/subnetmask containing 24 contiguous bits or `255.255.255.0` in dotted decimal format.
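As a quick illustration of the notation, the following sketch uses `ipv4_is_in_range` (covered later in this section) to test whether an address falls inside the `192.168.2.0/24` block from the example above. It should return `true` because the first 24 bits of `192.168.2.5` match the base address:
```kusto
print in_range = ipv4_is_in_range('192.168.2.5', '192.168.2.0/24')
```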
# format_ipv4
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/format-ipv4
This page explains how to use the format_ipv4 function in APL.
The `format_ipv4` function in APL converts a numeric representation of an IPv4 address into its standard dotted-decimal format. This function is particularly useful when working with logs or datasets where IP addresses are stored as integers, making them hard to interpret directly.
You can use `format_ipv4` to enhance log readability, enrich security logs, or convert raw telemetry data for analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, IPv4 address conversion is typically not a built-in function. You may need to use custom scripts or calculations. APL simplifies this process with the `format_ipv4` function.
ANSI SQL doesn’t have a built-in function for IPv4 formatting. You’d often use string manipulation or external utilities to achieve the same result. In APL, `format_ipv4` offers a straightforward solution.
## Usage
### Syntax
```kusto
format_ipv4(ipv4address)
```
### Parameters
* `ipv4address`: A `long` numeric representation of the IPv4 address in network byte order.
### Returns
* Returns a string representing the IPv4 address in dotted-decimal format.
* Returns an empty string if the conversion fails.
## Use case example
When analyzing HTTP request logs, you can convert IP addresses stored as integers into a readable format to identify client locations or troubleshoot issues.
**Query**
```kusto
['sample-http-logs']
| extend formatted_ip = format_ipv4(3232235776)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20formatted_ip%20%3D%20format_ipv4\(3232235776\)%22%7D)
**Output**
| \_time | formatted\_ip | status | uri | method |
| ------------------- | ------------- | ------ | ------------- | ------ |
| 2024-11-14 10:00:00 | 192.168.1.0 | 200 | /api/products | GET |
This query decodes raw IP addresses into a human-readable format for easier analysis.
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# format_ipv4_mask
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/format-ipv4-mask
This page explains how to use the format_ipv4_mask function in APL.
Use the `format_ipv4_mask` function to format an IPv4 address and a bitmask into Classless Inter-Domain Routing (CIDR) notation. This function is useful when you need to standardize or analyze network addresses, especially in datasets that contain raw IPs or numerical IP representations. It supports both string-based and numeric IPv4 inputs and can apply an optional prefix to generate a subnet mask.
You can use `format_ipv4_mask` to normalize IP addresses, extract network segments, or apply filtering or grouping logic based on subnet granularity.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
SPL doesn’t have a direct built-in equivalent to `format_ipv4_mask`. To format IPv4 addresses with subnet masks, you typically use custom field extractions or external lookup tables. In contrast, APL provides a native function for this task, simplifying analysis at the network or subnet level.
```sql Splunk example
| eval cidr=ip."/24"
```
```kusto APL equivalent
format_ipv4_mask('192.168.1.10', 24)
```
Standard SQL lacks native functions for manipulating IP addresses or CIDR notation. This type of transformation usually requires application-side logic or user-defined functions (UDFs). APL simplifies this by offering a first-class function for formatting IPs directly in queries.
```sql SQL example
-- Requires custom UDF or external processing
SELECT format_ip_with_mask(ip, 24) FROM connections
```
```kusto APL equivalent
format_ipv4_mask(ip, 24)
```
## Usage
### Syntax
```kusto
format_ipv4_mask(ip, prefix)
```
### Parameters
| Name | Type | Required | Description |
| ------ | ------ | -------- | ------------------------------------------------------------------------------------------------------- |
| ip | string | ✔️ | The IPv4 address in CIDR notation. You can use a string (e.g., `'192.168.1.1'`) or a big-endian number. |
| prefix | int | ✔️ | An integer between 0 and 32. Specifies how many leading bits to include in the mask. |
### Returns
A string representing the IPv4 address in CIDR notation if the conversion succeeds. If the conversion fails, the function returns an empty string.
## Example
**Query**
```kusto
['sample-http-logs']
| extend subnet = format_ipv4_mask('192.168.1.54', 24)
| project _time, subnet
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20subnet%20%3D%20format_ipv4_mask\('192.168.1.54'%2C%2024\)%20%7C%20project%20_time%2C%20subnet%22%7D)
**Output**
| \_time | subnet |
| ----------------- | -------------- |
| Jun 30, 11:11:46 | 192.168.1.0/24 |
## List of related functions
* [format\_ipv4](/apl/scalar-functions/ip-functions/format-ipv4): Converts a 32-bit unsigned integer to an IPv4 address string. Use it when your input is a raw numeric IP instead of a prefix length.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Parses an IPv4 string into a numeric representation. Use it when you want to do arithmetic or masking on IP addresses.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks whether an IPv4 address falls within a given range. Use it when you need to filter or classify IPs against subnets.
# geo_info_from_ip_address
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/geo-info-from-ip-address
This page explains how to use the geo_info_from_ip_address function in APL.
The `geo_info_from_ip_address` function in APL retrieves geographic information based on an IP address. It maps an IP address to attributes such as city, region, and country, allowing you to perform location-based analytics on your datasets. This function is particularly useful for analyzing web logs, security events, and telemetry data to uncover geographic trends or detect anomalies based on location.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the equivalent process often involves using lookup tables or add-ons to resolve IP addresses into geographic details. In APL, `geo_info_from_ip_address` performs the resolution natively within the query, streamlining the workflow.
```sql Splunk example
| eval geo_info = iplocation(client_ip)
```
```kusto APL equivalent
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address(client_ip)
```
In SQL, geographic information retrieval typically requires a separate database or API integration. In APL, the `geo_info_from_ip_address` function directly provides geographic details, simplifying the query process.
```sql SQL example
SELECT ip_to_location(client_ip) AS geo_info
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address(client_ip)
```
## Usage
### Syntax
```kusto
geo_info_from_ip_address(ip_address)
```
### Parameters
| Parameter | Type | Description |
| ------------ | ------ | ------------------------------------------------------------ |
| `ip_address` | string | The IP address for which to retrieve geographic information. |
### Returns
A dynamic object containing the IP address’s geographic attributes (if available). The object contains the following fields:
| Name | Type | Description |
| ------------ | ------ | -------------------------------------------- |
| country | string | Country name |
| state | string | State (subdivision) name |
| city | string | City name |
| latitude | real | Latitude coordinate |
| longitude | real | Longitude coordinate |
| country\_iso | string | ISO code of the country |
| time\_zone | string | Time zone in which the IP address is located |
## Use case example
Use geographic data to analyze web log traffic.
**Query**
```kusto
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address('172.217.22.14')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20geo_info%20%3D%20geo_info_from_ip_address\('172.217.22.14'\)%22%7D)
**Output**
```json geo_info
{
"state": "",
"longitude": -97.822,
"latitude": 37.751,
"country_iso": "US",
"country": "United States",
"city": "",
"time_zone": "America/Chicago"
}
```
This query identifies the geographic location of the IP address `172.217.22.14`.
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
## IPv4 examples
### Extract geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('172.217.11.4')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%27172.217.11.4%27%29%22%7D)
### Project geolocation information from IPv4 address
```kusto
['sample-http-logs']
| project ip_location=geo_info_from_ip_address('20.53.203.50')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%2720.53.203.50%27%29%22%7D)
### Filter geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('20.53.203.50')
| where ip_location.country == "Australia" and ip_location.country_iso == "AU" and ip_location.state == "New South Wales"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22Australia%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22AU%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22New%20South%20Wales%5C%22%22%7D)
### Group geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('20.53.203.50')
| summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D)
## IPv6 examples
### Extract geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2607:f8b0:4005:805::200e')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272607%3Af8b0%3A4005%3A805%3A%3A200e%27%29%22%7D)
### Project geolocation information from IPv6 address
```kusto
['sample-http-logs']
| project ip_location=geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%22%7D)
### Filter geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
| where ip_location.country == "United States" and ip_location.country_iso == "US" and ip_location.state == "Florida"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22United%20States%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22US%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22Florida%5C%22%22%7D)
### Group geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
| summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D)
# has_any_ipv4
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/has-any-ipv4
This page explains how to use the has_any_ipv4 function in APL.
The `has_any_ipv4` function in Axiom Processing Language (APL) allows you to check whether a specified column contains any IPv4 addresses from a given set of IPv4 addresses or CIDR ranges. This function is useful when analyzing logs, tracing OpenTelemetry data, or investigating security events to quickly filter records based on a predefined list of IP addresses or subnets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you typically use the `cidrmatch` or similar functions for working with IP ranges. In APL, `has_any_ipv4` offers similar functionality by matching any IPv4 address in a column against multiple values or ranges.
```sql Splunk example
| where cidrmatch("192.168.1.0/24", ip_field)
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4('ip_field', dynamic(['192.168.1.0/24']))
```
SQL does not natively support CIDR matching or IP address comparison out of the box. In APL, the `has_any_ipv4` function is designed to simplify these checks with concise syntax.
```sql SQL example
SELECT * FROM logs WHERE ip_field = '192.168.1.1' OR ip_field = '192.168.1.2';
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4('ip_field', dynamic(['192.168.1.1', '192.168.1.2']))
```
## Usage
### Syntax
```kusto
has_any_ipv4(column, ip_list)
```
### Parameters
| Parameter | Description | Type |
| --------- | ---------------------------------------- | --------- |
| `column` | The column to evaluate. | `string` |
| `ip_list` | A list of IPv4 addresses or CIDR ranges. | `dynamic` |
### Returns
A boolean value indicating whether the specified column contains any of the given IPv4 addresses or matches any of the CIDR ranges in `ip_list`.
## Use case example
When analyzing logs, you can use `has_any_ipv4` to filter requests from specific IPv4 addresses or subnets.
**Query**
```kusto
['sample-http-logs']
| extend has_ip = has_any_ipv4('192.168.1.1', dynamic(['192.168.1.1', '192.168.0.0/16']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip%20%3D%20has_any_ipv4\('192.168.1.1'%2C%20dynamic\(%5B'192.168.1.1'%2C%20'192.168.0.0%2F16'%5D\)\)%22%7D)
**Output**
| \_time | has\_ip | status |
| ------------------- | ------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
This query identifies log entries from specific IPs or subnets.
## List of related functions
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# has_any_ipv4_prefix
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/has-any-ipv4-prefix
This page explains how to use the has_any_ipv4_prefix function in APL.
The `has_any_ipv4_prefix` function in APL lets you determine if an IPv4 address starts with any prefix in a list of specified prefixes. This function is particularly useful for filtering, segmenting, and analyzing data involving IP addresses, such as log data, network traffic, or security events. By efficiently checking prefixes, you can identify IP ranges of interest for purposes like geolocation, access control, or anomaly detection.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, checking if an IP address matches a prefix requires custom search logic with pattern matching or conditional expressions. In APL, `has_any_ipv4_prefix` provides a direct and optimized way to perform this check.
```sql Splunk example
| eval is_in_range=if(match(ip, "10.*") OR match(ip, "192.168.*"), 1, 0)
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4_prefix(uri, dynamic(['10.', '192.168.']))
```
In ANSI SQL, you need to use `LIKE` clauses combined with `OR` operators to check prefixes. In APL, the `has_any_ipv4_prefix` function simplifies this process by accepting a dynamic list of prefixes.
```sql SQL example
SELECT * FROM logs
WHERE ip LIKE '10.%' OR ip LIKE '192.168.%';
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4_prefix(uri, dynamic(['10.', '192.168.']))
```
## Usage
### Syntax
```kusto
has_any_ipv4_prefix(ip_column, prefixes)
```
### Parameters
| Parameter | Type | Description |
| ----------- | --------- | ----------------------------------------- |
| `ip_column` | `string` | The column containing the IPv4 address. |
| `prefixes` | `dynamic` | A list of IPv4 prefixes to check against. |
### Returns
* `true` if the IPv4 address matches any of the specified prefixes.
* `false` otherwise.
## Use case example
Detect requests from specific IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend has_ip_prefix = has_any_ipv4_prefix('192.168.0.1', dynamic(['172.16.', '192.168.']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip_prefix%20%3D%20has_any_ipv4_prefix\('192.168.0.1'%2C%20dynamic\(%5B'172.16.'%2C%20'192.168.'%5D\)\)%22%7D)
**Output**
| \_time | has\_ip\_prefix | status |
| ------------------- | --------------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# has_ipv4
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/has-ipv4
This page explains how to use the has_ipv4 function in APL.
## Introduction
The `has_ipv4` function in Axiom Processing Language (APL) allows you to check if a specified IPv4 address appears in a given text. The function is useful for tasks such as analyzing logs, monitoring security events, and processing network data where you need to identify or filter entries based on IP addresses.
To use `has_ipv4`, ensure that IP addresses in the text are properly delimited with non-alphanumeric characters. For example:
* **Valid:** `192.168.1.1` in `"Requests from: 192.168.1.1, 10.1.1.115."`
* **Invalid:** `192.168.1.1` in `"192.168.1.1ThisText"`
The function returns `true` if the IP address is valid and present in the text; otherwise, it returns `false`.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use `match` or similar regex-based functions to locate IPv4 addresses in a string. In APL, `has_ipv4` provides a simpler and more efficient alternative for detecting specific IPv4 addresses.
```sql Splunk example
search sourcetype=access_combined | eval isPresent=match(_raw, "192\.168\.1\.1")
```
```kusto APL equivalent
print result=has_ipv4('05:04:54 192.168.1.1 GET /favicon.ico 404', '192.168.1.1')
```
In ANSI SQL, locating IPv4 addresses often involves string manipulation or pattern matching with `LIKE` or regular expressions. APL’s `has_ipv4` function provides a more concise and purpose-built approach.
```sql SQL example
SELECT CASE WHEN column_text LIKE '%192.168.1.1%' THEN TRUE ELSE FALSE END AS result
FROM log_table;
```
```kusto APL equivalent
print result=has_ipv4('05:04:54 192.168.1.1 GET /favicon.ico 404', '192.168.1.1')
```
## Usage
### Syntax
```kusto
has_ipv4(source, ip_address)
```
### Parameters
| Name | Type | Description |
| ------------ | ------ | --------------------------------------------------- |
| `source` | string | The source text where to search for the IP address. |
| `ip_address` | string | The IP address to look for in the source. |
### Returns
* `true` if `ip_address` is a valid IP address and is found in `source`.
* `false` otherwise.
## Use case example
Identify requests coming from a specific IP address in HTTP logs.
**Query**
```kusto
['sample-http-logs']
| extend has_ip = has_ipv4('Requests from: 192.168.1.1, 10.1.1.115.', '192.168.1.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip%20%3D%20has_ipv4\('Requests%20from%3A%20192.168.1.1%2C%2010.1.1.115.'%2C%20'192.168.1.1'\)%22%7D)
**Output**
| \_time | has\_ip | status |
| ------------------- | ------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
# has_ipv4_prefix
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/has-ipv4-prefix
This page explains how to use the has_ipv4_prefix function in APL.
The `has_ipv4_prefix` function checks if an IPv4 address starts with a specified prefix. Use this function to filter or match IPv4 addresses efficiently based on their prefixes. It is particularly useful when analyzing network traffic, identifying specific address ranges, or working with CIDR-based IP filtering in datasets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use string-based matching or CIDR functions for IP comparison. In APL, `has_ipv4_prefix` simplifies the process by directly comparing an IP against a prefix.
```sql Splunk example
| eval is_match = if(cidrmatch("192.168.0.0/24", ip), true, false)
```
```kusto APL equivalent
['sample-http-logs']
| where has_ipv4_prefix(uri, "192.168.0")
```
In ANSI SQL, there is no direct equivalent to `has_ipv4_prefix`. You would typically use substring or LIKE operators for partial matching. APL provides a dedicated function for this purpose, ensuring simplicity and accuracy.
```sql SQL example
SELECT *
FROM sample_http_logs
WHERE ip LIKE '192.168.0%'
```
```kusto APL equivalent
['sample-http-logs']
| where has_ipv4_prefix(uri, "192.168.0")
```
## Usage
### Syntax
```kusto
has_ipv4_prefix(column_name, prefix)
```
### Parameters
| Parameter | Type | Description |
| ------------- | ------ | --------------------------------------------------------------- |
| `column_name` | string | The column containing the IPv4 addresses to evaluate. |
| `prefix` | string | The prefix to check for, expressed as a string (e.g., "192.0"). |
### Returns
* Returns a Boolean (`true` or `false`) indicating whether the IPv4 address starts with the specified prefix.
## Use case example
Use `has_ipv4_prefix` to filter logs for requests originating from a specific IP range.
**Query**
```kusto
['sample-http-logs']
| extend has_prefix= has_ipv4_prefix('192.168.0.1', '192.168.')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_prefix%3D%20has_ipv4_prefix\('192.168.0.1'%2C%20'192.168.'\)%22%7D)
**Output**
| \_time | has\_prefix | status |
| ------------------- | ----------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# ipv4_compare
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-compare
This page explains how to use the ipv4_compare function in APL.
The `ipv4_compare` function in APL allows you to compare two IPv4 addresses numerically, based on their underlying long integer representation. This is useful for sorting IP addresses, validating CIDR ranges, or detecting overlaps between IP ranges. It’s particularly helpful in analyzing network logs, performing security investigations, and managing IP-based filters or rules.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality can be achieved using `sort` or custom commands. In APL, `ipv4_compare` is a dedicated function for comparing two IPv4 addresses.
```sql Splunk example
| eval comparison = if(ip1 < ip2, -1, if(ip1 == ip2, 0, 1))
```
```kusto APL equivalent
| extend comparison = ipv4_compare(ip1, ip2)
```
In ANSI SQL, you might manually parse or order IP addresses as strings. In APL, `ipv4_compare` simplifies this task with built-in support for IPv4 comparison.
```sql SQL example
SELECT CASE
WHEN ip1 < ip2 THEN -1
WHEN ip1 = ip2 THEN 0
ELSE 1
END AS comparison
FROM ips;
```
```kusto APL equivalent
['sample-http-logs']
| extend comparison = ipv4_compare(ip1, ip2)
```
## Usage
### Syntax
```kusto
ipv4_compare(ip1, ip2)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | ----------------------------------- |
| `ip1` | string | The first IPv4 address to compare. |
| `ip2` | string | The second IPv4 address to compare. |
### Returns
* Returns `1` if the long representation of `ip1` is greater than the long representation of `ip2`
* Returns `0` if the long representation of `ip1` is equal to the long representation of `ip2`
* Returns `-1` if the long representation of `ip1` is less than the long representation of `ip2`
* Returns `null` if the conversion fails.
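Because the comparison uses the long (numeric) representation, the result can differ from a plain string comparison. As a minimal sketch, the following returns `-1` because `192.168.1.2` is numerically smaller than `192.168.1.10`, even though a string comparison would order them the other way:
```kusto
['sample-http-logs']
| extend comparison = ipv4_compare('192.168.1.2', '192.168.1.10')
```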
## Use case example
You can use `ipv4_compare` to sort logs based on IP addresses or to identify connections between specific IPs.
**Query**
```kusto
['sample-http-logs']
| extend ip1 = '192.168.1.1', ip2 = '192.168.1.10'
| extend comparison = ipv4_compare(ip1, ip2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20ip1%20%3D%20%27192.168.1.1%27%2C%20ip2%20%3D%20%27192.168.1.10%27%20%7C%20extend%20comparison%20%3D%20ipv4_compare\(ip1%2C%20ip2\)%22%7D)
**Output**
| ip1 | ip2 | comparison |
| ----------- | ------------ | ---------- |
| 192.168.1.1 | 192.168.1.10 | -1 |
This query compares two hardcoded IP addresses. It returns `-1`, indicating that `192.168.1.1` is numerically less than `192.168.1.10`.
## List of related functions
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_in_any_range
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-is-in-any-range
This page explains how to use the ipv4_is_in_any_range function in APL.
The `ipv4_is_in_any_range` function checks whether a given IPv4 address belongs to any range of IPv4 subnets. You can use it to evaluate whether an IP address falls within a set of CIDR blocks or IP ranges, which is useful for filtering, monitoring, or analyzing network traffic in your datasets.
This function is particularly helpful for security monitoring, analyzing log data for specific geolocated traffic, or validating access based on allowed IP ranges.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use `cidrmatch` to check if an IP belongs to a range. In APL, `ipv4_is_in_any_range` is equivalent, but it supports evaluating against multiple ranges simultaneously.
```sql Splunk example
| eval is_in_range = cidrmatch("192.168.0.0/24", ip_address)
```
```kusto APL equivalent
['dataset']
| extend is_in_range = ipv4_is_in_any_range(ip_address, dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
ANSI SQL does not have a built-in function for checking IP ranges. Instead, you use custom functions or comparisons. APL’s `ipv4_is_in_any_range` simplifies this by handling multiple CIDR blocks and ranges in a single function.
```sql SQL example
SELECT *,
CASE WHEN ip_address BETWEEN '192.168.0.0' AND '192.168.0.255' THEN 1 ELSE 0 END AS is_in_range
FROM dataset;
```
```kusto APL equivalent
['dataset']
| extend is_in_range = ipv4_is_in_any_range(ip_address, dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
## Usage
### Syntax
```kusto
ipv4_is_in_any_range(ip_address: string, ranges: dynamic)
```
### Parameters
| Parameter | Type | Description |
| ------------ | ------- | --------------------------------------------------------------------------- |
| `ip_address` | string | The IPv4 address to evaluate. |
| `ranges` | dynamic | A list of IPv4 ranges or CIDR blocks to check against (in JSON array form). |
### Returns
* `true` if the IP address is in any specified range.
* `false` otherwise.
* `null` if the conversion of a string wasn’t successful.
## Use case example
Identify log entries from specific subnets, such as local office IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend is_in_range = ipv4_is_in_any_range('192.168.0.0', dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20extend%20is_in_range%20%3D%20ipv4_is_in_any_range\('192.168.0.0'%2C%20dynamic\(%5B'192.168.0.0%2F24'%2C%20'10.0.0.0%2F8'%5D\)\)%22%7D)
**Output**
| \_time | id | method | uri | status | is\_in\_range |
| ------------------- | ------- | ------ | ----- | ------ | ------------- |
| 2024-11-14 10:00:00 | user123 | GET | /home | 200 | true |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_in_range
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-is-in-range
This page explains how to use the ipv4_is_in_range function in APL.
The `ipv4_is_in_range` function in Axiom Processing Language (APL) determines whether an IPv4 address falls within a specified range of addresses. This function is particularly useful for filtering or grouping logs based on geographic regions, network blocks, or security zones.
You can use this function to:
* Analyze logs for requests originating from specific IP address ranges.
* Detect unauthorized or suspicious activity by isolating traffic outside trusted IP ranges.
* Aggregate metrics for specific IP blocks or subnets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `ipv4_is_in_range` function in APL operates similarly to the `cidrmatch` function in Splunk SPL. Both determine whether an IP address belongs to a specified range, but APL uses a different syntax and format.
```sql Splunk example
| eval in_range = cidrmatch("192.168.0.0/24", ip_address)
```
```kusto APL equivalent
['sample-http-logs']
| extend in_range = ipv4_is_in_range(ip_address, '192.168.0.0/24')
```
ANSI SQL doesn’t have a built-in equivalent for determining if an IP address belongs to a CIDR range. In SQL, you would typically need custom functions or expressions to achieve this. APL’s `ipv4_is_in_range` provides a concise way to perform this operation.
```sql SQL example
SELECT CASE
WHEN ip_address BETWEEN '192.168.0.0' AND '192.168.0.255' THEN 1
ELSE 0
END AS in_range
FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend in_range = ipv4_is_in_range(ip_address, '192.168.0.0/24')
```
## Usage
### Syntax
```kusto
ipv4_is_in_range(ip: string, range: string)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | --------------------------------------------------------- |
| `ip` | string | The IPv4 address to evaluate. |
| `range` | string | The IPv4 range in CIDR notation (e.g., `192.168.1.0/24`). |
### Returns
* `true` if the IPv4 address is in the range.
* `false` otherwise.
* `null` if the conversion of a string wasn’t successful.
## Use case example
You can use `ipv4_is_in_range` to identify traffic from specific geographic regions or service provider IP blocks.
**Query**
```kusto
['sample-http-logs']
| extend in_range = ipv4_is_in_range('192.168.1.0', '192.168.1.0/24')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20in_range%20%3D%20ipv4_is_in_range\('192.168.1.0'%2C%20'192.168.1.0%2F24'\)%22%7D)
**Output**
| geo.city | in\_range |
| -------- | --------- |
| Seattle | true |
| Denver | true |
This query checks whether the IP address falls within the specified range and adds the result to each row.
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_match
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-is-match
This page explains how to use the ipv4_is_match function in APL.
The `ipv4_is_match` function in APL helps you determine whether a given IPv4 address matches a specific IPv4 pattern. This function is especially useful for tasks that involve IP address filtering, including network security analyses, log file inspections, and geo-locational data processing. By specifying patterns that include wildcards or CIDR notations, you can efficiently check if an IP address falls within defined ranges or meets specific conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `ipv4_is_match` function in APL resembles the `cidrmatch` function in Splunk SPL. Both functions assess whether an IP address falls within a designated CIDR range, but `ipv4_is_match` also supports wildcard pattern matching, providing additional flexibility.
```sql Splunk example
cidrmatch("192.168.1.0/24", ip)
```
```kusto APL equivalent
ipv4_is_match(ip, "192.168.1.0/24")
```
ANSI SQL lacks a direct equivalent to the `ipv4_is_match` function, but you can replicate similar functionality with a combination of `LIKE` and range checking. However, these approaches can be complex and less efficient than `ipv4_is_match`, which simplifies CIDR and wildcard-based IP matching.
```sql SQL example
ip LIKE '192.168.1.0'
```
```kusto APL equivalent
ipv4_is_match(ip, "192.168.1.0")
```
## Usage
### Syntax
```kusto
ipv4_is_match(ipaddress1, ipaddress2, prefix)
```
### Parameters
* **ipaddress1**: A string representing the first IPv4 address you want to evaluate, optionally in CIDR notation (for example, `192.168.1.0/24`).
* **ipaddress2**: A string representing the second IPv4 address you want to evaluate, optionally in CIDR notation (for example, `192.168.1.0/24`).
* **prefix**: Optionally, a number between 0 and 32 that specifies the number of most-significant bits taken into account.
### Returns
* `true` if the IPv4 addresses match.
* `false` otherwise.
* `null` if the conversion of an IPv4 string wasn’t successful.
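The optional `prefix` argument limits the comparison to the most-significant bits. The sketch below assumes that comparing only the first 24 bits makes two addresses in the same `/24` network match even though their last octets differ:
```kusto
['sample-http-logs']
// Compare only the first 24 bits, so the differing last octets are ignored
| extend is_match = ipv4_is_match('192.168.1.5', '192.168.1.200', 24)
```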
## Use case example
The `ipv4_is_match` function allows you to identify traffic based on IP addresses, enabling faster identification of traffic patterns and potential issues.
**Query**
```kusto
['sample-http-logs']
| extend is_match = ipv4_is_match('203.0.113.112', '203.0.113.112')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_match%20%3D%20ipv4_is_match\('203.0.113.112'%2C%20'203.0.113.112'\)%22%7D)
**Output**
| \_time | id | status | method | uri | is\_match |
| ------------------- | ------------- | ------ | ------ | ----------- | --------- |
| 2023-11-11T13:20:14 | 203.0.113.45 | 403 | GET | /admin | true |
| 2023-11-11T13:30:32 | 203.0.113.101 | 401 | POST | /restricted | true |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
# ipv4_is_private
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-is-private
This page explains how to use the ipv4_is_private function in APL.
The `ipv4_is_private` function determines if an IPv4 address belongs to a private range, as defined by [RFC 1918](https://www.rfc-editor.org/rfc/rfc1918). You can use this function to filter private addresses in datasets such as server logs, network traffic, and other IP-based data.
This function is especially useful in scenarios where you want to:
* Exclude private IPs from logs to focus on public traffic.
* Identify traffic originating from within an internal network.
* Simplify security analysis by categorizing IP addresses.
The private IPv4 addresses reserved for private networks by the Internet Assigned Numbers Authority (IANA) are the following:
| IP address range | Number of addresses | Largest CIDR block (subnet mask) |
| ----------------------------- | ------------------- | -------------------------------- |
| 10.0.0.0 – 10.255.255.255 | 16777216 | 10.0.0.0/8 (255.0.0.0) |
| 172.16.0.0 – 172.31.255.255 | 1048576 | 172.16.0.0/12 (255.240.0.0) |
| 192.168.0.0 – 192.168.255.255 | 65536 | 192.168.0.0/16 (255.255.0.0) |
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use a combination of CIDR matching functions or regex to check for private IPs. In APL, the `ipv4_is_private` function offers a built-in and concise way to achieve the same result.
```sql Splunk example
eval is_private=if(cidrmatch("10.0.0.0/8", ip) OR cidrmatch("172.16.0.0/12", ip) OR cidrmatch("192.168.0.0/16", ip), 1, 0)
```
```kusto APL equivalent
['sample-http-logs']
| extend is_private=ipv4_is_private(client_ip)
```
In ANSI SQL, you might use `CASE` statements with CIDR-based checks or regex patterns to detect private IPs. In APL, the `ipv4_is_private` function simplifies this with a single call.
```sql SQL example
SELECT ip,
CASE
WHEN ip LIKE '10.%' OR ip LIKE '172.16.%' OR ip LIKE '192.168.%' THEN 'true'
ELSE 'false'
END AS is_private
FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend is_private=ipv4_is_private(client_ip)
```
## Usage
### Syntax
```kusto
ipv4_is_private(ip: string)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | ------------------------------------------------------ |
| `ip` | string | The IPv4 address to evaluate for private range status. |
### Returns
* `true`: The input IP address is private.
* `false`: The input IP address is not private.
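As a quick illustration of both outcomes, the following sketch evaluates an address from the RFC 1918 `10.0.0.0/8` block and a public address:
```kusto
['sample-http-logs']
// private_ip is true, public_ip is false
| extend private_ip = ipv4_is_private('10.0.0.5'), public_ip = ipv4_is_private('8.8.8.8')
```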
## Use case example
You can use `ipv4_is_private` to filter logs and focus on public traffic for external analysis.
**Query**
```kusto
['sample-http-logs']
| extend is_private = ipv4_is_private('192.168.0.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_private%20%3D%20ipv4_is_private\('192.168.0.1'\)%22%7D)
**Output**
| geo.country | is\_private |
| ----------- | ----------- |
| USA | true |
| UK | true |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_netmask_suffix
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv4-netmask-suffix
This page explains how to use the ipv4_netmask_suffix function in APL.
The `ipv4_netmask_suffix` function in APL extracts the netmask suffix from an IPv4 address. The netmask suffix, also known as the subnet prefix length, specifies how many bits are used for the network portion of the address.
This function is useful for network log analysis, security auditing, and infrastructure monitoring. It helps you categorize IP addresses by their subnets, enabling you to detect patterns or anomalies in network traffic or to manage IP allocations effectively.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, netmask suffix extraction typically requires manual parsing or custom scripts. In APL, the `ipv4_netmask_suffix` function simplifies this task by directly extracting the suffix from an IPv4 address in CIDR notation.
```spl Splunk example
eval netmask = replace(ip, "^.*?/", "")
```
```kusto APL equivalent
extend netmask = ipv4_netmask_suffix(ip)
```
In ANSI SQL, extracting the netmask suffix often involves using string functions like `SUBSTRING` or `CHARINDEX`. In APL, the `ipv4_netmask_suffix` function provides a direct and efficient alternative.
```sql SQL example
SELECT SUBSTRING(ip, CHARINDEX('/', ip) + 1, LEN(ip)) AS netmask FROM logs;
```
```kusto APL equivalent
extend netmask = ipv4_netmask_suffix(ip)
```
## Usage
### Syntax
```kusto
ipv4_netmask_suffix(ipv4address)
```
### Parameters
| Parameter | Type | Description |
| ------------- | ------ | ----------------------------------------------------------- |
| `ipv4address` | string | The IPv4 address in CIDR notation (e.g., `192.168.1.1/24`). |
### Returns
* Returns an integer representing the netmask suffix. For example, `24` for `192.168.1.1/24`.
* Returns the value `32` when the input IPv4 address doesn’t contain the suffix.
* Returns `null` if the input is not a valid IPv4 address in CIDR notation.
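The following sketch illustrates both documented behaviors: an address with an explicit suffix returns that suffix, and an address without one returns `32`:
```kusto
// with_suffix is 24, without_suffix is 32
print with_suffix = ipv4_netmask_suffix('192.168.1.1/24'), without_suffix = ipv4_netmask_suffix('192.168.1.1')
```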
## Use case example
When analyzing network traffic logs, you can extract the netmask suffix to group or filter traffic by subnets.
**Query**
```kusto
['sample-http-logs']
| extend netmask = ipv4_netmask_suffix('192.168.1.1/24')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20netmask%20%3D%20ipv4_netmask_suffix\('192.168.1.1%2F24'\)%22%7D)
**Output**
| geo.country | netmask |
| ----------- | ------- |
| USA | 24 |
| UK | 24 |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv6_compare
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv6-compare
This page explains how to use the ipv6_compare function in APL.
Use the `ipv6_compare` function to compare two IPv6 addresses and determine their relative order. This function helps you evaluate whether one address is less than, equal to, or greater than another. It returns `-1`, `0`, or `1` accordingly.
You can use `ipv6_compare` in scenarios where IPv6 addresses are relevant, such as sorting traffic logs, grouping metrics by address ranges, or identifying duplicate or misordered entries. It’s especially useful in network observability and security use cases where working with IPv6 is common.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL does not have a built-in function for directly comparing IPv6 addresses. Users often work around this limitation by converting the addresses into a comparable numeric format using external scripts or custom commands.
```sql Splunk example
| eval ip1 = "2001:db8::1", ip2 = "2001:db8::2"
| eval comparison = if(ip1 == ip2, 0, if(ip1 < ip2, -1, 1))
```
```kusto APL equivalent
print comparison = ipv6_compare('2001:db8::1', '2001:db8::2')
```
ANSI SQL does not natively support IPv6 comparisons. Typically, users must store IPv6 addresses as strings or binary values and write custom logic to compare them.
```sql SQL example
SELECT CASE
WHEN ip1 = ip2 THEN 0
WHEN ip1 < ip2 THEN -1
ELSE 1
END AS comparison
FROM my_table
```
```kusto APL equivalent
print comparison = ipv6_compare('2001:db8::1', '2001:db8::2')
```
## Usage
### Syntax
```kusto
ipv6_compare(ipv6_1, ipv6_2)
```
### Parameters
| Name | Type | Description |
| -------- | ------ | ----------------------------------- |
| `ipv6_1` | string | The first IPv6 address to compare. |
| `ipv6_2` | string | The second IPv6 address to compare. |
### Returns
An integer that represents the result of the comparison:
* `-1` if `ipv6_1` is less than `ipv6_2`
* `0` if `ipv6_1` is equal to `ipv6_2`
* `1` if `ipv6_1` is greater than `ipv6_2`
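As a minimal sketch of the three possible outcomes:
```kusto
print less = ipv6_compare('2001:db8::1', '2001:db8::2'),    // -1
      equal = ipv6_compare('2001:db8::1', '2001:db8::1'),   // 0
      greater = ipv6_compare('2001:db8::2', '2001:db8::1')  // 1
```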
## Example
Use `ipv6_compare` to identify whether requests from certain IPv6 addresses fall into specific ranges or appear out of expected order.
**Query**
```kusto
['sample-http-logs']
| extend comparison = ipv6_compare('2001:db8::1', '2001:db8::abcd')
| project _time, uri, method, status, comparison
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20comparison%20%3D%20ipv6_compare\('2001%3Adb8%3A%3A1'%2C%20'2001%3Adb8%3A%3Aabcd'\)%20%7C%20project%20_time%2C%20uri%2C%20method%2C%20status%2C%20comparison%22%7D)
**Output**
| \_time | uri | method | status | comparison |
| -------------------- | ----------- | ------ | ------ | ---------- |
| 2025-06-29T22:10:00Z | /products/1 | GET | 200 | -1 |
This example compares two static IPv6 addresses and attaches the result to each row for further filtering or grouping.
## List of related functions
* [ipv6\_is\_match](/apl/scalar-functions/ip-functions/ipv6-is-match): Checks if an IPv6 address matches a given subnet. Use it for range filtering instead of sorting or comparison.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Determines whether an IPv4 address is in a private range. Use this to filter non-public traffic.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Works the same way as `ipv6_compare` but for IPv4 addresses. Use it when your data contains IPv4 instead of IPv6.
# ipv6_is_in_any_range
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv6-is-in-any-range
This page explains how to use the ipv6_is_in_any_range function in APL.
Use the `ipv6_is_in_any_range` function to determine whether a given IPv6 address belongs to any of a specified set of IPv6 CIDR ranges. This function is particularly useful in log enrichment, threat detection, and network analysis tasks that involve validating or filtering IP addresses against allowlists or blocklists.
You can use this function to:
* Detect whether traffic originates from known internal or external networks.
* Match IPv6 addresses against predefined address ranges for compliance or security auditing.
* Filter datasets based on whether requesters fall into allowed or disallowed IP zones.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk doesn’t offer a built-in function that directly checks if an IP falls within a list of CIDR ranges. Typically, SPL users must write custom logic using `cidrmatch()` repeatedly or rely on lookup tables.
```sql Splunk example
| eval is_internal = if(cidrmatch("2001:db8::/32", ip), "true", "false")
```
```kusto APL equivalent
ipv6_is_in_any_range('2001:db8::1', dynamic(['2001:db8::/32']))
```
ANSI SQL doesn’t natively support IPv6-aware CIDR range checks. Such functionality usually requires user-defined functions or external extensions.
```sql SQL example
-- Typically handled via stored procedures or UDFs in extended SQL environments
SELECT ip, is_in_range(ip, '2001:db8::/32') FROM traffic_logs
```
```kusto APL equivalent
ipv6_is_in_any_range('2001:db8::1', dynamic(['2001:db8::/32']))
```
## Usage
### Syntax
```kusto
ipv6_is_in_any_range(ipv6_address, ipv6_ranges)
```
### Parameters
| Name | Type | Description |
| -------------- | --------------- | --------------------------------------------------------- |
| `ipv6_address` | `string` | An IPv6 address in standard format (e.g., `2001:db8::1`). |
| `ipv6_ranges` | `dynamic array` | A JSON array of IPv6 CIDR strings to compare against. |
### Returns
A `bool` value:
* `true` if the given IPv6 address is within any of the provided CIDR ranges.
* `false` otherwise.
## Example
You want to detect HTTP requests from a specific internal IPv6 block.
**Query**
```kusto
['sample-http-logs']
| extend inRange = ipv6_is_in_any_range('2001:db8::1234', dynamic(['2001:db8::/32', 'fd00::/8']))
| project _time, uri, method, status, inRange
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20inRange%20%3D%20ipv6_is_in_any_range\('2001%3Adb8%3A%3A1234'%2C%20dynamic\(%5B'2001%3Adb8%3A%3A%2F32'%2C%20'fd00%3A%3A%2F8'%5D\)\)%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20method%2C%20status%2C%20inRange%22%7D)
**Output**
| \_time | uri | method | status | inRange |
| -------------------- | ------------ | ------ | ------ | ------- |
| 2025-06-30T01:00:00Z | /api/login | POST | 200 | true |
| 2025-06-30T01:01:00Z | /healthcheck | GET | 204 | true |
## List of related functions
* [ipv4\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-any-range): Use this function when working with IPv4 addresses instead of IPv6.
* [ipv6\_compare](/apl/scalar-functions/ip-functions/ipv6-compare): Compares two IPv6 addresses. Use this for sorting or deduplication rather than range matching.
* [ipv6\_is\_match](/apl/scalar-functions/ip-functions/ipv6-is-match): Checks whether an IPv6 address matches a specific range. Use this if you need to test against a single CIDR block.
# ipv6_is_in_range
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv6-is-in-range
This page explains how to use the ipv6_is_in_range function in APL.
Use the `ipv6_is_in_range` function to check whether an IPv6 address falls within a specified IPv6 CIDR range. This is useful when you need to classify, filter, or segment network traffic by address range—such as identifying requests from internal subnets, geo-localized regional blocks, or known malicious networks.
You can use this function when analyzing HTTP logs, trace telemetry, or security events where IPv6 addresses are present, and you want to restrict attention to or exclude certain address ranges.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, IP range checking for IPv6 addresses typically requires custom scripts or manual logic, as there is no built-in function equivalent to `ipv6_is_in_range`.
```sql Splunk example
| eval inRange=if(cidrmatch("2001:db8::/32", src_ip), "yes", "no")
```
```kusto APL equivalent
['sample-http-logs']
| extend inRange = ipv6_is_in_range(src_ip, '2001:db8::/32')
```
ANSI SQL does not have native functions for CIDR range checks on IPv6 addresses. You typically rely on user-defined functions (UDFs) or external tooling. In APL, `ipv6_is_in_range` provides this capability out of the box.
```sql SQL example
-- Using a hypothetical UDF
SELECT ipv6_in_range(ip_address, '2001:db8::/32') AS in_range FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend inRange = ipv6_is_in_range(src_ip, '2001:db8::/32')
```
## Usage
### Syntax
```kusto
ipv6_is_in_range(ipv6: string, cidr_range: string)
```
### Parameters
| Name | Type | Description |
| ------------ | ------ | --------------------------------------------- |
| `ipv6` | string | The IPv6 address to check. |
| `cidr_range` | string | The IPv6 CIDR block (e.g. `'2001:db8::/32'`). |
### Returns
A `bool` value:
* `true` if the IPv6 address is within the specified CIDR range.
* `false` otherwise.
## Example
Use this function to isolate internal service calls originating from a designated IPv6 block.
**Query**
```kusto
['otel-demo-traces']
| extend inRange = ipv6_is_in_range('fd00::a1b2', 'fd00::/8')
| project _time, span_id, ['service.name'], duration, inRange
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20inRange%20%3D%20ipv6_is_in_range\('fd00%3A%3Aa1b2'%2C%20'fd00%3A%3A%2F8'\)%20%7C%20project%20_time%2C%20span_id%2C%20%5B'service.name'%5D%2C%20duration%2C%20inRange%22%7D)
**Output**
| \_time | span\_id | \['service.name'] | duration | inRange |
| -------------------- | -------- | ----------------- | ---------- | ------- |
| 2025-06-28T11:20:00Z | span-124 | frontend | 00:00:02.4 | true |
| 2025-06-28T11:21:03Z | span-209 | cartservice | 00:00:01.1 | true |
## List of related functions
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks whether an IPv4 address is within a specified CIDR range. Use this function when working with IPv4 instead of IPv6.
* [ipv6\_compare](/apl/scalar-functions/ip-functions/ipv6-compare): Compares two IPv6 addresses. Use when you want to sort or test address equality or ordering.
* [ipv6\_is\_match](/apl/scalar-functions/ip-functions/ipv6-is-match): Checks whether an IPv6 address matches a pattern. Use for wildcard or partial-match filtering rather than range checking.
# ipv6_is_match
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/ipv6-is-match
This page explains how to use the ipv6_is_match function in APL.
Use the `ipv6_is_match` function to determine whether an IPv6 address belongs to a specified IPv6 subnet. This function is useful when you want to classify, filter, or route network events based on IPv6 subnet membership.
You can use `ipv6_is_match` in scenarios such as identifying traffic from a known address range, enforcing access control policies, or correlating logs to specific networks. It supports CIDR notation for subnet specification and returns a boolean value for each row in your dataset.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL does not have a dedicated function for matching IPv6 addresses against CIDR blocks. You typically use regular expressions or custom lookups to perform similar checks. In contrast, APL provides a built-in function that directly evaluates IPv6 CIDR membership.
```sql Splunk example
| eval is_in_subnet=if(match(ipv6_field, "^2001:db8::/32"), "true", "false")
```
```kusto APL equivalent
['sample-http-logs']
| extend is_in_subnet = ipv6_is_match('2001:db8:abcd:0012::0', '2001:db8::/32')
```
ANSI SQL does not have a standard function to check if an IPv6 address belongs to a subnet. You often implement this logic with string manipulation or rely on database-specific functions. APL simplifies this with `ipv6_is_match`, which accepts a full IPv6 address and a subnet in CIDR notation.
```sql SQL example
SELECT CASE
WHEN ip_address LIKE '2001:db8:%' THEN TRUE
ELSE FALSE
END AS is_in_subnet
FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend is_in_subnet = ipv6_is_match('2001:db8:abcd:0012::0', '2001:db8::/32')
```
## Usage
### Syntax
```kusto
ipv6_is_match(ipv6_address, ipv6_subnet)
```
### Parameters
| Name | Type | Description |
| -------------- | ------ | ---------------------------------------------------------- |
| `ipv6_address` | string | The full IPv6 address you want to check. |
| `ipv6_subnet` | string | The target subnet in CIDR notation, e.g., `2001:db8::/32`. |
### Returns
A boolean value:
* `true` if the `ipv6_address` belongs to the specified `ipv6_subnet`.
* `false` otherwise.
## Example
Identify requests that originate from a known IPv6 subnet.
**Query**
```kusto
['sample-http-logs']
| extend isInternal = ipv6_is_match('2001:db8:abcd::1', '2001:db8::/32')
| project _time, uri, method, status, isInternal
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20isInternal%20%3D%20ipv6_is_match\('2001%3Adb8%3Aabcd%3A%3A1'%2C%20'2001%3Adb8%3A%3A%2F32'\)%20%7C%20project%20_time%2C%20uri%2C%20method%2C%20status%2C%20isInternal%22%7D)
**Output**
| \_time | uri | method | status | isInternal |
| -------------------- | ----------- | ------ | ------ | ---------- |
| 2025-06-28T13:04:10Z | /health | GET | 200 | true |
| 2025-06-28T13:05:22Z | /api/orders | POST | 201 | true |
## List of related functions
* [ipv4\_is\_match](/apl/scalar-functions/ip-functions/ipv4-is-match): Checks whether an IPv4 address belongs to a specified IPv4 subnet. Use it when working with IPv4 addresses.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Parses a string into an IPv4 address. Use it when working with raw IPv4 strings.
# parse_ipv4
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/parse-ipv4
This page explains how to use the parse_ipv4 function in APL.
The `parse_ipv4` function in APL converts an IPv4 address into a long number. You can use this function to convert an IPv4 address for advanced analysis, filtering, or comparisons. It is especially useful for tasks like analyzing network traffic logs, identifying trends in IP address usage, or performing security-related queries.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk does not provide a direct function for converting an IPv4 address into a long number. However, you can achieve similar functionality using custom SPL expressions.
```sql Splunk example
| eval ip_int = tonumber(replace(ip, "\\.", ""))
```
```kusto APL equivalent
['sample-http-logs']
| extend ip_long = parse_ipv4(uri)
```
SQL does not have a built-in function equivalent to `parse_ipv4`, but you can use bitwise operations to achieve a similar result.
```sql SQL example
SELECT
(CAST(SPLIT_PART(ip, '.', 1) AS INT) << 24) +
(CAST(SPLIT_PART(ip, '.', 2) AS INT) << 16) +
(CAST(SPLIT_PART(ip, '.', 3) AS INT) << 8) +
CAST(SPLIT_PART(ip, '.', 4) AS INT) AS ip_int
FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend ip_long = parse_ipv4(uri)
```
## Usage
### Syntax
```kusto
parse_ipv4(ipv4_address)
```
### Parameters
| Parameter | Type | Description |
| -------------- | ------ | --------------------------------------------- |
| `ipv4_address` | string | The IPv4 address to parse into a long number. |
### Returns
The function returns the IPv4 address as a long number if the conversion succeeds. If the conversion fails, the function returns `null`.
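The long value is the big-endian integer built from the four octets: each octet is multiplied by a decreasing power of 256. For example, `192.168.1.1` becomes 192×256³ + 168×256² + 1×256 + 1 = 3232235777, which matches the output in the use case example below:
```kusto
print ip_long = parse_ipv4('192.168.1.1')
// 192*16777216 + 168*65536 + 1*256 + 1 = 3232235777
```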
## Use case example
You can use the `parse_ipv4` function to analyze web traffic by representing IP addresses as long numbers.
**Query**
```kusto
['sample-http-logs']
| extend ip_long = parse_ipv4('192.168.1.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20ip_octets%20%3D%20parse_ipv4\('192.168.1.1'\)%22%7D)
**Output**
| \_time | uri | method | ip\_long |
| ------------------- | ----------- | ------ | ------------- |
| 2024-11-14T10:00:00 | /index.html | GET | 3,232,235,777 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
# parse_ipv4_mask
Source: https://axiom.co/docs/apl/scalar-functions/ip-functions/parse-ipv4-mask
This page explains how to use the parse_ipv4_mask function in APL.
## Introduction
The `parse_ipv4_mask` function in APL converts an IPv4 address and its associated netmask into a signed 64-bit wide, long number representation in big-endian order. Use this function when you need to process or compare IPv4 addresses efficiently as numerical values, such as for IP range filtering, subnet calculations, or network analysis.
This function is particularly useful in scenarios where you need a compact and precise way to represent IP addresses and their masks for further aggregation or filtering.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use functions like `cidrmatch` for subnet operations. In APL, `parse_ipv4_mask` focuses on converting an IP and mask into a numerical representation for low-level processing.
```sql Splunk example
| eval converted_ip = cidrmatch("192.168.1.0/24", ip)
```
```kusto APL equivalent
print converted_ip = parse_ipv4_mask("192.168.1.0", 24)
```
In ANSI SQL, you typically use custom expressions or stored procedures to perform similar IP address transformations. In APL, `parse_ipv4_mask` offers a built-in, optimized function for this task.
```sql SQL example
SELECT inet_aton('192.168.1.0') & (0xFFFFFFFF << (32 - 24)) AS converted_ip
```
```kusto APL equivalent
print converted_ip = parse_ipv4_mask("192.168.1.0", 24)
```
## Usage
### Syntax
```kusto
parse_ipv4_mask(ip, prefix)
```
### Parameters
| Name | Type | Description |
| -------- | ------ | ------------------------------------------------------------------------- |
| `ip` | string | The IPv4 address to convert to a long number. |
| `prefix` | int | An integer from 0 to 32 representing the number of most-significant bits. |
### Returns
* A signed, 64-bit long number in big-endian order if the conversion is successful.
* `null` if the conversion is unsuccessful.
### Example
```kusto
print parse_ipv4_mask("127.0.0.1", 24)
```
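Conceptually, the prefix zeroes the host bits before the conversion. With a `/24` prefix, `192.168.0.1` is masked to `192.168.0.0`, whose long value is 192×256³ + 168×256² = 3232235520, the same value shown in the use case output below:
```kusto
print masked_ip = parse_ipv4_mask('192.168.0.1', 24)
// 3232235520, equal to parse_ipv4('192.168.0.0')
```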
## Use case example
Use `parse_ipv4_mask` to analyze logs and filter entries based on IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend masked_ip = parse_ipv4_mask('192.168.0.1', 24)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20masked_ip%20%3D%20parse_ipv4_mask\('192.168.0.1'%2C%2024\)%22%7D)
**Output**
| \_time | uri | method | masked\_ip |
| ------------------- | ----------- | ------ | ------------- |
| 2024-11-14T10:00:00 | /index.html | GET | 3,232,235,520 |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# Mathematical functions
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions
Learn how to use and combine different mathematical functions in APL
The table summarizes the mathematical functions available in APL.
| Name | Description |
| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| [abs](#abs) | Calculates the absolute value of the input. |
| [acos](#acos) | Returns the angle whose cosine is the specified number (the inverse operation of cos()). |
| [asin](#asin) | Returns the angle whose sine is the specified number (the inverse operation of sin()). |
| [atan](#atan) | Returns the angle whose tangent is the specified number (the inverse operation of tan()). |
| [atan2](#atan2) | Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x). |
| [cos](#cos) | Returns the cosine function. |
| [degrees](#degrees) | Converts angle value in radians into value in degrees, using formula degrees = (180 / PI) \* angle-in-radians. |
| [exp](#exp) | The base-e exponential function of x, which is e raised to the power x: e^x. |
| [exp10](#exp10) | The base-10 exponential function of x, which is 10 raised to the power x: 10^x. |
| [exp2](#exp2) | The base-2 exponential function of x, which is 2 raised to the power x: 2^x. |
| [gamma](#gamma) | Computes gamma function. |
| [isfinite](#isfinite)                                                              | Returns whether input is a finite value (is neither infinite nor NaN).                                          |
| [isinf](#isinf)                                                                    | Returns whether input is an infinite (positive or negative) value.                                              |
| [isint](#isint)                                                                    | Returns whether input is an integer (positive or negative) value.                                               |
| [isnan](#isnan)                                                                    | Returns whether input is a Not-a-Number (NaN) value.                                                            |
| [log](#log) | Returns the natural logarithm function. |
| [log10](#log10) | Returns the common (base-10) logarithm function. |
| [log2](#log2) | Returns the base-2 logarithm function. |
| [loggamma](#loggamma) | Computes log of absolute value of the gamma function. |
| [max\_of](/apl/scalar-functions/mathematical-functions/max-of) | Returns the largest of the provided values. |
| [min\_of](/apl/scalar-functions/mathematical-functions/min-of) | Returns the smallest of the provided values. |
| [not](#not) | Reverses the value of its bool argument. |
| [pi](#pi) | Returns the constant value of Pi (π). |
| [pow](#pow) | Returns a result of raising to power. |
| [radians](#radians) | Converts angle value in degrees into value in radians, using formula radians = (PI / 180) \* angle-in-degrees. |
| [round](#round) | Returns the rounded source to the specified precision. |
| [set\_difference](/apl/scalar-functions/mathematical-functions/set-difference) | Returns the difference between two arrays. |
| [set\_has\_element](/apl/scalar-functions/mathematical-functions/set-has-element) | Determines if a set contains a specific value. |
| [set\_intersect](/apl/scalar-functions/mathematical-functions/set-intersect) | Returns the intersection of two arrays. |
| [set\_union](/apl/scalar-functions/mathematical-functions/set-union) | Returns the union of two arrays. |
| [sign](#sign) | Sign of a numeric expression. |
| [sin](#sin) | Returns the sine function. |
| [sqrt](#sqrt) | Returns the square root function. |
| [tan](#tan) | Returns the tangent function. |
## abs
Calculates the absolute value of the input.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | --------------------- | ------------------------ | -------------------------- |
| x | int, real or timespan | Required | The value to make absolute |
### Returns
* Absolute value of x.
### Examples
```kusto
abs(x)
```
```kusto
abs(80.5) == 80.5
```
```kusto
['sample-http-logs']
| project absolute_value = abs(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20absolute_value%20%3D%20abs%28req_duration_ms%29%22%7D)
## acos
Returns the angle whose cosine is the specified number (the inverse operation of cos()).
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| x        | real     | Required                 | A real number in range \[-1, 1]. |
### Returns
* The value of the arc cosine of x
* `null` if `x` \< -1 or `x` > 1
### Examples
```kusto
acos(x)
```
```kusto
acos(-1) == 3.141592653589793
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_angle%20%3D%20acos%28-1%29%22%7D)
## asin
Returns the angle whose sine is the specified number (the inverse operation of sin()).
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| x        | real     | Required                 | A real number in range \[-1, 1]. |
### Returns
* The value of the arc sine of x
* null if x \< -1 or x > 1
### Examples
```kusto
asin(x)
```
```kusto
['sample-http-logs']
| project inverse_sin_angle = asin(-1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_sin_angle%20%3D%20asin%28-1%29%22%7D)
## atan
Returns the angle whose tangent is the specified number (the inverse operation of tan()).
### Arguments
x: A real number.
### Returns
The value of the arc tangent of x
### Examples
```kusto
atan(x)
```
```kusto
atan(-1) == -0.7853981633974483
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_tan_angle%20%3D%20atan%28-1%29%22%7D)
## atan2
Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x).
### Arguments
x: X coordinate (a real number).
y: Y coordinate (a real number).
### Returns
The angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x).
### Examples
```kusto
atan2(y,x)
```
```kusto
atan2(-1, 1) == -0.7853981633974483
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20angle_in_rads%20%3D%20atan2%28-1%2C%201%29%22%7D)
## cos
Returns the cosine function.
### Arguments
x: A real number.
### Returns
The result of cos(x)
### Examples
```kusto
cos(x)
```
```kusto
cos(-1) == 0.5403023058681398
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_function%20%3D%20cos%28-1%29%22%7D)
## degrees
Converts angle value in radians into value in degrees, using formula degrees = (180 / PI ) \* angle\_in\_radians
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------- |
| a | real | Required | Angle in radians. |
### Returns
The corresponding angle in degrees for an angle specified in radians.
### Examples
```kusto
degrees(a)
```
```kusto
degrees(3.14) == 179.9087476710785
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20degree_rads%20%3D%20degrees%283.14%29%22%7D)
## exp
The base-e exponential function of x, which is e raised to the power x: e^x.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | ----------- | ------------------------ | ---------------------- |
| x | real number | Required | Value of the exponent. |
### Returns
* Exponential value of x.
* For natural (base-e) logarithms, see [log](/apl/scalar-functions/mathematical-functions#log\(\)).
* For exponential functions of base-2 and base-10 logarithms, see [exp2](/apl/scalar-functions/mathematical-functions#exp2\(\)), [exp10](/apl/scalar-functions/mathematical-functions#exp10\(\))
### Examples
```kusto
exp(x)
```
```kusto
exp(1) == 2.718281828459045
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20exponential_value%20%3D%20exp%281%29%22%7D)
## exp2
The base-2 exponential function of x, which is 2 raised to the power x: 2^x.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | ----------- | ------------------------ | ---------------------- |
| x | real number | Required | Value of the exponent. |
### Returns
* Exponential value of x.
* For base-2 logarithms, see [log2](/apl/scalar-functions/mathematical-functions#log2\(\)).
* For exponential functions of base-e and base-10 logarithms, see [exp](/apl/scalar-functions/mathematical-functions#exp\(\)), [exp10](/apl/scalar-functions/mathematical-functions#exp10\(\))
### Examples
```kusto
exp2(x)
```
```kusto
['sample-http-logs']
| project base_2_exponential_value = exp2(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_2_exponential_value%20%3D%20exp2%28req_duration_ms%29%22%7D)
## gamma
Computes [gamma function](https://en.wikipedia.org/wiki/Gamma_function)
### Arguments
* x: Parameter for the gamma function
### Returns
* Gamma function of x.
* For computing log-gamma function, see loggamma().
### Examples
```kusto
gamma(x)
```
```kusto
gamma(4) == 6
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20gamma%284%29%22%7D)
## isinf
Returns whether input is an infinite (positive or negative) value.
### Example
```kusto
isinf(x)
```
```kusto
isinf(45.56) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20infinite_value%20%3D%20isinf%2845.56%29%22%7D)
### Arguments
x: A real number.
### Returns
A non-zero value (true) if x is positive or negative infinity, and zero (false) otherwise.
## isnan
Returns whether input is Not-a-Number (NaN) value.
### Arguments
x: A real number.
### Returns
A non-zero value (true) if x is NaN; and zero (false) otherwise.
### Examples
```kusto
isnan(x)
```
```kusto
isnan(45.56) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20nan%20%3D%20isnan%2845.56%29%22%7D)
## log
log() returns the natural logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The natural logarithm is the base-e logarithm: the inverse of the natural exponential function (exp).
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log(x)
```
```kusto
log(1) == 0
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20natural_log%20%3D%20log%281%29%22%7D)
## log10
log10() returns the common (base-10) logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The common logarithm is the base-10 logarithm: the inverse of the exponential function (exp) with base 10.
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log10(x)
```
```kusto
log10(4) == 0.6020599913279624
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base10%20%3D%20log10%284%29%22%7D)
## log2
log2() returns the base-2 logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The logarithm is the base-2 logarithm: the inverse of the exponential function (exp) with base 2.
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log2(x)
```
```kusto
log2(6) == 2.584962500721156
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base2_log%20%3D%20log2%286%29%22%7D)
## loggamma
Computes log of absolute value of the [gamma function](https://en.wikipedia.org/wiki/Gamma_function)
### Arguments
x: Parameter for the gamma function
### Returns
* Returns the natural logarithm of the absolute value of the gamma function of x.
### Examples
```kusto
loggamma(x)
```
```kusto
loggamma(16) == 27.89927138384089
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20loggamma%2816%29%22%7D)
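For positive values, `loggamma(x)` equals `log(gamma(x))`. For example, because `gamma(4) == 6`:
```kusto
loggamma(4) == log(6)  // approximately 1.7917594692280550
```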
## not
Reverses the value of its bool argument.
### Examples
```kusto
not(expr)
```
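For example, negating a literal `false` returns `true`:
```kusto
not(false) == true
```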
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20reverse%20%3D%20not%28false%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------- |
| Expr | bool | Required | A `bool` expression to be reversed. |
### Returns
Returns the reversed logical value of its bool argument.
## pi
Returns the constant value of Pi.
### Returns
* The double value of Pi (3.1415926...)
### Examples
```kusto
pi()
```
```kusto
['sample-http-logs']
| project pie = pi()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20pie%20%3D%20pi%28%29%22%7D)
## pow
Returns a result of raising to power
### Examples
```kusto
pow(base, exponent )
```
```kusto
pow(2, 6) == 64
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20power%20%3D%20pow%282%2C%206%29%22%7D)
### Arguments
* *base:* Base value.
* *exponent:* Exponent value.
### Returns
Returns base raised to the power exponent: base ^ exponent.
## radians
Converts angle value in degrees into value in radians, using formula `radians = (PI / 180 ) * angle_in_degrees`
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------------------------- |
| a | real | Required | Angle in degrees (a real number). |
### Returns
The corresponding angle in radians for an angle specified in degrees.
### Examples
```kusto
radians(a)
```
```kusto
radians(60) == 1.0471975511965976
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20radians%20%3D%20radians%2860%29%22%7D)
## round
Returns the rounded source to the specified precision.
### Arguments
* source: The source scalar the round is calculated on.
* Precision: The number of digits the source will be rounded to (default value is 0).
### Returns
The rounded source to the specified precision.
### Examples
```kusto
round(source [, Precision])
```
```kusto
round(25.563663) == 26
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%2825.563663%29%22%7D)
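With the optional precision argument, the source is rounded to that many decimal digits. A quick sketch:
```kusto
round(25.563663, 2) == 25.56
```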
## sign
Sign of a numeric expression
### Examples
```kusto
sign(x)
```
```kusto
sign(25.563663) == 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20numeric_expression%20%3D%20sign%2825.563663%29%22%7D)
### Arguments
* x: A real number.
### Returns
* The positive (+1), zero (0), or negative (-1) sign of the specified expression.
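The other outcomes follow the same pattern: a negative input returns `-1` and zero returns `0`. For example:
```kusto
sign(-25.563663) == -1
```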
## sin
Returns the sine function.
### Examples
```kusto
sin(x)
```
```kusto
sin(25.563663) == 0.41770848373492825
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sine_function%20%3D%20sin%2825.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------- |
| x | real | Required | A real number. |
### Returns
The result of `sin(x)`.
## sqrt
Returns the square root of the input.
### Examples
```kusto
sqrt(x)
```
```kusto
sqrt(25.563663) == 5.0560521160288685
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20squareroot%20%3D%20sqrt%2825.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------- |
| x | real | Required | A real number >= 0. |
### Returns
* A positive number such that `sqrt(x) * sqrt(x) == x`
* null if the argument is negative or cannot be converted to a real value.
## tan
Returns the tangent of the specified number.
### Examples
```kusto
tan(x)
```
```kusto
tan(25.563663) == 0.4597371460602336
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20tangent_function%20%3D%20tan%2825.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------- |
| x | real | Required | A real number. |
### Returns
* The result of `tan(x)`
## exp10
The base-10 exponential function of x, which is 10 raised to the power x: 10^x.
### Examples
```kusto
exp10(x)
```
```kusto
exp10(25.563663) == 36,615,333,994,520,800,000,000,000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_10_exponential%20%3D%20pow%2810%2C%2025.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------- |
| x | real | Required | A real number, value of the exponent. |
### Returns
* Exponential value of x.
* For base-10 logarithms, see [log10](/apl/scalar-functions/mathematical-functions#log10\(\)).
* For base-e and base-2 exponential functions, see [exp](/apl/scalar-functions/mathematical-functions#exp\(\)) and [exp2](/apl/scalar-functions/mathematical-functions#exp2\(\)).
## isint
Returns whether input is an integer (positive or negative) value.
### Arguments
* Expr: The expression to evaluate, which can be a real number.
### Returns
A non-zero value (true) if expression is a positive or negative integer; and zero (false) otherwise.
### Examples
```kusto
isint(expression)
```
```kusto
isint(resp_body_size_bytes) == true
```
```kusto
isint(25.563663) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20integer_value%20%3D%20isint%2825.563663%29%22%7D)
## isfinite
Returns whether the input is a finite value (neither infinite nor NaN).
### Arguments
* number: A real number.
### Returns
A non-zero value (true) if x is finite; and zero (false) otherwise.
### Examples
```kusto
isfinite(number)
```
```kusto
isfinite(25.563663) == true
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20isfinite_value%20%3D%20isfinite%2825.563663%29%22%7D)
# max_of
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/max-of
This page explains how to use the max_of function in APL.
Use the `max_of` function in APL (Axiom Processing Language) to return the maximum value from a list of scalar expressions. You can use it when you need to compute the maximum of a fixed set of values within each row, rather than across rows like with [aggregation functions](/apl/aggregation-function/statistical-functions). It is especially useful when the values you want to compare come from different columns or are dynamically calculated within the same row.
Use `max_of` when you want to:
* Compare multiple fields in a single event to determine the highest value.
* Perform element-wise maximum calculations in datasets where values are spread across columns.
* Evaluate conditional values and select the highest one on a per-row basis.
* Ensure a minimum value. For example, `max_of(value, 0)` always returns a value of at least 0 (see the sketch after this list).
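As a minimal sketch of the bounding pattern from the last item above, the following clamps a per-row calculation at 0. It assumes the `resp_body_size_bytes` field from the `sample-http-logs` dataset used in the use case example below; the 1,000-byte baseline is only illustrative:
```kusto
['sample-http-logs']
// Negative differences are clamped to 0
| extend above_baseline = max_of(resp_body_size_bytes - 1000, 0)
| project _time, resp_body_size_bytes, above_baseline
```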
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk does not provide a direct function equivalent to `max_of`. However, you can use the `eval` command with nested `if` statements or custom logic to emulate similar functionality on a per-event basis.
```sql Splunk example
eval max_value=if(a > b and a > c, a, if(b > c, b, c))
```
```kusto APL equivalent
extend max_value = max_of(a, b, c)
```
ANSI SQL does not offer a built-in function like `max_of` to compute the maximum across expressions in a single row. Instead, you typically use `GREATEST`, which serves a similar purpose.
```sql SQL example
SELECT GREATEST(a, b, c) AS max_value FROM table
```
```kusto APL equivalent
extend max_value = max_of(a, b, c)
```
## Usage
### Syntax
```kusto
max_of(Expr1, Expr2, ..., ExprN)
```
### Parameters
The function takes a comma-separated list of expressions to compare. All values must be of the same type.
### Returns
The function returns the maximum value among the input expressions. The type of the result matches the type of the input expressions. All expressions must be of the same or compatible types.
## Use case example
You have two data points for the size of HTTP responses: header size and body size. You want to find the maximum of these two values for each event.
**Query**
```kusto
['sample-http-logs']
| extend max_size = max_of(resp_header_size_bytes, resp_body_size_bytes)
| project _time, id, resp_header_size_bytes, resp_body_size_bytes, max_size
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20max_size%20%3D%20max_of\(resp_header_size_bytes%2C%20resp_body_size_bytes\)%20%7C%20project%20_time%2C%20id%2C%20resp_header_size_bytes%2C%20resp_body_size_bytes%2C%20max_size%22%7D)
**Output**
| \_time | id | resp\_header\_size\_bytes | resp\_body\_size\_bytes | max\_size |
| ---------------- | ------------------------------------ | ------------------------- | ----------------------- | --------- |
| May 15, 11:18:53 | 4baad81e-2bca-408f-8a47-092065274037 | 39 B | 2,805 B | 2,805 |
| May 15, 11:18:53 | 05b257c0-8f9d-4b23-8901-c5f288abc30b | 24 B | 988 B | 988 |
| May 15, 11:18:53 | b34d937c-527a-4a05-b88f-5f3dba645de6 | 72 B | 4,399 B | 4,399 |
| May 15, 11:18:53 | 12a623ec-8b0d-4149-a9eb-d3e18ad5b1cd | 34 B | 1,608 B | 1,608 |
| May 15, 11:18:53 | d24f22a7-8748-4d3d-a815-ed93081fd5d1 | 84 B | 4,080 B | 4,080 |
| May 15, 11:18:53 | 3cc68be1-bb9a-4199-bf75-62eef59e3a09 | 76 B | 5,117 B | 5,117 |
| May 15, 11:18:53 | abadabac-a6c0-4ff2-80a1-11143d7c408b | 41 B | 2,845 B | 2,845 |
# min_of
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/min-of
This page explains how to use the min_of function in APL.
Use the `min_of` function in APL to determine the minimum value among two or more scalar values. The function returns the smallest of its arguments, making it especially useful when you want to compare metrics, constants, or calculated expressions in queries.
You typically use `min_of` when you want to:
* Compare numeric or time-based values across multiple fields or constants.
* Apply conditional logic in summarization or filtering steps.
* Normalize or bound values when computing metrics.
Unlike aggregation functions such as `min()`, which work across rows in a group, `min_of` operates on values within a single row or context.
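For example, the following minimal sketch caps a per-row value at an upper bound. It assumes the `resp_body_size_bytes` field from the `sample-http-logs` dataset used in the use case example below; the 1,000-byte cap is only illustrative:
```kusto
['sample-http-logs']
// Values above 1,000 bytes are capped at 1,000
| extend capped_size = min_of(resp_body_size_bytes, 1000)
| project _time, resp_body_size_bytes, capped_size
```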
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you often use the `eval` command with the `min` function to compare multiple values. APL’s `min_of` is similar, but used as a scalar function directly in expressions.
```sql Splunk example
eval smallest=min(field1, field2)
```
```kusto APL equivalent
extend smallest = min_of(field1, field2)
```
In SQL, you typically use `LEAST()` to find the smallest of multiple values. APL’s `min_of` is the equivalent of `LEAST()`.
```sql SQL example
SELECT LEAST(col1, col2, col3) AS min_val FROM table;
```
```kusto APL equivalent
extend min_val = min_of(col1, col2, col3)
```
## Usage
### Syntax
```kusto
min_of(Expr1, Expr2, ..., ExprN)
```
### Parameters
The function takes a comma-separated list of expressions to compare. All values must be of the same type.
### Returns
The function returns the smallest of the provided values. The type of the return value matches the type of the input arguments.
## Use case example
You have two data points for the size of HTTP responses: header size and body size. You want to find the minimum of these two values for each event.
**Query**
```kusto
['sample-http-logs']
| extend min_size = min_of(resp_header_size_bytes, resp_body_size_bytes)
| project _time, id, resp_header_size_bytes, resp_body_size_bytes, min_size
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20min_size%20%3D%20min_of\(resp_header_size_bytes%2C%20resp_body_size_bytes\)%20%7C%20project%20_time%2C%20id%2C%20resp_header_size_bytes%2C%20resp_body_size_bytes%2C%20min_size%22%7D)
**Output**
| \_time | id | resp\_header\_size\_bytes | resp\_body\_size\_bytes | min\_size |
| ---------------- | ------------------------------------ | ------------------------- | ----------------------- | --------- |
| May 15, 11:31:05 | 739b0433-39aa-4891-a5e0-3bde3cb40386 | 41 B | 3,410 B | 41 |
| May 15, 11:31:05 | 3016c439-ea30-454b-858b-06f0a66f44b9 | 53 B | 5,333 B | 53 |
| May 15, 11:31:05 | b26b0a5c-bc73-4693-86ad-be9e0cc767d6 | 60 B | 2,936 B | 60 |
| May 15, 11:31:05 | 8d939423-26ae-43f7-9927-13499e7cc7d3 | 60 B | 2,896 B | 60 |
| May 15, 11:31:05 | 10c37b1a-5639-4c99-a232-c8295e3ce664 | 63 B | 4,871 B | 63 |
| May 15, 11:31:05 | 4aa1821a-6906-4ede-9417-3097efb76b89 | 78 B | 1,729 B | 78 |
| May 15, 11:31:05 | 6325de66-0033-4133-b2f3-99fa70f8c9c0 | 96 B | 4,232 B | 96 |
# set_difference
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/set-difference
This page explains how to use the set_difference function in APL.
Use the `set_difference` function in APL to compute the distinct elements in one array that are not present in another. This function helps you filter out shared values between two arrays, producing a new array that includes only the unique values from the first input array.
Use `set_difference` when you need to identify new or missing elements, such as:
* Users who visited today but not yesterday.
* Error codes that occurred in one region but not another.
* Service calls that appear in staging but not production.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar logic often uses the `setdiff` function from the `mv` (multivalue) function family. APL’s `set_difference` behaves similarly, returning values that are only in the first multivalue field.
```sql Splunk example
| eval a=mvappend("a", "b", "c"), b=mvappend("b", "c")
| eval diff=mvfilter(NOT match(a, b))
```
```kusto APL equivalent
print a=dynamic(['a', 'b', 'c']), b=dynamic(['b', 'c'])
| extend diff=set_difference(a, b)
```
ANSI SQL doesn’t support array operations directly, but you can emulate set difference with `EXCEPT` when working with rows, not arrays. APL provides native array functions like `set_difference` for this purpose.
```sql SQL example
SELECT value FROM array1
EXCEPT
SELECT value FROM array2;
```
```kusto APL equivalent
print a=dynamic(['a', 'b', 'c']), b=dynamic(['b', 'c'])
| extend diff=set_difference(a, b)
```
## Usage
### Syntax
```kusto
set_difference(Array1, Array2)
```
### Parameters
| Name | Type | Description |
| -------- | ----- | ---------------------------------------------------- |
| `Array1` | array | The array to subtract from. |
| `Array2` | array | The array containing values to remove from `Array1`. |
### Returns
An array that includes all values from `Array1` that are not present in `Array2`. The result does not include duplicates.
## Example
Use `set_difference` to return the difference between two arrays.
**Query**
```kusto
['sample-http-logs']
| extend difference = set_difference(dynamic([1, 2, 3]), dynamic([2, 3, 4, 5]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20difference%20%3D%20set_difference\(dynamic\(%5B1%2C%202%2C%203%5D\)%2C%20dynamic\(%5B2%2C%203%2C%204%2C%205%5D\)\)%22%7D)
**Output**
| \_time | difference |
| ---------------- | ---------- |
| May 22, 11:42:52 | \[5, 1, 4] |
## List of related functions
* [set\_intersect](/apl/scalar-functions/mathematical-functions/set-intersect): Returns the elements that appear in both arrays. Use it to find shared values.
* [set\_has\_element](/apl/scalar-functions/mathematical-functions/set-has-element): Tests whether a set contains a specific value. Prefer it when you only need a Boolean result.
* [set\_union](/apl/scalar-functions/mathematical-functions/set-union): Returns the union of two or more sets. Use it when you need any element that appears in at least one set instead of every set.
# set_has_element
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/set-has-element
This page explains how to use the set_has_element function in APL.
`set_has_element` returns true when a dynamic array contains a specific element and false when it does not. Use it to perform fast membership checks on values that you have already aggregated into a set with functions such as `make_set`.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you usually call `in` for scalar membership or use multivalue functions such as `mvfind` for arrays. `set_has_element` plays the role of those helpers after you build a multivalue field with `stats values`.
```sql Splunk example
index=web
| stats values(uri) AS uris BY id
| where "/checkout" in uris
```
```kusto APL equivalent
['sample-http-logs']
| summarize uris=make_set(uri) by id
| where set_has_element(uris, '/checkout')
```
Standard SQL has no built-in array type, but dialects that implement arrays (for example PostgreSQL) use the `ANY` or `member of` operators. `set_has_element` is the APL counterpart and is applied after you build an array with `ARRAY_AGG` equivalents such as `make_set`.
```sql SQL example
SELECT id
FROM sample_http_logs
GROUP BY id
HAVING 'US' = ANY(ARRAY_AGG(country));
```
```kusto APL equivalent
['sample-http-logs']
| summarize countries=make_set(['geo.country']) by id
| where set_has_element(countries, 'US')
```
## Usage
### Syntax
```kusto
set_has_element(set, value)
```
### Parameters
| Name | Type | Description |
| ------- | ------- | ------------------------------------------------------------------------------------------ |
| `set` | dynamic | The array to search. |
| `value` | scalar | The element to look for. Accepts `long`, `real`, `datetime`, `timespan`, `string`, `bool`. |
### Returns
A `bool` that is true when `value` exists in `set` and false otherwise.
## Example
Use `set_has_element` to determine if a set contains a specific value.
**Query**
```kusto
['sample-http-logs']
| extend hasElement = set_has_element(dynamic([1, 2, 3]), 2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20hasElement%20%3D%20set_has_element\(dynamic\(%5B1%2C%202%2C%203%5D\)%2C%202\)%22%7D)
**Output**
| \_time | hasElement |
| ---------------- | ---------- |
| May 22, 11:42:52 | true |
## List of related functions
* [set\_difference](/apl/scalar-functions/mathematical-functions/set-difference): Returns elements in the first array that are not in the second. Use it to find exclusions.
* [set\_union](/apl/scalar-functions/mathematical-functions/set-union): Returns the union of two or more sets. Use it when you need any element that appears in at least one set instead of every set.
# set_intersect
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/set-intersect
This page explains how to use the set_intersect function in APL.
Use the `set_intersect` function in APL to find common elements between two dynamic arrays. This function returns a new array that contains only the elements that appear in both input arrays, preserving the order from the first array and eliminating duplicates.
You can use `set_intersect` when you need to compare sets of values—for example, to find users who accessed two different URLs, or to identify traces that passed through multiple services. This function is especially useful for working with dynamic fields generated during aggregations or transformations.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL does not have a direct equivalent to `set_intersect`, but you can achieve similar functionality using `mvfilter` with conditions based on a lookup or manually defined set. APL simplifies this process by offering a built-in array intersection function.
```sql Splunk example
| eval A=split("apple,banana,cherry", ",")
| eval B=split("banana,cherry,dragonfruit", ",")
| eval C=mvfilter(match(A, B))
```
```kusto APL equivalent
print A=dynamic(['apple', 'banana', 'cherry']), B=dynamic(['banana', 'cherry', 'dragonfruit'])
| extend C = set_intersect(A, B)
```
ANSI SQL does not natively support array data types or set operations over arrays. To perform an intersection, you usually need to normalize the arrays using `UNNEST` or `JOIN`, which can be verbose. In APL, `set_intersect` performs this in a single step.
```sql SQL example
-- Using PostgreSQL syntax
SELECT ARRAY(
SELECT unnest(array['apple','banana','cherry'])
INTERSECT
SELECT unnest(array['banana','cherry','dragonfruit'])
);
```
```kusto APL equivalent
print A=dynamic(['apple', 'banana', 'cherry']), B=dynamic(['banana', 'cherry', 'dragonfruit'])
| extend C = set_intersect(A, B)
```
## Usage
### Syntax
```kusto
set_intersect(Array1, Array2)
```
### Parameters
| Name | Type | Description |
| -------- | ------- | ---------------------------- |
| `Array1` | dynamic | The first array to compare. |
| `Array2` | dynamic | The second array to compare. |
### Returns
A dynamic array containing elements that exist in both `Array1` and `Array2`, in the order they appear in `Array1`, with duplicates removed.
## Example
Use `set_intersect` to return the intersection of two arrays.
**Query**
```kusto
['sample-http-logs']
| extend intersect = set_intersect(dynamic([1, 2, 3]), dynamic([2, 3, 4, 5]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20intersect%20%3D%20set_intersect\(dynamic\(%5B1%2C%202%2C%203%5D\)%2C%20dynamic\(%5B2%2C%203%2C%204%2C%205%5D\)\)%22%7D)
**Output**
| \_time | intersect |
| ---------------- | -------- |
| May 22, 11:42:52 | \[2, 3] |
## List of related functions
* [set\_difference](/apl/scalar-functions/mathematical-functions/set-difference): Returns elements in the first array that are not in the second. Use it to find exclusions.
* [set\_has\_element](/apl/scalar-functions/mathematical-functions/set-has-element): Tests whether a set contains a specific value. Prefer it when you only need a Boolean result.
* [set\_union](/apl/scalar-functions/mathematical-functions/set-union): Returns the union of two or more sets. Use it when you need any element that appears in at least one set instead of every set.
# set_union
Source: https://axiom.co/docs/apl/scalar-functions/mathematical-functions/set-union
This page explains how to use the set_union function in APL.
Use the `set_union` function in APL to combine two dynamic arrays into one, returning a new array that includes all distinct elements from both. The order of elements in the result is not guaranteed and may differ from the original input arrays.
You can use `set_union` when you need to merge two arrays and eliminate duplicates. It is especially useful in scenarios where you need to perform set-based logic, such as comparing user activity across multiple sources, correlating IPs from different datasets, or combining traces or log attributes from different events.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
APL’s `set_union` works similarly to using `mvappend` followed by `mvdedup` in SPL. While SPL stores multivalue fields and uses field-based manipulation, APL focuses on dynamic arrays. You need to explicitly apply set logic in APL using functions like `set_union`.
```sql Splunk example
| eval result=mvappend(array1, array2)
| eval result=mvdedup(result)
```
```kusto APL equivalent
extend result = set_union(array1, array2)
```
Standard SQL doesn’t support arrays as first-class types or set functions like `set_union`. However, conceptually, `set_union` behaves like applying `UNION` between two subqueries that return one column each, followed by a `DISTINCT`.
```sql SQL example
SELECT value FROM (
SELECT value FROM table1
UNION
SELECT value FROM table2
)
```
```kusto APL equivalent
extend result = set_union(array1, array2)
```
## Usage
### Syntax
```kusto
set_union(Array1, Array2)
```
### Parameters
| Name | Type | Description |
| ------ | ------- | -------------------------- |
| Array1 | dynamic | The first array to merge. |
| Array2 | dynamic | The second array to merge. |
### Returns
A dynamic array that contains the distinct elements of both input arrays.
## Example
Use `set_union` to return the union of two arrays.
**Query**
```kusto
['sample-http-logs']
| extend together = set_union(dynamic([1, 2, 3]), dynamic([2, 3, 4, 5]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20together%20%3D%20set_union\(dynamic\(%5B1%2C%202%2C%203%5D\)%2C%20dynamic\(%5B2%2C%203%2C%204%2C%205%5D\)\)%22%7D)
**Output**
| \_time | together |
| ---------------- | ----------------- |
| May 22, 11:42:52 | \[1, 2, 3, 4, 5] |
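Beyond literal arrays, you can apply `set_union` to arrays built during aggregation. The following is a minimal sketch, assuming the `method`, `status`, and `geo.country` fields from the `sample-http-logs` dataset:
```kusto
['sample-http-logs']
// Build one set of methods and one set of statuses per country, then merge them
| summarize methods = make_set(method), statuses = make_set(status) by ['geo.country']
| extend seen_values = set_union(methods, statuses)
```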
## List of related functions
* [set\_difference](/apl/scalar-functions/mathematical-functions/set-difference): Returns elements in the first array that are not in the second. Use it to find exclusions.
* [set\_has\_element](/apl/scalar-functions/mathematical-functions/set-has-element): Tests whether a set contains a specific value. Prefer it when you only need a Boolean result.
* [set\_intersect](/apl/scalar-functions/mathematical-functions/set-intersect): Returns the elements that appear in both arrays. Use it to find shared values.
# column_ifexists
Source: https://axiom.co/docs/apl/scalar-functions/metadata-functions/column-ifexists
This page explains how to use the column_ifexists function in APL.
Use `column_ifexists()` to make your queries resilient to schema changes. The function checks if a field with a given name exists in the dataset. If it does, the function returns it. If not, it returns a fallback field or expression that you provide.
This is especially useful when working with datasets that evolve over time or come from multiple sources with different schemas. Instead of failing when a field is missing, your query continues running by using a default. Use this function to safely handle queries where the presence of a field isn’t guaranteed.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, field selection is strict—missing fields typically return `null` in results, but conditional logic for fallback fields requires using `eval` or `coalesce`. In APL, `column_ifexists()` directly substitutes the fallback field at query-time based on schema.
```sql Splunk example
... | eval field=if(isnull(Capital), State, Capital)
```
```kusto APL equivalent
StormEvents | project column_ifexists('Capital', State)
```
In SQL, you need to check for the existence of a field using system views or error handling. `column_ifexists()` in APL simplifies this by allowing fallback behavior inline without needing procedural code.
```sql SQL example
SELECT CASE
WHEN EXISTS(SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'StormEvents' AND COLUMN_NAME = 'Capital')
THEN Capital ELSE State END AS Result
FROM StormEvents
```
```kusto APL equivalent
StormEvents | project column_ifexists('Capital', State)
```
## Usage
### Syntax
```kusto
column_ifexists(FieldName, DefaultValue)
```
### Parameters
* `FieldName`: The name of the field to return as a string.
* `DefaultValue`: The fallback value to return if `FieldName` doesn’t exist. This can be another field or a literal.
### Returns
Returns the field specified by `FieldName` if it exists in the table schema. Otherwise, returns the result of `DefaultValue`.
## Use case examples
You want to examine HTTP logs, and your schema might have a `geo.region` field in some environments and not in others. You fall back to `geo.country` when `geo.region` is missing.
**Query**
```kusto
['sample-http-logs']
| project _time, location = column_ifexists('geo.region', ['geo.country'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20location%20%3D%20column_ifexists\('geo.region'%2C%20%5B'geo.country'%5D\)%22%7D)
**Output**
| \_time | location |
| -------------------- | -------------- |
| 2025-04-28T12:04:10Z | United States |
| 2025-04-28T12:04:12Z | Canada |
| 2025-04-28T12:04:15Z | United Kingdom |
The query returns `geo.region` if it exists; otherwise, it falls back to `geo.country`.
You analyze OpenTelemetry traces and you’re not sure if your data contains the `status_code` and `status` fields. You fall back to `100` when they’re missing.
**Query**
```kusto
['otel-demo-traces']
| extend status_code_field = column_ifexists('status_code', '100')
| extend status_field = column_ifexists('status', 100)
| project _time, trace_id, span_id, status_code_field, status_field
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20status_code_field%20%3D%20column_ifexists\('status_code'%2C%20'100'\)%20%7C%20extend%20status_field%20%3D%20column_ifexists\('status'%2C%20100\)%20%7C%20project%20_time%2C%20trace_id%2C%20span_id%2C%20status_code_field%2C%20status_field%22%7D)
**Output**
| \_time | trace\_id | span\_id | status\_code\_field | status\_field |
| -------------------- | --------- | -------- | ------------------- | ------------- |
| 2025-04-28T10:30:12Z | abc123 | span567 | nil | 100 |
| 2025-04-28T10:30:15Z | def456 | span890 | 200 | 100 |
The query returns the `status_code` field if it exists. Otherwise, it falls back to `100`.
You inspect logs for suspicious activity. In some datasets, a `threat_level` field exists, but not in all. You use the `status` field as a fallback.
**Query**
```kusto
['sample-http-logs']
| project _time, id, threat = column_ifexists('threat_level', status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20id%2C%20threat%20%3D%20column_ifexists\('threat_level'%2C%20status\)%22%7D)
**Output**
| \_time | id | threat |
| -------------------- | ---- | ------ |
| 2025-04-28T13:22:11Z | u123 | 200 |
| 2025-04-28T13:22:13Z | u456 | 403 |
The function avoids breaking the query if `threat_level` doesn’t exist by defaulting to `status`.
## List of related functions
* [coalesce](/apl/scalar-functions/string-functions#coalesce): Returns the first non-null value from a list of expressions. Use when you want to handle null values, not missing fields.
* [iff](/apl/scalar-functions/conditional-function#iff): Performs conditional logic based on a boolean expression. Use when you want explicit control over evaluation.
* [isnull](/apl/scalar-functions/string-functions#isnull): Checks if a value is null. Useful when combined with other functions for fine-grained control.
* [case](/apl/scalar-functions/conditional-function#case): Allows multiple conditional branches. Use when fallback logic depends on multiple conditions.
* [project](/apl/tabular-operators/project-operator): Selects and transforms fields. Use with `column_ifexists()` to build resilient field projections.
# ingestion_time
Source: https://axiom.co/docs/apl/scalar-functions/metadata-functions/ingestion_time
This page explains how to use the ingestion_time function in APL.
Use the `ingestion_time` function to retrieve the timestamp of when each record was ingested into Axiom. This function helps you distinguish between the original event time (as captured in the `_time` field) and the time the data was actually received by Axiom.
You can use `ingestion_time` to:
* Detect delays or lags in data ingestion.
* Filter events based on their ingestion window.
* Audit data pipelines by comparing event time with ingestion time.
This function is especially useful when working with streaming or event-based data sources where ingestion delays are common and might affect alerting, dashboarding, or correlation accuracy.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk provides the `_indextime` field, which represents when an event was indexed. In APL, the equivalent concept is accessed using the `ingestion_time` function, which must be called explicitly.
```sql Splunk example
... | eval ingest_time=_indextime
```
```kusto APL equivalent
...
| extend ingest_time = ingestion_time()
```
ANSI SQL does not have a standard equivalent to `ingestion_time`, since SQL databases typically do not distinguish ingestion time from event time. APL provides `ingestion_time` for observability-specific workflows where the arrival time of data is important.
```sql SQL example
SELECT event_time, CURRENT_TIMESTAMP AS ingest_time FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend ingest_time = ingestion_time()
```
## Usage
### Syntax
```kusto
ingestion_time()
```
### Parameters
This function does not take any parameters.
### Returns
A `datetime` value that represents when each record was ingested into Axiom.
## Use case examples
Use `ingestion_time` to identify delays between when an HTTP request occurred and when it was ingested into Axiom.
**Query**
```kusto
['sample-http-logs']
| extend ingest_time = ingestion_time()
| extend delay = datetime_diff('second', ingest_time, _time)
| where delay > 1
| project _time, ingest_time, delay, method, uri, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20ingest_time%20%3D%20ingestion_time\(\)%20%7C%20extend%20delay%20%3D%20datetime_diff\('second'%2C%20ingest_time%2C%20_time\)%20%7C%20where%20delay%20%3E%201%20%7C%20project%20_time%2C%20ingest_time%2C%20delay%2C%20method%2C%20uri%2C%20status%22%7D)
**Output**
| \_time | ingest\_time | delay | method | uri | status |
| -------------------- | -------------------- | ----- | ------ | ------------- | ------ |
| 2025-06-10T12:00:00Z | 2025-06-10T12:01:30Z | 90 | GET | /api/products | 200 |
| 2025-06-10T12:05:00Z | 2025-06-10T12:06:10Z | 70 | POST | /api/cart/add | 201 |
This query calculates the difference between the ingestion time and the event time, highlighting entries with more than one second of delay.
Use `ingestion_time` to monitor ingestion lags for spans generated by services, helping identify pipeline slowdowns or delivery issues.
**Query**
```kusto
['otel-demo-traces']
| extend ingest_time = ingestion_time()
| extend delay = datetime_diff('second', ingest_time, _time)
| summarize avg_delay = avg(delay) by ['service.name'], kind
| order by avg_delay desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20ingest_time%20%3D%20ingestion_time\(\)%20%7C%20extend%20delay%20%3D%20datetime_diff\('second'%2C%20ingest_time%2C%20_time\)%20%7C%20summarize%20avg_delay%20%3D%20avg\(delay\)%20by%20%5B'service.name'%5D%2C%20kind%20%7C%20order%20by%20avg_delay%20desc%22%7D)
**Output**
| service.name | kind | avg\_delay |
| --------------- | -------- | ---------- |
| checkoutservice | server | 45 |
| cartservice | client | 30 |
| frontend | internal | 12 |
This query calculates the average ingestion delay per service and kind to identify services affected by delayed ingestion.
Use `ingestion_time` to identify recently ingested suspicious activity, even if the event occurred earlier.
**Query**
```kusto
['sample-http-logs']
| extend ingest_time = ingestion_time()
| where status == '401' and ingest_time > ago(1h)
| project _time, ingest_time, id, method, uri, ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20ingest_time%20%3D%20ingestion_time\(\)%20%7C%20where%20status%20%3D%3D%20'401'%20and%20ingest_time%20%3E%20ago\(1h\)%20%7C%20project%20_time%2C%20ingest_time%2C%20id%2C%20method%2C%20uri%2C%20%5B'geo.country'%5D%22%7D)
**Output**
| \_time | ingest\_time | id | method | uri | geo.country |
| -------------------- | -------------------- | ------- | ------ | ------------------ | ----------- |
| 2025-06-11T09:15:00Z | 2025-06-11T10:45:00Z | user123 | GET | /admin/login | US |
| 2025-06-11T08:50:00Z | 2025-06-11T10:30:00Z | user456 | POST | /api/session/start | DE |
This query surfaces failed login attempts that were ingested in the last hour, regardless of when the request actually occurred.
# Pair functions
Source: https://axiom.co/docs/apl/scalar-functions/pair-functions
Learn how to use and combine different pair functions in APL
## Pair functions
| **Function Name** | **Description** |
| ---------------------------- | ------------------------------------ |
| [pair()](#pair) | Creates a pair from a key and value. |
| [parse\_pair()](#parse-pair) | Parses a string to form a pair. |
Each argument is marked as `required` or `optional`:
* If it’s marked `required`, you must pass the argument for the function to work.
* If it’s marked `optional`, the function works without passing the argument.
## pair()
Creates a pair from a key and value.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | ----------------------------------------------- |
| Key | string | Required | String for the key in the pair |
| Value | string | Required | String for the value in the pair |
| Separator | string | Optional (Default: ":") | Separator between the key and value in the pair |
### Returns
Returns a pair with the key **Key** and the value **Value**, joined by the separator **Separator**.
### Examples
```kusto
pair("key", "value", ".")
```
```kusto
['logs']
| where tags contains pair("host", "mymachine")
```
## parse\_pair()
Parses a string to form a pair.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | ----------------------------------------------- |
| Pair | string | Required | String that has a pair of key value to pull out |
| Separator | string | Optional (Default: ":") | Separator between the key and value in the pair |
### Returns
Returns a pair with the key and value extracted from **Pair** using the separator **Separator**. If no separator is found, it returns a pair with the value of **Pair** and an empty key.
### Examples
```kusto
parse_pair("key.value", ".")
```
```kusto
['logs']
| where parse_pair(tags[0]).key == "host"
```
# Rounding functions
Source: https://axiom.co/docs/apl/scalar-functions/rounding-functions
Learn how to use and combine different rounding functions in APL
## Rounding functions
| **Function Name** | **Description** |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------- |
| [ceiling()](#ceiling) | Calculates the smallest integer greater than, or equal to, the specified numeric expression. |
| [bin()](#bin) | Rounds values down to an integer multiple of a given bin size. |
| [bin\_auto()](#bin-auto) | Rounds values down to a fixed-size "bin", with control over the bin size and starting point provided by a query property. |
| [floor()](#floor) | Calculates the largest integer less than, or equal to, the specified numeric expression. |
## ceiling()
Calculates the smallest integer greater than, or equal to, the specified numeric expression.
### Arguments
* x: A real number.
### Returns
* The smallest integer greater than, or equal to, the specified numeric expression.
### Examples
```kusto
ceiling(x)
```
```kusto
ceiling(25.43) == 26
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20smallest_integer%20%3D%20ceiling%2825.43%29%22%7D)
## bin()
Rounds values down to an integer multiple of a given bin size.
The `bin()` function is used with the [summarize operator](/apl/tabular-operators/summarize-operator). If you have a scattered set of values, they are grouped into a smaller set of specific values.
### Arguments
* value: A date, number, or [timespan](/apl/data-types/scalar-data-types#timespan-literals)
* roundTo: The "bin size", a number or timespan that divides value.
### Returns
The nearest multiple of roundTo below value.
### Examples
```kusto
bin(value,roundTo)
```
```kusto
bin(25.73, 4) == 24
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20round_value%20%3D%20bin%2825.73%2C%204%29%22%7D)
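As noted above, `bin()` is typically combined with `summarize` to group events into buckets. A minimal sketch that groups events from the `sample-http-logs` dataset into one-hour bins:
```kusto
['sample-http-logs']
| summarize count() by bin(_time, 1h)
```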
## bin\_auto()
Rounds values down to a fixed-size "bin". The `bin_auto()` function can only be used in the `by` clause of the [summarize operator](/apl/tabular-operators/summarize-operator) with the `_time` field.
### Arguments
* Expression: A scalar expression of a numeric type indicating the value to round.
### Returns
The nearest multiple of `query_bin_auto_at` below Expression, shifted so that `query_bin_auto_at` will be translated into itself.
### Example
```kusto
summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%7D)
## floor()
Calculates the largest integer less than, or equal to, the specified numeric expression.
### Arguments
* number: A real number.
### Returns
* The largest integer less than, or equal to, the specified numeric expression.
### Examples
```kusto
floor(number)
```
```kusto
floor(25.73) == 25
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20largest_integer_number%20%3D%20floor%2825.73%29%22%7D)
# SQL functions
Source: https://axiom.co/docs/apl/scalar-functions/sql-functions
Learn how to use SQL functions in APL
## SQL functions
| **Function Name** | **Description** |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| [parse\_sql()](#parse-sql) | Interprets and analyzes SQL queries, making it easier to extract and understand SQL statements within datasets. |
| [format\_sql()](#format-sql) | Converts the data model produced by `parse_sql()` back into a SQL statement for validation or formatting purposes. |
## parse\_sql()
Analyzes an SQL statement and constructs a data model, enabling insights into the SQL content within a dataset.
### Limitations
* It is mainly used for simple SQL queries. SQL statements like stored procedures, window functions, common table expressions (CTEs), recursive queries, advanced statistical functions, and special joins are not supported.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------------- | -------- | ------------------------ | ----------------------------- |
| sql\_statement | string | Required | The SQL statement to analyze. |
### Returns
A dictionary representing the structured data model of the provided SQL statement. This model includes maps or slices that detail the various components of the SQL statement, such as tables, fields, conditions, etc.
### Examples
### Basic data retrieval
The SQL statement **`SELECT * FROM db`** retrieves all columns and rows from the table named **`db`**.
```kusto
hn
| project parse_sql("select * from db")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\('select%20*%20from%20db'\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### WHERE Clause
This example parses a **`SELECT`** statement with a **`WHERE`** clause, filtering **`customers`** by **`subscription_status`**.
```kusto
hn
| project parse_sql("SELECT id, email FROM customers WHERE subscription_status = 'active'")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20id%2C%20email%20FROM%20customers%20WHERE%20subscription_status%20%3D%20'active'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### JOIN operation
This example shows parsing an SQL statement that performs a **`JOIN`** operation between **`orders`** and **`customers`** tables to match orders with customer names.
```kusto
hn
| project parse_sql("SELECT orders.id, customers.name FROM orders JOIN customers ON orders.customer_id = customers.id")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20orders.id%2C%20customers.name%20FROM%20orders%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### GROUP BY Clause
In this example, the **`parse_sql()`** function is used to parse an SQL statement that aggregates order counts by **`product_id`** using the **`GROUP BY`** clause.
```kusto
hn
| project parse_sql("SELECT product_id, COUNT(*) as order_count FROM orders GROUP BY product_id")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20order_count%20FROM%20orders%20GROUP%20BY%20product_id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Nested Queries
This example demonstrates parsing a nested SQL query, where the inner query selects **`user_id`** from **`orders`** based on **`purchase_date`**, and the outer query selects names from **`users`** based on those IDs.
```kusto
hn
| project parse_sql("SELECT name FROM users WHERE id IN (SELECT user_id FROM orders WHERE purchase_date > '2022-01-01')")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%20FROM%20users%20WHERE%20id%20IN%20\(SELECT%20user_id%20FROM%20orders%20WHERE%20purchase_date%20%3E%20'2022-01-01'\)%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### ORDER BY Clause
Here, the example shows how to parse an SQL statement that orders **`users`** by **`registration_date`** in descending order.
```kusto
hn
| project parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Sorting users by registration data
This example demonstrates parsing an SQL statement that retrieves the **`name`** and **`registration_date`** of users from the **`users`** table, and orders the results by **`registration_date`** in descending order, showing how to sort data based on a specific column.
```kusto
hn | extend parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Querying with index hints to use a specific index
This query hints at MySQL to use a specific index named **`index_name`** when executing the SELECT statement on the **`users`** table.
```kusto
hn
| project parse_sql("SELECT * FROM users USE INDEX (index_name) WHERE user_id = 101")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20*%20FROM%20users%20USE%20INDEX%20\(index_name\)%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Inserting data with ON DUPLICATE KEY UPDATE
This example showcases MySQL’s ability to handle duplicate key entries elegantly by updating the existing record if the insert operation encounters a duplicate key.
```kusto
hn
| project parse_sql("INSERT INTO settings (user_id, setting, value) VALUES (1, 'theme', 'dark') ON DUPLICATE KEY UPDATE value='dark'")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22INSERT%20INTO%20settings%20\(user_id%2C%20setting%2C%20value\)%20VALUES%20\(1%2C%20'theme'%2C%20'dark'\)%20ON%20DUPLICATE%20KEY%20UPDATE%20value%3D'dark'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Using JSON functions
This query demonstrates MySQL’s support for JSON data types and functions, extracting the age from a JSON object stored in the **`user_info`** column.
```kusto
hn
| project parse_sql("SELECT JSON_EXTRACT(user_info, '$.age') AS age FROM users WHERE user_id = 101")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20JSON_EXTRACT\(user_info%2C%20%27%24.age%27\)%20AS%20age%20FROM%20users%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
## format\_sql()
Transforms the data model output by `parse_sql()` back into a SQL statement. Useful for testing and ensuring that the parsing accurately retains the original structure and intent of the SQL statement.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ------------------ | ---------- | ------------------------ | -------------------------------------------------- |
| parsed\_sql\_model | dictionary | Required | The structured data model output by `parse_sql()`. |
### Returns
A string that represents the SQL statement reconstructed from the provided data model.
### Examples
### Reformatting a basic SELECT Query
After parsing a SQL statement, you can reformat it back to its original or a standard SQL format.
```kusto
hn
| extend parsed = parse_sql("SELECT * FROM db")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20*%20FROM%20db%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Formatting SQL Queries
This example first parses a SQL statement to analyze its structure and then formats the parsed structure back into a SQL string using `format_sql`.
```kusto
hn
| extend parsed = parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
| project format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Formatting a simple SELECT Statement
This example demonstrates parsing a straightforward `SELECT` statement that retrieves user IDs and usernames from an `user_accounts` table where the `active` status is `1`. After parsing, it uses `format_sql` to convert the parsed data back into a SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT user_id, username FROM user_accounts WHERE active = 1")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20user_id%2C%20username%20FROM%20user_accounts%20WHERE%20active%20%3D%201%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Reformatting a complex query with JOINS
In this example, a more complex SQL statement involving an `INNER JOIN` between `orders` and `customers` tables is parsed. The query selects orders and customer names for orders placed after January 1, 2023. `format_sql` is then used to reformat the parsed structure into a SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT orders.order_id, customers.name FROM orders INNER JOIN customers ON orders.customer_id = customers.id WHERE orders.order_date > '2023-01-01'")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20orders.order_id%2C%20customers.name%20FROM%20orders%20INNER%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%20WHERE%20orders.order_date%20%3E%20'2023-01-01'%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Using format\_sql with aggregation functions
This example focuses on parsing an SQL statement that performs aggregation. It selects product IDs and counts of total sales from a `sales` table, grouping by `product_id` and having a condition on the count. After parsing, `format_sql` reformats the output into an SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT product_id, COUNT(*) as total_sales FROM sales GROUP BY product_id HAVING COUNT(*) > 100")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20total_sales%20FROM%20sales%20GROUP%20BY%20product_id%20HAVING%20COUNT\(*\)%20%3E%20100%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
# String functions
Source: https://axiom.co/docs/apl/scalar-functions/string-functions
Learn how to use and combine different string functions in APL
## String functions
| **Function Name** | **Description** |
| ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
| [base64\_encode\_tostring()](#base64-encode-tostring) | Encodes a string as base64 string. |
| [base64\_decode\_tostring()](#base64-decode-tostring) | Decodes a base64 string to a UTF-8 string. |
| [countof()](#countof) | Counts occurrences of a substring in a string. |
| [countof\_regex()](#countof-regex) | Counts occurrences of a regular expression in a string. Regex matches don’t overlap. |
| [coalesce()](#coalesce) | Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression. |
| [extract()](#extract) | Get a match for a regular expression from a text string. |
| [extract\_all()](#extract-all) | Get all matches for a regular expression from a text string. |
| [format\_bytes()](#format-bytes) | Formats a number of bytes as a string including byte units. |
| [format\_url()](#format-url) | Formats an input string into a valid URL by adding the necessary protocol if it’s missing and escaping illegal URL characters. |
| [indexof()](#indexof) | Reports the zero-based index of the first occurrence of a specified string within the input string. |
| [isempty()](#isempty) | Returns true if the argument is an empty string or is null. |
| [isnotempty()](#isnotempty) | Returns true if the argument isn’t an empty string or a null. |
| [isnotnull()](#isnotnull) | Returns true if the argument is not null. |
| [isnull()](#isnull) | Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value. |
| [parse\_bytes()](#parse-bytes) | Parses a string including byte size units and returns the number of bytes |
| [parse\_json()](#parse-json) | Interprets a string as a JSON value and returns the value as dynamic. |
| [parse\_url()](#parse-url) | Parses an absolute URL string and returns a dynamic object that contains all parts of the URL. |
| [parse\_urlquery()](#parse-urlquery) | Parses a URL query string and returns a dynamic object that contains the query parameters. |
| [replace()](#replace) | Replaces all regex matches with another string. |
| [replace\_regex()](#replace-regex) | Replaces all regex matches with another string. |
| [replace\_string()](#replace-string) | Replaces all string matches with another string. |
| [reverse()](#reverse) | Reverses the input string. |
| [split()](#split) | Splits a given string according to a given delimiter and returns a string array with the contained substrings. |
| [strcat()](#strcat) | Concatenates between 1 and 64 arguments. |
| [strcat\_delim()](#strcat-delim) | Concatenates between 2 and 64 arguments, with delimiter, provided as first argument. |
| [strcmp()](#strcmp) | Compares two strings. |
| [strlen()](#strlen) | Returns the length, in characters, of the input string. |
| [strrep()](#strrep) | Repeats given string provided number of times (default = 1). |
| [substring()](#substring) | Extracts a substring from a source string. |
| [toupper()](#toupper) | Converts a string to upper case. |
| [tolower()](#tolower) | Converts a string to lower case. |
| [trim()](#trim) | Removes all leading and trailing matches of the specified cutset. |
| [trim\_regex()](#trim-regex) | Removes all leading and trailing matches of the specified regular expression. |
| [trim\_end()](#trim-end) | Removes trailing match of the specified cutset. |
| [trim\_end\_regex()](#trim-end-regex) | Removes trailing match of the specified regular expression. |
| [trim\_start()](#trim-start) | Removes leading match of the specified cutset. |
| [trim\_start\_regex()](#trim-start-regex) | Removes leading match of the specified regular expression. |
| [url\_decode()](#url-decode) | Converts an encoded URL into a regular URL representation. |
| [url\_encode()](#url-encode) | Converts characters of the input URL into a format that can be transmitted over the internet. |
| [gettype()](#gettype) | Returns the runtime type of its single argument. |
| [parse\_csv()](#parse-csv) | Splits a given string representing a single record of comma-separated values and returns a string array with these values. |
Each argument is marked as `required` or `optional`:
* If it’s marked `required`, you must pass the argument for the function to work.
* If it’s marked `optional`, the function works without passing the argument.
## base64\_encode\_tostring()
Encodes a string as base64 string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------------ |
| String | string | Required | Input string or string field to be encoded as base64 string. |
### Returns
Returns the string encoded as base64 string.
* To decode base64 strings to UTF-8 strings, see [base64\_decode\_tostring()](#base64-decode-tostring)
### Examples
```kusto
base64_encode_tostring(string)
```
```kusto
['sample-http-logs']
| project encoded_base64_string = base64_encode_tostring(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20encoded_base64_string%20%3D%20base64_encode_tostring\(content_type\)%22%7D)
## base64\_decode\_tostring()
Decodes a base64 string to a UTF-8 string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------------------------ |
| String | string | Required | Input string or string field to be decoded from base64 to UTF-8 string. |
### Returns
Returns UTF-8 string decoded from base64 string.
* To encode strings to base64 string, see [base64\_encode\_tostring()](#base64-encode-tostring)
### Examples
```kusto
base64_decode_tostring(string)
```
```kusto
['sample-http-logs']
| project decoded_base64_string = base64_decode_tostring("VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20decoded_base64_string%20%3D%20base64_decode_tostring\(%5C%22VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu%5C%22\)%22%7D)
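The two functions are inverses of each other. As a sketch, you can round-trip a field through both (using the `content_type` field from the encoding example above):
```kusto
['sample-http-logs']
| extend encoded = base64_encode_tostring(content_type)
| extend decoded = base64_decode_tostring(encoded)
| project content_type, encoded, decoded
```
The `decoded` column is expected to match the original `content_type` value.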
## countof()
Counts occurrences of a substring in a string.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ----------- | ---------- | ---------------------------------------- | ------------------------ |
| text source | **string** | Source to count your occurrences from     | Required                 |
| search | **string** | The plain string to match inside source. | Required |
### Returns
The number of times that the search string can be matched.
### Examples
```kusto
countof(search, text)
```
```kusto
['sample-http-logs']
| project count = countof("con", "content_type")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof\(%5C%22con%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D)
## countof\_regex()
Counts occurrences of a regular expression in a string. Regex matches don’t overlap.
### Arguments
* text source: A string.
* regex search: regular expression to match inside your text source.
### Returns
The number of times that the search regular expression can be matched in the string. Regex matches don’t overlap.
### Examples
```kusto
countof_regex(regex, text)
```
```kusto
['sample-http-logs']
| project count = countof_regex("c.n", "content_type")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof_regex\(%5C%22c.n%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D)
## coalesce()
Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| --------- | ---------- | ---------------------------------------- | ------------------------ |
| arguments | **scalar** | The expression or field to be evaluated. | Required |
### Returns
The value of the first argument whose value isn’t null (or not-empty for string expressions).
### Examples
```kusto
['sample-http-logs']
| project coalesced = coalesce(content_type, ['geo.city'], method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20coalesced%20%3D%20coalesce\(content_type%2C%20%5B%27geo.city%27%5D%2C%20method\)%22%7D)
```kusto
['http-logs']
| project req_duration_ms, server_datacenter, predicate = coalesce(content_type, method, status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20req_duration_ms%2C%20server_datacenter%2C%20predicate%20%3D%20coalesce\(content_type%2C%20method%2C%20status\)%22%7D)
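As a minimal sketch with string literals (the expected result follows from the behavior described above, where empty strings are skipped for string expressions):
```kusto
print first_non_empty = coalesce('', 'axiom', 'observability')
// expected result: "axiom", because the empty string is skipped
```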
## extract()
Retrieve the first substring matching a regular expression from a source string.
### Arguments
| **name** | **type** | **description** |
| ------------ | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| regex | **expression** | A regular expression. |
| captureGroup | **int** | A positive `int` constant indicating the capture group to extract. 0 stands for the entire match, 1 for the value matched by the first '('parenthesis')' in the regular expression, 2 or more for subsequent parentheses. |
| source | **string** | A string to search |
### Returns
If regex finds a match in source: the substring matched against the indicated capture group captureGroup.
If there’s no match, or the type conversion fails: `-1` or a string error.
### Examples
```kusto
extract(regex, captureGroup, source)
```
```kusto
['sample-http-logs']
| project extract_sub = extract("^.{2,2}(.{4,4})", 1, content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22%5E.%7B2%2C2%7D\(.%7B4%2C4%7D\)%5C%22%2C%201%2C%20content_type\)%22%7D)
```kusto
extract("x=([0-9.]+)", 1, "axiom x=65.6|po") == "65.6"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22x%3D\(%5B0-9.%5D%2B\)%5C%22%2C%201%2C%20%5C%22axiom%20x%3D65.6%7Cpo%5C%22\)%20%3D%3D%20%5C%2265.6%5C%22%22%7D)
## extract\_all()
Retrieve all substrings matching a regular expression from a source string. Optionally, retrieve only a subset of the matching groups.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| regex | **expression** | A regular expression containing between one and 16 capture groups. Examples of a valid regex: @"(\d+)". Examples of an invalid regex: @"\d+" | Required |
| captureGroups | **array** | A dynamic array constant that indicates the capture group to extract. Valid values are from 1 to the number of capturing groups in the regular expression. | Required |
| source | **string** | A string to search | Required |
### Returns
* If regex finds a match in source: Returns a dynamic array including all matches against the indicated capture groups captureGroups, or all capturing groups in the regex.
* If number of captureGroups is 1: The returned array has a single dimension of matched values.
* If number of captureGroups is more than 1: The returned array is a two-dimensional collection of multi-value matches per captureGroups selection, or all capture groups present in the regex if captureGroups is omitted.
* If there’s no match: `-1`
### Examples
```kusto
extract_all(regex, [captureGroups,] source)
```
```kusto
['sample-http-logs']
| project extract_match = extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_match%20%3D%20extract_all%28%40%5C%22%28%5C%5Cw%29%28%5C%5Cw%2B%29%28%5C%5Cw%29%5C%22%2C%20dynamic%28%5B1%2C3%5D%29%2C%20content_type%29%22%2C%20%22queryOptions%22%3A%20%7B%22quickRange%22%3A%20%2290d%22%7D%7D)
```kusto
extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type) == [["t", "t"],["c","v"]]
```
```kusto
['sample-http-logs']
| project extract_match = extract_all(@"(\w)(\w+)(\w)", pack_array(), content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20extract_match%20%3D%20extract_all\(%40%5C%22\(%5C%5Cw\)\(%5C%5Cw%2B\)\(%5C%5Cw\)%5C%22%2C%20pack_array\(\)%2C%20content_type\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## format\_bytes()
Formats a number as a string representing data size in bytes.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| --------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| value | **number** | a number to be formatted as data size in bytes | Required |
| precision | **number** | Number of digits the value will be rounded to. (default value is zero) | Optional |
| units | **string** | Units of the target data size the string formatting will use (base 2 suffixes: `Bytes`, `KiB`, `KB`, `MiB`, `MB`, `GiB`, `GB`, `TiB`, `TB`, `PiB`, `EiB`, `ZiB`, `YiB`; base 10 suffixes: `kB` `MB` `GB` `TB` `PB` `EB` `ZB` `YB`). If the parameter is empty the units will be auto-selected based on input value. | Optional |
| base | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type. (default value is 2) | Optional |
### Returns
* A formatted string for humans
### Examples
```kusto
format_bytes(value [, precision [, units [, base]]])
format_bytes(1024) == "1 KB"
format_bytes(8000000, 2, "MB", 10) == "8.00 MB"
```
```kusto
['github-issues-event']
| project formated_bytes = format_bytes( 4783549035, number, "['id']", num_comments )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20formated_bytes%20%3D%20format_bytes\(4783549035%2C%20number%2C%20%5C%22%5B%27id%27%5D%5C%22%2C%20num_comments\)%22%7D)
## format\_url()
Formats an input string into a valid URL. This function will return a string that is a properly formatted URL.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| -------- | ----------- | ------------------------------------------ | ------------------------ |
| url      | **dynamic** | A dynamic object with the URL parts you want to format into a URL | Required                 |
### Returns
* A string that represents a properly formatted URL.
### Examples
```kusto
['sample-http-logs']
| project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom"}))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%7D%29%29%22%7D)
```kusto
['sample-http-logs']
| project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom", "port": 443, "fragment": "axiom","user": "axiom", "password": "apl"}))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%2C%20%5C%22port%5C%22%3A%20443%2C%20%5C%22fragment%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22user%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22password%5C%22%3A%20%5C%22apl%5C%22%7D%29%29%22%7D)
* These are all the supported keys when using the `format_url` function: scheme, host, port, path, fragment, user, password, query.
## indexof()
Reports the zero-based index of the first occurrence of a specified string within the input string.
### Arguments
| **name**     | **type**   | **description**                                                                 | **Required or Optional** |
| ------------ | ---------- | ------------------------------------------------------------------------------- | ------------------------ |
| source       | **string** | Input string                                                                    | Required                 |
| lookup       | **string** | String to look up                                                               | Required                 |
| start\_index | **int**    | Search start position.                                                          | Optional                 |
| length       | **int**    | Number of character positions to examine. A value of -1 means unlimited length. | Optional                 |
| occurrence   | **int**    | The number of the occurrence. Default 1.                                        | Optional                 |
### Returns
* Zero-based index position of lookup.
* Returns -1 if the string isn’t found in the input.
### Examples
```kusto
indexof( body, ['id'], 2, 1, number ) == -1
```
```kusto
indexof(source, lookup [, start_index [, length [, occurrence]]])
```
```kusto
['github-issues-event']
| project occurrence = indexof( body, ['id'], 23, 5, number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20occurrence%20%3D%20indexof%28%20body%2C%20%5B%27id%27%5D%2C%2023%2C%205%2C%20number%20%29%22%7D)
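As a minimal sketch with string literals, based on the zero-based indexing described above:
```kusto
print idx = indexof('axiom observability', 'observability')
// expected result: 6, the zero-based position where 'observability' starts
```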
## isempty()
Returns `true` if the argument is an empty string or is null.
### Returns
Indicates whether the argument is an empty string or is null.
### Examples
```kusto
isempty("") == true
```
```kusto
isempty([value])
```
```kusto
['github-issues-event']
| project empty = isempty(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20empty%20%3D%20isempty%28num_comments%29%22%7D)
## isnotempty()
Returns `true` if the argument isn’t an empty string, and it isn’t null.
### Examples
```kusto
isnotempty("") == false
```
```kusto
isnotempty([value])
notempty([value]) // alias of isnotempty
```
```kusto
['github-issues-event']
| project not_empty = isnotempty(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_empty%20%3D%20isnotempty%28num_comments%29%22%7D)
## isnotnull()
Returns `true` if the argument is not null.
### Examples
```kusto
isnotnull( num_comments ) == true
```
```kusto
isnotnull([value])
notnull([value]) // alias for isnotnull
```
```kusto
['github-issues-event']
| project not_null = isnotnull(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_null%20%3D%20isnotnull%28num_comments%29%22%7D)
## isnull()
Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value.
### Returns
True or false, depending on whether or not the value is null.
### Examples
```kusto
isnull(Expr)
```
```kusto
['github-issues-event']
| project is_null = isnull(creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20is_null%20%3D%20isnull%28creator%29%22%7D)
## parse\_bytes()
Parses a string including byte size units and returns the number of bytes.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------ |
| bytes\_string | **string** | A formatted string defining the number of bytes                                                                     | Required                 |
| base          | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type. (default value is 2) | Optional                 |
### Returns
* The number of bytes or zero if unable to parse
### Examples
```kusto
parse_bytes(bytes_string [, base])
parse_bytes("1 KB") == 1024
parse_bytes("1 KB", 10) == 1000
parse_bytes("128 Bytes") == 128
parse_bytes("bad data") == 0
```
```kusto
['github-issues-event']
| extend parsed_bytes = parse_bytes("300 KB", 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D)
```kusto
['github-issues-event']
| project parsed_bytes = parse_bytes("300 KB", 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D)
## parse\_json()
Interprets a string as a JSON value and returns the value as dynamic.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | -------------------------------------------------------------------- |
| Json Expr | string   | Required                 | Expression that represents a JSON-formatted value |
### Returns
An object of type dynamic that is determined by the value of json:
* If json is of type string, and is a properly formatted JSON string, then the string is parsed, and the value produced is returned.
* If json is of type string, but it isn’t a properly formatted JSON string, then the returned value is an object of type dynamic that holds the original string value.
### Examples
```kusto
parse_json(json)
```
```kusto
['vercel']
| extend parsed = parse_json('{"name":"vercel", "statuscode":200, "region": { "route": "usage streams", "number": 9 }}')
```
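Once parsed, you can access nested values with dot notation, as in this sketch that extends the example above (the field names come from the JSON literal):
```kusto
['vercel']
| extend parsed = parse_json('{"name":"vercel", "statuscode":200, "region": { "route": "usage streams", "number": 9 }}')
| project route = parsed.region.route, region_number = parsed.region.number
```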
```kusto
['github-issues-event']
| extend parsed = parse_json(creator)
| where isnotnull( parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed%20%3D%20parse_json%28creator%29%5Cn%7C%20where%20isnotnull%28parsed%29%22%7D)
## parse\_url()
Parses an absolute URL `string` and returns a dynamic object that contains the URL parts.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------- |
| URL      | string   | Required                 | A string that represents a URL or the query part of the URL. |
### Returns
An object of type dynamic that includes the URL components: Scheme, Host, Port, Path, Username, Password, Query Parameters, Fragment.
### Examples
```kusto
parse_url(url)
```
```kusto
['sample-http-logs']
| extend ParsedURL = parse_url("https://www.example.com/path/to/page?query=example")
| project
Scheme = ParsedURL["scheme"],
Host = ParsedURL["host"],
Path = ParsedURL["path"],
Query = ParsedURL["query"]
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURL%20%3D%20parse_url%28%5C%22https%3A%2F%2Fwww.example.com%2Fpath%2Fto%2Fpage%3Fquery%3Dexample%5C%22%29%5Cn%7C%20project%20%5Cn%20%20Scheme%20%3D%20ParsedURL%5B%5C%22scheme%5C%22%5D%2C%5Cn%20%20Host%20%3D%20ParsedURL%5B%5C%22host%5C%22%5D%2C%5Cn%20%20Path%20%3D%20ParsedURL%5B%5C%22path%5C%22%5D%2C%5Cn%20%20Query%20%3D%20ParsedURL%5B%5C%22query%5C%22%5D%22%7D)
* Result
```json
{
"Host": "www.example.com",
"Path": "/path/to/page",
"Query": {
"query": "example"
},
"Scheme": "https"
}
```
## parse\_urlquery()
Returns a `dynamic` object that contains the query parameters.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| Query    | string   | Required                 | A string that represents a URL query. |
### Returns
An object of type dynamic that includes the query parameters.
### Examples
```kusto
parse_urlquery("a1=b1&a2=b2&a3=b3")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURLQUERY%20%3D%20parse_urlquery%28%5C%22a1%3Db1%26a2%3Db2%26a3%3Db3%5C%22%29%22%7D)
* Result
```json
{
"Result": {
"a3": "b3",
"a2": "b2",
"a1": "b1"
}
}
```
```kusto
parse_urlquery(query)
```
```kusto
['github-issues-event']
| project parsed = parse_urlquery("https://play.axiom.co/axiom-play-qf1k/query?qid=fUKgiQgLjKE-rd7wjy")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed%20%3D%20parse_urlquery%28%5C%22https%3A%2F%2Fplay.axiom.co%2Faxiom-play-qf1k%2Fexplorer%3Fqid%3DfUKgiQgLjKE-rd7wjy%5C%22%29%22%7D)
## replace()
Replace all regex matches with another string.
### Arguments
* regex: The regular expression to search source. It can contain capture groups in '('parentheses')'.
* rewrite: The replacement for any match made by regex. Use `$0` to refer to the whole match, `$1` for the first capture group, `$2` and so on for subsequent capture groups.
* source: A string.
### Returns
* source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap.
### Examples
```kusto
replace(regex, rewrite, source)
```
```kusto
['sample-http-logs']
| project content_type, Comment = replace("[html]", "[censored]", method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20content_type%2C%20Comment%20%3D%20replace%28%5C%22%5Bhtml%5D%5C%22%2C%20%5C%22%5Bcensored%5D%5C%22%2C%20method%29%22%7D)
## replace\_regex()
Replaces all regex matches with another string.
### Arguments
* regex: The regular expression to search text.
* rewrite: The replacement for any match made by *regex*.
* text: A string.
### Returns
text after replacing all matches of regex with evaluations of rewrite. Matches don’t overlap.
### Examples
```kusto
replace_regex(@'^logging', 'axiom', 'logging-data')
```
* Result
```json
{
"replaced": "axiom-data"
}
```
```kusto
replace_regex(regex, rewrite, text)
```
```kusto
['github-issues-event']
| extend replaced = replace_regex(@'^logging', 'axiom', 'logging-data')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20replaced_regex%20%3D%20replace_regex%28%40'%5Elogging'%2C%20'axiom'%2C%20'logging-data'%29%22%7D)
### Backreferences
Backreferences match the same text as previously matched by a capturing group. With backreferences, you can identify a repeated character or substring within a string.
* Backreferences in APL are implemented using the `$` sign.
#### Examples
```kusto
['github-issues-event']
| project backreferences = replace_regex(@'observability=(.+)', 'axiom=$1', creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20project%20backreferences%20%3D%20replace_regex\(%40'observability%3D\(.%2B\)'%2C%20'axiom%3D%241'%2C%20creator\)%22%7D)
## replace\_string()
Replaces all string matches with another string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------------------------------------------- |
| lookup | string | Required | A string which Axiom matches in `text` and replaces with `rewrite`. |
| rewrite | string | Required | A string with which Axiom replaces parts of `text` that match `lookup`. |
| text | string | Required | A string where Axiom replaces parts matching `lookup` with `rewrite`. |
### Returns
`text` after replacing all matches of `lookup` with evaluations of `rewrite`. Matches don’t overlap.
### Examples
```kusto
replace_string("github", "axiom", "The project is hosted on github")
```
* Result
```json
{
"replaced_string": "axiom"
}
```
```kusto
replace_string(lookup, rewrite, text)
```
```kusto
['sample-http-logs']
| extend replaced_string = replace_string("The project is hosted on github", "github", "axiom")
| project replaced_string
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%27github%27%2C%20%27axiom%27%2C%20%27The%20project%20is%20hosted%20on%20github%27%29%5Cn%7C%20project%20replaced_string%22%7D)
## reverse()
Reverses the input string.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| -------- | -------- | ----------------- | ------------------------ |
| Field | `string` | Field input value | Required |
### Returns
The reverse order of a field value.
### Examples
```kusto
reverse(value)
```
```kusto
project reversed = reverse("axiom")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20reversed_value%20%3D%20reverse%28'axiom'%29%22%7D)
* Result
```json
moixa
```
## split()
Splits a given string according to a given delimiter and returns a string array with the contained substrings.
Optionally, a specific substring can be returned if exists.
### Arguments
* source: The source string that will be split according to the given delimiter.
* delimiter: The delimiter (Field) that will be used in order to split the source string.
### Returns
* A string array that contains the substrings of the given source string that are delimited by the given delimiter.
### Examples
```kusto
split(source, delimiter)
```
```kusto
project split_str = split("axiom_observability_monitoring", "_")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22axiom_observability_monitoring%5C%22%2C%20%5C%22_%5C%22%29%22%7D)
* Result
```json
{
"split_str": ["axiom", "observability", "monitoring"]
}
```
## strcat()
Concatenates between 1 and 64 arguments.
If the arguments aren’t of string type, they'll be forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------- |
| Expr | string | Required | Expressions to be concatenated. |
### Returns
Arguments, concatenated to a single string.
### Examples
```kusto
strcat(argument1, argument2[, argumentN])
```
```kusto
['github-issues-event']
| project stract_con = strcat( ['milestone.creator'], number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20%5B'milestone.creator'%5D%2C%20number%20%29%22%7D)
```kusto
['github-issues-event']
| project stract_con = strcat( 'axiom', number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20'axiom'%2C%20number%20%29%22%7D)
* Result
```json
{
"stract_con": "axiom3249"
}
```
## strcat\_delim()
Concatenates between 2 and 64 arguments, with delimiter, provided as first argument.
* If arguments aren’t of string type, they'll be forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ------------ | -------- | ------------------------ | --------------------------------------------------- |
| delimiter | string | Required | string expression, which will be used as separator. |
| argument1 .. | string | Required | Expressions to be concatenated. |
### Returns
Arguments, concatenated to a single string with delimiter.
### Examples
```kusto
strcat_delim(delimiter, argument1, argument2[ , argumentN])
```
```kusto
['github-issues-event']
| project strcat = strcat_delim(":", actor, creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28'%3A'%2C%20actor%2C%20creator%29%22%7D)
```kusto
project strcat = strcat_delim(":", "axiom", "monitoring")
```
* Result
```json
{
"strcat": "axiom:monitoring"
}
```
## strcmp()
Compares two strings.
The function starts comparing the first character of each string. If they are equal to each other, it continues with the following pairs until the characters differ or until the end of the shorter string is reached.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------- |
| string1 | string | Required | first input string for comparison. |
| string2 | string | Required | second input string for comparison. |
### Returns
Returns an integral value indicating the relationship between the strings:
* When the result is 0: The contents of both strings are equal.
* When the result is -1: the first character that does not match has a lower value in string1 than in string2.
* When the result is 1: the first character that does not match has a higher value in string1 than in string2.
### Examples
```kusto
strcmp(string1, string2)
```
```kusto
['github-issues-event']
| extend cmp = strcmp( body, repo )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20cmp%20%3D%20strcmp%28%20body%2C%20repo%20%29%22%7D)
```kusto
project cmp = strcmp( "axiom", "observability")
```
* Result
```json
{
"input_string": -1
}
```
## strlen()
Returns the length, in characters, of the input string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ---------------------------------------------------------- |
| source | string | Required | The source string that will be measured for string length. |
### Returns
Returns the length, in characters, of the input string.
### Examples
```kusto
strlen(source)
```
```kusto
project str_len = strlen("axiom")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20str_len%20%3D%20strlen\(%5C%22axiom%5C%22\)%22%7D)
* Result
```json
{
"str_len": 5
}
```
## strrep()
Repeats the given string the provided number of times.
* If the first or third argument isn’t of string type, it’s forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ---------- | -------- | ------------------------ | ----------------------------------------------------- |
| value      | Expr     | Required                 | Input expression                                       |
| multiplier | integer  | Required                 | Positive integer value (from 1 to 1024)                |
| delimiter | string | Optional | An optional string expression (default: empty string) |
### Returns
* Value repeated for a specified number of times, concatenated with delimiter.
* If the multiplier is more than the maximal allowed value (1024), the input string is repeated 1024 times.
### Examples
```kusto
strrep(value,multiplier,[delimiter])
```
```kusto
['github-issues-event']
| extend repeat_string = strrep( repo, 5, "::" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20repeat_string%20%3D%20strrep\(%20repo%2C%205%2C%20%5C%22%3A%3A%5C%22%20\)%22%7D)
```kusto
project repeat_string = strrep( "axiom", 3, "::" )
```
* Result
```json
{
"repeat_string": "axiom::axiom::axiom"
}
```
## substring()
Extracts a substring from a source string.
### Arguments
* source: The source string that the substring will be taken from.
* startingIndex: The zero-based starting character position of the requested substring.
* length: A parameter that can be used to specify the requested number of characters in the substring.
### Returns
A substring from the given string. The substring starts at startingIndex (zero-based) character position and continues to the end of the string or length characters if specified.
### Examples
```kusto
substring(source, startingIndex [, length])
```
```kusto
['github-issues-event']
| extend extract_string = substring( repo, 4, 5 )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20extract_string%20%3D%20substring\(%20repo%2C%204%2C%205%20\)%22%7D)
```kusto
project extract_string = substring( "axiom", 4, 5 )
```
```json
{
"extract_string": "m"
}
```
## toupper()
Converts a string to upper case.
```kusto
toupper("axiom") == "AXIOM"
```
```kusto
['github-issues-event']
| project upper = toupper( body )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20upper%20%3D%20toupper\(%20body%20\)%22%7D)
## tolower()
Converts a string to lower case.
```kusto
tolower("AXIOM") == "axiom"
```
```kusto
['github-issues-event']
| project low = tolower( body )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20low%20%3D%20tolower%28body%29%22%7D)
## trim()
Removes all leading and trailing matches of the specified cutset.
### Arguments
* source: A string.
* cutset: A string containing the characters to be removed.
### Returns
source after trimming matches of the cutset found in the beginning and/or the end of source.
### Examples
```kusto
trim(cutset, source)
```
```kusto
['github-issues-event']
| extend remove_leading_matches = trim( "locked", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_leading_matches%20%3D%20trim\(%5C%22locked%5C%22%2C%20repo\)%22%7D)
```kusto
project remove_leading_matches = trim( "axiom", "observability")
```
* Result
```json
{
"remove_leading_matches": "bservability"
}
```
## trim\_regex()
Removes all leading and trailing matches of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the beginning and/or the end of source.
* source: A string.
### Returns
source after trimming matches of regex found in the beginning and/or the end of source.
### Examples
```kusto
trim_regex(regex, source)
```
```kusto
['github-issues-event']
| extend remove_trailing_match_regex = trim_regex( "^github", action )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_trailing_match_regex%20%3D%20trim_regex\(%5C%22%5Egithub%5C%22%2C%20action\)%22%7D)
* Result
```json
{
"remove_trailing_match_regex": "closed"
}
```
## trim\_end()
Removes trailing match of the specified cutset.
### Arguments
* source: A string.
* cutset: A string containing the characters to be removed.
### Returns
source after trimming matches of the cutset found in the end of source.
### Examples
```kusto
trim_end(cutset, source)
```
```kusto
['github-issues-event']
| extend remove_cutset = trim_end(@"[^\w]+", body)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20remove_cutset%20%3D%20trim_end%28%40%5C%22%5B%5E%5C%5Cw%5D%2B%5C%22%2C%20body%29%22%7D)
* Result
```json
{
"remove_cutset": "In [`9128d50`](https://7aa98788e07\n), **down**:\n- HTTP code: 0\n- Response time: 0 ms\n"
}
```
## trim\_end\_regex()
Removes trailing match of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the end of source.
* source: A string.
### Returns
source after trimming matches of regex found in the end of source.
### Examples
```kusto
trim_end_regex(regex, source)
```
```kusto
['github-issues-event']
| project remove_cutset_regex = trim_end_regex( "^github", creator )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset_regex%20%3D%20trim_end_regex\(%20%5C%22%5Egithub%5C%22%2C%20creator%20\)%22%7D)
* Result
```json
{
"remove_cutset_regex": "axiomhq"
}
```
## trim\_start()
Removes leading match of the specified cutset.
### Arguments
* source: A string.
* cutset: A string containing the characters to be removed.
### Returns
* source after trimming match of the specified cutset found in the beginning of source.
### Examples
```kusto
trim_start(cutset, source)
```
```kusto
['github-issues-event']
| project remove_cutset = trim_start( "github", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start\(%20%5C%22github%5C%22%2C%20repo\)%22%7D)
* Result
```json
{
"remove_cutset": "axiomhq/next-axiom"
}
```
## trim\_start\_regex()
Removes leading match of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the beginning of source.
* source: A string.
### Returns
source after trimming match of regex found in the beginning of source.
### Examples
```kusto
trim_start_regex(regex, source)
```
```kusto
['github-issues-event']
| project remove_cutset = trim_start_regex( "github", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex\(%20%5C%22github%5C%22%2C%20repo\)%22%7D)
* Result
```json
{
"remove_cutset": "axiomhq/next-axiom"
}
```
## url\_decode()
The function converts an encoded URL into a regular URL representation.
### Arguments
* `encoded url:` encoded URL (string).
### Returns
URL (string) in a regular representation.
### Examples
```kusto
url_decode(encoded url)
```
```kusto
['github-issues-event']
| project decoded_link = url_decode( "https://www.axiom.co/" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20decoded_link%20%3D%20url_decode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D)
* Result
```json
{
"decoded_link": "https://www.axiom.co/"
}
```
## url\_encode()
The function converts characters of the input URL into a format that can be transmitted over the Internet.
### Arguments
* url: input URL (string).
### Returns
URL (string) converted into a format that can be transmitted over the Internet.
### Examples
```kusto
url_encode(url)
```
```kusto
['github-issues-event']
| project encoded_url = url_encode( "https://www.axiom.co/" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20encoded_url%20%3D%20url_encode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D)
* Result
```json
{
"encoded_link": "https%3A%2F%2Fwww.axiom.co%2F"
}
```
## gettype()
Returns the runtime type of its single argument.
### Arguments
* Expressions
### Returns
A string representing the runtime type of its single argument.
### Examples
| **Expression** | **Returns** |
| ----------------------------------------- | -------------- |
| gettype("lima") | **string** |
| gettype(2222) | **int** |
| gettype(5==5) | **bool** |
| gettype(now()) | **datetime** |
| gettype(parse\_json('67')) | **int** |
| gettype(parse\_json(' "polish" ')) | **string** |
| gettype(parse\_json(' \{"axiom":1234} ')) | **dictionary** |
| gettype(parse\_json(' \[6, 7, 8] ')) | **array** |
| gettype(456.98) | **real** |
| gettype(parse\_json('')) | **null** |
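As a sketch, you can also inspect the runtime types of fields in a query (the fields below come from the `['sample-http-logs']` dataset used elsewhere on this page; the exact type names returned depend on the underlying data):
```kusto
['sample-http-logs']
| project status_type = gettype(status), duration_type = gettype(req_duration_ms)
```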
## parse\_csv()
Splits a given string representing a single record of comma-separated values and returns a string array with these values.
### Arguments
* csv\_text: A string representing a single record of comma-separated values.
### Returns
A string array that contains the split values.
### Examples
```kusto
parse_csv("axiom,logging,observability") == [ "axiom", "logging", "observability" ]
```
```kusto
parse_csv("axiom, processing, language") == [ "axiom", "processing", "language" ]
```
```kusto
['github-issues-event']
| project parse_csv("github, body, repo")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parse_csv\(%5C%22github%2C%20body%2C%20repo%5C%22\)%22%7D)
# indexof_regex
Source: https://axiom.co/docs/apl/scalar-functions/string-functions/indexof_regex
This page explains how to use the indexof_regex function in APL.
Use the `indexof_regex` function to find the position of the first match of a regular expression in a string. The function is helpful when you want to locate a pattern within a larger text field and take action based on its position. For example, you can use `indexof_regex` to extract fields from semi-structured logs, validate string formats, or trigger alerts when specific patterns appear in log data.
The function returns the zero-based index of the first match. If no match is found, it returns `-1`. Use `indexof_regex` when you need more flexibility than simple substring search (`indexof`), especially when working with dynamic or non-fixed patterns.
All regex functions of APL use the [RE2 regex syntax](https://github.com/google/re2/wiki/Syntax).
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Use `match()` in Splunk SPL to perform regular expression matching. However, `match()` returns a Boolean, not the match position. APL’s `indexof_regex` is similar to combining `match()` with additional logic to extract position, which is not natively supported in SPL.
```sql Splunk example
... | eval match_index=if(match(field, "pattern"), 0, -1)
```
```kusto APL equivalent
['dataset']
| extend match_index = indexof_regex(field, 'pattern')
```
ANSI SQL does not have a built-in function to return the index of a regex match. You typically use `REGEXP_LIKE` for Boolean evaluation. `indexof_regex` provides a more direct and powerful way to find the exact match position in APL.
```sql SQL example
SELECT CASE WHEN REGEXP_LIKE(field, 'pattern') THEN 0 ELSE -1 END FROM table;
```
```kusto APL equivalent
['dataset']
| extend match_index = indexof_regex(field, 'pattern')
```
## Usage
### Syntax
```kusto
indexof_regex(string, match [, start [, occurrence [, length]]])
```
### Parameters
| Name | Type | Required | Description |
| ---------- | ------ | -------- | ---------------------------------------------------------------------------------------------------------------------- |
| string | string | Yes | The input text to inspect. |
| match | string | Yes | The regular expression pattern to search for. |
| start | int | | The index in the string where to begin the search. If negative, the function starts that many characters from the end. |
| occurrence | int | | Which instance of the pattern to match. Defaults to `1` if not specified. |
| length | int | | The number of characters to search through. Use `-1` to search to the end of the string. |
### Returns
The function returns the position (starting at zero) where the pattern first matches within the string. If the pattern is not found, the result is `-1`.
The function returns `null` in the following cases:
* The `start` value is negative.
* The `occurrence` value is less than 1.
* The `length` is set to a value below `-1`.
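The following sketch shows how the optional parameters combine. The string literals are made up for illustration, and `print` is used only to display the scalar result:
```kusto
// Search from position 0 and return the second occurrence of the pattern
print second_match = indexof_regex('axiom observability axiom', 'axiom', 0, 2)
// expected result: 20, the zero-based position of the second match
```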
## Use case examples
Use `indexof_regex` to detect whether the URI in a log entry contains an encoded user ID by checking for patterns like `user-[0-9]+`.
**Query**
```kusto
['sample-http-logs']
| extend user_id_pos = indexof_regex(uri, 'user-[0-9]+')
| where user_id_pos != -1
| project _time, id, uri, user_id_pos
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20user_id_pos%20%3D%20indexof_regex\(uri%2C%20'user-%5B0-9%5D%2B'\)%20%7C%20where%20user_id_pos%20!%3D%20-1%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20user_id_pos%22%7D)
**Output**
| \_time | id | uri | user\_id\_pos |
| -------------------- | ------ | ------------------------ | ------------- |
| 2025-06-10T12:34:56Z | user42 | /api/user-12345/settings | 5 |
| 2025-06-10T12:35:07Z | user91 | /v2/user-6789/dashboard | 4 |
The query finds log entries where the URI contains a user ID pattern and shows the position of the match in the URI string.
Use `indexof_regex` to detect trace IDs that include a specific structure, such as four groups of hex digits.
**Query**
```kusto
['otel-demo-traces']
| extend match_index = indexof_regex(trace_id, '^[0-9a-f]{8}-[0-9a-f]{4}')
| where match_index == 0
| project _time, trace_id, match_index
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20match_index%20%3D%20indexof_regex\(trace_id%2C%20'%5E%5B0-9a-f%5D%7B8%7D-%5B0-9a-f%5D%7B4%7D'\)%20%7C%20where%20match_index%20%3D%3D%200%20%7C%20project%20_time%2C%20trace_id%2C%20match_index%22%7D)
**Output**
| \_time | trace\_id | match\_index |
| -------------------- | ------------------------------------ | ------------ |
| 2025-06-10T08:23:12Z | ab12cd34-1234-5678-9abc-def123456789 | 0 |
| 2025-06-10T08:24:55Z | fe98ba76-4321-abcd-8765-fedcba987654 | 0 |
This query finds spans where the trace ID begins with a specific regex pattern, helping validate span ID formatting.
Use `indexof_regex` to locate suspicious request patterns such as attempts to access system files (`/etc/passwd`).
**Query**
```kusto
['sample-http-logs']
| extend passwd_index = indexof_regex(uri, '/etc/passwd')
| where passwd_index != -1
| project _time, id, uri, passwd_index
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20passwd_index%20%3D%20indexof_regex\(uri%2C%20'%2Fetc%2Fpasswd'\)%20%7C%20where%20passwd_index%20!%3D%20-1%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20passwd_index%22%7D)
**Output**
| \_time | id | uri | passwd\_index |
| -------------------- | ------ | ------------------------------ | ------------- |
| 2025-06-10T10:15:45Z | user88 | /cgi-bin/view?path=/etc/passwd | 20 |
This query detects HTTP requests attempting to access sensitive file paths, a common indicator of intrusion attempts.
# parse_path
Source: https://axiom.co/docs/apl/scalar-functions/string-functions/parse_path
This page explains how to use the parse_path function in APL.
Use the `parse_path` function to extract structured components from file paths, URIs, or URLs in your log and trace data. This function is useful when you want to decompose a full path into individual segments such as the directory, filename, extension, or query parameters for easier filtering, aggregation, or analysis.
You typically use `parse_path` in log analysis, OpenTelemetry traces, and security investigations to understand which resources are being accessed, identify routing patterns, or isolate endpoints with high error rates. It simplifies complex string parsing tasks and helps you normalize paths for comparisons and reporting.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, path or URL parsing often involves a combination of `rex` and `spath` field access logic. You typically write regular expressions or JSONPath selectors manually.
In APL, `parse_path` handles common URL and path structures for you automatically. It returns a dynamic object with fields like `directory`, `basename`, `extension`, `query`, and others.
```sql Splunk example
... | rex field=uri "(?<endpoint>\/api\/v1\/[^\?]+)"
```
```kusto APL equivalent
['sample-http-logs']
| extend path_parts = parse_path(uri)
| extend endpoint = path_parts.directory
```
ANSI SQL doesn’t have a built-in function for parsing structured paths. You often use a combination of `SUBSTRING`, `CHARINDEX`, or user-defined functions.
APL simplifies this task with `parse_path`, which returns a structured object from a URI or file path, removing the need for manual string manipulation.
```sql SQL example
SELECT SUBSTRING(uri, 1, CHARINDEX('/', uri)) AS directory FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend directory = parse_path(uri).directory
```
## Usage
### Syntax
```kusto
parse_path(source)
```
### Parameters
| Name | Type | Description |
| ------ | ------ | ---------------------------------------------------- |
| source | string | A string representing a path, file URI, or full URL. |
### Returns
Returns a dynamic object with the following fields:
* Scheme
* RootPath
* DirectoryPath
* DirectoryName
* Filename
* Extension
* AlternateDataStreamName
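As a sketch, parsing a literal path shows how these fields are populated. The path below is made up for illustration, and the exact values of each field depend on the input:
```kusto
print parts = parse_path('/api/v1/reports/summary.csv')
| project directory = parts.DirectoryPath, filename = parts.Filename, extension = parts.Extension
```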
## Use case example
Extract endpoint directories and file extensions from HTTP request URIs.
**Query**
```kusto
['sample-http-logs']
| extend path_parts = parse_path(uri)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20path_parts%20%3D%20parse_path\(uri\)%20%7C%20project%20_time%2C%20path_parts%22%7D)
**Output**
| \_time | path\_parts |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Jun 11, 10:39:16 | \{ "Filename": "users", "RootPath": "", "Scheme": "", "AlternateDataStream": "", "DirectoryName": "messages", "DirectoryPath": "/api/v1/messages", "Extension": "" } |
| Jun 11, 10:39:16 | \{ "Scheme": "", "AlternateDataStream": "", "DirectoryName": "background", "DirectoryPath": "/api/v1/textdata/background", "Extension": "", "Filename": "change", "RootPath": "" } |
| Jun 11, 10:39:16 | \{ "Filename": "users", "RootPath": "", "Scheme": "", "AlternateDataStream": "", "DirectoryName": "textdata", "DirectoryPath": "/api/v1/textdata", "Extension": "" } |
This query helps you identify which directories and file types receive the most traffic.
# regex_quote
Source: https://axiom.co/docs/apl/scalar-functions/string-functions/regex_quote
This page explains how to use the regex_quote function in APL.
Use the `regex_quote` function in APL when you need to safely insert arbitrary string values into regular expression patterns. This function escapes all special characters in the input string so that it is interpreted as a literal sequence, rather than as part of a regular expression syntax.
`regex_quote` is especially useful when your APL query constructs regular expressions dynamically using user input or field values. Without escaping, strings like `.*` or `[a-z]` would behave like regex wildcards or character classes, potentially leading to incorrect results or vulnerabilities. With `regex_quote`, you can ensure the string is treated exactly as-is.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `re.escape()` function is not available natively in SPL, so you often handle escaping in external scripts or manually. In APL, `regex_quote` provides built-in support for quoting regular expression metacharacters.
```sql Splunk example
| eval pattern="hello.*world"
| eval safe_pattern=replace(pattern, "\.", "\\.")
```
```kusto APL equivalent
let pattern = 'hello.*world';
print safe_pattern = regex_quote(pattern)
```
ANSI SQL lacks a standard function to escape regular expression strings. Escaping is typically handled manually or with vendor-specific features. In APL, `regex_quote` handles all necessary escaping for you, making regex construction safer and more convenient.
```sql SQL example
SELECT REGEXP_LIKE(col, 'hello\\.*world') FROM table;
```
```kusto APL equivalent
let pattern = 'hello.*world';
print is_match = tostring('hello.*world') matches regex regex_quote(pattern)
```
## Usage
### Syntax
```kusto
regex_quote(value)
```
### Parameters
| Name | Type | Description |
| ----- | ------ | ------------------------------------------------ |
| value | string | The input string to be escaped for regex safety. |
### Returns
A string where all regular expression metacharacters are escaped so that the result can be used safely in regex patterns.
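As a minimal sketch, quoting a literal pattern shows the escaping in isolation. The exact escaped form depends on the implementation, but metacharacters such as `.` and `*` are expected to be escaped:
```kusto
print quoted = regex_quote('hello.*world')
// expected to return the pattern with '.' and '*' escaped, for example 'hello\.\*world'
```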
## Use case examples
You want to find requests where the `uri` contains an exact match of a user-provided pattern, such as `/api/v1/users[1]`, which includes regex metacharacters. Use `regex_quote` to safely escape the pattern before matching.
**Query**
```kusto
let pattern = '/api/v1/users[1]';
['sample-http-logs']
| where uri matches regex regex_quote(pattern)
| project _time, id, uri, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22let%20pattern%20%3D%20'/api/v1/users\[1]'%3B%20\['sample-http-logs']%20%7C%20where%20uri%20matches%20regex%20regex_quote%28pattern%29%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20status%22%7D)
**Output**
| \_time | id | uri | status |
| -------------------- | -------- | ----------------- | ------ |
| 2025-06-10T15:42:00Z | user-293 | /api/v1/users\[1] | 200 |
This query searches for logs where the `uri` exactly matches the string `/api/v1/users[1]`, without interpreting `[1]` as a character class.
You want to isolate spans whose `trace_id` includes a literal substring that happens to resemble a regex pattern, such as `abc.def[0]`. Using `regex_quote` ensures the pattern is treated literally.
**Query**
```kusto
let search_id = 'abc.def[0]';
['otel-demo-traces']
| where trace_id matches regex regex_quote(search_id)
| project _time, trace_id, span_id, ['service.name'], duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22let%20search_id%20%3D%20'abc.def\[0]'%3B%20\['otel-demo-traces']%20%7C%20where%20trace_id%20matches%20regex%20regex_quote%28search_id%29%20%7C%20project%20_time%2C%20trace_id%2C%20span_id%2C%20\['service.name']%2C%20duration%22%7D)
**Output**
| \_time | trace\_id | span\_id | \['service.name'] | duration |
| -------------------- | ----------- | -------- | ----------------- | -------- |
| 2025-06-10T13:20:00Z | abc.def\[0] | span-91 | frontend | 00:00:01 |
This query avoids misinterpretation of `[0]` as a regex character class and treats the whole `trace_id` literally.
You want to scan for potential path traversal attempts where a user’s input includes strings like `..\..\windows\system32`. To search this string safely, you use `regex_quote`.
**Query**
```kusto
let attack_pattern = '../../windows/system32';
['sample-http-logs']
| where uri matches regex regex_quote(attack_pattern)
| project _time, id, uri, status, ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22let%20attack_pattern%20%3D%20'..%2F..%2Fwindows%2Fsystem32'%3B%20%5B'sample-http-logs'%5D%20%7C%20where%20uri%20matches%20regex%20regex_quote\(attack_pattern\)%20%7C%20project%20_time%2C%20id%2C%20uri%2C%20status%2C%20%5B'geo.country'%5D%22%7D)
**Output**
| \_time | id | uri | status | \['geo.country'] |
| -------------------- | -------- | --------------------- | ------ | ---------------- |
| 2025-06-11T10:15:00Z | user-103 | ../../windows/system32 | 403    | DE               |
This query detects malicious-looking strings literally, without treating `.` as a wildcard.
# Type functions
Source: https://axiom.co/docs/apl/scalar-functions/type-functions
This section explains how to use type functions in APL.
The table summarizes the type functions available in APL.
| Function | Description |
| --------------------------------------------------------- | ------------------------------------------------------------------------- |
| [ismap](/apl/scalar-functions/type-functions/ismap) | Checks whether a value is of the `dynamic` type and represents a mapping. |
| [isreal](/apl/scalar-functions/type-functions/isreal) | Checks whether a value is a real number. |
| [isstring](/apl/scalar-functions/type-functions/isstring) | Checks whether a value is a string. |
# ismap
Source: https://axiom.co/docs/apl/scalar-functions/type-functions/ismap
This page explains how to use the ismap function in APL.
Use the `ismap` function in APL to check whether a value is of the `dynamic` type and represents a mapping (also known as a dictionary, associative array, property bag, or object). A mapping consists of key-value pairs where keys are strings and values can be of any type. This function is especially useful when working with semi-structured data, such as logs or telemetry traces, where fields might dynamically contain arrays, objects, or scalar values.
Use `ismap` to:
* Filter records where a field is a map.
* Validate input types in heterogeneous data.
* Avoid runtime errors in downstream operations expecting map values.
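For example, a minimal sketch of the filtering use case. The dataset and the dynamic field `metadata` below are hypothetical placeholders, not part of the sample datasets:
```kusto
// Hypothetical sketch: assumes your dataset has a dynamic field named 'metadata'
['your-dataset']
| extend has_structured_metadata = ismap(metadata)
| where has_structured_metadata
```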
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically work with field types implicitly and rarely check if a field is a dictionary. SPL lacks a direct equivalent to APL’s `ismap`, but you might perform similar validations using `typeof` checks or custom functions in `eval`.
```sql Splunk example
| eval is_map=if(typeof(field) == "object", true, false)
```
```kusto APL equivalent
['sample-http-logs']
| extend is_map = ismap(dynamic_field)
```
ANSI SQL does not natively support map types. If you use a platform that supports JSON or semi-structured data (such as PostgreSQL with `jsonb`, BigQuery with `STRUCT`, or Snowflake), you can simulate map checks using type inspection or schema introspection.
```sql SQL example
SELECT
CASE
WHEN json_type(field) = 'object' THEN true
ELSE false
END AS is_map
FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend is_map = ismap(dynamic_field)
```
## Usage
### Syntax
```kusto
ismap(value)
```
### Parameters
| Name | Type | Description |
| ------- | ---- | ----------------------------------- |
| `value` | any | The value to check for being a map. |
### Returns
Returns `true` if the value is a mapping (dictionary), otherwise returns `false`.
## Example
Use `ismap` to find log entries where a dynamic field contains structured key-value pairs, such as metadata attached to HTTP requests.
**Query**
```kusto
['sample-http-logs']
| extend is_structured = ismap(dynamic({"a":1, "b":2}))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_structured%20%3D%20ismap\(dynamic\(%7B'a'%3A1%2C%20'b'%3A2%7D\)\)%22%7D)
**Output**
| \_time | is\_structured |
| -------------------- | -------------- |
| 2025-06-06T08:00:00Z | true |
## List of related functions
* [isreal](/apl/scalar-functions/type-functions/isreal): Checks whether a value is a real number.
* [isstring](/apl/scalar-functions/type-functions/isstring): Checks whether a value is a string. Use this for scalar string validation.
# isreal
Source: https://axiom.co/docs/apl/scalar-functions/type-functions/isreal
This page explains how to use the isreal function in APL.
Use the `isreal` function to determine whether a value is a real number. This function is helpful when you need to validate data before performing numeric operations. For example, you can use `isreal` to filter out invalid values which could otherwise disrupt aggregations or calculations.
You often use `isreal` in data cleaning pipelines, conditional logic, and when inspecting metrics like durations, latencies, or numeric identifiers. It’s especially useful when working with telemetry or log data that includes optional or incomplete numeric fields.
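For instance, the following sketch shows a simple cleaning step that keeps only rows with a usable numeric duration before aggregating. It reuses the `req_duration_ms` field from the `['sample-http-logs']` examples elsewhere in this documentation; adapt the field names to your own data:
```kusto
['sample-http-logs']
// Keep only rows where the duration is a real number
| where isreal(req_duration_ms)
// Aggregate the remaining valid values
| summarize avg(req_duration_ms) by bin(_time, 5m)
```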
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk uses the `isnum` function to check whether a string represents a numeric value.
```sql Splunk example
... | eval is_valid = if(isnum(duration), "yes", "no")
```
```kusto APL equivalent
... | extend is_valid = iff(isreal(duration), 'yes', 'no')
```
ANSI SQL does not have a direct equivalent to `isreal`. You typically check for numeric values using `IS NOT NULL` and avoid known invalid markers manually. APL’s `isreal` abstracts this by directly checking if a value is a real number.
```sql SQL example
SELECT *,
CASE WHEN duration IS NOT NULL THEN 'yes' ELSE 'no' END AS is_valid
FROM traces
```
```kusto APL equivalent
['otel-demo-traces']
| extend is_valid = iff(isreal(duration), 'yes', 'no')
```
## Usage
### Syntax
```kusto
isreal(value)
```
### Parameters
| Name | Type | Description |
| ----- | ---- | ---------------------------- |
| value | any | The input value to evaluate. |
### Returns
Returns `true` if the input is a valid real number. Returns `false` for strings, nulls, or non-numeric types.
## Example
Use `isreal` to identify real number values.
**Query**
```kusto
['sample-http-logs']
| extend is_real = isreal(123.11)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_real%20%3D%20isreal\(123.11\)%22%7D)
**Output**
| \_time | is\_real |
| -------------------- | -------- |
| 2025-06-05T12:01:00Z | true |
## List of related functions
* [ismap](/apl/scalar-functions/type-functions/ismap): Checks whether a value is of the `dynamic` type and represents a mapping.
* [isstring](/apl/scalar-functions/type-functions/isstring): Checks whether a value is a string. Use this for scalar string validation.
# isstring
Source: https://axiom.co/docs/apl/scalar-functions/type-functions/isstring
This page explains how to use the isstring function in APL.
Use the `isstring` function to determine whether a value is of type string. This function is especially helpful when working with heterogeneous datasets where field types are not guaranteed, or when ingesting data from sources with loosely structured or mixed schemas.
You can use `isstring` to:
* Filter rows based on whether a field is a string.
* Validate and clean data before applying string functions.
* Avoid runtime errors in queries that expect specific data types.
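As a minimal sketch of the second point, you can guard a string function so that it only runs on string values. The fallback label `'unknown'` is an arbitrary choice for illustration:
```kusto
['sample-http-logs']
// Apply toupper only when status is a string, otherwise use a placeholder
| extend normalized_status = iff(isstring(status), toupper(status), 'unknown')
```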
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, type checking is typically implicit and not exposed through a dedicated function like `isstring`. Instead, you often rely on function compatibility and casting behavior. In APL, `isstring` provides an explicit and reliable way to check if a value is a string before further processing.
```sql Splunk example
| eval type=if(isstr(field), "string", "not string")
```
```kusto APL equivalent
['sample-http-logs']
| extend type=iff(isstring(status), 'string', 'not string')
```
ANSI SQL does not include a built-in `IS STRING` function. Instead, type checks usually rely on schema constraints, manual casting, or vendor-specific solutions. In contrast, APL offers `isstring` as a first-class function that returns a boolean indicating whether a value is of type string.
```sql SQL example
SELECT
CASE
WHEN typeof(status) = 'VARCHAR' THEN 'string'
ELSE 'not string'
END AS type
FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend type=iff(isstring(status), 'string', 'not string')
```
## Usage
### Syntax
```kusto
isstring(value)
```
### Parameters
| Name | Type | Description |
| ------- | ---- | ---------------------------------- |
| `value` | any | The value to test for string type. |
### Returns
A `bool` value that is `true` if the input value is of type string, `false` otherwise.
## Use case example
Use `isstring` to filter rows where the HTTP status code is a valid string.
**Query**
```kusto
['sample-http-logs']
| extend is_string = isstring(status)
| where is_string
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_string%20%3D%20isstring\(status\)%20%7C%20where%20is_string%22%7D)
**Output**
| \_time | status | is\_string |
| -------------------- | ------ | ---------- |
| 2025-06-05T12:10:00Z | "404" | true |
This query keeps only the logs where the `status` field is stored as a string, which can help you spot ingestion issues or schema inconsistencies.
## List of related functions
* [ismap](/apl/scalar-functions/type-functions/ismap): Checks whether a value is of the `dynamic` type and represents a mapping.
* [isreal](/apl/scalar-functions/type-functions/isreal): Checks whether a value is a real number.
# Logical operators
Source: https://axiom.co/docs/apl/scalar-operators/logical-operators
Learn how to use and combine different logical operators in APL.
## Logical (binary) operators
The following logical operators are supported between two values of the `bool` type:
**These logical operators are sometimes referred to as Boolean operators, and sometimes as binary operators. The names are all synonyms.**
| **Operator name** | **Syntax** | **Meaning** |
| ----------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------ |
| Equality          | **==**     | Returns `true` if both operands are non-null and equal to each other. Otherwise, `false`.                                 |
| Inequality        | **!=**     | Returns `true` if either one (or both) of the operands are null, or they are not equal to each other. Otherwise, `false`. |
| Logical and       | **and**    | Returns `true` if both operands are `true`.                                                                               |
| Logical or        | **or**     | Returns `true` if one of the operands is `true`, regardless of the other operand.                                         |
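As a minimal illustration, the following sketch combines these operators in a filter. It uses the `['sample-http-logs']` dataset referenced elsewhere in this documentation:
```kusto
['sample-http-logs']
// Keep successful GET requests: both conditions must be true
| where (status == '200' or status == '201') and method == 'GET'
```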
# Numerical operators
Source: https://axiom.co/docs/apl/scalar-operators/numerical-operators
Learn how to use and combine numerical operators in APL.
## Numerical operators
The types `int`, `long`, and `real` represent numerical types. The following operators can be used between pairs of these types:
| **Operator** | **Description**                   | **Example**                                      |
| ------------ | --------------------------------- | ------------------------------------------------ |
| `+`          | Add                               | `3.19 + 3.19`, `ago(10m) + 10m`                  |
| `-`          | Subtract                          | `0.26 - 0.23`                                    |
| `*`          | Multiply                          | `1s * 5`, `5 * 5`                                |
| `/`          | Divide                            | `10m / 1s`, `4 / 2`                              |
| `%`          | Modulo                            | `10 % 3`, `5 % 2`                                |
| `<`          | Less                              | `1 < 2`, `1 <= 1`                                |
| `>`          | Greater                           | `0.23 > 0.22`, `10min > 1sec`, `now() > ago(1d)` |
| `==`         | Equals                            | `3 == 3`                                         |
| `!=`         | Not equals                        | `2 != 1`                                         |
| `<=`         | Less or Equal                     | `5 <= 6`                                         |
| `>=`         | Greater or Equal                  | `7 >= 6`                                         |
| `in`         | Equals to one of the elements     | `"abc" in ("123", "345", "abc")`                 |
| `!in`        | Not equals to any of the elements | `"bca" !in ("123", "345", "abc")`                |
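For example, a short sketch that applies a few of these operators to the `req_duration_ms` field of the `['sample-http-logs']` dataset:
```kusto
['sample-http-logs']
// Convert milliseconds to seconds, then keep only slow requests
| extend duration_sec = req_duration_ms / 1000
| where duration_sec > 0.5
```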
# String operators
Source: https://axiom.co/docs/apl/scalar-operators/string-operators
Learn how to use and combine different query operators for searching string data types.
The table summarizes the string operators available in APL.
| Operator | Description | Case sensitive | Example |
| --------------- | -------------------------------------------- | -------------- | ----------------------------------------- |
| == | Equals | Yes | `"aBc" == "aBc"` |
| != | Not equals | Yes | `"abc" != "ABC"` |
| =\~ | Equals | No | `"abc" =~ "ABC"` |
| !\~ | Not equals | No | `"aBc" !~ "xyz"` |
| contains | RHS occurs as a subsequence of LHS | No | `"parentSpanId" contains "Span"` |
| !contains | RHS doesn’t occur in LHS | No | `"parentSpanId" !contains "abc"` |
| contains\_cs | RHS occurs as a subsequence of LHS | Yes | `"parentSpanId" contains_cs "Id"` |
| !contains\_cs | RHS doesn’t occur in LHS | Yes | `"parentSpanId" !contains_cs "Id"` |
| startswith | RHS is an initial subsequence of LHS | No | `"parentSpanId" startswith "parent"` |
| !startswith | RHS isn’t an initial subsequence of LHS | No | `"parentSpanId" !startswith "Id"` |
| startswith\_cs | RHS is an initial subsequence of LHS | Yes | `"parentSpanId" startswith_cs "parent"` |
| !startswith\_cs | RHS isn’t an initial subsequence of LHS | Yes | `"parentSpanId" !startswith_cs "parent"` |
| endswith | RHS is a closing subsequence of LHS | No | `"parentSpanId" endswith "Id"` |
| !endswith | RHS isn’t a closing subsequence of LHS | No | `"parentSpanId" !endswith "Span"` |
| endswith\_cs | RHS is a closing subsequence of LHS | Yes | `"parentSpanId" endswith_cs "Id"` |
| !endswith\_cs | RHS isn’t a closing subsequence of LHS | Yes | `"parentSpanId" !endswith_cs "Span"` |
| in | Equals to one of the elements | Yes | `"abc" in ("123", "345", "abc")` |
| !in | Not equals to any of the elements | Yes | `"bca" !in ("123", "345", "abc")` |
| in\~ | Equals to one of the elements | No | `"abc" in~ ("123", "345", "ABC")` |
| !in\~ | Not equals to any of the elements | No | `"bca" !in~ ("123", "345", "ABC")` |
| matches regex | LHS contains a match for RHS | Yes | `"parentSpanId" matches regex "g.*r"` |
| !matches regex | LHS doesn’t contain a match for RHS | Yes | `"parentSpanId" !matches regex "g.*r"` |
| has | RHS is a whole term in LHS | No | `"North America" has "america"` |
| !has | RHS isn’t a whole term in LHS | No | `"North America" !has "america"` |
| has\_cs | RHS is a whole term in LHS | Yes | `"North America" has_cs "America"` |
| !has\_cs | RHS isn’t a whole term in LHS | Yes | `"North America" !has_cs "America"` |
| hasprefix | LHS string starts with the RHS string | No | `"Admin_User" hasprefix "Admin"` |
| !hasprefix | LHS string doesn’t start with the RHS string | No | `"Admin_User" !hasprefix "Admin"` |
| hasprefix\_cs | LHS string starts with the RHS string | Yes | `"DOCS_file" hasprefix_cs "DOCS"` |
| !hasprefix\_cs | LHS string doesn’t start with the RHS string | Yes | `"DOCS_file" !hasprefix_cs "DOCS"` |
| hassuffix | LHS string ends with the RHS string | No | `"documentation.docx" hassuffix ".docx"` |
| !hassuffix | LHS string doesn’t end with the RHS string | No | `"documentation.docx" !hassuffix ".docx"` |
| hassuffix\_cs | LHS string ends with the RHS string | Yes | `"Document.HTML" hassuffix_cs ".HTML"` |
| !hassuffix\_cs | LHS string doesn’t end with the RHS string | Yes | `"Document.HTML" !hassuffix_cs ".HTML"` |
RHS = right-hand side of the expression
LHS = left-hand side of the expression
## Case-sensitivity
Operators with `_cs` suffix are case-sensitive.
When two operators do the same task, use the case-sensitive one for better performance.
For example:
* Instead of `=~`, use `==`
* Instead of `in~`, use `in`
* Instead of `contains`, use `contains_cs`
## Best practices
* Use case-sensitive operators when you know the case to improve performance.
* Avoid complex regular expressions for basic matching tasks. Use basic string operators instead.
* When matching against a set of values, ensure the set is as small as possible to improve performance.
* For matching substrings, use prefix or suffix matching instead of general substring matching for better performance.
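As a sketch of these recommendations, prefer a case-sensitive prefix match over a case-insensitive substring match when you know both the case and the position of the text:
```kusto
// Case-insensitive substring match: simplest, but slower
['sample-http-logs']
| where uri contains '/api/'

// Case-sensitive prefix match: preferred when the case and position are known
['sample-http-logs']
| where uri startswith_cs '/api/'
```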
## Equality and inequality operators
Operators:
* `==`
* `!=`
* `=~`
* `!~`
* `in`
* `!in`
* `in~`
* `!in~`
Query examples:
```kusto
"get" == "get"
"get" != "GET"
"get" =~ "GET"
"get" !~ "put"
"get" in ("get", "put", "delete")
```
* Use `==` or `!=` for exact match comparisons when case sensitivity is important.
* Use `=~` or `!~` for case-insensitive comparisons or when you don’t know the exact case.
* Use `in` or `!in` for checking membership within a set of values which can be efficient for a small set of values.
## Subsequence-matching operators
Operators:
* `contains`
* `!contains`
* `contains_cs`
* `!contains_cs`
* `startswith`
* `!startswith`
* `startswith_cs`
* `!startswith_cs`
* `endswith`
* `!endswith`
* `endswith_cs`
* `!endswith_cs`
Query examples:
```kusto
"parentSpanId" contains "Span" // True
"parentSpanId" !contains "xyz" // True
"parentSpanId" startswith "parent" // True
"parentSpanId" endswith "Id" // True
"parentSpanId" contains_cs "Span" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID"
"parentSpanId" startswith_cs "parent" // True if parentSpanId is "parentSpanId", False if parentSpanId is "ParentSpanId" or "PARENTSPANID"
"parentSpanId" endswith_cs "Id" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID"
```
Use case-sensitive operators (`contains_cs`, `startswith_cs`, `endswith_cs`) when you know the case to improve performance.
## Regular-expression-matching operators
Operators:
* `matches regex`
* `!matches regex`
Query examples:
```kusto
"parentSpanId" matches regex "p.*Id" // True
"parentSpanId" !matches regex "x.*z" // True
```
Avoid complex regular expressions or use string operators for simple substring, prefix, or suffix matching.
## Term-matching operators
Operators:
* `has`
* `!has`
* `has_cs`
* `!has_cs`
* `hasprefix`
* `!hasprefix`
* `hasprefix_cs`
* `!hasprefix_cs`
* `hassuffix`
* `!hassuffix`
* `hassuffix_cs`
* `!hassuffix_cs`
Query examples:
```kusto
"North America" has "america" // True
"North America" !has "america" // False
"North America" has_cs "America" // True
"North America" !has_cs "America" // False
"Admin_User" hasprefix "Admin" // True
"Admin_User" !hasprefix "Admin" // False
"DOCS_file" hasprefix_cs "DOCS" // True
"DOCS_file" !hasprefix_cs "DOCS" // False
"documentation.docx" hassuffix ".docx" // True
"documentation.docx" !hassuffix ".docx" // False
"Document.HTML" hassuffix_cs ".HTML" // True
"Document.HTML" !hassuffix_cs ".HTML" // False
```
* Use `has` or `has_cs` for term matching which can be more efficient than regular expression matching for simple term searches.
* Use `has_cs` when you know the case to improve performance.
* Unlike the `contains` operator, which matches any substring, the `has` operator looks for exact terms, ensuring more precise results.
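To make the last point concrete, the following literal comparisons illustrate the difference between term matching and substring matching:
```kusto
"North America" contains "Amer" // True: "Amer" is a substring of the text
"North America" has "Amer"      // False: "Amer" is not a whole term
"North America" has "America"   // True: "America" is a whole term
```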
# count
Source: https://axiom.co/docs/apl/tabular-operators/count-operator
This page explains how to use the count operator in APL.
The `count` operator in Axiom Processing Language (APL) is a simple yet powerful aggregation function that returns the total number of records in a dataset. You can use it to calculate the number of rows in a table or the results of a query. The `count` operator is useful in scenarios such as log analysis, telemetry data processing, and security monitoring, where you need to know how many events, transactions, or data entries match certain criteria.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk’s SPL, the `stats count` function is used to count the number of events in a dataset. In APL, the equivalent operation is simply `count`. You can use `count` in APL without the need for additional function wrapping.
```splunk Splunk example
index=web_logs
| stats count
```
```kusto APL equivalent
['sample-http-logs']
| count
```
In ANSI SQL, you typically use `COUNT(*)` or `COUNT(field)` to count the number of rows in a table. In APL, the `count` operator achieves the same functionality, but it doesn’t require a field name or `*`.
```sql SQL example
SELECT COUNT(*) FROM web_logs;
```
```kusto APL equivalent
['sample-http-logs']
| count
```
## Usage
### Syntax
```kusto
| count
```
### Parameters
The `count` operator does not take any parameters. It simply returns the number of records in the dataset or query result.
### Returns
`count` returns an integer representing the total number of records in the dataset.
## Use case examples
In this example, you count the total number of HTTP requests in the `['sample-http-logs']` dataset.
**Query**
```kusto
['sample-http-logs']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 15000 |
This query returns the total number of HTTP requests recorded in the logs.
In this example, you count the number of traces in the `['otel-demo-traces']` dataset.
**Query**
```kusto
['otel-demo-traces']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 5000 |
This query returns the total number of OpenTelemetry traces in the dataset.
In this example, you count the number of security events in the `['sample-http-logs']` dataset where the status code indicates an error (status codes 4xx or 5xx).
**Query**
```kusto
['sample-http-logs']
| where status startswith '4' or status startswith '5'
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20startswith%20'4'%20or%20status%20startswith%20'5'%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 1200 |
This query returns the number of HTTP requests that resulted in an error (HTTP status code 4xx or 5xx).
## List of related operators
* [summarize](/apl/tabular-operators/summarize-operator): The `summarize` operator is used to aggregate data based on one or more fields, allowing you to calculate sums, averages, and other statistics, including counts. Use `summarize` when you need to group data before counting.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator adds calculated fields to a dataset. You can use `extend` alongside `count` if you want to add additional calculated data to your query results.
* [project](/apl/tabular-operators/project-operator): The `project` operator selects specific fields from a dataset. While `count` returns the total number of records, `project` can limit or change which fields you see.
* [where](/apl/tabular-operators/where-operator): The `where` operator filters rows based on a condition. Use `where` with `count` to only count records that meet certain criteria.
* [take](/apl/tabular-operators/take-operator): The `take` operator returns a specified number of records. You can use `take` to limit results before applying `count` if you're interested in counting a sample of records.
# distinct
Source: https://axiom.co/docs/apl/tabular-operators/distinct-operator
This page explains how to use the distinct operator in APL.
The `distinct` operator in APL (Axiom Processing Language) returns a unique set of values from a specified field or set of fields. This operator is useful when you need to filter out duplicate entries and focus only on distinct values, such as unique user IDs, event types, or error codes within your datasets. Use the `distinct` operator in scenarios where eliminating duplicates helps you gain clearer insights from your data, like when analyzing logs, monitoring system traces, or reviewing security incidents.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk’s SPL, the `dedup` command is often used to retrieve distinct values. In APL, the equivalent is the `distinct` operator, which behaves similarly by returning unique values but without necessarily ordering them.
```splunk Splunk example
index=web_logs
| dedup user_id
```
```kusto APL equivalent
['sample-http-logs']
| distinct id
```
In ANSI SQL, you use `SELECT DISTINCT` to return unique rows from a table. In APL, the `distinct` operator serves a similar function but is placed after the table reference rather than in the `SELECT` clause.
```sql SQL example
SELECT DISTINCT user_id FROM web_logs;
```
```kusto APL equivalent
['sample-http-logs']
| distinct id
```
## Usage
### Syntax
```kusto
| distinct FieldName1 [, FieldName2, ...]
```
### Parameters
* `FieldName1, FieldName2, ...`: The fields to include in the distinct operation. If you specify multiple fields, the result will include rows where the combination of values across these fields is unique.
### Returns
The `distinct` operator returns a dataset with unique values from the specified fields, removing any duplicate entries.
## Use case examples
In this use case, the `distinct` operator helps identify unique users who made HTTP requests in a system.
**Query**
```kusto
['sample-http-logs']
| distinct id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20id%22%7D)
**Output**
| id |
| --------- |
| user\_123 |
| user\_456 |
| user\_789 |
This query returns a list of unique user IDs that have made HTTP requests, filtering out duplicate user activity.
Here, the `distinct` operator is used to identify all unique services involved in traces.
**Query**
```kusto
['otel-demo-traces']
| distinct ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20distinct%20%5B'service.name'%5D%22%7D)
**Output**
| service.name |
| --------------------- |
| frontend |
| checkoutservice |
| productcatalogservice |
This query returns a distinct list of services involved in traces.
In this example, you use the `distinct` operator to find unique HTTP status codes from security logs.
**Query**
```kusto
['sample-http-logs']
| distinct status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20status%22%7D)
**Output**
| status |
| ------ |
| 200 |
| 404 |
| 500 |
This query provides a distinct list of HTTP status codes that occurred in the logs.
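The `distinct` operator also accepts multiple fields. For example, the following sketch on the same dataset returns each unique combination of status code and HTTP method:
```kusto
['sample-http-logs']
| distinct status, method
```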
## List of related operators
* [count](/apl/tabular-operators/count-operator): Returns the total number of rows. Use it to count occurrences of data rather than filtering for distinct values.
* [summarize](/apl/tabular-operators/summarize-operator): Allows you to aggregate data and perform calculations like sums or averages while grouping by distinct values.
* [project](/apl/tabular-operators/project-operator): Selects specific fields from the dataset. Use it when you want to control which fields are returned before applying `distinct`.
# extend
Source: https://axiom.co/docs/apl/tabular-operators/extend-operator
This page explains how to use the extend operator in APL.
The `extend` operator in APL allows you to create new calculated fields in your result set based on existing data. You can define expressions or functions to compute new values for each row, making `extend` particularly useful when you need to enrich your data without altering the original dataset. You typically use `extend` when you want to add additional fields to analyze trends, compare metrics, or generate new insights from your data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `eval` command is used to create new fields or modify existing ones. In APL, you can achieve this using the `extend` operator.
```sql Splunk example
index=myindex
| eval newField = duration * 1000
```
```kusto APL equivalent
['sample-http-logs']
| extend newField = req_duration_ms * 1000
```
In ANSI SQL, you typically use the `SELECT` clause with expressions to create new fields. In APL, `extend` is used instead to define these new computed fields.
```sql SQL example
SELECT id, req_duration_ms, req_duration_ms * 1000 AS newField FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend newField = req_duration_ms * 1000
```
## Usage
### Syntax
```kusto
| extend NewField = Expression
```
### Parameters
* `NewField`: The name of the new field to be created.
* `Expression`: The expression used to compute values for the new field. This can include mathematical operations, string manipulations, or functions.
### Returns
The operator returns a copy of the original dataset with the following changes:
* If a field name passed to `extend` already exists in the input, the existing field is replaced with the newly calculated values.
* If a field name passed to `extend` doesn’t exist in the input, a new field with the calculated values is appended.
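For example, the following sketch illustrates both behaviors: it replaces the existing `method` field with an uppercase version and appends a new `duration_sec` field. Adapt the field names to your own data:
```kusto
['sample-http-logs']
// 'method' already exists, so it is replaced; 'duration_sec' is new, so it is appended
| extend method = toupper(method), duration_sec = req_duration_ms / 1000
```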
## Use case examples
In log analysis, you can use `extend` to compute the duration of each request in seconds from a millisecond value.
**Query**
```kusto
['sample-http-logs']
| extend duration_sec = req_duration_ms / 1000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20duration_sec%20%3D%20req_duration_ms%20%2F%201000%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | duration\_sec |
| ------------------- | ----------------- | ---- | ------ | ----- | ------ | -------- | ----------- | ------------- |
| 2024-10-17 09:00:01 | 300 | 1234 | 200 | /home | GET | London | UK | 0.3 |
This query calculates the duration of HTTP requests in seconds by dividing the `req_duration_ms` field by 1000.
You can use `extend` to create a new field that categorizes the service type based on the service’s name.
**Query**
```kusto
['otel-demo-traces']
| extend service_type = iff(['service.name'] in ('frontend', 'frontendproxy'), 'Web', 'Backend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20extend%20service_type%20%3D%20iff%28%5B%27service.name%27%5D%20in%20%28%27frontend%27%2C%20%27frontendproxy%27%29%2C%20%27Web%27%2C%20%27Backend%27%29%22%7D)
**Output**
| \_time | span\_id | trace\_id | service.name | kind | status\_code | service\_type |
| ------------------- | -------- | --------- | --------------- | ------ | ------------ | ------------- |
| 2024-10-17 09:00:01 | abc123 | xyz789 | frontend | client | 200 | Web |
| 2024-10-17 09:00:01 | def456 | uvw123 | checkoutservice | server | 500 | Backend |
This query adds a new field `service_type` that categorizes the service into either Web or Backend based on the `service.name` field.
For security logs, you can use `extend` to categorize HTTP statuses as success or failure.
**Query**
```kusto
['sample-http-logs']
| extend status_category = iff(status == '200', 'Success', 'Failure')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status_category%20%3D%20iff%28status%20%3D%3D%20%27200%27%2C%20%27Success%27%2C%20%27Failure%27%29%22%7D)
**Output**
| \_time | id | status | uri | status\_category |
| ------------------- | ---- | ------ | ----- | ---------------- |
| 2024-10-17 09:00:01 | 1234 | 200 | /home | Success |
This query creates a new field `status_category` that labels each HTTP request as either a Success or Failure based on the status code.
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use `project` to select specific fields or rename them. Unlike `extend`, it does not add new fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data, which differs from `extend` that only adds new calculated fields without aggregation.
# extend-valid
Source: https://axiom.co/docs/apl/tabular-operators/extend-valid-operator
This page explains how to use the extend-valid operator in APL.
The `extend-valid` operator in Axiom Processing Language (APL) allows you to extend a set of fields with new calculated values, where these calculations are based on conditions of validity for each row. It’s particularly useful when working with datasets that contain missing or invalid data, as it enables you to calculate and assign values only when certain conditions are met. This operator helps you keep your data clean by applying calculations to valid data points, and leaving invalid or missing values untouched.
This is a shorthand operator that creates a field while also doing basic checks on the validity of the field. In many cases, additional checks are required; in those cases, use a combination of the [extend](/apl/tabular-operators/extend-operator) and [where](/apl/tabular-operators/where-operator) operators. The basic checks that Axiom performs depend on the type of the expression:
* **Dictionary:** Check if the dictionary is not null and has at least one entry.
* **Array:** Check if the array is not null and has at least one value.
* **String:** Check if the string is not empty and has at least one character.
* **Other types:** The same logic as `tobool` and a check for true.
You can use `extend-valid` to perform conditional transformations on large datasets, especially in scenarios where data quality varies or when dealing with complex log or telemetry data.
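When you need checks beyond the basic validity rules above, the recommended pattern is an explicit `extend` combined with `where`. The following sketch shows that pattern for the `method` field, assuming the `isnotempty` function for the string check:
```kusto
['sample-http-logs']
// Keep only rows where method is a non-empty string, then compute the new field
| where isnotempty(method)
| extend upper_method = toupper(method)
```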
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality is achieved using the `eval` function, but with the `if` command to handle conditional logic for valid or invalid data. In APL, `extend-valid` is more specialized for handling valid data points directly, allowing you to extend fields based on conditions.
```sql Splunk example
| eval new_field = if(isnotnull(field), field + 1, null())
```
```kusto APL equivalent
['sample-http-logs']
| extend-valid new_field = req_duration_ms + 100
```
In ANSI SQL, similar functionality is often achieved using the `CASE WHEN` expression within a `SELECT` statement to handle conditional logic for fields. In APL, `extend-valid` directly extends a field conditionally, based on the validity of the data.
```sql SQL example
SELECT CASE WHEN req_duration_ms IS NOT NULL THEN req_duration_ms + 100 ELSE NULL END AS new_field FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend-valid new_field = req_duration_ms + 100
```
## Usage
### Syntax
```kusto
| extend-valid FieldName1 = Expression1, FieldName2 = Expression2, FieldName3 = ...
```
### Parameters
* `FieldName`: The name of the field to create.
* `Expression`: The expression to evaluate and apply for valid rows.
### Returns
The operator returns a table where the specified fields are set to the result of the expression for valid rows. For rows that fail the validity checks, the new field is null and the original data remains unchanged.
## Use case examples
In this use case, you normalize the HTTP request methods by converting them to uppercase for valid entries.
**Query**
```kusto
['sample-http-logs']
| extend-valid upper_method = toupper(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20upper_method%20%3D%20toupper\(method\)%22%7D)
**Output**
| \_time | method | upper\_method |
| ------------------- | ------ | ------------- |
| 2023-10-01 12:00:00 | get | GET |
| 2023-10-01 12:01:00 | POST | POST |
| 2023-10-01 12:02:00 | NULL | NULL |
In this query, the `toupper` function converts the `method` field to uppercase, but only for valid entries. If the `method` field is null, the result remains null.
In this use case, you extract the first part of the service namespace (before the hyphen) from valid namespaces in the OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| extend-valid namespace_prefix = extract('^(.*?)-', 1, ['service.namespace'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend-valid%20namespace_prefix%20%3D%20extract\('%5E\(.*%3F\)-'%2C%201%2C%20%5B'service.namespace'%5D\)%22%7D)
**Output**
| \_time | service.namespace | namespace\_prefix |
| ------------------- | ------------------ | ----------------- |
| 2023-10-01 12:00:00 | opentelemetry-demo | opentelemetry |
| 2023-10-01 12:01:00 | opentelemetry-prod | opentelemetry |
| 2023-10-01 12:02:00 | NULL | NULL |
In this query, the `extract` function pulls the first part of the service namespace. It only applies to valid `service.namespace` values, leaving nulls unchanged.
In this use case, you extract the first letter of the city names from the `geo.city` field for valid log entries.
**Query**
```kusto
['sample-http-logs']
| extend-valid city_first_letter = extract('^([A-Za-z])', 1, ['geo.city'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20city_first_letter%20%3D%20extract\('%5E\(%5BA-Za-z%5D\)'%2C%201%2C%20%5B'geo.city'%5D\)%22%7D)
**Output**
| \_time | geo.city | city\_first\_letter |
| ------------------- | -------- | ------------------- |
| 2023-10-01 12:00:00 | New York | N |
| 2023-10-01 12:01:00 | NULL | NULL |
| 2023-10-01 12:02:00 | London | L |
| 2023-10-01 12:03:00 | 1Paris | NULL |
In this query, the `extract` function retrieves the first letter of the city names from the `geo.city` field for valid entries. If the `geo.city` field is null or starts with a non-alphabetical character, no city name is extracted, and the result remains null.
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add calculated fields unconditionally, without validating data.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and rename fields, without performing conditional extensions.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` for aggregation, often used before extending fields with further calculations.
# externaldata
Source: https://axiom.co/docs/apl/tabular-operators/externaldata-operator
This page explains how to use the externaldata operator in APL.
The `externaldata` operator in APL allows you to retrieve data from external storage sources, such as Azure Blob Storage, AWS S3, or HTTP endpoints, and use it within queries. You can specify the schema of the external data and query it as if it were a native dataset. This operator is useful when you need to analyze data that is stored externally without importing it into Axiom.
The `externaldata` operator currently supports external data sources with a maximum file size of 5 MB.
The `externaldata` operator is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk does not have a direct equivalent to `externaldata`, but you can use `inputlookup` or `| rest` commands to retrieve data from external sources.
```sql Splunk example
| inputlookup external_data.csv
```
```kusto APL equivalent
externaldata (id:string, timestamp:datetime) ["https://storage.example.com/data.csv"] with (format="csv")
```
In SQL, the equivalent approach is to use `OPENROWSET` to access external data stored in cloud storage.
```sql SQL example
SELECT * FROM OPENROWSET(BULK 'https://storage.example.com/data.csv', FORMAT = 'CSV') AS data;
```
```kusto APL equivalent
externaldata (id:string, timestamp:datetime) ["https://storage.example.com/data.csv"] with (format="csv")
```
## Usage
### Syntax
```kusto
externaldata (FieldName1:FieldType1, FieldName2:FieldType2, ...) ["URL1", "URL2", ...] [with (format = "FormatType", ignoreFirstRecord=false)]
```
### Parameters
| Parameter | Description |
| --------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `FieldName1:FieldType1, FieldName2:FieldType2, ...` | Defines the schema of the external data. |
| `URL1, URL2, ...` | The external storage URIs where the source data resides. |
| `format` | Optional: Specifies the file format. The supported types are `csv`, `scsv`, `tsv`, `psv`, `json`, `multijson`, `raw`, `txt`. |
| `ignoreFirstRecord` | Optional: A Boolean value that specifies whether to ignore the first record in the external data sources. The default is false. Use this property for CSV files with headers. |
### Returns
The operator returns a table with the specified schema, containing data retrieved from the external source.
## Use case examples
You have an Axiom dataset that contains access logs with a field `employeeID`. You want to add extra information to your APL query by cross-referencing each employee ID in the Axiom dataset with an employee ID defined in an external lookup table. The lookup table is hosted somewhere else in CSV format.
**External lookup table**
```
employeeID, email, name, location
00001, tifa@acme.com, Tifa Lockhart, US
00002, barret@acme.com, Barret Wallace, Europe
00003, cid@acme.com, Cid Highwind, Europe
```
**Query**
```kusto
let employees = externaldata (employeeID: string, email: string, name: string, location: string) ["http://example.com/lookup-table.csv"] with (format="csv", ignoreFirstRecord=true);
accessLogs
| where severity == "high"
| lookup employees on employeeID
| project _time, severity, employeeID, email, name
```
**Output**
| \_time | severity | employeeID | email | name |
| ---------------- | -------- | ---------- | ----------------------------------------- | -------------- |
| Mar 13, 10:08:23 | high | 00001 | [tifa@acme.com](mailto:tifa@acme.com) | Tifa Lockhart |
| Mar 13, 10:05:03 | high | 00001 | [tifa@acme.com](mailto:tifa@acme.com) | Tifa Lockhart |
| Mar 13, 10:04:51 | high | 00003 | [cid@acme.com](mailto:cid@acme.com) | Cid Highwind |
| Mar 13, 10:02:29 | high | 00002 | [barret@acme.com](mailto:barret@acme.com) | Barret Wallace |
| Mar 13, 10:01:13 | high | 00001 | [tifa@acme.com](mailto:tifa@acme.com) | Tifa Lockhart |
This example extends the original dataset with the fields `email` and `name`. These new fields come from the external lookup table.
Use a lookup table from an external source to extend an OTel logs dataset with a field that contains human-readable names for each service.
**External lookup table**
```
serviceName,humanreadableServiceName
frontend,Frontend
frontendproxy,Frontendproxy
flagd,Flagd
productcatalogservice,Productcatalog
loadgenerator,Loadgenerator
checkoutservice,Checkout
cartservice,Cart
recommendationservice,Recommendations
emailservice,Email
adservice,Ads
shippingservice,Shipping
quoteservice,Quote
currencyservice,Currency
paymentservice,Payment
frauddetectionservice,Frauddetection
```
**Query**
```kusto
let LookupTable = externaldata (serviceName: string, humanreadableServiceName: string) ["http://example.com/lookup-table.csv"] with (format="csv", ignoreFirstRecord=true);
['otel-demo-traces']
| lookup kind=leftouter LookupTable on $left.['service.name'] == $right.serviceName
| project _time, span_id, ['service.name'], humanreadableServiceName
```
**Output**
| \_time | span\_id | service.name | humanreadableServiceName |
| ---------------- | ---------------- | --------------------- | ------------------------ |
| Mar 13, 10:02:28 | 398050797bb646ef | flagd | Flagd |
| Mar 13, 10:02:28 | 0ccd6baca8bea890 | flagd | Flagd |
| Mar 13, 10:02:28 | 2e579cbb3632381a | flagd | Flagd |
| Mar 13, 10:02:29 | 468be2336e35ca32 | loadgenerator | Loadgenerator |
| Mar 13, 10:02:29 | e06348cc4b50ab0d | frontend | Frontend |
| Mar 13, 10:02:29 | 74571a6fa797f769 | frontendproxy | Frontendproxy |
| Mar 13, 10:02:29 | 7ab5eb0a5cd2e0cd | frontendproxy | Frontendproxy |
| Mar 13, 10:02:29 | 050cf3e9ab7efdda | frontend | Frontend |
| Mar 13, 10:02:29 | b2882e3343414175 | frontend | Frontend |
| Mar 13, 10:02:29 | fd7c06a6a746f3e2 | frontend | Frontend |
| Mar 13, 10:02:29 | 606d8a818bec7637 | productcatalogservice | Productcatalog |
## List of related operators
* [lookup](/apl/tabular-operators/lookup-operator): Performs joins between a dataset and an external table.
* [union](/apl/tabular-operators/union-operator): Merges multiple datasets, including external ones.
# getschema
Source: https://axiom.co/docs/apl/tabular-operators/getschema-operator
This page explains how to use the getschema operator in APL.
The `getschema` operator in APL returns the schema of a dataset, including field names and their data types. You can use it to inspect the structure of a dataset before performing queries or transformations. This operator is useful when exploring new datasets, verifying data consistency, or debugging queries.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you can use the `fieldsummary` command to get schema-related information about a dataset. However, `getschema` in APL is more direct and focused specifically on returning field names and types without additional summary statistics.
```sql Splunk example
| fieldsummary
```
```kusto APL equivalent
['sample-http-logs']
| getschema
```
In ANSI SQL, retrieving schema information is typically done using `INFORMATION_SCHEMA` queries. APL’s `getschema` operator provides a more straightforward way to get schema details without requiring system views.
```sql SQL example
SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'sample_http_logs';
```
```kusto APL equivalent
['sample-http-logs']
| getschema
```
## Usage
### Syntax
```kusto
| getschema
```
### Parameters
The `getschema` operator does not take any parameters.
### Returns
| Field | Type | Description |
| ------------- | ------ | ----------------------------------------------------- |
| ColumnName | string | The name of the field in the dataset. |
| ColumnOrdinal | number | The index number of the field in the dataset. |
| ColumnType | string | The data type of the field. |
| DataType | string | The APL-internal name for the data type of the field. |
## Use case example
You can use `getschema` to explore the schema of your log data before running queries.
**Query**
```kusto
['sample-http-logs'] | getschema
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20getschema%22%7D)
**Output**
| ColumnName | DataType | ColumnOrdinal | ColumnType |
| ------------- | -------- | ------------- | ---------- |
| \_sysTime | datetime | 0 | datetime |
| \_time | datetime | 1 | datetime |
| content\_type | string | 2 | string |
| geo.city | string | 3 | string |
| geo.country | string | 4 | string |
| id | string | 5 | string |
This query helps you verify the available fields and their data types before further analysis.
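Because `getschema` returns a regular table, you can pipe its output into other operators. For example, the following sketch lists only the string fields, assuming the schema table can be filtered like any other query result:
```kusto
['sample-http-logs']
| getschema
// Keep only fields whose type is string and show their names
| where ColumnType == 'string'
| project ColumnName
```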
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use `project` to select specific fields instead of retrieving the entire schema.
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add new computed fields to your dataset after understanding the schema.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` for aggregations once you verify field types using `getschema`.
* [where](/apl/tabular-operators/where-operator): Use `where` to filter datasets based on field values after checking their schema.
* [order](/apl/tabular-operators/order-operator): Use `order by` to sort datasets after verifying schema details.
# join
Source: https://axiom.co/docs/apl/tabular-operators/join-operator
This page explains how to use the join operator in APL.
The `join` operator in Axiom Processing Language (APL) combines rows from two datasets based on matching values in specified columns. Use `join` to correlate data from different sources or datasets, such as linking logs to traces or enriching logs with additional metadata.
This operator is useful when you want to:
* Combine information from two datasets with shared keys.
* Analyze relationships between different types of events.
* Enrich existing data with supplementary details.
The `join` operator is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
The preview of the `join` operator works with variable limits depending on the structure of your dataset. For the left side of the join, the limit is 50,000 rows when the dataset has fewer than 100 fields. This limit decreases linearly as the field count increases. For example, the limit is 25,000 rows when your dataset has 200 fields, 12,500 rows at 400 fields, and 10,000 rows at more than 500 fields. The right side of the join has a consistent limit of 50,000 rows.
## Kinds of join
The kinds of join and their typical use cases are the following:
* `inner` (default): Returns rows where the join conditions exist in both datasets. All matching rows from the right dataset are included for each matching row in the left dataset. Useful to retain all matches without limiting duplicates.
* `innerunique`: Matches rows from both datasets where the join conditions exist in both. For each row in the left dataset, only the first matching row from the right dataset is returned. Optimized for performance when duplicate matching rows on the right dataset are irrelevant.
* `leftouter`: Returns all rows from the left dataset. If a match exists in the right dataset, the matching rows are included; otherwise, columns from the right dataset are `null`. Retains all data from the left dataset, enriching it with matching data from the right dataset.
* `rightouter`: Returns all rows from the right dataset. If a match exists in the left dataset, the matching rows are included; otherwise, columns from the left dataset are `null`. Retains all data from the right dataset, enriching it with matching data from the left dataset.
* `fullouter`: Returns all rows from both datasets. Matching rows are combined, while non-matching rows from either dataset are padded with `null` values. Combines both datasets while retaining unmatched rows from both sides.
* `leftanti`: Returns rows from the left dataset that have no matches in the right dataset. Identifies rows in the left dataset that do not have corresponding entries in the right dataset.
* `rightanti`: Returns rows from the right dataset that have no matches in the left dataset. Identifies rows in the right dataset that do not have corresponding entries in the left dataset.
* `leftsemi`: Returns rows from the left dataset that have at least one match in the right dataset. Only columns from the left dataset are included. Filters rows in the left dataset based on existence in the right dataset.
* `rightsemi`: Returns rows from the right dataset that have at least one match in the left dataset. Only columns from the right dataset are included. Filters rows in the right dataset based on existence in the left dataset.
The preview of the `join` operator currently only supports `inner` join. Support for other kinds of join is coming soon.
### Summary of kinds of join
| Kind of join | Behavior | Matches returned |
| ------------- | --------------------------------------------------------------------- | ---------------------------------- |
| `inner` | All matches between left and right datasets | Multiple matches allowed |
| `innerunique` | First match for each row in the left dataset | Only unique matches |
| `leftouter` | All rows from the left, with matching rows from the right or `null` | Left-dominant |
| `rightouter` | All rows from the right, with matching rows from the left or `null` | Right-dominant |
| `fullouter` | All rows from both datasets, with unmatched rows padded with `null` | Complete join |
| `leftanti` | Rows in the left dataset with no matches in the right dataset | No matches |
| `rightanti` | Rows in the right dataset with no matches in the left dataset | No matches |
| `leftsemi` | Rows in the left dataset with at least one match in the right dataset | Matching rows (left dataset only) |
| `rightsemi` | Rows in the right dataset with at least one match in the left dataset | Matching rows (right dataset only) |
### Choose the right kind of join
* Use `inner` for standard joins where you need all matches.
* Use `leftouter` or `rightouter` when you need to retain all rows from one dataset.
* Use `leftanti` or `rightanti` to find rows that do not match.
* Use `fullouter` for complete combinations of both datasets.
* Use `leftsemi` or `rightsemi` to filter rows based on existence in another dataset.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `join` operator in APL works similarly to the `join` command in Splunk SPL. However, APL provides additional flexibility by supporting various join types (e.g., `inner`, `outer`, `leftouter`). Splunk uses a single default join type.
```sql Splunk example
index=logs | join type=inner [search index=traces]
```
```kusto APL equivalent
['sample-http-logs']
| join kind=inner ['otel-demo-traces'] on id == trace_id
```
The `join` operator in APL resembles SQL joins but uses distinct syntax. SQL uses `FROM` and `ON` clauses, whereas APL uses the `join` operator with explicit `kind` and `on` clauses.
```sql SQL example
SELECT *
FROM logs
JOIN traces
ON logs.id = traces.trace_id
```
```kusto APL equivalent
['sample-http-logs']
| join kind=inner ['otel-demo-traces'] on id == trace_id
```
## Usage
### Syntax
```kusto
LeftDataset
| join kind=KindOfJoin RightDataset on Conditions
```
### Parameters
* `LeftDataset`: The first dataset, also known as the outer dataset or the left side of the join. If you expect one of the datasets to contain consistently less data than the other, specify the smaller dataset as the left side of the join.
* `RightDataset`: The second dataset, also known as the inner dataset or the right side of the join.
* `KindOfJoin`: Optionally, the [kind of join](#kinds-of-join) to perform.
* `Conditions`: The conditions for matching rows. The conditions are equality expressions that determine how Axiom matches rows from the `LeftDataset` (left side of the equality expression) with rows from the `RightDataset` (right side of the equality expression). The two sides of the equality expression must have the same data type.
* To join datasets on a field that has the same name in the two datasets, simply use the field name. For example, `on id`.
* To join datasets on a field that has different names in the two datasets, define the two field names in an equality expression such as `on id == trace_id`.
* You can use expressions in the join conditions. For example, to compare two fields of different data types, use `on id_string == tostring(trace_id_int)`.
* You can define multiple join conditions. To separate conditions, use commas (`,`). Don’t use `and`. For example, `on id == trace_id, span == span_id`.
### Returns
The `join` operator returns a new table containing rows that match the specified join condition. The fields from the left and right datasets are included.
## Use case example
Join HTTP logs with trace data to correlate user activity with performance metrics.
**Query**
```kusto
['otel-demo-traces']
| join kind=inner ['otel-demo-logs'] on trace_id
```
**Output**
| \_time | trace\_id | span\_id | service.name | duration |
| ---------- | --------- | -------- | ------------ | -------- |
| 2024-12-01 | trace123 | span123 | frontend | 500ms |
This query links user activity in HTTP logs to trace data to investigate performance issues.
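As noted in the Parameters section, you can also join on multiple conditions or on expressions. The following sketch is purely hypothetical: the dataset names and field pairings are placeholders that illustrate the syntax only:
```kusto
// Hypothetical sketch: 'dataset-a', 'dataset-b', and the field names are placeholders
['dataset-a']
| join kind=inner ['dataset-b'] on user_id == id, tostring(session) == session_id
```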
## List of related operators
* [union](/apl/tabular-operators/union-operator): Combines rows from multiple datasets without requiring a matching condition.
* [where](/apl/tabular-operators/where-operator): Filters rows based on conditions, often used with `join` for more precise results.
# limit
Source: https://axiom.co/docs/apl/tabular-operators/limit-operator
This page explains how to use the limit operator in APL.
The `limit` operator in Axiom Processing Language (APL) allows you to restrict the number of rows returned from a query. It is particularly useful when you want to see only a subset of results from large datasets, such as when debugging or previewing query outputs. The `limit` operator can help optimize performance and focus analysis by reducing the amount of data processed.
Use the `limit` operator when you want to return only the top rows from a dataset, especially in cases where the full result set is not necessary.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the equivalent to APL’s `limit` is the `head` command, which also returns the top rows of a dataset. The main difference is in the syntax.
```sql Splunk example
| head 10
```
```kusto APL equivalent
['sample-http-logs']
| limit 10
```
In ANSI SQL, the `LIMIT` clause is equivalent to the `limit` operator in APL. The SQL `LIMIT` statement is placed at the end of a query, whereas in APL, the `limit` operator comes after the dataset reference.
```sql SQL example
SELECT * FROM sample_http_logs LIMIT 10;
```
```kusto APL equivalent
['sample-http-logs']
| limit 10
```
## Usage
### Syntax
```kusto
| limit [N]
```
### Parameters
* `N`: The maximum number of rows to return. This must be a non-negative integer.
### Returns
The `limit` operator returns the top **`N`** rows from the input dataset. If fewer than **`N`** rows are available, all rows are returned.
## Use case examples
In log analysis, you often want to view only the most recent entries, and `limit` helps you narrow the focus to those rows.
**Query**
```kusto
['sample-http-logs']
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20limit%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | -------------- | ------ | -------- | ----------- |
| 2024-10-17T12:00:00 | 200 | 123 | 200 | /index.html | GET | New York | USA |
| 2024-10-17T11:59:59 | 300 | 124 | 404 | /notfound.html | GET | London | UK |
This query limits the output to the first 5 rows from the `['sample-http-logs']` dataset, returning recent HTTP log entries.
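If you specifically want the most recent rows rather than an arbitrary subset, sort before limiting. A minimal sketch that combines `limit` with the `order` operator documented later in this section:
```kusto
['sample-http-logs']
// sort newest first, then keep only the first 5 rows
| order by _time desc
| limit 5
```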
When analyzing OpenTelemetry traces, you may want to focus on the most recent traces.
**Query**
```kusto
['otel-demo-traces']
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20limit%205%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ |
| 2024-10-17T12:00:00 | 500ms | 1abc | 123xyz | frontend | server | OK |
| 2024-10-17T11:59:59 | 200ms | 2def | 124xyz | cartservice | client | OK |
This query retrieves the first 5 rows from the `['otel-demo-traces']` dataset, helping you analyze the latest traces.
For security log analysis, you might want to review the most recent login attempts to ensure no anomalies exist.
**Query**
```kusto
['sample-http-logs']
| where status == '401'
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'401'%20%7C%20limit%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | ----------- | ------ | -------- | ----------- |
| 2024-10-17T12:00:00 | 300 | 567 | 401 | /login.html | POST | Berlin | Germany |
| 2024-10-17T11:59:59 | 250 | 568 | 401 | /login.html | POST | Sydney | Australia |
This query limits the output to 5 unauthorized access attempts (`401` status code) from the `['sample-http-logs']` dataset.
## List of related operators
* [take](/apl/tabular-operators/take-operator): Returns the specified number of rows from the dataset, similar to `limit`.
* [top](/apl/tabular-operators/top-operator): Retrieves the top **N** rows sorted by a specific field.
* [sample](/apl/tabular-operators/sample-operator): Randomly samples **N** rows from the dataset.
# lookup
Source: https://axiom.co/docs/apl/tabular-operators/lookup-operator
This page explains how to use the lookup operator in APL.
The `lookup` operator extends a primary dataset with a lookup table based on a specified key column. It retrieves matching rows from the lookup table and appends relevant fields to the primary dataset. You can use `lookup` for enriching event data, adding contextual information, or correlating logs with reference tables.
The `lookup` operator is useful when:
* You need to enrich log events with additional metadata, such as mapping user IDs to user profiles.
* You want to correlate security logs with threat intelligence feeds.
* You need to extend OpenTelemetry traces with supplementary details, such as service dependencies.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `lookup` command performs a similar function by enriching event data with fields from an external lookup table. However, unlike Splunk, APL’s `lookup` operator supports only two kinds of join: `leftouter` (the default) and `inner`.
```sql Splunk example
index=web_logs | lookup port_lookup port AS client_port OUTPUT service_name
```
```kusto APL equivalent
['sample-http-logs']
| lookup kind=inner ['port_lookup'] on port
```
In ANSI SQL, `lookup` is similar to an `INNER JOIN`, where records from both tables are matched based on a common key. Unlike SQL, APL’s `lookup` supports only the `leftouter` and `inner` join kinds.
```sql SQL example
SELECT logs.*, ports.service_name
FROM logs
INNER JOIN port_lookup ports ON logs.port = ports.port;
```
```kusto APL equivalent
['sample-http-logs']
| lookup kind=inner ['port_lookup'] on port
```
## Usage
### Syntax
```kusto
PrimaryDataset
| lookup kind=KindOfLookup LookupTable on Conditions
```
### Parameters
* `PrimaryDataset`: The primary dataset that you want to extend. If you expect one of the tables to contain consistently more data than the other, specify the larger table as the primary dataset.
* `LookupTable`: The data table containing additional data, also known as the dimension table or lookup table.
* `KindOfLookup`: Optionally, specifies the lookup type as `leftouter` or `inner`. The default is `leftouter`.
* `leftouter` lookup includes all rows from the primary dataset even if they don’t match the conditions. In unmatched rows, the new fields contain nulls.
* `inner` lookup only includes rows from the primary dataset if they match the conditions. Unmatched rows are excluded from the output.
* `Conditions`: The conditions for matching rows from `PrimaryDataset` to rows from `LookupTable`. The conditions are equality expressions that determine how Axiom matches rows from the `PrimaryDataset` (left side of the equality expression) with rows from the `LookupTable` (right side of the equality expression). The two sides of the equality expression must have the same data type.
* To use `lookup` on a key column that has the same name in the primary dataset and the lookup table, simply use the field name. For example, `on id`.
* To use `lookup` on a key column that has different names in the primary dataset and the lookup table, define the two field names in an equality expression such as `on id == trace_id`.
* You can define multiple conditions. To separate conditions, use commas (`,`). Don’t use `and`. For example, `on id == trace_id, span == span_id`.
### Returns
A dataset where rows from `PrimaryDataset` are enriched with matching columns from `LookupTable` based on the key column.
## Use case example
Add a field with human-readable names for each service.
**Query**
```kusto
let LookupTable=datatable(serviceName:string, humanreadableServiceName:string)[
'frontend', 'Frontend',
'frontendproxy', 'Frontend proxy',
'flagd', 'Flagd',
'productcatalogservice', 'Product catalog',
'loadgenerator', 'Load generator',
'checkoutservice', 'Checkout',
'cartservice', 'Cart',
'recommendationservice', 'Recommendations',
'emailservice', 'Email',
'adservice', 'Ads',
'shippingservice', 'Shipping',
'quoteservice', 'Quote',
'currencyservice', 'Currency',
'paymentservice', 'Payment',
'frauddetectionservice', 'Fraud detection',
];
['otel-demo-traces']
| lookup kind=leftouter LookupTable on $left.['service.name'] == $right.serviceName
| project _time, span_id, ['service.name'], humanreadableServiceName
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22let%20LookupTable%3Ddatatable\(serviceName%3Astring%2C%20humanreadableServiceName%3Astring\)%5B%20'frontend'%2C%20'Frontend'%2C%20'frontendproxy'%2C%20'Frontend%20proxy'%2C%20'flagd'%2C%20'Flagd'%2C%20'productcatalogservice'%2C%20'Product%20catalog'%2C%20'loadgenerator'%2C%20'Load%20generator'%2C%20'checkoutservice'%2C%20'Checkout'%2C%20'cartservice'%2C%20'Cart'%2C%20'recommendationservice'%2C%20'Recommendations'%2C%20'emailservice'%2C%20'Email'%2C%20'adservice'%2C%20'Ads'%2C%20'shippingservice'%2C%20'Shipping'%2C%20'quoteservice'%2C%20'Quote'%2C%20'currencyservice'%2C%20'Currency'%2C%20'paymentservice'%2C%20'Payment'%2C%20'frauddetectionservice'%2C%20'Fraud%20detection'%2C%20%5D%3B%20%5B'otel-demo-traces'%5D%20%7C%20lookup%20kind%3Dleftouter%20LookupTable%20on%20%24left.%5B'service.name'%5D%20%3D%3D%20%24right.serviceName%20%7C%20project%20_time%2C%20span_id%2C%20%5B'service.name'%5D%2C%20humanreadableServiceName%22%7D)
**Output**
| \_time | span\_id | service.name | humanreadableServiceName |
| ---------------- | ---------------- | ------------- | ------------------------ |
| Feb 27, 12:01:55 | 15bf0a95dfbfcd77 | loadgenerator | Load generator |
| Feb 27, 12:01:55 | 86c27626407be459 | frontendproxy | Frontend proxy |
| Feb 27, 12:01:55 | 89d9b5687056b1cf | frontendproxy | Frontend proxy |
| Feb 27, 12:01:55 | bbc1bac7ebf6ce8a | frontend | Frontend |
| Feb 27, 12:01:55 | cd12307e154a4817 | frontend | Frontend |
| Feb 27, 12:01:55 | 21fd89efd3d36b15 | frontend | Frontend |
| Feb 27, 12:01:55 | c6e8db2d149ab273 | frontend | Frontend |
| Feb 27, 12:01:55 | fd569a8fce7a8446 | cartservice | Cart |
| Feb 27, 12:01:55 | ed61fac37e9bf220 | loadgenerator | Load generator |
| Feb 27, 12:01:55 | 83fdf8a30477e726 | frontend | Frontend |
| Feb 27, 12:01:55 | 40d94294da7b04ce | frontendproxy | Frontend proxy |
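To exclude spans whose service name has no entry in the lookup table, run the same query with `kind=inner`. A minimal sketch, reusing the `let LookupTable=datatable(...)` definition from the query above:
```kusto
['otel-demo-traces']
// unmatched rows are dropped instead of padded with nulls
| lookup kind=inner LookupTable on $left.['service.name'] == $right.serviceName
| project _time, span_id, ['service.name'], humanreadableServiceName
```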
## List of related operators
* [join](/apl/tabular-operators/join-operator): Performs more flexible join operations, including left, right, and outer joins.
* [project](/apl/tabular-operators/project-operator): Selects specific columns from a dataset, which can be used to refine the output of a lookup operation.
* [union](/apl/tabular-operators/union-operator): Combines multiple datasets without requiring a key column.
# order
Source: https://axiom.co/docs/apl/tabular-operators/order-operator
This page explains how to use the order operator in APL.
The `order` operator in Axiom Processing Language (APL) allows you to sort the rows of a result set by one or more specified fields. You can use this operator to organize data for easier interpretation, prioritize specific values, or prepare data for subsequent analysis steps. The `order` operator is particularly useful when working with logs, telemetry data, or any dataset where ranking or sorting by values (such as time, status, or user ID) is necessary.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent operator to `order` is `sort`. SPL uses a similar syntax to APL but with some differences. In SPL, `sort` allows both ascending (`asc`) and descending (`desc`) sorting, while in APL, you specify the sort order with the `asc` and `desc` keywords after the field name.
```splunk Splunk example
| sort - _time
```
```kusto APL equivalent
['sample-http-logs']
| order by _time desc
```
In ANSI SQL, the equivalent of `order` is `ORDER BY`. SQL uses `ASC` for ascending and `DESC` for descending order. In APL, sorting works similarly, with the `asc` and `desc` keywords added after field names to specify the order.
```sql SQL example
SELECT * FROM logs ORDER BY _time DESC;
```
```kusto APL equivalent
['sample-http-logs']
| order by _time desc
```
## Usage
### Syntax
```kusto
| order by FieldName [asc | desc], FieldName [asc | desc]
```
### Parameters
* `FieldName`: The name of the field by which to sort.
* `asc`: Sorts the field in ascending order.
* `desc`: Sorts the field in descending order.
### Returns
The `order` operator returns the input dataset, sorted according to the specified fields and order (ascending or descending). If multiple fields are specified, sorting is done based on the first field, then by the second if values in the first field are equal, and so on.
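For example, the following minimal sketch sorts primarily by status and breaks ties by request duration, using fields from the `['sample-http-logs']` dataset featured in the examples below:
```kusto
['sample-http-logs']
// sort by status first; rows with equal status are sorted by descending duration
| order by status asc, req_duration_ms desc
```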
## Use case examples
In this example, you sort HTTP logs by request duration in descending order to prioritize the longest requests.
**Query**
```kusto
['sample-http-logs']
| order by req_duration_ms desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20req_duration_ms%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ------ | ------ | -------------------- | ------ | -------- | ----------- |
| 2024-10-17 10:10:01 | 1500 | user12 | 200 | /api/v1/get-orders | GET | Seattle | US |
| 2024-10-17 10:09:47 | 1350 | user23 | 404 | /api/v1/get-products | GET | New York | US |
| 2024-10-17 10:08:21 | 1200 | user45 | 500 | /api/v1/post-order | POST | London | UK |
This query sorts the logs by request duration, helping you identify which requests are taking the most time to complete.
In this example, you sort OpenTelemetry trace data by span duration in descending order, which helps you identify the longest-running spans across your services.
**Query**
```kusto
['otel-demo-traces']
| order by duration desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20order%20by%20duration%20desc%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------------- | ------ | ------------ |
| 2024-10-17 10:10:01 | 15.3s | span4567 | trace123 | frontend | server | 200 |
| 2024-10-17 10:09:47 | 12.4s | span8910 | trace789 | checkoutservice | client | 200 |
| 2024-10-17 10:08:21 | 10.7s | span1112 | trace456 | productcatalogservice | server | 500 |
This query helps you detect performance bottlenecks by sorting spans based on their duration.
In this example, you analyze security logs by sorting them by time to view the most recent logs.
**Query**
```kusto
['sample-http-logs']
| order by _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20_time%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ------ | ------ | ---------------------- | ------ | -------- | ----------- |
| 2024-10-17 10:10:01 | 300 | user34 | 200 | /api/v1/login | POST | Berlin | DE |
| 2024-10-17 10:09:47 | 150 | user78 | 401 | /api/v1/get-profile | GET | Paris | FR |
| 2024-10-17 10:08:21 | 200 | user56 | 500 | /api/v1/update-profile | PUT | Madrid | ES |
This query sorts the security logs by time to display the most recent log entries first, helping you quickly review recent security events.
## List of related operators
* [top](/apl/tabular-operators/top-operator): The `top` operator returns the top N records based on specific sorting criteria, which is similar to `order` but only retrieves a fixed number of results.
* [summarize](/apl/tabular-operators/summarize-operator): The `summarize` operator groups data and often works in combination with `order` to rank summarized values.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator can be used to create calculated fields, which can then be used as sorting criteria in the `order` operator.
# Tabular operators
Source: https://axiom.co/docs/apl/tabular-operators/overview
This section explains how to use and combine tabular operators in APL.
The table summarizes the tabular operators available in APL.
| Function | Description |
| ------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [count](/apl/tabular-operators/count-operator) | Returns an integer representing the total number of records in the dataset. |
| [distinct](/apl/tabular-operators/distinct-operator) | Returns a dataset with unique values from the specified fields, removing any duplicate entries. |
| [extend](/apl/tabular-operators/extend-operator) | Returns the original dataset with one or more new fields appended, based on the defined expressions. |
| [extend-valid](/apl/tabular-operators/extend-valid-operator) | Returns a table where the specified fields are extended with new values based on the given expression for valid rows. |
| [externaldata](/apl/tabular-operators/externaldata-operator) | Returns a table with the specified schema, containing data retrieved from an external source. |
| [getschema](/apl/tabular-operators/getschema-operator)             | Returns the schema of a dataset, including field names and their data types.                                                                               |
| [join](/apl/tabular-operators/join-operator) | Returns a dataset containing rows from two different tables based on conditions. |
| [limit](/apl/tabular-operators/limit-operator) | Returns the top N rows from the input dataset. |
| [lookup](/apl/tabular-operators/lookup-operator) | Returns a dataset where rows from one dataset are enriched with matching columns from a lookup table based on conditions. |
| [order](/apl/tabular-operators/order-operator) | Returns the input dataset, sorted according to the specified fields and order. |
| [parse](/apl/tabular-operators/parse-operator) | Returns the input dataset with new fields added based on the specified parsing pattern. |
| [project](/apl/tabular-operators/project-operator) | Returns a dataset containing only the specified fields. |
| [project-away](/apl/tabular-operators/project-away-operator) | Returns the input dataset excluding the specified fields. |
| [project-keep](/apl/tabular-operators/project-keep-operator) | Returns a dataset with only the specified fields. |
| [project-reorder](/apl/tabular-operators/project-reorder-operator) | Returns a table with the specified fields reordered as requested, followed by any unspecified fields in their original order.                              |
| [redact](/apl/tabular-operators/redact-operator) | Returns the input dataset with sensitive data replaced or hashed. |
| [sample](/apl/tabular-operators/sample-operator) | Returns a table containing the specified number of rows, selected randomly from the input dataset. |
| [search](/apl/tabular-operators/search-operator) | Returns all rows where the specified keyword appears in any field. |
| [sort](/apl/tabular-operators/sort-operator) | Returns a table with rows ordered based on the specified fields. |
| [summarize](/apl/tabular-operators/summarize-operator) | Returns a table where each row represents a unique combination of values from the by fields, with the aggregated results calculated for the other fields. |
| [take](/apl/tabular-operators/take-operator) | Returns the specified number of rows from the dataset. |
| [top](/apl/tabular-operators/top-operator) | Returns the top N rows from the dataset based on the specified sorting criteria. |
| [union](/apl/tabular-operators/union-operator) | Returns all rows from the specified tables or queries. |
| [where](/apl/tabular-operators/where-operator) | Returns a filtered dataset containing only the rows where the condition evaluates to true. |
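As a minimal sketch of how several of these operators combine in one pipeline, the following query filters, aggregates, sorts, and limits the `['sample-http-logs']` dataset used throughout this section:
```kusto
['sample-http-logs']
// keep only server errors
| where status == '500'
// count errors per country
| summarize errorCount = count() by ['geo.country']
// rank countries by error count and keep the top 10
| order by errorCount desc
| limit 10
```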
# parse
Source: https://axiom.co/docs/apl/tabular-operators/parse-operator
This page explains how to use the parse operator function in APL.
The `parse` operator in APL enables you to extract and structure information from unstructured or semi-structured text data, such as log files or strings. You can use the operator to specify a pattern for parsing the data and define the fields to extract. This is useful when analyzing logs, tracing information from text fields, or extracting key-value pairs from message formats.
You can find the `parse` operator helpful when you need to process raw text fields and convert them into a structured format for further analysis. It’s particularly effective when working with data that doesn't conform to a fixed schema, such as log entries or custom messages.
## Importance of the parse operator
* **Data extraction:** It allows you to extract structured data from unstructured or semi-structured string fields, enabling you to transform raw data into a more usable format.
* **Flexibility:** The parse operator supports different parsing modes (simple, relaxed, regex) and provides various options to define parsing patterns, making it adaptable to different data formats and requirements.
* **Performance:** By extracting only the necessary information from string fields, the parse operator helps optimize query performance by reducing the amount of data processed and enabling more efficient filtering and aggregation.
* **Readability:** The parse operator provides a clear and concise way to define parsing patterns, making the query code more readable and maintainable.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `rex` command is often used to extract fields from raw events or text. In APL, the `parse` operator performs a similar function. You define the text pattern to match and extract fields, allowing you to extract structured data from unstructured strings.
```splunk Splunk example
index=web_logs | rex field=_raw "duration=(?<req_duration_ms>\d+)"
```
```kusto APL equivalent
['sample-http-logs']
| parse uri with * "duration=" req_duration_ms:int
```
In ANSI SQL, there isn’t a direct equivalent to the `parse` operator. Typically, you use string functions such as `SUBSTRING` or `REGEXP` to extract parts of a text field. However, APL’s `parse` operator simplifies this process by allowing you to define a text pattern and extract multiple fields in a single statement.
```sql SQL example
SELECT SUBSTRING(uri, CHARINDEX('duration=', uri) + 9, 3) AS req_duration_ms
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| parse uri with * "duration=" req_duration_ms:int
```
## Usage
### Syntax
```kusto
| parse [kind=simple|regex|relaxed] Expression with [*] StringConstant FieldName [: FieldType] [*] ...
```
### Parameters
* `kind`: Optional parameter to specify the parsing mode. Its value can be `simple` for exact matches, `regex` for regular expressions, or `relaxed` for relaxed parsing. The default is `simple`.
* `Expression`: The string expression to parse.
* `StringConstant`: A string literal or regular expression pattern to match against.
* `FieldName`: The name of the field to assign the extracted value.
* `FieldType`: Optional parameter to specify the data type of the extracted field. The default is `string`.
* `*`: Wildcard to match any characters before or after the `StringConstant`.
* `...`: You can specify additional `StringConstant` and `FieldName` pairs to extract multiple values.
### Returns
The parse operator returns the input dataset with new fields added based on the specified parsing pattern. The new fields contain the extracted values from the parsed string expression. If the parsing fails for a particular row, the corresponding fields have null values.
## Use case examples
For log analysis, you can extract the HTTP request duration from the `uri` field using the `parse` operator.
**Query**
```kusto
['sample-http-logs']
| parse uri with * 'duration=' req_duration_ms:int
| project _time, req_duration_ms, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20uri%20with%20%2A%20'duration%3D'%20req_duration_ms%3Aint%20%7C%20project%20_time%2C%20req_duration_ms%2C%20uri%22%7D)
**Output**
| \_time | req\_duration\_ms | uri |
| ------------------- | ----------------- | ----------------------------- |
| 2024-10-18T12:00:00 | 200 | /api/v1/resource?duration=200 |
| 2024-10-18T12:00:05 | 300 | /api/v1/resource?duration=300 |
This query extracts the `req_duration_ms` from the `uri` field and projects the time and duration for each HTTP request.
In OpenTelemetry traces, the `parse` operator is useful for extracting components of trace data, such as the service name or status code.
**Query**
```kusto
['otel-demo-traces']
| parse trace_id with * '-' ['service.name']
| project _time, ['service.name'], trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20parse%20trace_id%20with%20%2A%20'-'%20%5B'service.name'%5D%20%7C%20project%20_time%2C%20%5B'service.name'%5D%2C%20trace_id%22%7D)
**Output**
| \_time | service.name | trace\_id |
| ------------------- | ------------ | -------------------- |
| 2024-10-18T12:00:00 | frontend | a1b2c3d4-frontend |
| 2024-10-18T12:01:00 | cartservice | e5f6g7h8-cartservice |
This query extracts the `service.name` from the `trace_id` and projects the time and service name for each trace.
For security logs, you can use the `parse` operator to extract status codes and the method of HTTP requests.
**Query**
```kusto
['sample-http-logs']
| parse method with * '/' status
| project _time, method, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20method%20with%20%2A%20'%2F'%20status%20%7C%20project%20_time%2C%20method%2C%20status%22%7D)
**Output**
| \_time | method | status |
| ------------------- | ------ | ------ |
| 2024-10-18T12:00:00 | GET | 200 |
| 2024-10-18T12:00:05 | POST | 404 |
This query extracts the HTTP method and status from the `method` field and shows them along with the timestamp.
## Other examples
### Parse content type
This example parses the `content_type` field to extract the `datatype` and `format` values separated by a `/`. The extracted values are projected as separate fields.
**Original string**
```bash
application/charset=utf-8
```
**Query**
```kusto
['sample-http-logs']
| parse content_type with datatype '/' format
| project datatype, format
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20content_type%20with%20datatype%20'%2F'%20format%20%7C%20project%20datatype%2C%20format%22%7D)
**Output**
```json
{
"datatype": "application",
"format": "charset=utf-8"
}
```
### Parse user agent
This example parses the `user_agent` field to extract the operating system name (`os_name`) and version (`os_version`) enclosed within parentheses. The extracted values are projected as separate fields.
**Original string**
```bash
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
```
**Query**
```kusto
['sample-http-logs']
| parse user_agent with * '(' os_name ' ' os_version ';' * ')' *
| project os_name, os_version
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20user_agent%20with%20*%20'\('%20os_name%20'%20'%20os_version%20'%3B'%20*%20'\)'%20*%20%7C%20project%20os_name%2C%20os_version%22%7D)
**Output**
```json
{
"os_name": "Windows NT 10.0; Win64; x64",
"os_version": "10.0"
}
```
### Parse URI endpoint
This example parses the `uri` field to extract the `endpoint` value that appears after `/api/v1/`. The extracted value is projected as a new field.
**Original string**
```bash
/api/v1/ping/user/textdata
```
**Query**
```kusto
['sample-http-logs']
| parse uri with '/api/v1/' endpoint
| project endpoint
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20uri%20with%20'%2Fapi%2Fv1%2F'%20endpoint%20%7C%20project%20endpoint%22%7D)
**Output**
```json
{
"endpoint": "ping/user/textdata"
}
```
### Parse ID into region, tenant, and user ID
This example demonstrates how to parse the `id` field into three parts: `region`, `tenant`, and `userId`. The `id` field is structured with these parts separated by hyphens (`-`). The extracted parts are projected as separate fields.
**Original string**
```bash
usa-acmeinc-3iou24
```
**Query**
```kusto
['sample-http-logs']
| parse id with region '-' tenant '-' userId
| project region, tenant, userId
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20id%20with%20region%20'-'%20tenant%20'-'%20userId%20%7C%20project%20region%2C%20tenant%2C%20userId%22%7D)
**Output**
```json
{
"region": "usa",
"tenant": "acmeinc",
"userId": "3iou24"
}
```
### Parse in relaxed mode
The parse operator supports a relaxed mode that allows for more flexible parsing. In relaxed mode, Axiom treats the parsing pattern as a regular string and matches results in a relaxed manner. If some parts of the pattern are missing or do not match the expected type, Axiom assigns null values.
This example parses the `log` field into four separate parts (`method`, `url`, `status`, and `responseTime`) based on a structured format. The extracted parts are projected as separate fields.
**Original string**
```bash
GET /home 200 123ms
POST /login 500 nonValidResponseTime
PUT /api/data 201 456ms
DELETE /user/123 404 nonValidResponseTime
```
**Query**
```kusto
['HttpRequestLogs']
| parse kind=relaxed log with method " " url " " status:int " " responseTime
| project method, url, status, responseTime
```
**Output**
```json
[
{
"method": "GET",
"url": "/home",
"status": 200,
"responseTime": "123ms"
},
{
"method": "POST",
"url": "/login",
"status": 500,
"responseTime": null
},
{
"method": "PUT",
"url": "/api/data",
"status": 201,
"responseTime": "456ms"
},
{
"method": "DELETE",
"url": "/user/123",
"status": 404,
"responseTime": null
}
]
```
### Parse in regex mode
The `parse` operator supports a regex mode that lets you use regular expressions for parsing. In regex mode, Axiom treats the parsing pattern as a regular expression and matches results based on the specified regex pattern.
This example demonstrates how to parse Kubernetes pod log entries using regex mode to extract various fields such as `podName`, `namespace`, `phase`, `startTime`, `nodeName`, `hostIP`, and `podIP`. The parsing pattern is treated as a regular expression, and the extracted values are assigned to the respective fields.
**Original string**
```bash
Log: PodStatusUpdate (podName=nginx-pod, namespace=default, phase=Running, startTime=2023-05-14 08:30:00, nodeName=node-1, hostIP=192.168.1.1, podIP=10.1.1.1)
```
**Query**
```kusto
['PodLogs']
| parse kind=regex AppName with @"Log: PodStatusUpdate \(podName=" podName: string @", namespace=" namespace: string @", phase=" phase: string @", startTime=" startTime: datetime @", nodeName=" nodeName: string @", hostIP=" hostIP: string @", podIP=" podIP: string @"\)"
| project podName, namespace, phase, startTime, nodeName, hostIP, podIP
```
**Output**
```json
{
"podName": "nginx-pod",
"namespace": "default",
"phase": "Running",
"startTime": "2023-05-14 08:30:00",
"nodeName": "node-1",
"hostIP": "192.168.1.1",
"podIP": "10.1.1.1"
}
```
## Best practices
When using the parse operator, consider the following best practices:
* Use appropriate parsing modes: Choose the parsing mode (simple, relaxed, regex) based on the complexity and variability of the data being parsed. Simple mode is suitable for fixed patterns, while relaxed and regex modes offer more flexibility.
* Handle missing or invalid data: Consider how to handle scenarios where the parsing pattern does not match or the extracted values do not conform to the expected types. Use the relaxed mode or provide default values to handle such cases.
* Project only necessary fields: After parsing, use the project operator to select only the fields that are relevant for further querying. This helps reduce the amount of data transferred and improves query performance.
* Use parse in combination with other operators: Combine parse with other APL operators like where, extend, and summarize to filter, transform, and aggregate the parsed data effectively.
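To illustrate the last point, the following sketch parses a duration out of the `uri` field, filters out rows where parsing failed, and aggregates the result. The `req_duration_parsed` field name is illustrative:
```kusto
['sample-http-logs']
// extract the value after 'duration=' into an illustrative new field
| parse uri with * 'duration=' req_duration_parsed:int
// drop rows where the pattern did not match
| where isnotnull(req_duration_parsed)
// average the parsed duration per HTTP method
| summarize avg(req_duration_parsed) by method
```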
By following these best practices and understanding the capabilities of the parse operator, you can effectively extract and transform data from string fields in APL, enabling powerful querying and insights.
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use the `extend` operator when you want to add calculated fields without parsing text.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and rename fields after parsing text.
* [extract](/apl/scalar-functions/string-functions#extract): Use `extract` to retrieve the first substring matching a regular expression from a source string.
* [extract\_all](/apl/scalar-functions/string-functions#extract-all): Use `extract_all` to retrieve all substrings matching a regular expression from a source string.
# project-away
Source: https://axiom.co/docs/apl/tabular-operators/project-away-operator
This page explains how to use the project-away operator function in APL.
The `project-away` operator in APL is used to exclude specific fields from the output of a query. This operator is useful when you want to return a subset of fields from a dataset, without needing to manually specify every field you want to keep. Instead, you specify the fields you want to remove, and the operator returns all remaining fields.
You can use `project-away` in scenarios where your dataset contains irrelevant or sensitive fields that you do not want in the results. It simplifies queries, especially when dealing with wide datasets, by allowing you to filter out fields without having to explicitly list every field to include.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use the `fields` command to remove fields from your results. In APL, the `project-away` operator provides a similar functionality, removing specified fields while returning the remaining ones.
```splunk Splunk example
... | fields - status, uri, method
```
```kusto APL equivalent
['sample-http-logs']
| project-away status, uri, method
```
In SQL, you typically use the `SELECT` statement to explicitly include fields. In contrast, APL’s `project-away` operator allows you to exclude fields, offering a more concise approach when you want to keep many fields but remove a few.
```sql SQL example
SELECT _time, req_duration_ms, id, geo.city, geo.country
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| project-away status, uri, method
```
## Usage
### Syntax
```kusto
| project-away FieldName1, FieldName2, ...
```
### Parameters
* `FieldName`: The field you want to exclude from the result set.
### Returns
The `project-away` operator returns the input dataset excluding the specified fields. The result contains the same number of rows as the input table.
## Use case examples
In log analysis, you might want to exclude unnecessary fields to focus on the relevant fields, such as timestamp, request duration, and user information.
**Query**
```kusto
['sample-http-logs']
| project-away status, uri, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20status%2C%20uri%2C%20method%22%7D)
**Output**
| \_time | req\_duration\_ms | id | geo.city | geo.country |
| ------------------- | ----------------- | -- | -------- | ----------- |
| 2023-10-17 10:23:00 | 120 | u1 | Seattle | USA |
| 2023-10-17 10:24:00 | 135 | u2 | Berlin | Germany |
The query removes the `status`, `uri`, and `method` fields from the output, keeping the focus on the key fields.
When analyzing OpenTelemetry traces, you can remove fields that aren't necessary for specific trace evaluations, such as span IDs and statuses.
**Query**
```kusto
['otel-demo-traces']
| project-away span_id, status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-away%20span_id%2C%20status_code%22%7D)
**Output**
| \_time | duration | trace\_id | service.name | kind |
| ------------------- | -------- | --------- | --------------- | ------ |
| 2023-10-17 11:01:00 | 00:00:03 | t1 | frontend | server |
| 2023-10-17 11:02:00 | 00:00:02 | t2 | checkoutservice | client |
The query removes the `span_id` and `status_code` fields, focusing on key service information.
In security log analysis, excluding unnecessary fields such as the HTTP method or URI can help focus on user behavior patterns and request durations.
**Query**
```kusto
['sample-http-logs']
| project-away method, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20method%2C%20uri%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | geo.city | geo.country |
| ------------------- | ----------------- | -- | ------ | -------- | ----------- |
| 2023-10-17 10:25:00 | 95 | u3 | 200 | London | UK |
| 2023-10-17 10:26:00 | 180 | u4 | 404 | Paris | France |
The query excludes the `method` and `uri` fields, keeping information like status and geographical details.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
The syntax for wildcard can either be `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-away`:
```kusto
['sample-http-logs']
| project-away status*, user*, is*, ['geo.']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-away%20status%2A%2C%20user%2A%2C%20is%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| project-away push*, repo*, ['commits']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20push%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## List of related operators
* [project](/apl/tabular-operators/project-operator): The `project` operator lets you select specific fields to include, rather than excluding them.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator is used to add new fields, whereas `project-away` is for removing fields.
* [summarize](/apl/tabular-operators/summarize-operator): While `project-away` removes fields, `summarize` is useful for aggregating data across multiple fields.
# project-keep
Source: https://axiom.co/docs/apl/tabular-operators/project-keep-operator
This page explains how to use the project-keep operator function in APL.
The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator's parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
You can use `project-keep` when you need to focus on particular data points, such as in log analysis, security event monitoring, or extracting key fields from traces.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `table` command performs a similar task to APL’s `project-keep`. It selects only the fields you specify and excludes any others.
```splunk Splunk example
index=main | table _time, status, uri
```
```kusto APL equivalent
['sample-http-logs']
| project-keep _time, status, uri
```
In ANSI SQL, the `SELECT` statement combined with field names performs a task similar to `project-keep` in APL. Both allow you to specify which fields to retrieve from the dataset.
```sql SQL example
SELECT _time, status, uri FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| project-keep _time, status, uri
```
## Usage
### Syntax
```kusto
| project-keep FieldName1, FieldName2, ...
```
### Parameters
* `FieldName`: The field you want to keep in the result set.
### Returns
`project-keep` returns a dataset with only the specified fields. All other fields are removed from the output. The result contains the same number of rows as the input table.
## Use case examples
For log analysis, you might want to keep only the fields that are relevant to investigating HTTP requests.
**Query**
```kusto
['sample-http-logs']
| project-keep _time, status, uri, method, req_duration_ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20status%2C%20uri%2C%20method%2C%20req_duration_ms%22%7D)
**Output**
| \_time | status | uri | method | req\_duration\_ms |
| ------------------- | ------ | ------------------ | ------ | ----------------- |
| 2024-10-17 10:00:00 | 200 | /index.html | GET | 120 |
| 2024-10-17 10:01:00 | 404 | /non-existent.html | GET | 50 |
| 2024-10-17 10:02:00 | 500 | /server-error | POST | 300 |
This query filters the dataset to show only the request timestamp, status, URI, method, and duration, which can help you analyze server performance or errors.
For OpenTelemetry trace analysis, you may want to focus on key tracing details such as service names and trace IDs.
**Query**
```kusto
['otel-demo-traces']
| project-keep _time, trace_id, span_id, ['service.name'], duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-keep%20_time%2C%20trace_id%2C%20span_id%2C%20%5B%27service.name%27%5D%2C%20duration%22%7D)
**Output**
| \_time | trace\_id | span\_id | service.name | duration |
| ------------------- | --------- | -------- | --------------- | -------- |
| 2024-10-17 10:03:00 | abc123 | xyz789 | frontend | 500ms |
| 2024-10-17 10:04:00 | def456 | mno345 | checkoutservice | 250ms |
This query extracts specific tracing information, such as trace and span IDs, the name of the service, and the span’s duration.
In security log analysis, focusing on essential fields like user ID and HTTP status can help track suspicious activity.
**Query**
```kusto
['sample-http-logs']
| project-keep _time, id, status, uri, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20id%2C%20status%2C%20uri%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | id | status | uri | geo.city | geo.country |
| ------------------- | ------- | ------ | ------ | ------------- | ----------- |
| 2024-10-17 10:05:00 | user123 | 403 | /admin | New York | USA |
| 2024-10-17 10:06:00 | user456 | 200 | /login | San Francisco | USA |
This query narrows down the data to track HTTP status codes by users, helping identify potential unauthorized access attempts.
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use `project` to explicitly specify the fields you want in your result, while also allowing transformations or calculations on those fields.
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or modify existing ones without dropping any fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` when you need to perform aggregation operations on your dataset, grouping data as necessary.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
The syntax for wildcard can either be `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-keep`:
```kusto
['sample-http-logs']
| project-keep resp*, content*, ['geo.']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-keep%20resp%2A%2C%20content%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| project-keep size*, repo*, ['commits']*, id*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-keep%20size%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%2C%20id%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
# project
Source: https://axiom.co/docs/apl/tabular-operators/project-operator
This page explains how to use the project operator in APL.
The `project` operator in Axiom Processing Language (APL) is used to select specific fields from a dataset, potentially renaming them or applying calculations on the fly. With `project`, you can control which fields are returned by the query, allowing you to focus on only the data you need.
This operator is useful when you want to refine your query results by reducing the number of fields, renaming them, or deriving new fields based on existing data. It’s a powerful tool for filtering out unnecessary fields and performing light transformations on your dataset.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of the `project` operator is typically the `table` or `fields` command. While SPL’s `table` focuses on selecting fields, `fields` controls both selection and exclusion, similar to `project` in APL.
```sql Splunk example
| table _time, status, uri
```
```kusto APL equivalent
['sample-http-logs']
| project _time, status, uri
```
In ANSI SQL, the `SELECT` statement serves a similar role to the `project` operator in APL. SQL users will recognize that `project` behaves like selecting fields from a table, with the ability to rename or transform fields inline.
```sql SQL example
SELECT _time, status, uri FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| project _time, status, uri
```
## Usage
### Syntax
```kusto
| project FieldName [= Expression] [, ...]
```
Or
```kusto
| project FieldName, FieldName, FieldName, ...
```
Or
```kusto
| project FieldName = Expression [, ...]
```
### Parameters
* `FieldName`: The names of the fields in the order you want them to appear in the result set. If there is no Expression, then FieldName is compulsory and a field of that name must appear in the input.
* `Expression`: Optional scalar expression referencing the input fields.
### Returns
The `project` operator returns a dataset containing only the specified fields.
## Use case examples
In this example, you’ll extract the timestamp, HTTP status code, and request URI from the sample HTTP logs.
**Query**
```kusto
['sample-http-logs']
| project _time, status, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20status%2C%20uri%22%7D)
**Output**
| \_time | status | uri |
| ------------------- | ------ | --------------- |
| 2024-10-17 12:00:00 | 200 | /api/v1/getData |
| 2024-10-17 12:01:00 | 404 | /api/v1/getUser |
The query returns only the timestamp, HTTP status code, and request URI, reducing unnecessary fields from the dataset.
In this example, you’ll extract trace information such as the service name, span ID, and duration from OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| project ['service.name'], span_id, duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20project%20%5B'service.name'%5D%2C%20span_id%2C%20duration%22%7D)
**Output**
| service.name | span\_id | duration |
| ------------ | ------------- | -------- |
| frontend | span-1234abcd | 00:00:02 |
| cartservice | span-5678efgh | 00:00:05 |
The query isolates relevant tracing data, such as the service name, span ID, and duration of spans.
In this example, you’ll focus on security log entries by projecting only the timestamp, user ID, and HTTP status from the sample HTTP logs.
**Query**
```kusto
['sample-http-logs']
| project _time, id, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20id%2C%20status%22%7D)
**Output**
| \_time | id | status |
| ------------------- | ----- | ------ |
| 2024-10-17 12:00:00 | user1 | 200 |
| 2024-10-17 12:01:00 | user2 | 403 |
The query extracts only the timestamp, user ID, and HTTP status for analysis of access control in security logs.
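Beyond plain field selection, `project` can rename fields or compute new values inline. A minimal sketch in which `path` and `latency_s` are illustrative output names rather than fields of the dataset:
```kusto
['sample-http-logs']
// rename uri to path and convert the duration from milliseconds to seconds
| project _time, path = uri, latency_s = req_duration_ms / 1000.0
```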
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or calculate values without removing any existing fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data across groups of rows, which is useful when you’re calculating totals or averages.
* [where](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions, often paired with `project` to refine your dataset further.
# project-reorder
Source: https://axiom.co/docs/apl/tabular-operators/project-reorder-operator
This page explains how to use the project-reorder operator in APL.
The `project-reorder` operator in APL allows you to rearrange the fields of a dataset without modifying the underlying data. This operator is useful when you need to control the display order of fields in query results, making your data easier to read and analyze. It can be especially helpful when working with large datasets where field ordering impacts the clarity of the output.
Use `project-reorder` when you want to emphasize specific fields by adjusting their order in the result set without changing their values or structure.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use the `table` command to reorder fields, which works similarly to how `project-reorder` functions in APL.
```splunk Splunk example
| table FieldA, FieldB, FieldC
```
```kusto APL equivalent
['dataset.name']
| project-reorder FieldA, FieldB, FieldC
```
In ANSI SQL, the order of fields in a `SELECT` statement determines their arrangement in the output. In APL, `project-reorder` provides more explicit control over the field order without requiring a full `SELECT` clause.
```sql SQL example
SELECT FieldA, FieldB, FieldC FROM dataset;
```
```kusto APL equivalent
| project-reorder FieldA, FieldB, FieldC
```
## Usage
### Syntax
```kusto
| project-reorder Field1 [asc | desc | granny-asc | granny-desc], Field2 [asc | desc | granny-asc | granny-desc], ...
```
### Parameters
* `Field1, Field2, ...`: The names of the fields in the order you want them to appear in the result set.
* `[asc | desc | granny-asc | granny-desc]`: Optional: Specifies the sort order for the reordered fields. `asc` or `desc` order fields by field name in ascending or descending manner. `granny-asc` or `granny-desc` order by ascending or descending while secondarily sorting by the next numeric value. For example, `b50` comes after `b9` when you use `granny-asc`.
### Returns
A table with the specified fields reordered as requested, followed by any unspecified fields in their original order. `project-reorder` doesn’t rename or remove fields from the dataset. All fields that existed in the dataset appear in the results table.
## Use case examples
In this example, you reorder HTTP log fields to prioritize the most relevant ones for log analysis.
**Query**
```kusto
['sample-http-logs']
| project-reorder _time, method, status, uri, req_duration_ms, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20method%2C%20status%2C%20uri%2C%20req_duration_ms%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | method | status | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ------ | ------ | ---------------- | ----------------- | -------- | ----------- |
| 2024-10-17 12:34:56 | GET | 200 | /home | 120 | New York | USA |
| 2024-10-17 12:35:01 | POST | 404 | /api/v1/resource | 250 | Berlin | Germany |
This query rearranges the fields for clarity, placing the most crucial fields (`_time`, `method`, `status`) at the front for easier analysis.
Here’s an example where OpenTelemetry trace fields are reordered to prioritize service and status information.
**Query**
```kusto
['otel-demo-traces']
| project-reorder _time, ['service.name'], kind, status_code, trace_id, span_id, duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-reorder%20_time%2C%20%5B%27service.name%27%5D%2C%20kind%2C%20status_code%2C%20trace_id%2C%20span_id%2C%20duration%22%7D)
**Output**
| \_time | service.name | kind | status\_code | trace\_id | span\_id | duration |
| ------------------- | --------------------- | ------ | ------------ | --------- | -------- | -------- |
| 2024-10-17 12:34:56 | frontend | client | 200 | abc123 | span456 | 00:00:01 |
| 2024-10-17 12:35:01 | productcatalogservice | server | 500 | xyz789 | span012 | 00:00:05 |
This query emphasizes service-related fields like `service.name` and `status_code` at the start of the output.
In this example, fields in a security log are reordered to prioritize key fields for investigating HTTP request anomalies.
**Query**
```kusto
['sample-http-logs']
| project-reorder _time, status, method, uri, id, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20status%2C%20method%2C%20uri%2C%20id%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | status | method | uri | id | geo.city | geo.country |
| ------------------- | ------ | ------ | ---------------- | ------ | -------- | ----------- |
| 2024-10-17 12:34:56 | 200 | GET | /home | user01 | New York | USA |
| 2024-10-17 12:35:01 | 404 | POST | /api/v1/resource | user02 | Berlin | Germany |
This query reorders the fields to focus on the HTTP status, request method, and URI, which are critical for security-related analyses.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
The syntax for wildcard can either be `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-reorder`:
Reorder all fields in ascending order:
```kusto
['sample-http-logs']
| project-reorder * asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20%2A%20asc%22%7D)
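For natural ordering of field names that contain numbers, you can use `granny-asc` instead, as described under Parameters. A minimal sketch:
```kusto
['sample-http-logs']
// order all fields by name, sorting embedded numbers numerically
| project-reorder * granny-asc
```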
Reorder specific fields to the beginning:
```kusto
['sample-http-logs']
| project-reorder method, status, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20method%2C%20status%2C%20uri%22%7D)
Reorder fields using wildcards and sort in descending order:
```kusto
['github-push-event']
| project-reorder repo*, num_commits, push_id, ref, size, ['id'], size_large desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20project-reorder%20repo%2A%2C%20num_commits%2C%20push_id%2C%20ref%2C%20size%2C%20%5B%27id%27%5D%2C%20size_large%20desc%22%7D)
Reorder specific fields and keep others in original order:
```kusto
['otel-demo-traces']
| project-reorder trace_id, *, span_id // orders the trace_id then everything else, then span_id fields
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27otel-demo-traces%27%5D%5Cn%7C%20project-reorder%20trace_id%2C%20%2A%2C%20span_id%22%7D)
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use the `project` operator to select and rename fields without changing their order.
* [extend](/apl/tabular-operators/extend-operator): `extend` adds new calculated fields while keeping the original ones in place.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to perform aggregations on fields, which can then be reordered using `project-reorder`.
* [sort](/apl/tabular-operators/sort-operator): Sorts rows based on field values, and the results can then be reordered with `project-reorder`.
# redact
Source: https://axiom.co/docs/apl/tabular-operators/redact-operator
This page explains how to use the redact operator in APL.
The `redact` operator in APL replaces sensitive or unwanted data in string fields using regular expressions. You can use it to sanitize log data, obfuscate personal information, or anonymize text for auditing or analysis. The operator allows you to define one or multiple regular expressions to identify and replace matching patterns. You can customize the replacement token, generate hashes of redacted values, or retain structural elements while obfuscating specific segments of data.
This operator is useful when you need to ensure data privacy or compliance with regulations such as GDPR or HIPAA. For example, you can redact credit card numbers, email addresses, or personally identifiable information from logs and datasets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, data sanitization is often achieved using custom regex-based transformations or eval functions. The `redact` operator in APL simplifies this process by directly applying regular expressions and offering options for replacement or hashing.
```sql Splunk example
| eval sanitized_field=replace(field, "regex_pattern", "*")
```
```kusto APL equivalent
| redact 'regex_pattern' on field
```
ANSI SQL typically requires a combination of functions like `REPLACE` or `REGEXP_REPLACE` for data obfuscation. APL’s `redact` operator consolidates these capabilities into a single, flexible command.
```sql SQL example
SELECT REGEXP_REPLACE(field, 'regex_pattern', '*') AS sanitized_field FROM table;
```
```kusto APL equivalent
| redact 'regex_pattern' on field
```
## Usage
### Syntax
```kusto
| redact [replaceToken="*"] [replaceHash=false] [redactGroups=false] regex [on Field]
```
### Parameters
| Parameter | Type | Description |
| -------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `replaceToken` | string | The string with which to replace matches. If you specify a single character, Axiom replaces each character in the matching text with `replaceToken`. If you specify more than one character, Axiom replaces the whole of the matching text with `replaceToken`. The default `replaceToken` is the `*` character. |
| `replaceHash` | bool | Specifies whether to replace matches with a hash of the data. You cannot use both `replaceToken` and `replaceHash` in the same query. |
| `redactGroups` | bool | Specifies whether to look for capturing groups in the regex and only redact characters in the capturing groups. Use this option for partial replacements or replacements that maintain the structure of the data. The default is false. |
| `regex` | regex | A single regex or an array/map of regexes to match against field values. |
| `on Field` | | Limits redaction to specific fields. If you omit this parameter, Axiom redacts all string fields in the dataset. |
### Returns
Returns the input dataset with sensitive data replaced or hashed.
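The following hypothetical queries illustrate the difference between a single-character and a multi-character `replaceToken` described above. The regex `[A-Za-z]+` and the token values are illustrative; the dataset and field come from the examples on this page.

```kusto
['sample-http-logs']
// Single character: every character of the match is replaced, so 'Berlin' becomes '######'
| redact replaceToken="#" @'[A-Za-z]+' on ['geo.city']
```

```kusto
['sample-http-logs']
// Multiple characters: the whole match is replaced, so 'Berlin' becomes '[CITY]'
| redact replaceToken="[CITY]" @'[A-Za-z]+' on ['geo.city']
```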
## Sample regular expressions
| Operation | Sample regex | Original string | Redacted string |
| ------------------------------ | ---------------------------------------------------------------------------------- | --------------------------------------------------- | ------------------------------------------------ |
| Redact email addresses | \[a-zA-Z0-9\_.+-]+@\[a-zA-Z0-9-]+.\[a-zA-Z0-9-.]+ | Incoming Mail - [abc@test.com](mailto:abc@test.com) | Incoming Mail - \*\*\*\*\*\*\*\*\*\*\*\* |
| Redact social security numbers | \d{3}-\d{2}-\d{4} | SSN 123-12-1234.pdf | SSN \*\*\*\*\*\*\*\*\*\*\*.pdf |
| Redact IBAN | \[A-Z]{2}\[0-9]{2}(?:\[ ]?\[0-9]{4}){4}(?!(?:\[ ]?\[0-9]){3})(?:\[ ]?\[0-9]{1,2})? | AB12 1234 1234 1234 1234 | \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* |
## Use case examples
Use the `redact` operator to sanitize HTTP logs by obfuscating geographical data.
**Query**
```kusto
['sample-http-logs']
| redact replaceToken="x" @'.*' on ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20redact%20replaceToken%3D'x'%20%40'.*'%20on%20%5B'geo.city'%5D%2C%20%5B'geo.country'%5D%22%7D)
**Output**
| \_time | geo.city | geo.country |
| ------------------- | -------- | ------------ |
| 2025-01-01 12:00:00 | `xxx` | `xxxxxxxx` |
| 2025-01-01 12:05:00 | `xxxxxx` | `xxxxxxxxxx` |
The query replaces all characters matching the pattern `.*` with the character `x` in the `geo.city` and `geo.country` fields.
In OpenTelemetry traces, use `redact` to anonymize Kubernetes node names with their hashes while preserving the service structure.
**Query**
```kusto
['otel-demo-traces']
| redact replaceHash=true @'.*' on ['resource.k8s.node.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20redact%20replaceHash%3Dtrue%20%40'.*'%20on%20%5B'resource.k8s.node.name'%5D%22%7D)
**Output**
| \_time | resource.k8s.node.name | service.name |
| ------------------- | ---------------------- | ----------------- |
| 2025-01-01 12:00:00 | `QQXRv6VU` | `frontend` |
| 2025-01-01 12:05:00 | `Q1urOteW` | `checkoutservice` |
The query replaces Kubernetes node names with hashed values while keeping the rest of the trace intact.
Use the `redact` operator to remove parts of a URL from security logs.
**Query**
```kusto
['sample-http-logs']
| redact replaceToken="" redactGroups=true @'.*/(.*)' on uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20redact%20replaceToken%3D'%3CREDACTED%3E'%20redactGroups%3Dtrue%20%40'.*%2F\(.*\)'%20on%20uri%22%7D)
**Output**
| \_time | uri |
| ------------------- | ----------------------------- |
| 2025-01-01 12:00:00 | `/api/v1/pub/sub/<REDACTED>` |
| 2025-01-01 12:05:00 | `/api/v1/textdata/<REDACTED>` |
| 2025-01-01 12:10:00 | `/api/v1/payment/<REDACTED>` |
The query performs a partial redaction in the capturing groups of the regex. It replaces the slug of the URL (the part after the last `/`) with the text `<REDACTED>`.
## List of related operators
* [project](/apl/tabular-operators/project-operator): Select specific fields from the dataset. Useful for focused analysis.
* [summarize](/apl/tabular-operators/summarize-operator): Aggregate data. Helpful when combining redacted data with statistical analysis.
* [parse](/apl/tabular-operators/parse-operator): Extract and parse structured data using regex patterns.
When you need custom replacement patterns, use the [replace\_regex](/apl/scalar-functions/string-functions#replace-regex) function for precise control over string replacements. `redact` provides a simpler, security-focused interface. Use `redact` if you’re primarily focused on data privacy and compliance, and `replace_regex` if you need more control over the replacement text format.
# sample
Source: https://axiom.co/docs/apl/tabular-operators/sample-operator
This page explains how to use the sample operator function in APL.
The `sample` operator in APL pseudo-randomly selects rows from the input dataset at a rate specified by a parameter. This operator is useful when you want to analyze a subset of data, reduce the dataset size for testing, or quickly explore patterns without processing the entire dataset. The sampling algorithm is not statistically rigorous but provides a way to explore and understand a dataset. For statistically rigorous analysis, use `summarize` instead.
You can find the `sample` operator useful when working with large datasets, where processing the entire dataset is resource-intensive or unnecessary. It’s ideal for scenarios like log analysis, performance monitoring, or sampling for data quality checks.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `sample` command works similarly, returning a subset of data rows randomly. However, the APL `sample` operator uses a simpler syntax and doesn’t take additional arguments for biasing the randomness.
```sql Splunk example
| sample 10
```
```kusto APL equivalent
['sample-http-logs']
| sample 0.1
```
In ANSI SQL, there is no direct equivalent to the `sample` operator, but you can achieve similar results using the `TABLESAMPLE` clause. In APL, `sample` operates independently and is more flexible, as it’s not tied to a table scan.
```sql SQL example
SELECT * FROM table TABLESAMPLE (10 ROWS);
```
```kusto APL equivalent
['sample-http-logs']
| sample 0.1
```
## Usage
### Syntax
```kusto
| sample ProportionOfRows
```
### Parameters
* `ProportionOfRows`: A float greater than 0 and less than 1 which specifies the proportion of rows to return from the dataset. The rows are selected randomly.
### Returns
The operator returns a table containing the specified proportion of rows, selected randomly from the input dataset.
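As a minimal sketch (using the `sample-http-logs` dataset and the named-aggregate syntax shown elsewhere in this documentation), you can pipe the sampled subset into other operators, for example to get a rough per-status breakdown:

```kusto
['sample-http-logs']
// Keep roughly 10% of rows, then count the sampled rows per status code
| sample 0.1
| summarize sampled_rows = count() by status
```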
## Use case examples
In this use case, you sample a small number of rows from your HTTP logs to quickly analyze trends without working through the entire dataset.
**Query**
```kusto
['sample-http-logs']
| sample 0.05
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sample%200.05%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ----- | ------ | --------- | ------ | -------- | ----------- |
| 2023-10-16 12:45:00 | 234 | user1 | 200 | /index | GET | New York | US |
| 2023-10-16 12:47:00 | 120 | user2 | 404 | /login | POST | Paris | FR |
| 2023-10-16 12:48:00 | 543 | user3 | 500 | /checkout | POST | Tokyo | JP |
This query returns a random subset of 5% of all rows from the HTTP logs, helping you quickly identify any potential issues or patterns without analyzing the entire dataset.
In this use case, you sample traces to investigate performance metrics for a particular service across different spans.
**Query**
```kusto
['otel-demo-traces']
| where ['service.name'] == 'checkoutservice'
| sample 0.05
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20sample%200.05%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ |
| 2023-10-16 14:05:00 | 1.34s | span5678 | trace123 | checkoutservice | client | 200 |
| 2023-10-16 14:06:00 | 0.89s | span3456 | trace456 | checkoutservice | server | 500 |
This query returns 5% of all traces for the `checkoutservice` to identify potential performance bottlenecks.
In this use case, you sample security log data to spot irregular activity in requests, such as 500-level HTTP responses.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| sample 0.03
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20sample%200.03%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ----- | ------ | -------- | ------ | -------- | ----------- |
| 2023-10-16 14:30:00 | 543 | user4 | 500 | /payment | POST | Berlin | DE |
| 2023-10-16 14:32:00 | 876 | user5 | 500 | /order | POST | London | GB |
This query helps you quickly spot failed requests (HTTP 500 responses) and investigate any potential causes of these errors.
## List of related operators
* [take](/apl/tabular-operators/take-operator): Use `take` when you want to return the first N rows in the dataset rather than a random subset.
* [where](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions rather than sampling randomly.
* [top](/apl/tabular-operators/top-operator): Use `top` to return the highest N rows based on a sorting criterion.
# search
Source: https://axiom.co/docs/apl/tabular-operators/search-operator
This page explains how to use the search operator in APL.
The `search` operator in APL is used to perform a full-text search across multiple fields in a dataset. This operator allows you to locate specific keywords, phrases, or patterns, helping you filter data quickly and efficiently. You can use `search` to query logs, traces, and other data sources without the need to specify individual fields, making it particularly useful when you’re unsure where the relevant data resides.
Use `search` when you want to search multiple fields in a dataset, especially for ad-hoc analysis or quick lookups across logs or traces. It’s commonly applied in log analysis, security monitoring, and trace analysis, where multiple fields may contain the desired data.
## Importance of the search operator
* **Versatility:** It allows you to find specific text or terms across the fields you choose for your search, without having to specify each field.
* **Efficiency:** It saves time when you aren’t sure which fields or datasets might contain the information you’re looking for.
* **User-friendliness:** It’s particularly useful for users or developers unfamiliar with the schema details of a given dataset.
## Usage
### Syntax
```kusto
search [kind=CaseSensitivity] SearchPredicate
```
### Parameters
| Name | Type | Required | Description |
| ------------------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **CaseSensitivity** | string | | A flag that controls the behavior of all `string` scalar operators, such as `has`, with respect to case sensitivity. Valid values are `default`, `case_insensitive`, `case_sensitive`. The options `default` and `case_insensitive` are synonymous, since the default behavior is case insensitive. |
| **SearchPredicate** | string | ✓ | A Boolean expression to be evaluated for every event in the input. If it returns `true`, the record is included in the output. |
## Returns
Returns all rows where the specified keyword appears in any field.
## Search predicate syntax
The SearchPredicate allows you to search for specific terms in all fields of a dataset. The operator that will be applied to a search term depends on the presence and placement of a wildcard asterisk (\*) in the term, as shown in the following table.
| Literal | Operator |
| ---------- | --------------- |
| `axiomk` | `has` |
| `*axiomk` | `hassuffix` |
| `axiomk*` | `hasprefix` |
| `*axiomk*` | `contains` |
| `ax*ig` | `matches regex` |
You can also restrict the search to a specific field, look for an exact match instead of a term match, or search by regular expression. The syntax for each of these cases is shown in the following table.
| Syntax | Explanation |
| ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| **FieldName**`:`**StringLiteral** | This syntax can be used to restrict the search to a specific field. The default behavior is to search all fields. |
| **FieldName**`==`**StringLiteral** | This syntax can be used to search for exact matches of a field against a string value. The default behavior is to look for a term-match. |
| **Field** `matches regex` **StringLiteral** | This syntax indicates regular expression matching, in which *StringLiteral* is the regex pattern. |
Use boolean expressions to combine conditions and create more complex searches. For example, `"axiom" and b==789` would result in a search for events that have the term axiom in any field and the value 789 in the b field.
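For example, the following sketch (using the `sample-http-logs` dataset and its `status` field from other examples on this page) combines a term search with a field condition:

```kusto
['sample-http-logs']
// Events where any field has the term 'GET' and the status field equals '500'
| search "GET" and status == '500'
```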
### Search predicate syntax examples
| # | Syntax | Meaning (equivalent `where`) | Comments |
| -- | ---------------------------------------- | --------------------------------------------------------- | ----------------------------------------- |
| 1 | `search "axiom"` | `where * has "axiom"` | |
| 2 | `search field:"axiom"` | `where field has "axiom"` | |
| 3 | `search field=="axiom"` | `where field=="axiom"` | |
| 4 | `search "axiom*"` | `where * hasprefix "axiom"` | |
| 5 | `search "*axiom"` | `where * hassuffix "axiom"` | |
| 6 | `search "*axiom*"` | `where * contains "axiom"` | |
| 7 | `search "Pad*FG"` | `where * matches regex @"\bPad.*FG\b"` | |
| 8 | `search *` | `where 0==0` | |
| 9 | `search field matches regex "..."` | `where field matches regex "..."` | |
| 10 | `search kind=case_sensitive` | | All string comparisons are case-sensitive |
| 11 | `search "axiom" and ("log" or "metric")` | `where * has "axiom" and (* has "log" or * has "metric")` | |
| 12 | `search "axiom" or (A>a and Aa and A datetime('2022-09-16')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22get%5C%22%20and%20_time%20%3E%20datetime%28%272022-09-16%27%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=default
By default, the search is case-insensitive and uses the simple search.
```kusto
['sample-http-logs']
| search kind=default "INDIA"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Ddefault%20%5C%22INDIA%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=case\_sensitive
Search for logs that contain the term "text" with case sensitivity.
```kusto
['sample-http-logs']
| search kind=case_sensitive "text"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_sensitive%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=case\_insensitive
Explicitly search for logs that contain the term "CSS" without case sensitivity.
```kusto
['sample-http-logs']
| search kind=case_insensitive "CSS"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_insensitive%20%5C%22CSS%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use search \*
Search all logs. This would essentially return all rows in the dataset.
```kusto
['sample-http-logs']
| search *
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%2A%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Contain any substring
Search for logs where any field contains "brazil" as a substring.
```kusto
['sample-http-logs']
| search "*brazil*"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22%2Abrazil%2A%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Search for multiple independent terms
Search the logs for entries that contain either the term "GET" or "covina", irrespective of their context or the fields they appear in.
```kusto
['sample-http-logs']
| search "GET" or "covina"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22GET%5C%22%20or%20%5C%22covina%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
## Use the search operator efficiently
Using non-field-specific filters such as the `search` operator has an impact on performance, especially when used over a high volume of events in a wide time range. To use the `search` operator efficiently, follow these guidelines:
* Use field-specific filters when possible. Field-specific filters narrow your query results to events where a field has a given value. They are more efficient than non-field-specific filters, such as the `search` operator, that narrow your query results by searching across all fields for a given value. When you know the target field, replace the `search` operator with `where` clauses that filter for values in a specific field.
* After using the `search` operator in your query, use other operators, such as `project` statements, to limit the number of returned fields.
* Use the `kind` flag when possible. When you know the pattern that string values in your data follow, use the `kind` flag to specify the case-sensitivity of the search.
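As a sketch of the first two guidelines (field names taken from the `sample-http-logs` examples on this page), compare a non-field-specific search with a field-specific filter that also limits the returned fields:

```kusto
// Non-field-specific: scans every field of every event for the term
['sample-http-logs']
| search "GET"
```

```kusto
// Field-specific: filters only the method field and returns a limited set of fields
['sample-http-logs']
| where method == "GET"
| project _time, method, uri, status
```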
# sort
Source: https://axiom.co/docs/apl/tabular-operators/sort-operator
This page explains how to use the sort operator function in APL.
The `sort` operator in APL arranges the rows of a result set based on one or more fields in ascending or descending order. You can use it to organize your data logically or optimize subsequent operations that depend on ordered data. This operator is useful when analyzing logs, traces, or any dataset where the order of results matters, such as when you’re interested in top or bottom performers, chronological sequences, or sorting by status codes.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of `sort` is the `sort` command, which orders search results based on one or more fields. However, in APL, you must explicitly specify the sorting direction for each field, and sorting by multiple fields requires chaining them with commas.
```splunk Splunk example
| sort - _time, status
```
```kusto APL equivalent
['sample-http-logs']
| sort by _time desc, status asc
```
In SQL, sorting is done using the `ORDER BY` clause. The APL `sort` operator behaves similarly but uses the `by` keyword instead of `ORDER BY`. Additionally, APL requires specifying the order direction (`asc` or `desc`) explicitly for each field.
```sql SQL example
SELECT * FROM sample_http_logs
ORDER BY _time DESC, status ASC
```
```kusto APL equivalent
['sample-http-logs']
| sort by _time desc, status asc
```
## Usage
### Syntax
```kusto
| sort by Field1 [asc | desc], Field2 [asc | desc], ...
```
### Parameters
* `Field1`, `Field2`, ...: The fields to sort by.
* \[asc | desc]: Specify the sorting direction for each field as either `asc` for ascending order or `desc` for descending order.
### Returns
A table with rows ordered based on the specified fields.
## Use sort and project together
When you use `project` and `sort` in the same query, ensure you project the fields that you want to sort on. Similarly, when you use `project-away` and `sort` in the same query, ensure you don’t remove the fields that you want to sort on.
This also applies to time fields. For example, to project the field `status` and sort on the field `_time`, project both fields as shown in the following query:
```apl
['sample-http-logs']
| project status, _time
| sort by _time desc
```
## Use case examples
Sorting HTTP logs by request duration and then by status code is useful to identify slow requests and their corresponding statuses.
**Query**
```kusto
['sample-http-logs']
| sort by req_duration_ms desc, status asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20req_duration_ms%20desc%2C%20status%20asc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2024-10-18 12:34:56 | 5000 | abc1 | 500 | /api/data | GET | New York | US |
| 2024-10-18 12:35:56 | 4500 | abc2 | 200 | /api/users | POST | London | UK |
The query sorts the HTTP logs by the duration of each request in descending order, showing the longest-running requests at the top. If two requests have the same duration, they are sorted by status code in ascending order.
Sorting OpenTelemetry traces by span duration helps identify the longest-running spans within a specific service.
**Query**
```kusto
['otel-demo-traces']
| sort by duration desc, ['service.name'] asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20sort%20by%20duration%20desc%2C%20%5B%27service.name%27%5D%20asc%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ |
| 2024-10-18 12:36:56 | 00:00:15 | span1 | trace1 | frontend | server | 200 |
| 2024-10-18 12:37:56 | 00:00:14 | span2 | trace2 | cartservice | client | 500 |
This query sorts spans by their duration in descending order, with the longest spans at the top, followed by the service name in ascending order.
Sorting security logs by status code and then by timestamp can help in investigating recent failed requests.
**Query**
```kusto
['sample-http-logs']
| sort by status asc, _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20status%20asc%2C%20_time%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2024-10-18 12:40:56 | 3000 | abc3 | 400 | /api/login | POST | Toronto | CA |
| 2024-10-18 12:39:56 | 2000 | abc4 | 400 | /api/auth | GET | Berlin | DE |
This query sorts security logs by status code first (in ascending order) and then by the most recent events.
## List of related operators
* [top](/apl/tabular-operators/top-operator): Use `top` to return a specified number of rows with the highest or lowest values, but unlike `sort`, `top` limits the result set.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and reorder fields without changing the order of rows.
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to create calculated fields that can then be used in conjunction with `sort` to refine your results.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to group and aggregate data before applying `sort` for detailed analysis.
# summarize
Source: https://axiom.co/docs/apl/tabular-operators/summarize-operator
This page explains how to use the summarize operator function in APL.
## Introduction
The `summarize` operator in APL enables you to perform data aggregation and create summary tables from large datasets. You can use it to group data by specified fields and apply aggregation functions such as `count()`, `sum()`, `avg()`, `min()`, `max()`, and many others. This is particularly useful when analyzing logs, tracing OpenTelemetry data, or reviewing security events. The `summarize` operator is helpful when you want to reduce the granularity of a dataset to extract insights or trends.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `stats` command performs a similar function to APL’s `summarize` operator. Both operators are used to group data and apply aggregation functions. In APL, `summarize` is more explicit about the fields to group by and the aggregation functions to apply.
```sql Splunk example
index="sample-http-logs" | stats count by method
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by method
```
The `summarize` operator in APL is conceptually similar to SQL’s `GROUP BY` clause with aggregation functions. In APL, you explicitly specify the aggregation function (like `count()`, `sum()`) and the fields to group by.
```sql SQL example
SELECT method, COUNT(*)
FROM sample_http_logs
GROUP BY method
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by method
```
## Usage
### Syntax
```kusto
| summarize [[Field1 =] AggregationFunction [, ...]] [by [Field2 =] GroupExpression [, ...]]
```
### Parameters
* `Field1`: A field name.
* `AggregationFunction`: The aggregation function to apply. Examples include `count()`, `sum()`, `avg()`, `min()`, and `max()`.
* `GroupExpression`: A scalar expression that can reference the dataset.
### Returns
The `summarize` operator returns a table where:
* The input rows are arranged into groups having the same values of the `by` expressions.
* The specified aggregation functions are computed over each group, producing a row for each group.
* The result contains the `by` fields and also at least one field for each computed aggregate. Some aggregation functions return multiple fields.
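As a sketch of the syntax above (using fields from the `sample-http-logs` examples in this documentation), the following query computes two named aggregates over groups defined by two fields:

```kusto
['sample-http-logs']
// Count requests and average their duration for each method/status combination
| summarize request_count = count(), avg_duration = avg(req_duration_ms) by method, status
```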
## Use case examples
In log analysis, you can use `summarize` to count the number of HTTP requests grouped by method, or to compute the average request duration.
**Query**
```kusto
['sample-http-logs']
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20method%22%7D)
**Output**
| method | count\_ |
| ------ | ------- |
| GET | 1000 |
| POST | 450 |
This query groups the HTTP requests by the `method` field and counts how many times each method is used.
You can use `summarize` to analyze OpenTelemetry traces by calculating the average span duration for each service.
**Query**
```kusto
['otel-demo-traces']
| summarize avg(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avg\(duration\)%20by%20%5B%27service.name%27%5D%22%7D)
**Output**
| service.name | avg\_duration |
| ------------ | ------------- |
| frontend | 50ms |
| cartservice | 75ms |
This query calculates the average duration of traces for each service in the dataset.
In security log analysis, `summarize` can help group events by status codes and see the distribution of HTTP responses.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 1200 |
| 404 | 300 |
This query summarizes HTTP status codes, giving insight into the distribution of responses in your logs.
## Other examples
```kusto
['sample-http-logs']
| summarize topk(content_type, 20)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20topk\(content_type%2C%2020\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| summarize topk(repo, 20) by bin(_time, 24h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%7C%20summarize%20topk\(repo%2C%2020\)%20by%20bin\(_time%2C%2024h\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
Returns a table that shows the heatmap in each interval \[0, 30], \[30, 60], and so on. This example has a cell for `HISTOGRAM(req_duration_ms)`.
```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 30)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%2030\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| where _time > ago(7d)
| where repo contains "axiom"
| summarize count(), numCommits=sum(size) by _time=bin(_time, 3h), repo
| take 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20repo%20contains%20%5C%22axiom%5C%22%20%7C%20summarize%20count\(\)%2C%20numCommits%3Dsum\(size\)%20by%20_time%3Dbin\(_time%2C%203h\)%2C%20repo%20%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## List of related operators
* [count](/apl/tabular-operators/count-operator): Use when you only need to count rows without grouping by specific fields.
* [extend](/apl/tabular-operators/extend-operator): Use to add new calculated fields to a dataset.
* [project](/apl/tabular-operators/project-operator): Use to select specific fields or create new calculated fields, often in combination with `summarize`.
# take
Source: https://axiom.co/docs/apl/tabular-operators/take-operator
This page explains how to use the take operator in APL.
The `take` operator in APL allows you to retrieve a specified number of rows from a dataset. It’s useful when you want to preview data, limit the result set for performance reasons, or fetch a random sample from large datasets. The `take` operator can be particularly effective in scenarios like log analysis, security monitoring, and telemetry where large amounts of data are processed, and only a subset is needed for analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `head` and `tail` commands perform similar operations to the APL `take` operator, where `head` returns the first N results, and `tail` returns the last N. In APL, `take` is a flexible way to fetch any subset of rows in a dataset.
```sql Splunk example
| head 10
```
```kusto APL equivalent
['sample-http-logs']
| take 10
```
In ANSI SQL, the equivalent of the APL `take` operator is `LIMIT`. While SQL requires you to specify a sorting order with `ORDER BY` for deterministic results, APL allows you to use `take` to fetch a specific number of rows without needing explicit sorting.
```sql SQL example
SELECT * FROM sample_http_logs LIMIT 10;
```
```kusto APL equivalent
['sample-http-logs']
| take 10
```
## Usage
### Syntax
```kusto
| take N
```
### Parameters
* `N`: The number of rows to take from the dataset. `N` must be a positive integer.
### Returns
The operator returns the specified number of rows from the dataset.
## Use case examples
The `take` operator is useful in log analysis when you need to view a subset of logs to quickly identify trends or errors without analyzing the entire dataset.
**Query**
```kusto
['sample-http-logs']
| take 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| -------------------- | ----------------- | ---- | ------ | --------- | ------ | -------- | ----------- |
| 2023-10-18T10:00:00Z | 120 | u123 | 200 | /home | GET | Berlin | Germany |
| 2023-10-18T10:01:00Z | 85 | u124 | 404 | /login | POST | New York | USA |
| 2023-10-18T10:02:00Z | 150 | u125 | 500 | /checkout | POST | Tokyo | Japan |
This query retrieves the first 5 rows from the `sample-http-logs` dataset.
In the context of OpenTelemetry traces, the `take` operator helps extract a small number of traces to analyze span performance or trace behavior across services.
**Query**
```kusto
['otel-demo-traces']
| take 3
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%203%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| -------------------- | -------- | -------- | --------- | --------------- | -------- | ------------ |
| 2023-10-18T10:10:00Z | 250ms | s123 | t456 | frontend | server | OK |
| 2023-10-18T10:11:00Z | 300ms | s124 | t457 | checkoutservice | client | OK |
| 2023-10-18T10:12:00Z | 100ms | s125 | t458 | cartservice | internal | ERROR |
This query retrieves the first 3 spans from the OpenTelemetry traces dataset.
For security logs, `take` allows quick sampling of log entries to detect patterns or anomalies without needing the entire log file.
**Query**
```kusto
['sample-http-logs']
| take 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2010%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| -------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2023-10-18T10:20:00Z | 200 | u223 | 200 | /admin | GET | London | UK |
| 2023-10-18T10:21:00Z | 190 | u224 | 403 | /dashboard | GET | Berlin | Germany |
This query retrieves the first 10 security log entries, useful for quick investigations.
## List of related operators
* [limit](/apl/tabular-operators/limit-operator): Similar to `take`, but explicitly limits the result set and often used for pagination or performance optimization.
* [sort](/apl/tabular-operators/sort-operator): Used in combination with `take` when you want to fetch a subset of sorted data.
* [where](/apl/tabular-operators/where-operator): Filters rows based on a condition before using `take` for sampling specific subsets.
# top
Source: https://axiom.co/docs/apl/tabular-operators/top-operator
This page explains how to use the top operator function in APL.
The `top` operator in Axiom Processing Language (APL) allows you to retrieve the top N rows from a dataset based on specified criteria. It is particularly useful when you need to analyze the highest values in large datasets or want to quickly identify trends, such as the highest request durations in logs or top error occurrences in traces. You can apply it in scenarios like log analysis, security investigations, or tracing system performance.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `top` operator in APL is similar to `top` in Splunk SPL but allows greater flexibility in specifying multiple sorting criteria.
```sql Splunk example
index="sample_http_logs" | top limit=5 req_duration_ms
```
```kusto APL equivalent
['sample-http-logs']
| top 5 by req_duration_ms
```
In ANSI SQL, the `TOP` operator is used with an `ORDER BY` clause to limit the number of rows. In APL, the syntax is similar but uses `top` in a pipeline and specifies the ordering criteria directly.
```sql SQL example
SELECT TOP 5 req_duration_ms FROM sample_http_logs ORDER BY req_duration_ms DESC
```
```kusto APL equivalent
['sample-http-logs']
| top 5 by req_duration_ms
```
## Usage
### Syntax
```kusto
| top N by Expression [asc | desc]
```
### Parameters
* `N`: The number of rows to return.
* `Expression`: A scalar expression used for sorting. The type of the values must be numeric, date, time, or string.
* `[asc | desc]`: Optional. Use to sort in ascending or descending order. The default is descending.
### Returns
The `top` operator returns the top N rows from the dataset based on the specified sorting criteria.
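For example, to return the fastest requests instead of the slowest, sort in ascending order (a sketch based on the `sample-http-logs` example below):

```kusto
['sample-http-logs']
// asc returns the 5 rows with the smallest request duration
| top 5 by req_duration_ms asc
```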
## Use case examples
The `top` operator helps you find the HTTP requests with the longest durations.
**Query**
```kusto
['sample-http-logs']
| top 5 by req_duration_ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20top%205%20by%20req_duration_ms%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | ---------------- | ------ | -------- | ----------- |
| 2024-10-01 10:12:34 | 5000 | 123 | 200 | /api/get-data | GET | New York | US |
| 2024-10-01 11:14:20 | 4900 | 124 | 200 | /api/post-data | POST | Chicago | US |
| 2024-10-01 12:15:45 | 4800 | 125 | 200 | /api/update-item | PUT | London | UK |
This query returns the top 5 HTTP requests that took the longest time to process.
The `top` operator is useful for identifying the spans with the longest duration in distributed tracing systems.
**Query**
```kusto
['otel-demo-traces']
| top 5 by duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20top%205%20by%20duration%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ |
| 2024-10-01 10:12:34 | 300ms | span123 | trace456 | frontend | server | 200 |
| 2024-10-01 10:13:20 | 290ms | span124 | trace457 | cartservice | client | 200 |
| 2024-10-01 10:15:45 | 280ms | span125 | trace458 | checkoutservice | server | 500 |
This query returns the top 5 spans with the longest durations from the OpenTelemetry traces.
The `top` operator is useful for identifying the most frequent HTTP status codes in security logs.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
| top 3 by count_
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%20%7C%20top%203%20by%20count_%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 500 |
| 404 | 50 |
| 500 | 20 |
This query shows the top 3 most common HTTP status codes in security logs.
## List of related operators
* [order](/apl/tabular-operators/order-operator): Use when you need full control over row ordering without limiting the number of results.
* [summarize](/apl/tabular-operators/summarize-operator): Useful when aggregating data over fields and obtaining summarized results.
* [take](/apl/tabular-operators/take-operator): Returns the first N rows without sorting. Use when ordering is not necessary.
# union
Source: https://axiom.co/docs/apl/tabular-operators/union-operator
This page explains how to use the union operator in APL.
The `union` operator in APL allows you to combine the results of two or more queries into a single output. The operator is useful when you need to analyze or compare data from different datasets or tables in a unified manner. By using `union`, you can merge multiple sets of records, keeping all data from the source tables without applying any aggregation or filtering.
The `union` operator is particularly helpful in scenarios like log analysis, tracing OpenTelemetry events, or correlating security logs across multiple sources. You can use it to perform comprehensive investigations by bringing together information from different datasets into one query.
## Union of two datasets
To understand how the `union` operator works, consider these datasets:
**Server requests**
| \_time | status | method | trace\_id |
| ------ | ------ | ------ | --------- |
| 12:10 | 200 | GET | 1 |
| 12:15 | 200 | POST | 2 |
| 12:20 | 503 | POST | 3 |
| 12:25 | 200 | POST | 4 |
**App logs**
| \_time | trace\_id | message |
| ------ | --------- | ------- |
| 12:12 | 1 | foo |
| 12:21 | 3 | bar |
| 13:35 | 27 | baz |
Performing a union on `Server requests` and `App logs` results in a new dataset with all the rows from both datasets.
A union of **requests** and **logs** produces the following result set:
| \_time | status | method | trace\_id | message |
| ------ | ------ | ------ | --------- | ------- |
| 12:10 | 200 | GET | 1 | |
| 12:12 | | | 1 | foo |
| 12:15 | 200 | POST | 2 | |
| 12:20 | 503 | POST | 3 | |
| 12:21 | | | 3 | bar |
| 12:25 | 200 | POST | 4 | |
| 13:35 | | | 27 | baz |
This result combines the rows and merges types for overlapping fields.
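A minimal sketch of such a union, assuming the two hypothetical datasets above were ingested as `['server-requests']` and `['app-logs']`:

```kusto
// Hypothetical dataset names; combine both sets of rows and order them by time
['server-requests']
| union ['app-logs']
| sort by _time asc
```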
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `append` command works similarly to the `union` operator in APL. Both operators are used to combine multiple datasets. However, while `append` in Splunk typically adds one dataset to the end of another, APL’s `union` merges datasets while preserving all records.
```splunk Splunk example
index=web OR index=security
```
```kusto APL equivalent
['sample-http-logs']
| union ['security-logs']
```
In ANSI SQL, the `UNION` operator performs a similar function to the APL `union` operator. Both are used to combine the results of two or more queries. However, SQL’s `UNION` removes duplicates by default, whereas APL’s `union` keeps all rows unless you use `union with=kind=unique`.
```sql SQL example
SELECT * FROM web_logs
UNION
SELECT * FROM security_logs;
```
```kusto APL equivalent
['sample-http-logs']
| union ['security-logs']
```
## Usage
### Syntax
```kusto
T1 | union [withsource=FieldName] [T2], [T3], ...
```
### Parameters
* `T1, T2, T3, ...`: Tables or query results you want to combine into a single output.
* `withsource`: Optional, adds a field to the output where each value specifies the source dataset of the row. Specify the name of this additional field in `FieldName`.
### Returns
The `union` operator returns all rows from the specified tables or queries. If fields overlap, they are merged. Non-overlapping fields are retained in their original form.
## Use case examples
In log analysis, you can use the `union` operator to combine HTTP logs from different sources, such as web servers and security systems, to analyze trends or detect anomalies.
**Query**
```kusto
['sample-http-logs']
| union ['security-logs']
| where status == '500'
```
**Output**
| \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms |
| ------------------- | ------- | ------ | ------------------- | ------ | -------- | ----------- | ----------------- |
| 2024-10-17 12:34:56 | user123 | 500 | /api/login | GET | London | UK | 345 |
| 2024-10-17 12:35:10 | user456 | 500 | /api/update-profile | POST | Berlin | Germany | 123 |
This query combines two datasets (HTTP logs and security logs) and filters the combined data to show only those entries where the HTTP status code is 500.
When working with OpenTelemetry traces, you can use the `union` operator to combine tracing information from different services for a unified view of system performance.
**Query**
```kusto
['otel-demo-traces']
| union ['otel-backend-traces']
| where ['service.name'] == 'frontend' and status_code == 'error'
```
**Output**
| \_time | trace\_id | span\_id | \['service.name'] | kind | status\_code |
| ------------------- | ---------- | -------- | ----------------- | ------ | ------------ |
| 2024-10-17 12:36:10 | trace-1234 | span-567 | frontend | server | error |
| 2024-10-17 12:38:20 | trace-7890 | span-345 | frontend | client | error |
This query combines traces from two different datasets and filters them to show only errors occurring in the `frontend` service.
For security logs, the `union` operator is useful to combine logs from different sources, such as intrusion detection systems (IDS) and firewall logs.
**Query**
```kusto
['sample-http-logs']
| union ['security-logs']
| where ['geo.country'] == 'Germany'
```
**Output**
| \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms |
| ------------------- | ------- | ------ | ---------------- | ------ | -------- | ----------- | ----------------- |
| 2024-10-17 12:34:56 | user789 | 200 | /api/login | GET | Berlin | Germany | 245 |
| 2024-10-17 12:40:22 | user456 | 404 | /api/nonexistent | GET | Munich | Germany | 532 |
This query combines web and security logs, then filters the results to show only those records where the request originated from Germany.
## Other examples
### Basic union
This example combines all rows from `github-push-event` and `github-pull-request-event` without any transformation or filtering.
```kusto
['github-push-event']
| union ['github-pull-request-event']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%22%7D)
### Filter after union
This example combines the datasets, and then filters the data to only include rows where the `method` is `GET`.
```kusto
['sample-http-logs']
| union ['github-issues-event']
| where method == "GET"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-issues-event%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%22%7D)
### Aggregate after union
This example combines the datasets and summarizes the data, counting the occurrences of each combination of `content_type` and `actor`.
```kusto
['sample-http-logs']
| union ['github-pull-request-event']
| summarize Count = count() by content_type, actor
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20content_type%2C%20actor%22%7D)
### Filter and project specific data from combined log sources
This query combines GitHub pull request event logs and GitHub push events, filters by actions made by `github-actions[bot]`, and displays key event details such as `_time`, `repo`, `id`, `commits`, and `head`.
```kusto
['github-pull-request-event']
| union ['github-push-event']
| where actor == "github-actions[bot]"
| project _time, repo, ['id'], commits, head
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27github-pull-request-event%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20where%20actor%20%3D%3D%20%5C%22github-actions%5Bbot%5D%5C%22%5Cn%7C%20project%20_time%2C%20repo%2C%20%5B%27id%27%5D%2C%20commits%2C%20head%22%7D)
### Union with field removing
This example combines the datasets `sample-http-logs` and `github-push-event`, and then removes the `content_type` and `commits` fields from the combined result.
```kusto
['sample-http-logs']
| union ['github-push-event']
| project-away content_type, commits
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20content_type%2C%20commits%22%7D)
### Union with order by
After the union, the result is ordered by the `type` field.
```kusto
['sample-http-logs']
| union hn
| order by type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20hn%5Cn%7C%20order%20by%20type%22%7D)
### Union with joint conditions
This example performs a union and then filters the resulting dataset for rows where `content_type` contains the letter `a` and `geo.city` is `Seattle`.
```kusto
['sample-http-logs']
| union ['github-pull-request-event']
| where content_type contains "a" and ['geo.city'] == "Seattle"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20where%20content_type%20contains%20%5C%22a%5C%22%20and%20%5B%27geo.city%27%5D%20%20%3D%3D%20%5C%22Seattle%5C%22%22%7D)
### Union and count unique values
After the union, the query calculates the number of unique `geo.city` and `repo` entries in the combined dataset.
```kusto
['sample-http-logs']
| union ['github-push-event']
| summarize UniqueNames = dcount(['geo.city']), UniqueData = dcount(repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20UniqueNames%20%3D%20dcount%28%5B%27geo.city%27%5D%29%2C%20UniqueData%20%3D%20dcount%28repo%29%22%7D)
### Union using withsource
The example below returns the union of all datasets that match the pattern `github*` and counts the number of events in each.
```kusto
union withsource=dataset github*
| summarize count() by dataset
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22union%20withsource%3Ddataset%20github*%20%7C%20summarize%20count\(\)%20by%20dataset%22%7D)
## Best practices for the union operator
To maximize the effectiveness of the union operator in APL, here are some best practices to consider:
* Before using the `union` operator, ensure that the fields being merged have compatible data types.
* Use `project` or `project-away` to include or exclude specific fields. This can improve performance and the clarity of your results, especially when you only need a subset of the available data.
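As a sketch of the second guideline (datasets and fields taken from the examples above), project only the fields you need after the union:

```kusto
['sample-http-logs']
| union ['github-push-event']
// Keep only a subset of fields from the combined result
| project _time, method, status, repo
```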
# where
Source: https://axiom.co/docs/apl/tabular-operators/where-operator
This page explains how to use the where operator in APL.
The `where` operator in APL is used to filter rows based on specified conditions. You can use the `where` operator to return only the records that meet the criteria you define. It’s a foundational operator in querying datasets, helping you focus on specific data by applying conditions to filter out unwanted rows. This is useful when working with large datasets, logs, traces, or security events, allowing you to extract meaningful information quickly.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `where` operator filters events based on boolean expressions. APL’s `where` operator functions similarly, allowing you to filter rows that satisfy a condition.
```sql Splunk example
index=main | where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| where status == '200'
```
In ANSI SQL, the `WHERE` clause filters rows in a `SELECT` query based on a condition. APL’s `where` operator behaves similarly, but the syntax reflects APL’s specific dataset structures.
```sql SQL example
SELECT * FROM sample_http_logs WHERE status = '200'
```
```kusto APL equivalent
['sample-http-logs']
| where status == '200'
```
## Usage
### Syntax
```kusto
| where condition
```
### Parameters
* `condition`: A Boolean expression that specifies the filtering condition. The `where` operator returns only the rows that satisfy this condition.
### Returns
The `where` operator returns a filtered dataset containing only the rows where the condition evaluates to true.
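Because the condition is a Boolean expression, you can combine multiple comparisons. For example, the following sketch (using fields from the use case examples below) filters for slow server errors:

```kusto
['sample-http-logs']
// Keep only rows with status 500 and a request duration above 1000 ms
| where status == '500' and req_duration_ms > 1000
```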
## Use case examples
In this use case, you filter HTTP logs to focus on records where the HTTP status is 404 (Not Found).
**Query**
```kusto
['sample-http-logs']
| where status == '404'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'404'%22%7D)
**Output**
| \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ----- | ------ | ------ | -------------- | ----------------- | -------- | ----------- |
| 2024-10-17 10:20:00 | 12345 | 404 | GET | /notfound.html | 120 | Seattle | US |
This query filters out all HTTP requests except those that resulted in a 404 error, making it easy to investigate pages that were not found.
Here, you filter OpenTelemetry traces to retrieve spans where the `duration` exceeded 500 milliseconds.
**Query**
```kusto
['otel-demo-traces']
| where duration > 500ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20duration%20%3E%20500ms%22%7D)
**Output**
| \_time | span\_id | trace\_id | duration | service.name | kind | status\_code |
| ------------------- | -------- | --------- | -------- | ------------ | ------ | ------------ |
| 2024-10-17 11:15:00 | abc123 | xyz789 | 520ms | frontend | server | OK |
This query helps identify spans with durations longer than 500 milliseconds, which might indicate performance issues.
In this security use case, you filter logs to find requests from users in a specific country, such as Germany.
**Query**
```kusto
['sample-http-logs']
| where ['geo.country'] == 'Germany'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20'Germany'%22%7D)
**Output**
| \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ----- | ------ | ------ | ------ | ----------------- | -------- | ----------- |
| 2024-10-17 09:45:00 | 54321 | 200 | POST | /login | 100 | Berlin | Germany |
This query helps filter logs to investigate activity originating from a specific country, useful for security and compliance.
## where \* has
The `* has` pattern in APL is a dynamic and powerful tool within the `where` operator. It offers you the flexibility to search for specific substrings across all fields in a dataset without the need to specify each field name individually. This becomes especially advantageous when dealing with datasets that have numerous or dynamically named fields.
`where * has` is an expensive operation because it searches all fields. For a more efficient query, explicitly list the fields in which you want to search. For example: `where firstName has "miguel" or lastName has "miguel"`.
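For example, a minimal sketch of the explicit-field form on the `['sample-http-logs']` dataset used on this page:
```kusto
['sample-http-logs']
// illustrative: search only the fields you care about instead of *
| where uri has "css" or content_type has "css"
```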
### Basic where \* has usage
Find events where any field contains a specific substring.
```kusto
['sample-http-logs']
| where * has "GET"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Combine multiple substrings
Find events where any field contains one of multiple substrings.
```kusto
['sample-http-logs']
| where * has "GET" or * has "text"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20or%20%2A%20has%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use \* has with other operators
Find events where any field contains a substring, and another specific field equals a certain value.
```kusto
['sample-http-logs']
| where * has "css" and req_duration_ms == 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22css%5C%22%20and%20req_duration_ms%20%3D%3D%201%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Advanced chaining
Filter data based on several conditions, including fields containing certain substrings, then summarize by another specific criterion.
```kusto
['sample-http-logs']
| where * has "GET" and * has "css"
| summarize Count=count() by method, content_type, server_datacenter
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20and%20%2A%20has%20%5C%22css%5C%22%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20method%2C%20content_type%2C%20server_datacenter%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use with aggregations
Find the average of a specific field for events where any field contains a certain substring.
```kusto
['sample-http-logs']
| where * has "Japan"
| summarize avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22Japan%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### String case transformation
The `has` operator is case insensitive. Use `has` if you’re unsure about the case of the substring in the dataset. For the case-sensitive operator, use `has_cs`.
```kusto
['sample-http-logs']
| where * has "mexico"
| summarize avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22mexico%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
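The query above uses the case-insensitive `has`. As a minimal sketch of the case-sensitive variant on a specific field, assuming country names are stored with initial capitals as in the sample output earlier on this page:
```kusto
['sample-http-logs']
// illustrative case-sensitive match; only matches the exact casing "Mexico"
| where ['geo.country'] has_cs "Mexico"
| summarize avg(req_duration_ms)
```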
## List of related operators
* [count](/apl/tabular-operators/count-operator): Use `count` to return the number of records that match specific criteria.
* [distinct](/apl/tabular-operators/distinct-operator): Use `distinct` to return unique values in a dataset, complementing filtering.
* [take](/apl/tabular-operators/take-operator): Use `take` to return a specific number of records, typically in combination with `where` for pagination. See the sketch below for an example.
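A minimal sketch of combining `where` with `take`:
```kusto
['sample-http-logs']
// illustrative: filter first, then limit the number of returned rows
| where status == '200'
| take 10
```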
# Sample queries
Source: https://axiom.co/docs/apl/tutorial
Explore how to use APL in Axiom’s Query tab to run queries using Tabular Operators, Scalar Functions, and Aggregation Functions.
This page shows you how to query your data using APL through a wide range of sample queries. You can try out each example in the [Axiom Playground](https://play.axiom.co/axiom-play-qf1k/query).
For an introduction to APL and to the structure of an APL query, see [Introduction to APL](/apl/introduction).
## Summarize data
[summarize](/apl/tabular-operators/summarize-operator) produces a table that aggregates the content of the dataset. Use the [aggregation functions](/apl/aggregation-function/statistical-functions) with the `summarize` operator to produce different fields.
The following query counts events by time bins.
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
The example below summarizes the top 10 GitHub push events by maximum push ID.
```kusto
['github-push-event']
| summarize max_if = maxif(push_id, true) by size
| top 10 by max_if desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20max_if%20%3D%20maxif%28push_id%2C%20true%29%20by%20size%5Cn%7C%20top%2010%20by%20max_if%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
The example below summarizes the distinct city count by server datacenter.
```kusto
['sample-http-logs']
| summarize cities = dcount(['geo.city']) by server_datacenter
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20cities%20%3D%20dcount%28%5B%27geo.city%27%5D%29%20by%20server_datacenter%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Tabular operators
### where
[where](/apl/tabular-operators/where-operator) filters the content of the dataset that meets a condition when executed.
The following query filters the data by `method` and `content_type`:
```kusto
['sample-http-logs']
| where method == "GET" and content_type == "application/octet-stream"
| project method , content_type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%20and%20content_type%20%3D%3D%20%5C%22application%2Foctet-stream%5C%22%5Cn%7C%20project%20method%20%2C%20content_type%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### count
[count](/apl/tabular-operators/count-operator) returns the number of events from the input dataset.
```kusto
['sample-http-logs']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20count%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### project
[project](/apl/tabular-operators/project-operator) selects a subset of fields.
```kusto
['sample-http-logs']
| project content_type, ['geo.country'], method, resp_body_size_bytes, resp_header_size_bytes
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20content_type%2C%20%5B%27geo.country%27%5D%2C%20method%2C%20resp_body_size_bytes%2C%20resp_header_size_bytes%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### take
[take](/apl/tabular-operators/take-operator) returns up to the specified number of rows.
```kusto
['sample-http-logs']
| take 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### limit
The `limit` operator is an alias to the `take` operator.
```kusto
['sample-http-logs']
| limit 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20limit%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Scalar functions
### parse\_json
[parse\_json](/apl/scalar-functions/string-functions#parse-json) interprets a string as a JSON value and returns the value as a dynamic object.
```kusto
['sample-http-logs']
| project parsed_json = parse_json( "config_jsonified_metrics")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20parsed_json%20%3D%20parse_json%28%20%5C%22config_jsonified_metrics%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### replace\_string
[replace\_string](/apl/scalar-functions/string-functions#replace-string) replaces all string matches with another string.
```kusto
['sample-http-logs']
| extend replaced_string = replace_string( "creator", "method", "machala" )
| project replaced_string
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%20%5C%22creator%5C%22%2C%20%5C%22method%5C%22%2C%20%5C%22machala%5C%22%20%29%5Cn%7C%20project%20replaced_string%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### split
[split](/apl/scalar-functions/string-functions#split) splits a given string according to a given delimiter and returns a string array.
```kusto
['sample-http-logs']
| project split_str = split("method_content_metrics", "_")
| take 20
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22method_content_metrics%5C%22%2C%20%5C%22_%5C%22%29%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### strcat\_delim
[strcat\_delim](/apl/scalar-functions/string-functions#strcat-delim) concatenates its arguments into a string using a given delimiter.
```kusto
['sample-http-logs']
| project strcat = strcat_delim(":", ['geo.city'], resp_body_size_bytes)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28%5C%22%3A%5C%22%2C%20%5B%27geo.city%27%5D%2C%20resp_body_size_bytes%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### indexof
[indexof](/apl/scalar-functions/string-functions#indexof) reports the zero-based index of the first occurrence of a specified string within the input string.
```kusto
['sample-http-logs']
| extend based_index = indexof( ['geo.country'], content_type, 45, 60, resp_body_size_bytes ), specified_time = bin(resp_header_size_bytes, 30)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20based_index%20%3D%20%20indexof%28%20%5B%27geo.country%27%5D%2C%20content_type%2C%2045%2C%2060%2C%20resp_body_size_bytes%20%29%2C%20specified_time%20%3D%20bin%28resp_header_size_bytes%2C%2030%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Regex examples
**Remove leading characters**
```kusto
['sample-http-logs']
| project remove_cutset = trim_start_regex("[^a-zA-Z]", content_type )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex%28%5C%22%5B%5Ea-zA-Z%5D%5C%22%2C%20content_type%20%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Find logs from a city**
```kusto
['sample-http-logs']
| where tostring(['geo.city']) matches regex "^Camaquã$"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28%5B%27geo.city%27%5D%29%20matches%20regex%20%5C%22%5ECamaqu%C3%A3%24%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Identify logs from a user agent**
```kusto
['sample-http-logs']
| where tostring(user_agent) matches regex "Mozilla/5.0"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28user_agent%29%20matches%20regex%20%5C%22Mozilla%2F5.0%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Find logs with response body size in a certain range**
```kusto
['sample-http-logs']
| where toint(resp_body_size_bytes) >= 4000 and toint(resp_body_size_bytes) <= 5000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_body_size_bytes%29%20%3E%3D%204000%20and%20toint%28resp_body_size_bytes%29%20%3C%3D%205000%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Find logs with user agents containing Windows NT**
```kusto
['sample-http-logs']
| where tostring(user_agent) matches regex @"Windows NT [\d\.]+"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?qid=m8yNkSVVjGq-s0z19c)
**Find logs with specific response header size**
```kusto
['sample-http-logs']
| where toint(resp_header_size_bytes) == 31
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_header_size_bytes%29%20%3D%3D%2031%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Find logs with specific request duration**
```kusto
['sample-http-logs']
| where toreal(req_duration_ms) < 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toreal%28req_duration_ms%29%20%3C%201%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Find logs where TLS is enabled and method is POST**
```kusto
['sample-http-logs']
| where tostring(is_tls) == "true" and tostring(method) == "POST"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28is_tls%29%20%3D%3D%20%5C%22true%5C%22%20and%20tostring%28method%29%20%3D%3D%20%5C%22POST%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Array functions
### array\_concat
[array\_concat](/apl/scalar-functions/array-functions#array_concat) concatenates a number of dynamic arrays to a single array.
```kusto
['sample-http-logs']
| extend concatenate = array_concat( dynamic([5,4,3,87,45,2,3,45]))
| project concatenate
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20concatenate%20%3D%20array_concat%28%20dynamic%28%5B5%2C4%2C3%2C87%2C45%2C2%2C3%2C45%5D%29%29%5Cn%7C%20project%20concatenate%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### array\_sum
[array\_sum](/apl/scalar-functions/array-functions#array-sum) calculates the sum of elements in a dynamic array.
```kusto
['sample-http-logs']
| extend summary_array=dynamic([1,2,3,4])
| project summary_array=array_sum(summary_array)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20summary_array%3Ddynamic%28%5B1%2C2%2C3%2C4%5D%29%5Cn%7C%20project%20summary_array%3Darray_sum%28summary_array%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Conversion functions
### todatetime
[todatetime](/apl/scalar-functions/conversion-functions#todatetime) converts input to datetime scalar.
```kusto
['sample-http-logs']
| extend dated_time = todatetime("2026-08-16")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dated_time%20%3D%20todatetime%28%5C%222026-08-16%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
### dynamic\_to\_json
[dynamic\_to\_json](/apl/scalar-functions/conversion-functions#dynamic-to-json) converts a scalar value of type dynamic to a canonical string representation.
```kusto
['sample-http-logs']
| extend dynamic_string = dynamic_to_json(dynamic([10,20,30,40 ]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dynamic_string%20%3D%20dynamic_to_json%28dynamic%28%5B10%2C20%2C30%2C40%20%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Scalar operators
APL supports a wide range of scalar operators:
* [String operators](/apl/scalar-operators/string-operators)
* [Logical operators](/apl/scalar-operators/logical-operators)
* [Numerical operators](/apl/scalar-operators/numerical-operators)
### contains
The query below uses the `contains` operator to find actors whose names contain `-bot` or `[bot]`:
```kusto
['github-issue-comment-event']
| extend bot = actor contains "-bot" or actor contains "[bot]"
| where bot == true
| summarize count() by bin_auto(_time), actor
| take 20
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issue-comment-event%27%5D%5Cn%7C%20extend%20bot%20%3D%20actor%20contains%20%5C%22-bot%5C%22%20or%20actor%20contains%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20bot%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20actor%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['sample-http-logs']
| extend user_status = status contains "200" , agent_flow = user_agent contains "(Windows NT 6.4; AppleWebKit/537.36 Chrome/41.0.2225.0 Safari/537.36"
| where user_status == true
| summarize count() by bin_auto(_time), status
| take 15
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20user_status%20%3D%20status%20contains%20%5C%22200%5C%22%20%2C%20agent_flow%20%3D%20user_agent%20contains%20%5C%22%28Windows%20NT%206.4%3B%20AppleWebKit%2F537.36%20Chrome%2F41.0.2225.0%20Safari%2F537.36%5C%22%5Cn%7C%20where%20user_status%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20status%5Cn%7C%20take%2015%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Hash functions
* [hash\_md5](/apl/scalar-functions/hash-functions#hash-md5) returns an MD5 hash value for the input value.
* [hash\_sha256](/apl/scalar-functions/hash-functions#hash-sha256) returns a SHA-256 hash value for the input value.
* [hash\_sha1](/apl/scalar-functions/hash-functions#hash-sha1) returns a SHA-1 hash value for the input value.
```kusto
['sample-http-logs']
| extend sha_256 = hash_md5( "resp_header_size_bytes" ), sha_1 = hash_sha1( content_type), md5 = hash_md5( method), sha512 = hash_sha512( "resp_header_size_bytes" )
| project sha_256, sha_1, md5, sha512
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20sha_256%20%3D%20hash_md5%28%20%5C%22resp_header_size_bytes%5C%22%20%29%2C%20sha_1%20%3D%20hash_sha1%28%20content_type%29%2C%20md5%20%3D%20hash_md5%28%20method%29%2C%20sha512%20%3D%20hash_sha512%28%20%5C%22resp_header_size_bytes%5C%22%20%29%5Cn%7C%20project%20sha_256%2C%20sha_1%2C%20md5%2C%20sha512%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Rounding functions
* [floor()](/apl/scalar-functions/rounding-functions#floor) calculates the largest integer less than, or equal to, the specified numeric expression.
* [ceiling()](/apl/scalar-functions/rounding-functions#ceiling) calculates the smallest integer greater than, or equal to, the specified numeric expression.
* [bin()](/apl/scalar-functions/rounding-functions#bin) rounds values down to an integer multiple of a given bin size.
```kusto
['sample-http-logs']
| extend largest_integer_less = floor( resp_header_size_bytes ), smallest_integer_greater = ceiling( req_duration_ms ), integer_multiple = bin( resp_body_size_bytes, 5 )
| project largest_integer_less, smallest_integer_greater, integer_multiple
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20largest_integer_less%20%3D%20floor%28%20resp_header_size_bytes%20%29%2C%20smallest_integer_greater%20%3D%20ceiling%28%20req_duration_ms%20%29%2C%20integer_multiple%20%3D%20bin%28%20resp_body_size_bytes%2C%205%20%29%5Cn%7C%20project%20largest_integer_less%2C%20smallest_integer_greater%2C%20integer_multiple%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Truncate decimals using round function**
```kusto
['sample-http-logs']
| project rounded_value = round(req_duration_ms, 2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%28req_duration_ms%2C%202%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Truncate decimals using floor function**
```kusto
['sample-http-logs']
| project floor_value = floor(resp_body_size_bytes), ceiling_value = ceiling(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20floor_value%20%3D%20floor%28resp_body_size_bytes%29%2C%20ceiling_value%20%3D%20ceiling%28req_duration_ms%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Other examples
**List all unique groups**
```kusto
['sample-http-logs']
| distinct ['id'], is_tls
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20distinct%20%5B'id'%5D%2C%20is_tls%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Count of all events per service**
```kusto
['sample-http-logs']
| summarize Count = count() by server_datacenter
| order by Count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20server_datacenter%5Cn%7C%20order%20by%20Count%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Change the time clause**
```kusto
['github-issues-event']
| where _time == ago(1m)
| summarize count(), sum(['milestone.number']) by _time=bin(_time, 1m)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20_time%20%3D%3D%20ago%281m%29%5Cn%7C%20summarize%20count%28%29%2C%20sum%28%5B%27milestone.number%27%5D%29%20by%20_time%3Dbin%28_time%2C%201m%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Requests with durations between 5 and 6 ms for the last 7 days, one bar per day**
```kusto
['sample-http-logs']
| where _time > ago(7d)
| where req_duration_ms >= 5 and req_duration_ms < 6
| summarize count(), histogram(resp_header_size_bytes, 20) by bin(_time, 1d)
| order by _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20req_duration_ms%20%3E%3D%205%20and%20req_duration_ms%20%3C%206%20%7C%20summarize%20count\(\)%2C%20histogram\(resp_header_size_bytes%2C%2020\)%20by%20bin\(_time%2C%201d\)%20%7C%20order%20by%20_time%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%227d%22%7D%7D)
**Implement a remapper on remote address logs**
```kusto
['sample-http-logs']
| extend RemappedStatus = case(req_duration_ms >= 0.57, "new data", resp_body_size_bytes >= 1000, "size bytes", resp_header_size_bytes == 40, "header values", "doesntmatch")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20RemappedStatus%20%3D%20case%28req_duration_ms%20%3E%3D%200.57%2C%20%5C%22new%20data%5C%22%2C%20resp_body_size_bytes%20%3E%3D%201000%2C%20%5C%22size%20bytes%5C%22%2C%20resp_header_size_bytes%20%3D%3D%2040%2C%20%5C%22header%20values%5C%22%2C%20%5C%22doesntmatch%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Advanced aggregations**
```kusto
['sample-http-logs']
| extend prospect = ['geo.city'] contains "Okayama" or uri contains "/api/v1/messages/back"
| extend possibility = server_datacenter contains "GRU" or status contains "301"
| summarize count(), topk( user_agent, 6 ) by bin(_time, 10d), ['geo.country']
| take 4
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20prospect%20%3D%20%5B%27geo.city%27%5D%20contains%20%5C%22Okayama%5C%22%20or%20uri%20contains%20%5C%22%2Fapi%2Fv1%2Fmessages%2Fback%5C%22%5Cn%7C%20extend%20possibility%20%3D%20server_datacenter%20contains%20%5C%22GRU%5C%22%20or%20status%20contains%20%5C%22301%5C%22%5Cn%7C%20summarize%20count%28%29%2C%20topk%28%20user_agent%2C%206%20%29%20by%20bin%28_time%2C%2010d%29%2C%20%5B%27geo.country%27%5D%5Cn%7C%20take%204%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Search map fields**
```kusto
['otel-demo-traces']
| where isnotnull( ['attributes.custom'])
| extend extra = tostring(['attributes.custom'])
| search extra:"0PUK6V6EV0"
| project _time, trace_id, name, ['attributes.custom']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20where%20isnotnull%28%20%5B'attributes.custom'%5D%29%5Cn%7C%20extend%20extra%20%3D%20tostring%28%5B'attributes.custom'%5D%29%5Cn%7C%20search%20extra%3A%5C%220PUK6V6EV0%5C%22%5Cn%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Configure processing rules**
```kusto
['sample-http-logs']
| where _sysTime > ago(1d)
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20_sysTime%20%3E%20ago%281d%29%5Cn%7C%20summarize%20count%28%29%20by%20method%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%221d%22%7D%7D)
**Return different values based on the evaluation of a condition**
```kusto
['sample-http-logs']
| extend MemoryUsageStatus = iff(req_duration_ms > 10000, "Highest", "Normal")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20MemoryUsageStatus%20%3D%20iff%28req_duration_ms%20%3E%2010000%2C%20%27Highest%27%2C%20%27Normal%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Working with different operators**
```kusto
['hn']
| extend superman = text contains "superman" or title contains "superman"
| extend batman = text contains "batman" or title contains "batman"
| extend hero = case(
superman and batman, "both",
superman, "superman ", // spaces change the color
batman, "batman ",
"none")
| where (superman or batman) and not (batman and superman)
| summarize count(), topk(type, 3) by bin(_time, 30d), hero
| take 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20extend%20superman%20%3D%20text%20contains%20%5C%22superman%5C%22%20or%20title%20contains%20%5C%22superman%5C%22%5Cn%7C%20extend%20batman%20%3D%20text%20contains%20%5C%22batman%5C%22%20or%20title%20contains%20%5C%22batman%5C%22%5Cn%7C%20extend%20hero%20%3D%20case%28%5Cn%20%20%20%20superman%20and%20batman%2C%20%5C%22both%5C%22%2C%5Cn%20%20%20%20superman%2C%20%5C%22superman%20%20%20%5C%22%2C%20%2F%2F%20spaces%20change%20the%20color%5Cn%20%20%20%20batman%2C%20%5C%22batman%20%20%20%20%20%20%20%5C%22%2C%5Cn%20%20%20%20%5C%22none%5C%22%29%5Cn%7C%20where%20%28superman%20or%20batman%29%20and%20not%20%28batman%20and%20superman%29%5Cn%7C%20summarize%20count%28%29%2C%20topk%28type%2C%203%29%20by%20bin%28_time%2C%2030d%29%2C%20hero%5Cn%7C%20take%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['sample-http-logs']
| summarize flow = dcount( content_type) by ['geo.country']
| take 50
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20flow%20%3D%20dcount%28%20content_type%29%20by%20%5B%27geo.country%27%5D%5Cn%7C%20take%2050%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Get the JSON into a property bag using parse\_json**
```kusto
example
| where isnotnull(log)
| extend parsed_log = parse_json(log)
| project service, parsed_log.level, parsed_log.message
```
**Get average response using project-keep**
```kusto
['sample-http-logs']
| where ['geo.country'] == "United States" or ['id'] == 'b2b1f597-0385-4fed-a911-140facb757ef'
| extend systematic_view = ceiling( resp_header_size_bytes )
| extend resp_avg = cos( resp_body_size_bytes )
| project-away systematic_view
| project-keep resp_avg
| take 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20%5C%22United%20States%5C%22%20or%20%5B'id'%5D%20%3D%3D%20%5C%22b2b1f597-0385-4fed-a911-140facb757ef%5C%22%5Cn%7C%20extend%20systematic_view%20%3D%20ceiling%28%20resp_header_size_bytes%20%29%5Cn%7C%20extend%20resp_avg%20%3D%20cos%28%20resp_body_size_bytes%20%29%5Cn%7C%20project-away%20systematic_view%5Cn%7C%20project-keep%20resp_avg%5Cn%7C%20take%205%22%7D)
**Combine multiple percentiles into a single chart**
```kusto
['sample-http-logs']
| summarize percentiles_array(req_duration_ms, 50, 75, 90) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2050%2C%2075%2C%2090\)%20by%20bin_auto\(_time\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Combine mathematical functions**
```kusto
['sample-http-logs']
| extend tangent = tan( req_duration_ms ), cosine = cos( resp_header_size_bytes ), absolute_input = abs( req_duration_ms ), sine = sin( resp_header_size_bytes ), power_factor = pow( req_duration_ms, 4)
| extend angle_pi = degrees( resp_body_size_bytes ), pie = pi()
| project tangent, cosine, absolute_input, angle_pi, pie, sine, power_factor
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20tangent%20%3D%20tan%28%20req_duration_ms%20%29%2C%20cosine%20%3D%20cos%28%20resp_header_size_bytes%20%29%2C%20absolute_input%20%3D%20abs%28%20req_duration_ms%20%29%2C%20sine%20%3D%20sin%28%20resp_header_size_bytes%20%29%2C%20power_factor%20%3D%20pow%28%20req_duration_ms%2C%204%29%5Cn%7C%20extend%20angle_pi%20%3D%20degrees%28%20resp_body_size_bytes%20%29%2C%20pie%20%3D%20pi%28%29%5Cn%7C%20project%20tangent%2C%20cosine%2C%20absolute_input%2C%20angle_pi%2C%20pie%2C%20sine%2C%20power_factor%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-issues-event']
| where actor !endswith "[bot]"
| where repo startswith "kubernetes/"
| where action == "opened"
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20actor%20%21endswith%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20repo%20startswith%20%5C%22kubernetes%2F%5C%22%5Cn%7C%20where%20action%20%3D%3D%20%5C%22opened%5C%22%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Change global configuration attributes**
```kusto
['sample-http-logs']
| extend status = coalesce(status, "info")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status%20%3D%20coalesce\(status%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Set default value on event field**
```kusto
['sample-http-logs']
| project status = case(
isnotnull(status) and status != "", content_type, // use content_type if status is not null and not empty
"info" // default value
)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project%20status%20%3D%20case\(isnotnull\(status\)%20and%20status%20!%3D%20%5C%22%5C%22%2C%20content_type%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
**Extract nested payment amount from custom attributes map field**
```kusto
['otel-demo-traces']
| extend amount = ['attributes.custom']['app.payment.amount']
| where isnotnull( amount)
| project _time, trace_id, name, amount, ['attributes.custom']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20amount%20%3D%20%5B'attributes.custom'%5D%5B'app.payment.amount'%5D%20%7C%20where%20isnotnull\(%20amount\)%20%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20amount%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
**Filtering GitHub issues by label identifier**
```kusto
['github-issues-event']
| extend data = tostring(labels)
| where labels contains "d73a4a"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20extend%20data%20%3D%20tostring\(labels\)%20%7C%20where%20labels%20contains%20'd73a4a'%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
**Aggregate trace counts by HTTP method attribute in custom map**
```kusto
['otel-demo-traces']
| extend httpFlavor = tostring(['attributes.custom'])
| summarize Count=count() by ['attributes.http.method']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20httpFlavor%20%3D%20tostring\(%5B'attributes.custom'%5D\)%20%7C%20summarize%20Count%3Dcount\(\)%20by%20%5B'attributes.http.method'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
# Enrich Axiom experience with AWS PrivateLink
Source: https://axiom.co/docs/apps/aws-privatelink
This page explains how to enrich your Axiom experience with AWS PrivateLink.
[AWS PrivateLink](https://aws.amazon.com/privatelink/) is a networking service provided by Amazon Web Services (AWS) that allows you to securely access services hosted on the AWS cloud over a private network connection. With AWS PrivateLink, you can access Axiom directly from your Amazon Virtual Private Cloud (VPC) without an internet gateway or NAT device, simplifying your network setup.
This page explains how to connect to Axiom over AWS PrivateLink by setting up a VPC endpoint within AWS and configuring Axiom to use that endpoint.
Axiom exposes AWS PrivateLink endpoints in the `us-east-1` AWS region. To route traffic from other AWS regions, follow the [setup in `us-east-1`](#setup) and then [configure Amazon VPC peering](#configure-amazon-vpc-peering).
## Setup
1. In the AWS Console, switch to the `us-east-1` region and create a VPC. For more information, see the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/getting-started.html).
2. Start creating a VPC endpoint. For more information, see the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws).
3. In **Service category**, select **Other endpoint services**.
4. In **Service name**, enter `com.amazonaws.vpce.us-east-1.vpce-svc-05a64735cdf68866b` to establish AWS PrivateLink for `api.axiom.co`.
5. Click **Verify service**. If this does not succeed, reach out to [Axiom Support](https://axiom.co/contact).
6. Select the VPC and subnets that you want to connect to the Axiom VPC service endpoint. Ensure that **Enable DNS name** is turned on and the security group accepts inbound traffic on TCP port `443`.
7. Finish the setup and wait for the VPC endpoint to become available. This usually takes 10 minutes.
## Configure Amazon VPC Peering
To route traffic to Axiom’s PrivateLink offering in `us-east-1` from other AWS regions, use inter-region [Amazon VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). Inter-region VPC peering allows you to establish connections between VPCs across different AWS regions. This allows VPC resources in different regions to communicate with each other using private IP addresses.
After following the [setup in `us-east-1`](#setup), configure VPC peering to make the PrivateLink endpoint available in another region to send logs to Axiom over PrivateLink. For more information, see the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/peering/working-with-vpc-peering.html).
When configuring PrivateLink with VPC peering, Amazon Route 53 is useful for resolving private DNS hostnames within your VPCs. Amazon Route 53 allows you to create private hosted zones within your VPC. These private hosted zones allow you to use custom domain names for your resources, such as EC2 instances, ELB load balancers, or RDS instances, without exposing them to the public internet. For more information, see the [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html).
# Connect Axiom with Cloudflare Logpush
Source: https://axiom.co/docs/apps/cloudflare-logpush
Axiom gives you an all-at-once view of key Cloudflare Logpush metrics and logs, out of the box, with our dynamic Cloudflare Logpush dashboard.
Cloudflare Logpush is a feature that allows you to push HTTP request logs and other Cloudflare-generated logs directly to your desired storage, analytics, and monitoring solutions like Axiom. The integration with Axiom provides real-time insights into web traffic and operational issues, helping you monitor and troubleshoot effectively.
## What’s Cloudflare Logpush?
Cloudflare Logpush enables Cloudflare users to automatically export their logs in JSON format to a variety of endpoints. This feature is incredibly useful for analytics, auditing, debugging, and monitoring the performance and security of websites. Types of logs you can export include HTTP request logs, firewall events, and more.
## Installing Cloudflare Logpush app
### Prerequisites
* An active Cloudflare Enterprise account
* API token or global API key
You can create a token that has access to a single zone, a single account, or a mix of these, depending on your needs. For account access, the token must have these permissions:
* Logs: Edit
* Account settings: Read
For zones, the token only needs edit permission for logs.
## Steps
* Log in to Cloudflare, go to your Cloudflare dashboard, and then select the Enterprise zone (domain) you want to enable Logpush for.
* Optionally, set filters and fields. You can filter logs by field (like Client IP, User Agent, etc.) and set the type of logs you want (for example, HTTP requests, firewall events).
* In Axiom, click **Settings**, select **Apps**, and install the Cloudflare Logpush app with the token you created from the profile settings in Cloudflare.
* You see your available accounts and zones. Select the Cloudflare datasets you want to subscribe to.
* The installation uses the Cloudflare API to create Logpush jobs for each selected dataset.
* After the installation completes, you can find the installed zone-scoped and account-scoped Logpush jobs in Cloudflare.
* In Axiom, you can see your Cloudflare Logpush dashboard.
Using Axiom with Cloudflare Logpush offers a powerful solution for real-time monitoring, observability, and analytics. Axiom can help you gain deep insights into your app’s performance, errors, and app bottlenecks.
### Benefits of using the Axiom Cloudflare Logpush dashboard
* Real-time visibility into web performance: One of the most crucial features is the ability to see how your website or app is performing in real-time. The dashboard can show everything from page load times to error rates, giving you immediate insights that can help in timely decision-making.
* Actionable insights for troubleshooting: The dashboard doesn’t just provide raw data; it provides insights. Whether it’s an error that needs immediate fixing or a performance metric that points to a problem in your app, having this information readily available makes it easier to identify issues and resolve them swiftly.
* DNS metrics: Understanding DNS requests, DNS queries, and DNS cache hits from your app is vital for tracking request spikes and getting the total number of queries in your system.
* Centralized logging and error tracing: With logs coming in from various parts of your app stack, centralizing them within Axiom makes it easier to correlate events across different layers of your infrastructure. This is crucial for troubleshooting complex issues that may span multiple services or components.
## Supported Cloudflare Logpush datasets
Axiom supports the following zone-scoped and account-scoped Cloudflare Logpush datasets.
**Zone-scoped**
* DNS logs
* Firewall events
* HTTP requests
* NEL reports
* Spectrum events
**Account-scoped**
* Access requests
* Audit logs
* CASB Findings
* Device posture results
* DNS Firewall Logs
* Gateway DNS
* Gateway HTTP
* Gateway Network
* Magic IDS Detections
* Network Analytics Logs
* Workers Trace Events
* Zero Trust Network Session Logs
# Connect Axiom with Cloudflare Workers
Source: https://axiom.co/docs/apps/cloudflare-workers
This page explains how to enrich your Axiom experience with Cloudflare Workers.
The Axiom Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status. Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with our dynamic Cloudflare Workers dashboard.
The data obtained with the Axiom dashboard gives you better insights into the state of your Cloudflare Workers so you can easily monitor bad requests, popular URLs, cumulative execution time, successful requests, and more. The app is part of Axiom’s unified logging and observability platform, so you can easily track Cloudflare Workers edge requests alongside a comprehensive view of other resources in your Cloudflare Worker environments.
Axiom Cloudflare Workers is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-cloudflare-workers).
## What is Cloudflare Workers
[Cloudflare Workers](https://developers.cloudflare.com/workers/) is a serverless computing platform developed by Cloudflare. The Workers platform allows developers to deploy and run JavaScript code directly at the network edge in more than 200 data centers worldwide. This serverless architecture enables high performance, low latency, and efficient scaling for web apps and APIs.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Send Cloudflare Worker logs to Axiom
1. In Cloudflare, create a new worker. For more information, see the [Cloudflare documentation](https://developers.cloudflare.com/workers/get-started/guide/).
2. Copy the contents of the [src/worker.js](https://github.com/axiomhq/axiom-cloudflare-workers/blob/main/src/worker.js) file into the worker you have created.
3. Update the authentication variables:
```js
const axiomDataset = "DATASET_NAME"
const axiomToken = "API_TOKEN"
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
4. Add triggers for the worker. For example, add a route trigger using the [Cloudflare documentation](https://developers.cloudflare.com/workers/configuration/routing/routes/#set-up-a-route-in-the-dashboard).
When the routes receive requests, the worker is triggered and the logs are sent to your Axiom dataset.
# Connect Axiom with Grafana
Source: https://axiom.co/docs/apps/grafana
Learn how to extend the functionality of Grafana by installing the Axiom data source plugin.
## What is a Grafana data source plugin?
Grafana is an open-source tool for time-series analytics, visualization, and alerting. It’s frequently used in DevOps and IT Operations roles to provide real-time information on system health and performance.
Data sources in Grafana are the actual databases or services where the data is stored. Grafana has a variety of data source plugins that connect Grafana to different types of databases or services. This enables Grafana to query those sources and display that data on its dashboards. The data sources can be anything from traditional SQL databases to time-series databases, or metrics and logs from Axiom.
A Grafana data source plugin extends the functionality of Grafana by allowing it to interact with a specific type of data source. These plugins enable users to extract data from a variety of sources, not just those supported by default in Grafana.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Install the Axiom Grafana data source plugin on Grafana Cloud
* In Grafana, click **Administration** > **Plugins** in the side navigation menu to view installed plugins.
* In the filter bar, search for the Axiom plugin.
* Click the plugin logo.
* Click **Install**. When the installation is complete, a confirmation message indicates that it was successful.
You can also install the Axiom Grafana plugin from the [Grafana Plugins page](https://grafana.com/grafana/plugins/axiomhq-axiom-datasource/).
## Install the Axiom Grafana data source plugin on local Grafana
The Axiom data source plugin for Grafana is [open source on GitHub](https://github.com/axiomhq/axiom-grafana). It can be installed via the Grafana CLI, or via Docker.
### Install the Axiom Grafana Plugin using Grafana CLI
```bash
grafana-cli plugins install axiomhq-axiom-datasource
```
### Install via Docker
* Add the plugin to your `docker-compose.yml` or `Dockerfile`
* Set the environment variable `GF_INSTALL_PLUGINS` to include the plugin
Example:
`GF_INSTALL_PLUGINS="axiomhq-axiom-datasource"`
## Configuration
* Add a new data source in Grafana
* Select the Axiom data source type.
* Enter the previously generated API token.
* Save and test the data source.
## Build queries with the query editor
The Axiom data source plugin provides a custom query editor to build and visualize your Axiom event data. After configuring the Axiom data source, start building visualizations from metrics and logs stored in Axiom.
* Create a new panel in Grafana by clicking **Add visualization**.
* Select the Axiom data source.
* Use the query editor to choose the desired metrics, dimensions, and filters.
## Benefits of the Axiom Grafana data source plugin
The Axiom Grafana data source plugin allows users to display and interact with their Axiom data directly from within Grafana. By doing so, it provides several advantages:
1. **Unified visualization:** The Axiom Grafana data source plugin allows users to utilize Grafana’s powerful visualization tools with Axiom’s data. This enables users to create, explore, and share dashboards which visually represent their Axiom logs and metrics.
2. **Rich querying capability:** Grafana has a powerful and flexible interface for building data queries. With the Axiom plugin, you can leverage this capability to build complex queries against your Axiom data.
3. **Customizable alerting:** Grafana’s alerting feature allows you to set alerts based on your query results and set up custom alerts for specific conditions in your Axiom log data.
4. **Sharing and Collaboration:** Grafana’s features for sharing and collaboration can help teams work together more effectively. Share Axiom data visualizations with others, collaborate on dashboards, and discuss insights directly in Grafana.
# Map location data with Axiom and Hex
Source: https://axiom.co/docs/apps/hex
This page explains how to visualize geospatial log data from Axiom using Hex interactive maps.
Hex is a powerful collaborative data platform that allows you to create notebooks with Python/SQL code and interactive visualizations.
This page explains how to integrate Hex with Axiom to visualize geospatial data from your logs. You ingest location data into Axiom, query it using APL, and create interactive map visualizations in Hex.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Create a Hex account](https://app.hex.tech/).
## Send geospatial data to Axiom
Send your sample location data to Axiom using the API endpoint. For example, the following HTTP request sends sample robot location data with latitude, longitude, status, and satellite information.
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[
{
"data": {
"robot_id": "robot-001",
"latitude": 37.7749,
"longitude": -122.4194,
"num_satellites": 8,
"status": "active"
}
}
]'
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Verify that your data has been ingested correctly by running an APL query in the Axiom UI.
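For example, a minimal verification query. Replace `DATASET_NAME` with the name of your dataset; the field names match the sample payload above.
```kusto
['DATASET_NAME']
// illustrative check that the ingested fields are present
| project ['data.robot_id'], ['data.latitude'], ['data.longitude'], ['data.num_satellites'], ['data.status']
| take 10
```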
## Set up your Hex project
1. Create a new Hex project. For more information, see the [Hex documentation](https://learn.hex.tech/docs/getting-started/create-your-first-project).
2. Save your Axiom API token as a secret in Hex. This example uses the secret name `AXIOM_API_TOKEN`. For more information, see the [Hex documentation](https://learn.hex.tech/docs/explore-data/projects/environment-configuration/environment-views#secrets).
## Query data from Axiom
Write the Python code in your Hex notebook that retrieves data from Axiom. For example, customize the code below:
```python
import requests
import pandas as pd
from datetime import datetime, timedelta
import os
# Retrieve the API token from Hex secrets
axiom_token = os.environ.get("AXIOM_API_TOKEN")
# Define Axiom API endpoint and headers
base_url = "https://AXIOM_DOMAIN/v1/datasets/_apl"
headers = {
'Authorization': f'Bearer {axiom_token}',
'Content-Type': 'application/json',
'Accept': 'application/json',
'Accept-Encoding': 'gzip'
}
# Define the time range for your query
end_time = datetime.utcnow()
start_time = end_time - timedelta(days=3) # Get data from the last 3 days
# Construct the APL query
query = {
"apl": """DATASET_NAME
| project ['data.latitude'], ['data.longitude'], ['data.num_satellites'], ['data.robot_id'], ['data.status']""",
"startTime": start_time.strftime("%Y-%m-%dT%H:%M:%SZ"),
"endTime": end_time.strftime("%Y-%m-%dT%H:%M:%SZ")
}
try:
# Send the request to Axiom API
response = requests.post(
f"{base_url}?format=tabular",
headers=headers,
json=query,
timeout=10
)
# Print request details for debugging
print("Request Details:")
print(f"URL: {base_url}?format=tabular")
print(f"Query: {query['apl']}")
print(f"Response Status: {response.status_code}")
if response.status_code == 200:
data = response.json()
if 'tables' in data:
table = data['tables'][0]
if table.get('columns') and len(table['columns']) > 0:
columns = [field['name'] for field in table['fields']]
rows = table['columns']
# Create DataFrame with proper column orientation
df = pd.DataFrame(list(zip(*rows)), columns=columns)
# Ensure data types are appropriate for mapping
df['data.latitude'] = pd.to_numeric(df['data.latitude'])
df['data.longitude'] = pd.to_numeric(df['data.longitude'])
df['data.num_satellites'] = pd.to_numeric(df['data.num_satellites'])
# Display the first few rows to verify our data
print("\nDataFrame Preview:")
display(df.head())
# Store the DataFrame for visualization
robot_locations = df
else:
print("\nNo data found in the specified time range.")
else:
print("\nNo tables found in response")
print("Response structure:", data.keys())
except Exception as e:
print(f"\nError: {str(e)}")
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Create map visualization
Create an interactive map visualization in Hex and customize it. For more information, see the [Hex documentation](https://learn.hex.tech/docs/explore-data/cells/visualization-cells/map-cells).
# Extensions
Source: https://axiom.co/docs/apps/introduction
Enrich your Axiom organization with dedicated extensions.
This section walks you through a catalog of dedicated extensions that enrich your experience in Axiom’s Console.
To use standard APIs and other data shippers like the Elasticsearch Bulk API, Fluent Bit log processor or Fluentd log collector, go to [Send data](/send-data/methods) instead.
A pre-configured dashboard for logs sent from AWS Lambda.
A configuration path for sending data to Axiom via AWS PrivateLink.
A pre-configured dashboard for logs sent from Cloudflare Workers.
An integrated workflow for sending data from Cloudflare Logpush to Axiom.
A data source plugin for visualizing data stored in Axiom in Grafana.
A recommended pathway to visualize data stored in Axiom in Hex.
An integrated workflow for sending data from Netlify to Axiom.
A pre-configured dashboard for audit and network flow logs sent from Tailscale.
A walkthrough on configuring resources in Axiom using Terraform.
A pre-configured dashboard for logs sent from Vercel.
# Enrich Axiom experience with AWS Lambda
Source: https://axiom.co/docs/apps/lambda
This page explains how to enrich your Axiom experience with AWS Lambda.
Use the Axiom Lambda Extension to enrich your Axiom organization with quick filters and a dashboard.
For information on how to send logs and platform events of your Lambda function to Axiom, see [Send data from AWS Lambda](/send-data/aws-lambda).
## What’s the Axiom Lambda Extension
AWS Lambda is a compute service that allows you to build applications and run your code at scale without provisioning or maintaining any servers.
Use the Axiom Lambda Extension to collect Lambda logs, performance metrics, platform events, and memory usage from your Lambda functions. With the Axiom Lambda Extension, you can monitor Lambda performance, aggregate system-level metrics for your serverless applications, and optimize Lambda functions through easy-to-use automatic dashboards.
With the Axiom Lambda extension, you can:
* Monitor your Lambda functions and invocations.
* Get full visibility into your AWS Lambda events in minutes.
* Collect metrics and logs from your Lambda-based Serverless Applications.
* Track and view enhanced memory usage by versions, durations, and cold start.
* Detect and get alerts on Lambda event errors, Lambda request timeout, and low execution time.
## Comprehensive AWS Lambda dashboards
The Axiom AWS Lambda integration comes with a pre-built dashboard where you can see and group your functions by version and by the AWS resource that triggers them. This makes it the ideal starting point for an advanced view of the performance and health of your AWS Lambda serverless services and Lambda function events. The AWS Lambda dashboards automatically show up in Axiom through schema detection after you install the Axiom Lambda Extension.
These zero-config dashboards help you spot and troubleshoot Lambda function errors. For example, if there’s high memory usage on your functions, you can spot the unusual delay in the max execution dashboard and filter your errors by function, duration, invocation, and version. With your Lambda version name, you can expand your view of what’s happening in your Lambda event source mapping and invocation type.
## Monitor Lambda functions and usage in Axiom
Having real-time visibility into your function logs is important because any delay between sending your Lambda request and its execution adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum and minimum execution time, and all invocations by function.
The Axiom Lambda Extension gives you full visibility into the most important metrics and logs coming from your Lambda function out of the box without any further configuration required.
## Track cold start on your Lambda function
A cold start occurs when there’s a delay between your invocation and the runtime being created during the initialization process. During this period, there’s no available function instance to respond to an invocation. With the built-in Axiom Serverless AWS Lambda dashboard, you can track the effect of cold starts on each of your Lambda functions. This data lets you know when to take actionable steps, such as using provisioned concurrency or reducing function dependencies.
## Optimize slow-performing Lambda queries
Grouping logs by Lambda invocations and execution time per function provides insights into your event request and response patterns. You can extend your query to view when an invocation request is rejected and configure alerts to be notified about serverless log patterns and Lambda function payloads. With the invocation request dashboard, you can monitor request function logs and see how your Lambda serverless functions process your events and Lambda queues over time.
## Detect timeout on your Lambda function
Axiom Lambda function monitors let you identify invocation failures, cold-start delays, and AWS Lambda errors on your Lambda functions. With standard function logs, such as invocations by function and Lambda cold starts, monitoring your execution time can alert you to significant spikes whenever an error occurs in your Lambda function.
## Smart filters
Axiom Lambda Serverless Smart Filters let you easily filter down to specific AWS Lambda functions or serverless projects and use saved queries to get deep insights into how functions are performing with a single click.
# Connect Axiom with Netlify
Source: https://axiom.co/docs/apps/netlify
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects and a better understanding of how your Jamstack apps are performing.
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This integration will give you a better understanding of how your Jamstack apps are performing.
You can easily monitor logs and metrics related to your website traffic, serverless functions, and app requests. The integration is easy to set up, and you don’t need to configure anything to get started.
With Axiom’s Zero-Config Observability app, you can see all your metrics in real-time, without sampling. That means you can get a complete view of your app’s performance without any gaps in data.
Axiom’s Netlify app is complete with a pre-built dashboard that gives you control over your Jamstack projects. You can use this dashboard to track key metrics and make informed decisions about your app’s performance.
Overall, the Axiom Netlify app makes it easy to monitor and optimize your Jamstack apps. However, note that this integration is only available to Netlify customers on enterprise-level plans where [Log Drains are supported](https://docs.netlify.com/monitor-sites/log-drains/).
## What is Netlify
Netlify is a platform for building highly-performant and dynamic websites, e-commerce stores, and web apps. Netlify automatically builds your site and deploys it across its global edge network.
The Netlify platform provides teams everything they need to take modern web projects from the first preview to full production.
## Sending logs to Axiom
The log events Axiom receives give you better insight into the state of your Netlify site environment so that you can easily monitor traffic volume, website configurations, function logs, resource usage, and more.
1. Log in to your [Axiom account](https://app.axiom.co/), click **Apps** in the **Settings** menu, select the **Netlify app**, and then click **Install now**.
* You’re redirected to Netlify to authorize Axiom.
* Click **Authorize**, and then copy the integration token.
2. Log in to your **Netlify team account**, go to your site settings, and then select **Log Drains**.
* In your log drain service, select **Axiom**, paste the integration token from Step 1, and then click **Connect**.
## App overview
### Traffic and function logs
With Axiom, you can instrument and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use the pre-built Netlify dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs are produced when users collaborate and interact with your sites and websites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset.
You can also drill down to your site source with our advanced query language and fork our dashboard to start building your own site monitors.
Back in the Axiom datasets console, you’ll see all your traffic and function logs in your `netlify` dataset.
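As a minimal sketch of what an APL query against this dataset could look like (this assumes only the `netlify` dataset mentioned above; adapt the fields once you know your event shape), the following charts overall log volume over time:
```kusto
['netlify']
| summarize count() by bin_auto(_time)
```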
### Live stream logs
Stream your sites and app logs live, and filter them to see important information.
### Zero-config dashboard for your Netlify sites
Use our pre-built Netlify dashboard to get an overview of all the important metrics. When ready, you can fork our dashboard and start building your own!
## Start logging Netlify Sites today
The Axiom Netlify integration allows you to monitor and log all of your sites and apps in one place. With the Axiom app, you can quickly detect site errors and get high-level insights into your Netlify projects.
We welcome ideas, feedback, and collaboration. Join us in our [Discord community](http://axiom.co/discord) to share them with us.
# Connect Axiom with Tailscale
Source: https://axiom.co/docs/apps/tailscale
This page explains how to integrate Axiom with Tailscale.
Tailscale is a secure networking solution that allows you to create and manage a private network (tailnet), securely connecting all your devices.
Integrating Axiom with Tailscale allows you to stream your audit and network flow logs directly to Axiom seamlessly, unlocking powerful insights and analysis. Whether you’re conducting a security audit, optimizing performance, or ensuring compliance, Axiom’s Tailscale dashboard equips you with the tools to maintain a secure and efficient network, respond quickly to potential issues, and make informed decisions about your network configuration and usage.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Create a Tailscale account](https://login.tailscale.com/start).
## Setup
1. In Tailscale, go to the [configuration logs page](https://login.tailscale.com/admin/logs) of the admin console.
2. Add Axiom as a configuration log streaming destination in Tailscale. For more information, see the [Tailscale documentation](https://tailscale.com/kb/1255/log-streaming?q=stream#add-a-configuration-log-streaming-destination).
## Tailscale dashboard
Axiom displays the data it receives in a pre-built Tailscale dashboard that delivers immediate, actionable insights into your tailnet’s activity and health.
This comprehensive overview includes:
* **Log type distribution**: Understand the balance between configuration audit logs and network flow logs over time.
* **Top actions and hosts**: Identify the most common network actions and most active devices.
* **Traffic visualization**: View physical, virtual, and exit traffic patterns for both sources and destinations.
* **User activity tracking**: Monitor actions by user display name, email, and ID for security audits and compliance.
* **Configuration log stream**: Access a detailed audit trail of all configuration changes.
With these insights, you can:
* Quickly identify unusual network activity or traffic patterns.
* Track configuration changes and user actions.
* Monitor overall network health and performance.
* Investigate specific events or users as needed.
* Understand traffic distribution across your tailnet.
# Connect Axiom with Terraform
Source: https://axiom.co/docs/apps/terraform
Provision and manage Axiom resources such as datasets and monitors with Terraform.
Axiom Terraform Provider lets you provision and manage Axiom resources (datasets, notifiers, monitors, and users) with Terraform. This means that you can programmatically create resources, access existing ones, and perform further infrastructure automation tasks.
Install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest). To see the provider in action, check out the [example](https://github.com/axiomhq/terraform-provider-axiom/blob/main/example/main.tf).
This guide explains how to install the provider and perform some common procedures such as creating new resources and accessing existing ones. For the full API reference, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/docs).
## Prerequisites
* [Sign up for a free Axiom account](https://app.axiom.co/register). All you need is an email address.
* [Create an advanced API token in Axiom](/reference/tokens#create-advanced-api-token) with the permissions to perform the actions you want to use Terraform for. For example, to use Terraform to create and update datasets, create an advanced API token with permissions to create and update datasets.
* [Create a Terraform account](https://app.terraform.io/signup/account).
* [Install the Terraform CLI](https://developer.hashicorp.com/terraform/cli).
## Install the provider
To install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest), follow these steps:
1. Add the following code to your Terraform configuration file. Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
```hcl
terraform {
required_providers {
axiom = {
source = "axiomhq/axiom"
}
}
}
provider "axiom" {
api_token = "API_TOKEN"
}
```
2. In your terminal, go to the folder of your main Terraform configuration file, and then run the command `terraform init`.
## Create new resources
### Create dataset
To create a dataset in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name` and `description` fields.
```hcl
resource "axiom_dataset" "test_dataset" {
name = "test_dataset"
description = "This is a test dataset created by Terraform."
}
```
### Create notifier
To create a Slack notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `SLACK_URL` with the webhook URL from your Slack instance. For more information on obtaining this URL, see the [Slack documentation](https://api.slack.com/messaging/webhooks).
```hcl
resource "axiom_notifier" "test_slack_notifier" {
name = "test_slack_notifier"
properties = {
slack = {
slack_url = "SLACK_URL"
}
}
}
```
To create a Discord notifier in Axiom using the provider, add the following code to your Terraform configuration file.
* Replace `DISCORD_CHANNEL` with the webhook URL from your Discord instance. For more information on obtaining this URL, see the [Discord documentation](https://discord.com/developers/resources/webhook).
* Replace `DISCORD_TOKEN` with your Discord API token. For more information on obtaining this token, see the [Discord documentation](https://discord.com/developers/topics/oauth2).
```hcl
resource "axiom_notifier" "test_discord_notifier" {
name = "test_discord_notifier"
properties = {
discord = {
discord_channel = "DISCORD_CHANNEL"
discord_token = "DISCORD_TOKEN"
}
}
}
```
To create an email notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `EMAIL1` and `EMAIL2` with the email addresses you want to notify.
```hcl
resource "axiom_notifier" "test_email_notifier" {
name = "test_email_notifier"
properties = {
email= {
emails = ["EMAIL1","EMAIL2"]
}
}
}
```
For more information on the types of notifier you can create, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/notifier).
### Create monitor
To create a monitor in Axiom using the provider, add the following code to your Terraform configuration file and customize it:
```hcl
resource "axiom_monitor" "test_monitor" {
depends_on = [axiom_dataset.test_dataset, axiom_notifier.test_slack_notifier]
# `type` can be one of the following:
# - "Threshold": For numeric values against thresholds. It requires `operator` and `threshold`.
# - "MatchEvent": For detecting specific events. It doesn’t require `operator` and `threshold`.
# - "AnomalyDetection": For detecting anomalies. It requires `compare_days` and `tolerance, operator`.
type = "Threshold"
name = "test_monitor"
description = "This is a test monitor created by Terraform."
apl_query = "['test_dataset'] | summarize count() by bin_auto(_time)"
interval_minutes = 5
# `operator` is required for threshold and anomaly detection monitors.
# Valid values are "Above", "AboveOrEqual", "Below", "BelowOrEqual".
operator = "Above"
range_minutes = 5
# `threshold` is required for threshold monitors
threshold = 1
# `compare_days` and `tolerance` are required for anomaly detection monitors.
# Uncomment the two lines below for anomaly detection monitors.
# compare_days = 7
# tolerance = 25
notifier_ids = [
axiom_notifier.test_slack_notifier.id
]
alert_on_no_data = false
notify_by_group = false
}
```
This example creates a monitor using the dataset `test_dataset` and the notifier `test_slack_notifier`. These are resources you have created and accessed in the sections above.
* Customize the `name` and the `description` fields.
* In the `apl_query` field, specify the APL query for the monitor.
For more information on these fields, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/monitor).
### Create user
To create a user in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name`, `email`, and `role` fields.
```hcl
resource "axiom_user" "test_user" {
name = "test_user"
email = "test@abc.com"
role = "user"
}
```
## Access existing resources
### Access existing dataset
To access an existing dataset, follow these steps:
1. Determine the ID of the Axiom dataset by sending a GET request to the [`datasets` endpoint of the Axiom API](/restapi/endpoints/getDatasets).
2. Add the following code to your Terraform configuration file. Replace `DATASET_ID` with the ID of the Axiom dataset.
```hcl
data "axiom_dataset" "test_dataset" {
id = "DATASET_ID"
}
```
### Access existing notifier
To access an existing notifier, follow these steps:
1. Determine the ID of the Axiom notifier by sending a GET request to the `notifiers` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `NOTIFIER_ID` with the ID of the Axiom notifier.
```hcl
data "axiom_dataset" "test_slack_notifier" {
id = "NOTIFIER_ID"
}
```
### Access existing monitor
To access an existing monitor, follow these steps:
1. Determine the ID of the Axiom monitor by sending a GET request to the `monitors` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `MONITOR_ID` with the ID of the Axiom monitor.
```hcl
data "axiom_monitor" "test_monitor" {
id = "MONITOR_ID"
}
```
### Access existing user
To access an existing user, follow these steps:
1. Determine the ID of the Axiom user by sending a GET request to the `users` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `USER_ID` with the ID of the Axiom user.
```hcl
data "axiom_user" "test_user" {
id = "USER_ID"
}
```
# Connect Axiom with Vercel
Source: https://axiom.co/docs/apps/vercel
Easily monitor data from requests, functions, and web vitals in one place to get the deepest observability experience for your Vercel projects.
Connect Axiom with Vercel to get the deepest observability experience for your Vercel projects.
Easily monitor data from requests, functions, and web vitals in one place. 100% live and 100% of your data, no sampling.
Axiom’s Vercel app ships with a pre-built dashboard and pre-installed monitors so you can be in complete control of your projects with minimal effort.
If you use the Axiom Vercel integration, [annotations](/query-data/annotate-charts) are automatically created for deployments.
## What is Vercel?
Vercel is a platform for frontend frameworks and static sites, built to integrate with your headless content, commerce, or database.
Vercel provides a frictionless developer experience to take care of the hard things: deploying instantly, scaling automatically, and serving personalized content around the globe.
Vercel makes it easy for frontend teams to develop, preview, and ship delightful user experiences, where performance is the default.
## Send logs to Axiom
Install the [Axiom Vercel app](https://vercel.com/integrations/axiom) and start streaming logs and web vitals within minutes.
## App overview
### Request and function logs
For both requests and serverless functions, Axiom automatically installs a [log drain](https://vercel.com/blog/log-drains) in your Vercel account to capture data live.
As users interact with your website, various logs are produced. Axiom captures all these logs and ingests them into the `vercel` dataset. You can stream and analyze these logs live, or use our pre-built Vercel dashboard to get an overview of all the important metrics. When you’re ready, you can fork our dashboard and start building your own!
For function logs, if you call `console.log`, `console.warn` or `console.error` in your function, the output will also be captured and made available as part of the log. You can use our extended query language, APL, to easily search these logs.
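For example, here’s a minimal sketch of such a search, assuming the `vercel` dataset and the `level` and `message` fields shown in the JSON example later on this page:
```kusto
['vercel']
| where level == "error"
| project _time, message
| limit 50
```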
## Web vitals
Axiom supports capturing and analyzing Web Vitals data directly from your users’ browsers without any sampling and with more data than is available elsewhere. It pairs well with Vercel’s built-in analytics when you want to dig deep into a specific problem or debug issues with a specific audience (user agent, location, region, and so on).
Web Vitals are currently only supported for Next.js websites. Expanded support is coming soon.
### Installation
Perform the following steps to install Web Vitals:
1. In your Vercel project, run `npm install --save next-axiom`.
2. In `next.config.js`, wrap your Next.js config in `withAxiom` as follows:
```js
const { withAxiom } = require('next-axiom');
module.exports = withAxiom({
// ... your existing config
})
```
This will proxy the Axiom ingest call to improve deliverability.
3. For Web Vitals, navigate to `app/layout.tsx` and add the `AxiomWebVitals` component:
```js
import { AxiomWebVitals } from 'next-axiom';

export default function RootLayout() {
  return (
    <html>
      ...
      <AxiomWebVitals />
      ...
    </html>
  );
}
```
Web Vitals are sent only from production deployments.
4. Deploy your site and watch data coming into your Axiom dashboard.
* To send logs from different parts of your app, make use of the provided logging functions. For example:
```js
log.info('Payment completed', { userID: '123', amount: '25USD' });
```
### Client Components
For Client Components, replace the `log` prop usage with the `useLogger` hook:
```js
'use client';
import { useLogger } from 'next-axiom';

export default function ClientComponent() {
  const log = useLogger();
  log.debug('User logged in', { userId: 42 });
  return <div>Logged in</div>;
}
```
### Server Components
For Server Components, create a logger and make sure to call flush before returning:
```js
import { Logger } from 'next-axiom';

export default async function ServerComponent() {
  const log = new Logger();
  log.info('User logged in', { userId: 42 });
  // ... other operations ...
  await log.flush();
  return <div>Logged in</div>;
}
```
### Route Handlers
For Route Handlers, wrapping them in `withAxiom` adds a logger to your request and automatically logs exceptions:
```js
import { NextResponse } from 'next/server';
import { withAxiom, AxiomRequest } from 'next-axiom';

export const GET = withAxiom((req: AxiomRequest) => {
  req.log.info('Login function called');

  // You can create intermediate loggers
  const log = req.log.with({ scope: 'user' });
  log.info('User logged in', { userId: 42 });

  return NextResponse.json({ hello: 'world' });
});
```
## Use Next.js 12 for Web Vitals
If you’re using Next.js version 12, follow the instructions below to integrate Axiom for logging and capturing Web Vitals data.
In `pages/_app.js` or `pages/_app.ts`, add the following line:
```js
export { reportWebVitals } from 'next-axiom';
```
## Upgrade to Next.js 13 from Next.js 12
If you plan on upgrading to Next.js 13, you'll need to make specific changes to ensure compatibility:
* Upgrade the next-axiom package to version `1.0.0` or higher.
* Make sure any exported variables have the `NEXT_PUBLIC_` prefix, for example, `NEXT_PUBLIC_AXIOM_TOKEN`.
* In client components, use the `useLogger` hook instead of the `log` prop.
* For server-side components, you need to create an instance of the `Logger` and flush the logs before the component returns.
* For Web Vitals tracking, you'll replace the previous method of capturing data. Remove the `reportWebVitals()` line and instead integrate the `AxiomWebVitals` component into your layout.
## Vercel Function logs 4KB limit
The Vercel 4KB log limit refers to a restriction placed by Vercel on the size of log output generated by serverless functions running on their platform. It means that each log entry produced by your function should be at most 4 kilobytes in size.
If your log output is larger than 4KB, you might experience truncation or missing logs. To log above this limit, you can send your function logs using [next-axiom](https://github.com/axiomhq/next-axiom).
## Parse JSON on the message field
If you use a logging library in your Vercel project that prints JSON, your **message** field will contain a stringified and therefore escaped JSON object.
* If your Vercel logs are encoded as JSON, they will look like this:
```json
{
  "level": "error",
  "message": "{ \"message\": \"user signed in\", \"metadata\": { \"userId\": 2234, \"signInType\": \"sso-google\" }}",
  "request": {
    "host": "www.axiom.co",
    "id": "iad1:iad1::sgh2r-1655985890301-f7025aa764a9",
    "ip": "199.16.157.13",
    "method": "GET",
    "path": "/sign-in/google",
    "scheme": "https",
    "statusCode": 500,
    "teamName": "AxiomHQ"
  },
  "vercel": {
    "deploymentId": "dpl_7UcdgdgNsdgbcPY3Lg6RoXPfA6xbo8",
    "deploymentURL": "axiom-bdsgvweie6au-axiomhq.vercel.app",
    "projectId": "prj_TxvF2SOZdgdgwJ2OBLnZH2QVw7f1Ih7",
    "projectName": "axiom-co",
    "region": "iad1",
    "route": "/signin/[id]",
    "source": "lambda-log"
  }
}
```
* The **JSON** data in your **message** would be:
```json
{
  "message": "user signed in",
  "metadata": {
    "userId": 2234,
    "signInType": "sso-google"
  }
}
```
You can **parse** the JSON using the [parse\_json function](/apl/scalar-functions/string-functions#parse-json\(\)) and run queries against the **values** in the **message** field.
### Example
```kusto
['vercel']
| extend parsed = parse_json(message)
```
* You can select the field to **insert** into new columns using the [project operator](/apl/tabular-operators/project-operator).
```kusto
['vercel']
| extend parsed = parse_json('{"message":"user signed in", "metadata": { "userId": 2234, "SignInType": "sso-google" }}')
| project parsed["message"]
```
### More Examples
* If you have **null values** in your data, you can use the **isnotnull()** function:
```kusto
['vercel']
| extend parsed = parse_json(message)
| where isnotnull(parsed)
| summarize count() by parsed["message"], parsed["metadata"]["userId"]
```
* Check out our [APL Documentation on how to use more functions](/apl/scalar-functions/string-functions) and run your own queries against your Vercel logs.
## Migrate from Vercel app to next-axiom
In May 2024, Vercel [introduced higher costs](https://axiom.co/blog/changes-to-vercel-log-drains) for using Vercel Log Drains. Because the Axiom Vercel app depends on Log Drains, using the next-axiom library can be the cheaper option to analyze telemetry data for higher volume projects.
To migrate from the Axiom Vercel app to the next-axiom library, follow these steps:
1. Delete the existing log drain from your Vercel project.
2. Delete `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT` from the environment variables of your Vercel project. For more information, see the [Vercel documentation](https://vercel.com/projects/environment-variables).
3. [Create a new dataset in Axiom](/reference/datasets), and [create a new advanced API token](/reference/tokens) with ingest permissions for that dataset.
4. Add the following environment variables to your Vercel project:
* `NEXT_PUBLIC_AXIOM_DATASET` is the name of the Axiom dataset where you want to send data.
* `NEXT_PUBLIC_AXIOM_TOKEN` is the Axiom API token you have generated.
5. In your terminal, go to the root folder of your Next.js app, and then run `npm install --save next-axiom` to install the latest version of next-axiom.
6. In the `next.config.ts` file, wrap your Next.js configuration in `withAxiom`:
```js
const { withAxiom } = require('next-axiom');
module.exports = withAxiom({
// Your existing configuration
});
```
For more configuration options, see the [documentation in the next-axiom GitHub repository](https://github.com/axiomhq/next-axiom).
## Send logs from Vercel preview deployments
To send logs from Vercel preview deployments to Axiom, enable preview deployments for the environment variable `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT`. For more information, see the [Vercel documentation](https://vercel.com/docs/projects/environment-variables/managing-environment-variables).
# Intelligence
Source: https://axiom.co/docs/console/intelligence
Learn about Axiom's intelligence features that accelerate insights and automate data analysis.
Axiom’s Console is evolving from a powerful space for human-led investigation to incorporate assistance that helps you reach insights faster. By combining Axiom’s cost-efficient data platform with intelligence features, Axiom is transforming common workflows from reactive chores into proactive, machine-assisted actions.
This section covers the intelligence features built into the Axiom Console.
## Spotlight
Spotlight is an interactive analysis feature that helps you find the root cause faster.
Rather than reviewing data from a notable period event by event, Spotlight lets you automatically compare a subset of events against the baseline. Spotlight analyzes every field to surface the most significant differences, turning manual investigation into a fast, targeted discovery process.
Learn more about this feature on the [Spotlight](/console/intelligence/spotlight) page.
## AI-assisted workflows
Axiom also includes several features that leverage AI to accelerate common workflows.
### Natural language querying
Ask questions of your data in plain English and have Axiom translate your request directly into a valid Axiom Processing Language (APL) query. This lowers the barrier to exploring your data and makes it easier for everyone on your team to find the answers they need.
Learn more about [generating queries using AI](/query-data/explore#generate-query-using-ai).
### AI-powered dashboard generation
Instead of building dashboards from scratch, you can describe your requirements in natural language and have Axiom generate a complete dashboard for you instantly. Axiom analyzes your events and goals to select the most appropriate visualizations.
Learn more about [generating dashboards using AI](/dashboards/create#generate-dashboards-using-ai).
# Spotlight
Source: https://axiom.co/docs/console/intelligence/spotlight
This page explains how to use Spotlight to automatically identify differences between selected events and baseline data.
Spotlight allows you to highlight a region of event data and automatically identify how it deviates from baseline across different fields. Instead of manually crafting queries to investigate anomalies, Spotlight analyzes every field in your data and presents the most significant differences through intelligent visualizations.
Spotlight is particularly useful for:
* **Root cause analysis**: Quickly identify why certain traces are slower, errors are occurring, or performance is degraded.
* **Anomaly investigation**: Understand what makes problematic events different from normal baseline behavior.
* **Pattern discovery**: Spot trends and correlations in your data that might not be immediately obvious.
## How Spotlight works
Spotlight compares two sets of events:
* **Comparison set**: The events you select by highlighting a region on a chart.
* **Baseline set**: All other events that contributed to the chart.
For each field present in your data, Spotlight calculates the differences between these two sets and ranks them by significance. The most interesting differences are displayed first using visualizations that adapt to your data types.
## Use Spotlight
### Start Spotlight analysis
1. In the **Query** tab, create a query that produces a heatmap or time series chart.
2. On the chart, click and drag to select the region you want to investigate.
3. In the selection tooltip, click **Run Spotlight**.
Spotlight analyzes the selected events and displays the results in a new panel showing the most significant differences across all fields.
Alternatively, start Spotlight from a table view by right-clicking on the value you want to select and choosing **Run spotlight**.
### Interpret results
Spotlight displays results using two types of visualization, depending on your data:
* **Bar charts** for categorical fields (strings, booleans)
* Compares the proportion of events that have a given value for selected and baseline events.
* Useful for understanding differences in status codes, service names, or boolean flags.
* **Boxplots** for numeric fields (integers, floats, timespans) with many distinct values
* Shows the range of values in both comparison and baseline sets.
* Identifies the minimum, P25, P75, and maximum values.
* Useful for understanding differences in response times or other numeric quantities.
For each visualization, Axiom displays the proportion of selected and baseline events (where the field is present).
### Dig deeper
To dig deeper, iteratively refine your Spotlight analysis or jump to a view of matching events.
1. **Filter and re-run**: Right-click specific values in the results and select **Re-run spotlight** to filter your data and run Spotlight again with a more focused scope.
2. **Show events**: Right-click specific values in the results and select **Show events** to filter your data and see matching events.
## Spotlight limitations
* **Custom attributes**: Currently, custom attributes in OTel spans aren’t included in the Spotlight results. Axiom will soon support custom attributes in Spotlight.
* **Complex queries**: Spotlight works well for queries with at most one aggregation step. Complex queries with multiple aggregations aren’t supported.
## Example workflows
### Investigate slow traces
1. Create a heatmap query:
```kusto
['traces']
| summarize histogram(duration, 20) by bin_auto(_time)
```
2. Select the region showing the slowest traces.
3. Run Spotlight to see if slow traces are associated with specific endpoints, regions, or user segments.
### Understand error spikes
1. Create a line chart:
```kusto
['logs']
| where level == "error"
| summarize count() by bin_auto(_time)
```
2. Select the time period where errors spiked.
3. Run Spotlight to identify if there's anything different about the selected errors.
# Configure dashboard elements
Source: https://axiom.co/docs/dashboard-elements/configure
This section explains how to configure dashboard elements.
When you create a chart, you can configure the following options.
## Values
Specify how to treat missing or undefined values:
* **Auto:** This option automatically decides the best way to represent missing or undefined values in the data series based on the chart type and the rest of the data.
* **Ignore:** This option ignores any missing or undefined values in the data series. This means that the chart only displays the known, defined values.
* **Join adjacent values:** This option connects adjacent data points in the data series, effectively filling in any gaps caused by missing values. The benefit of joining adjacent values is that it can provide a smoother, more continuous visualization of your data.
* **Fill with zeros:** This option replaces any missing or undefined values in the data series with zero. This can be useful if you want to emphasize that the data is missing or undefined, as it causes a drop to zero in your chart.
## Variant
Specify the chart type.
**Area:** An area chart displays the area between the data line and the axes, often filled with a color or pattern. Stacked area charts let you combine several series in one visualization and build more intricate query dashboards.
**Bars:** A bar chart represents data in rectangular bars. The length of each bar is proportional to the value it represents. Bar charts can be used to compare discrete quantities, or when you have categorical data.
**Line:** A line chart connects individual data points into a continuous line, which is useful for showing logs over time. Line charts are often used for time series data.
## Y-Axis
Specify the scale of the vertical axis.
**Linear:** A linear scale maintains a consistent scale where equal distances represent equal changes in value. This is the most common scale type and is useful for most types of data.
**Log:** A logarithmic (or log) scale represents values in terms of their order of magnitude. Each unit of distance on a log scale represents a tenfold increase in value. Log scales make it easy to compare values that span a wide range.
## Annotations
Specify the types of annotations to display in the chart:
* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display
# Create dashboard elements
Source: https://axiom.co/docs/dashboard-elements/create
This section explains how to create dashboard elements.
To create new dashboard elements:
1. [Create a dashboard](/dashboards/create) or open an existing dashboard.
2. Click **Add element** in the top right corner.
3. Choose the dashboard element from the list.
4. For charts, select one of the following:
* Click **Simple Query Builder** to create your chart using a [visual query builder](#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure chart options](/dashboard-elements/configure).
6. Optional: Set a custom time range that is different from the dashboard’s time range.
7. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Create chart using visual query builder
Use the query builder to create or edit queries for the selected dataset:
This component is a visual query builder that eases the process of building visualizations and segments of your data.
This guide walks you through the individual sections of the query builder.
### Time range
Every query has a start and end time and the time range component allows quick selection of common time ranges as well as the ability to input specific start and end timestamps:
* Use the **Quick Range** items to quickly select popular ranges
* Use the **Custom Start/End Date** inputs to select specific times
* Use the **Resolution** items to choose between various time bucket resolutions
### Against
When a time series visualization is selected, such as `count`, the **Against** menu is enabled and you can select a historical time to compare the results of your time range against.
For example, to compare the last hour’s average response time to the same time yesterday, select `1 hr` in the time range menu, and then select `-1D` from the **Against** menu:
The results look like this:
The dotted line represents results from the base date, and the totals table includes the comparative totals.
When you add a field to the `group by` clause, the **time range against** values are attached to each event.
### Visualizations
Axiom provides powerful visualizations that display the output of running aggregate functions across your dataset. The Visualization menu allows you to add these visualizations and, where required, input their arguments:
You can select a visualization to add it to the query. If a visualization requires an argument (such as the field and/or other parameters), the menu allows you to select eligible fields and input those arguments. Press Enter to complete the addition:
Click Visualization in the query builder to edit it at any time.
[Learn about supported visualizations](/query-data/visualizations)
### Filters
Use the filter menu to attach filter clauses to your search.
Axiom supports AND/OR operators at the top-level as well as one level deep. This means you can create filters that would read as `status == 200 AND (method == get OR method == head) AND (user-agent contains Mozilla or user-agent contains Webkit)`.
Filters are divided up by the field type they operate on, but some may apply to more than one field type.
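For reference, a filter like the example above maps directly onto an APL `where` clause. Here’s a minimal sketch, assuming the `sample-http-logs` dataset with `status`, `method`, and `user_agent` fields:
```kusto
['sample-http-logs']
| where status == "200"
    and (method == "GET" or method == "HEAD")
    and (user_agent contains "Mozilla" or user_agent contains "Webkit")
```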
#### List of filters
*String Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `starts-with`
* `not-starts-with`
* `ends-with`
* `not-ends-with`
* `contains`
* `not-contains`
* `regexp`
* `not-regexp`
*Number Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `>`
* `>=`
* `<`
* `<=`
*Boolean Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
*Array Fields*
* `contains`
* `not-contains`
* `exists`
* `not-exists`
#### Special fields
Axiom creates the following two fields automatically for a new dataset:
* `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
* `_sysTime` is the time when you ingested the data.
In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.
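For example, here’s a minimal sketch of a query that surfaces the gap between the two timestamps (assuming the `sample-http-logs` dataset), which can help you spot clock skew or ingest delays:
```kusto
['sample-http-logs']
| extend ingest_delay_sec = datetime_diff('second', _sysTime, _time)
| summarize max(ingest_delay_sec) by bin_auto(_time)
```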
### Group by (segmentation)
When visualizing data, it can be useful to segment data into specific groups to more clearly understand how the data behaves.
The Group By component enables you to add one or more fields to group events by:
### Other options
#### Order
By default, Axiom automatically chooses the best ordering for results. However, you can manually set the desired order through this menu.
#### Limit
By default, Axiom chooses a reasonable limit for the query that has been passed in. However, you can control that limit manually through this component.
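In APL, the equivalent controls are the `sort` and `limit` operators. Here’s a minimal sketch, assuming the `sample-http-logs` dataset:
```kusto
['sample-http-logs']
| summarize request_count = count() by status
| sort by request_count desc
| limit 10
```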
## Change element’s position
To change an element’s position on the dashboard, drag the title bar of the chart.
## Change element size
To change the size of the element, drag the bottom-right corner.
## Set custom time range
You can set a custom time range for individual dashboard elements that is different from the dashboard’s time range. For example, the dashboard displays data about the last 30 minutes but individual dashboard elements display data for different time ranges. This can be useful for visualizing the same chart or statistic for different time periods, among other use cases.
To set a custom time range for a dashboard element:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the top right above the chart, click the time range selector.
3. Click **Custom**.
4. Choose one of the following options:
* Use the **Quick range** items to quickly select popular time ranges.
* Use the **Custom start/end date** fields to select specific times.
5. Click **Save**.
Axiom displays the new time range in the top left of the dashboard element.
### Set custom time range in APL
To set a custom time range for dashboard elements created with APL, you can use the [procedure above](#set-custom-time-range) or define the time range in the APL query:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the APL query, specify the custom time range using the [where](/apl/tabular-operators/where-operator) operator. For example:
```kusto
| where _time > now(-6h)
```
3. Click **Run query** to preview the result.
4. Click **Save**.
Axiom displays an icon in the top left of the dashboard element to indicate that its time range is defined in the APL query and might be different from the dashboard’s time range.
## Set custom comparison period
You can set a custom comparison time period for individual dashboard elements that is different from the dashboard’s. For example, the dashboard compares against data from yesterday but individual dashboard elements display data for different comparison periods.
To set a custom comparison period for a dashboard element:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the top right above the chart, click **Compare period**.
3. Click **Custom**.
4. Choose one of the following options:
* Use the **Quick range** items to quickly select popular comparison periods.
* Use the **Custom time** field to select specific comparison periods.
5. Click **Save**.
Axiom displays the new comparison period in the top left of the dashboard element.
# Heatmap
Source: https://axiom.co/docs/dashboard-elements/heatmap
This section explains how to create heatmap dashboard elements and add them to your dashboard.
export const elementName_0 = "heatmap"
export const elementButtonLabel_0 = "Heatmap"
Heatmaps represent the distribution of numerical data by grouping values into ranges or buckets. Each bucket reflects a frequency count of data points that fall within its range. Instead of showing individual events or measurements, heatmaps give a clear view of the overall distribution patterns. This allows you to identify performance bottlenecks, outliers, or shifts in behavior. For instance, you can use heatmaps to track response times, latency, or error rates.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click **Add element** in the top right corner.
3. Click **{elementButtonLabel_0}** from the list.
4. Choose one of the following:
* Click **Simple Query Builder** to create your chart using a visual query builder. For more information, see [Create chart using visual query builder](/dashboard-elements/create#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure the dashboard element](/dashboard-elements/configure).
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['http-logs']
| summarize histogram(req_duration_ms, 15) by bin_auto(_time)
```
# Log stream
Source: https://axiom.co/docs/dashboard-elements/log-stream
This section explains how to create log stream dashboard elements and add them to your dashboard.
The log stream dashboard element displays your logs as they come in real-time. Each log appears as a separate line with various details. The benefit of a log stream is that it provides immediate visibility into your system’s operations. When you’re debugging an issue or trying to understand an ongoing event, the log stream allows you to see exactly what’s happening as it occurs.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| project method, status, content_type
```
# Monitor list
Source: https://axiom.co/docs/dashboard-elements/monitor-list
This section explains how to create monitor list dashboard elements and add them to your dashboard.
The monitor list dashboard element provides a visual overview of the monitors you specify. It offers a quick glance into important developments about the monitors such as their status and history.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
* [Create a monitor](/monitor-data/monitors).
## Create monitor list
1. Go to the Dashboards tab and open the dashboard to which you want to add the monitor list.
2. Click **Add element** in the top right corner.
3. Click **Monitor list** from the list.
4. In **Columns**, select the type of information you want to display for each monitor:
* **Status** displays if the monitor state is normal, triggered, or disabled.
* **History** provides a visual overview of the recent runs of the monitor. Green squares mean normal operation and red squares mean triggered state.
* **Dataset** is the name of the dataset on which the monitor operates.
* **Type** is the type of the monitor.
* **Notifiers** displays the notifiers connected to the monitor.
5. From the list, select the monitors you want to display on the dashboard.
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
# Note
Source: https://axiom.co/docs/dashboard-elements/note
This section explains how to create note dashboard elements and add them to your dashboard.
The note dashboard element adds a textbox to your dashboard that you can customize to your needs. For example, you can provide context in a note about the other dashboard elements.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
## Create note
1. Go to the Dashboards tab and open the dashboard to which you want to add the note.
2. Click **Add element** in the top right corner.
3. Click **Note** from the list.
4. Enter your text on the left in [GitHub Flavored Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) format. You see the preview of the note dashboard element on the right.
5. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
# Dashboard elements
Source: https://axiom.co/docs/dashboard-elements/overview
This section explains how to create different dashboard elements and add them to your dashboard.
Dashboard elements are the different visual elements that you can include in your dashboard to display your data and other information. For example, you can track key metrics, logs, and traces, and monitor real-time data flow.
Choose one of the following to learn more about a dashboard element:
# Pie chart
Source: https://axiom.co/docs/dashboard-elements/pie-chart
This section explains how to create pie chart dashboard elements and add them to your dashboard.
export const elementName_0 = "pie chart"
export const elementButtonLabel_0 = "Pie"
Pie charts can illustrate the distribution of different types of event data. Each slice represents the proportion of a specific value relative to the total. For example, a pie chart can show the breakdown of status codes in HTTP logs. This helps quickly identify the dominant types of status responses and assess the system’s health at a glance.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click **Add element** in the top right corner.
3. Click **{elementButtonLabel_0}** from the list.
4. Choose one of the following:
* Click **Simple Query Builder** to create your chart using a visual query builder. For more information, see [Create chart using visual query builder](/dashboard-elements/create#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure the dashboard element](/dashboard-elements/configure).
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['http-logs']
| summarize count() by status
```
# Scatter plot
Source: https://axiom.co/docs/dashboard-elements/scatter-plot
This section explains how to create scatter plot dashboard elements and add them to your dashboard.
Scatter plots are used to visualize the correlation or distribution between two distinct metrics or logs. Each point in the scatter plot could represent a log entry, with the X and Y axes showing different log attributes (like request time and response size). The scatter plot chart can be created using the simple query builder or advanced query builder.
For example, plot response size against response time for an API to see if larger responses are correlated with slower response times.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms), avg(resp_header_size_bytes) by resp_body_size_bytes
```
# Statistic
Source: https://axiom.co/docs/dashboard-elements/statistic
This section explains how to create statistic dashboard elements and add them to your dashboard.
Statistics dashboard elements display a summary of the selected metrics over a given time period. For example, you can use a statistic dashboard element to show the average, sum, min, max, and count of response times or error counts.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize avg(resp_body_size_bytes)
```
# Table
Source: https://axiom.co/docs/dashboard-elements/table
This section explains how to create table dashboard elements and add them to your dashboard.
The table dashboard element displays a summary of any attributes from your metrics, logs, or traces in a sortable table format. Each row in the table could represent a different service, host, or other entity, with columns showing various attributes or metrics for that entity.
## Example with Simple Query Builder
## Example with Advanced Query Language
With this option, the table chart type can also display a non-aggregated view of events.
```kusto
['sample-http-logs']
| summarize avg(resp_body_size_bytes) by bin_auto(_time)
```
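A non-aggregated view, as mentioned above, can be as simple as projecting the fields you care about. For example, here’s a minimal sketch assuming the `sample-http-logs` dataset:
```kusto
['sample-http-logs']
| project _time, method, status, content_type
| limit 100
```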
# Time series
Source: https://axiom.co/docs/dashboard-elements/time-series
This section explains how to create time series dashboard elements and add them to your dashboard.
Time series charts show the change in your data over time which can help identify infrastructure issues, spikes, or dips in the data. This can be a simple line chart, an area chart, or a bar chart. A time series chart might be used to show the change in the volume of log events, error rates, latency, or other time-sensitive data.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time)
```
# Configure dashboards
Source: https://axiom.co/docs/dashboards/configure
This page explains how to configure your dashboards.
## Select time range
When you select the time range, you specify the time interval for which you want to display data in the dashboard. Changing the time range affects the data displayed in all dashboard elements.
To select the time range:
1. In the top right, click **Time range**.
2. Choose one of the following options:
* Use the **Quick range** items to quickly select popular time ranges.
* Use the **Custom start/end date** fields to select specific times.
3. Click **Apply**.
## Select refresh rate
Your dashboard regularly queries your data in the background to show the latest trends. The refresh rate is the time interval between these queries.
To select the refresh rate:
1. In the top right, click **Time range**.
2. Select one of the options in **Refresh rate**.
3. Click **Apply**.
Each time your dashboard refreshes, it runs a query on your data which results in query costs. Selecting a short refresh rate (such as 15s) for a long time range (such as 90 days) means that your dashboard frequently runs large queries in the background. To optimize query costs, choose a refresh rate that is appropriate for the time range of your dashboard.
## Share dashboards
To specify who can access a dashboard:
1. In the top right, click **Share**.
2. Select one of the following:
* Select **Just Me** to make the dashboard private. Only you can access the dashboard.
* Select a group in your Axiom organization. Only members of the selected group can access the dashboard. For more information about groups, see [Access](/reference/settings).
* Select **Everyone** to make the dashboard accessible to all users in your Axiom organization.
3. At the bottom, click **Save** to save your changes to the dashboard.
The data that individual users see in the dashboard is determined by the datasets the users have access to. If a user has access to a dashboard but only to some of the datasets referenced in the dashboard’s charts, the user only sees data from the datasets they have access to.
## Control display of annotations
To specify the types of annotations to display in all dashboard elements:
1. In the top right, click **Annotations**.
2. Select one of the following:
* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display
3. At the bottom, click **Save** to save your changes to the dashboard.
## Set dashboard as homepage
To set a dashboard as the homepage of your browser, click **Set as homepage** in the top right.
## Enter full screen
Full-screen mode is useful for displaying the dashboard on a TV or shared monitor.
To enter full-screen mode, click **Full screen** in the top right.
# Create dashboards
Source: https://axiom.co/docs/dashboards/create
This section explains how to create and delete dashboards.
To create a dashboard, choose one of the following:
* [Generate a dashboard](#generate-dashboards-using-ai) using AI based on a natural-language prompt.
* [Create an empty dashboard](#create-empty-dashboards).
* [Fork an existing dashboard](#fork-dashboards). This is how you make a copy of prebuilt integration dashboards that you cannot directly edit.
* [Duplicate an existing dashboard](#duplicate-dashboards). This is how you make a copy of dashboards other than prebuilt integration dashboards.
After creating a dashboard:
* [Add dashboard elements](/dashboard-elements/create). For example, add a table or a time series chart.
* [Configure the dashboard](/dashboards/configure). For example, control who can access the dashboard and change the time range.
## Generate dashboards using AI
Explain in your own words what you want to see in your dashboard and Axiom’s AI generates it in seconds.
1. Click the Dashboards tab.
2. In the top right corner, click **New dashboard**.
3. In Type, click **Generate dashboard**.
4. In Dataset, select the dataset from which you want to generate the dashboard.
5. Add a name and a description. Explain in detail what you want to see in your dashboard. The more specific you are, the closer the generated dashboard matches your expectations.
6. Click **Create dashboard**.
You can currently generate dashboards based on a single dataset. After generating the dashboard, you can edit it and add dashboard elements that rely on data from other datasets.
## Create empty dashboards
1. Click the Dashboards tab.
2. In the top right corner, click **New dashboard**.
3. In Type, select **Empty dashboard**.
4. Add a name and a description.
5. Click **Create dashboard**.
## Fork dashboards
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Fork dashboard**.
Alternatively:
1. Open the dashboard.
2. Click **Fork dashboard** in the top right corner.
## Duplicate dashboards
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Duplicate dashboard**.
## Delete dashboard
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Delete dashboard**.
5. Click **Delete**.
# Dashboards
Source: https://axiom.co/docs/dashboards/overview
This section introduces the Dashboards tab and explains how to create your first dashboard.
Dashboards provide a single view into your data.
Axiom provides a mature dashboards experience that allows you to visualize collections of queries across multiple datasets in one place.
Dashboards are easy to share, benefit from collaboration, and bring separate datasets together in a single view.
## Dashboards tab
The Dashboards tab lists the dashboards you have access to.
* The **Integrations** section lists prebuilt dashboards. Axiom automatically built these dashboards as part of the [apps that enrich your Axiom experience](/apps/introduction). The integration dashboards are read-only and you cannot edit them. To create a copy of an integration dashboard that you can edit, [fork the original dashboard](/dashboards/create#fork-dashboards).
* The sections below list the private and shared dashboards you can access.
To open a dashboard, click a dashboard in the list.
## Work with dashboards
# Send data from Honeycomb to Axiom
Source: https://axiom.co/docs/endpoints/honeycomb
Integrate Axiom in your existing Honeycomb stack with minimal effort and without breaking any of your existing Honeycomb workflows.
export const endpointName_0 = "Honeycomb"
This page explains how to send data from Honeycomb to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Honeycomb
In Honeycomb, specify the following environment variables:
* `APIKey` or `WriteKey` is your Honeycomb API token. For information, see the [Honeycomb documentation](https://docs.honeycomb.io/get-started/configure/environments/manage-api-keys/).
* `APIHost` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
* `Dataset` is the name of the Axiom dataset where you want to send data.
## Examples
### Send logs from Honeycomb using JavaScript
```js
const Libhoney = require('libhoney');

const hny = new Libhoney({
  writeKey: '',
  dataset: '',
  apiHost: '',
});

hny.sendNow({ message: 'Welcome to Axiom Endpoints!' });
```
### Send logs from Honeycomb using Python
```py
import libhoney
libhoney.init(writekey="", dataset="", api_host="")
event = libhoney.new_event()
event.add_field("foo", "bar")
event.add({"message": "Welcome, to Axiom Endpoints!"})
event.send()
```
### Send logs from Honeycomb using Golang
```go
package main

import (
	"github.com/honeycombio/libhoney-go"
)

func main() {
	libhoney.Init(libhoney.Config{
		WriteKey: "",
		Dataset:  "",
		APIHost:  "",
	})
	defer libhoney.Close() // Flush any pending calls to Honeycomb

	var ev = libhoney.NewEvent()
	ev.Add(map[string]interface{}{
		"duration_ms":    155.67,
		"method":         "post",
		"hostname":       "endpoints",
		"payload_length": 43,
	})
	ev.Send()
}
```
# Send data from Loki to Axiom
Source: https://axiom.co/docs/endpoints/loki
Integrate Axiom in your existing Loki stack with minimal effort and without breaking any of your existing Loki workflows.
export const endpointName_0 = "Loki"
This page explains how to send data from Loki to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Loki
In Loki, specify the following environment variables:
* `host` or `url` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
* Optional: Use `labels` or `tags` to specify labels or tags for your app.
## Examples
### Send logs from Loki using JavaScript
```js
const { createLogger, transports, format } = require("winston");
const LokiTransport = require("winston-loki");

let logger;

const initializeLogger = () => {
  if (logger) {
    return;
  }

  logger = createLogger({
    transports: [
      new LokiTransport({
        host: "$LOKI_ENDPOINT_URL",
        labels: { app: "axiom-loki-endpoint" },
        json: true,
        format: format.json(),
        replaceTimestamp: true,
        onConnectionError: (err) => console.error(err),
      }),
      new transports.Console({
        format: format.combine(format.simple(), format.colorize()),
      }),
    ],
  });
};

initializeLogger();
logger.info("Starting app...");
```
### Send logs from Loki using Python
```py
import logging
import logging_loki

# Create a handler
handler = logging_loki.LokiHandler(
    url='$LOKI_ENDPOINT_URL',
    tags={'app': 'axiom-loki-py-endpoint'},
    version='1',
)

# Create a logger
logger = logging.getLogger('loki')

# Add the handler to the logger
logger.addHandler(handler)

# Log some messages
logger.info('Hello, world from Python!')
logger.warning('This is a warning')
logger.error('This is an error')
```
# Send data from Splunk to Axiom
Source: https://axiom.co/docs/endpoints/splunk
Integrate Axiom in your existing Splunk app with minimal effort and without breaking any of your existing Splunk workflows.
export const endpointName_0 = "Splunk"
This page explains how to send data from Splunk to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Splunk
In Splunk, specify the following environment variables:
* `token` is your Splunk API token. For information, see the [Splunk documentation](https://help.splunk.com/en/splunk-observability-cloud/administer/authentication-and-security/authentication-tokens/api-access-tokens).
* `url` or `host` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
## Examples
### Send logs from Splunk using JavaScript
```js
var SplunkLogger = require('splunk-logging').Logger;

var config = {
  token: '$SPLUNK_TOKEN',
  url: '$AXIOM_ENDPOINT_URL',
};

var Logger = new SplunkLogger({
  token: config.token,
  url: config.url,
  host: '$AXIOM_ENDPOINT_URL',
});

var payload = {
  // Message can be anything; doesn’t have to be an object
  message: {
    temperature: '70F',
    chickenCount: 500,
  },
};

console.log('Sending payload', payload);

Logger.send(payload, function (err, resp, body) {
  // If successful, body will be { text: 'Success', code: 0 }
  console.log('Response from Splunk', body);
});
```
### Send logs from Splunk using Python
Your Splunk deployment `port` and `index` values are required in your Python code.
```py
import logging
from splunk_handler import SplunkHandler

splunk = SplunkHandler(
    host="$AXIOM_SPLUNK_ENDPOINT_URL",
    port='8088',
    token='',
    index='main'
)

logging.getLogger('').addHandler(splunk)
logging.warning('Axiom endpoints!')
```
### Send logs from Splunk using Golang
```go
package main

import (
	"log"
	"time"

	"github.com/docker/docker/daemon/logger/splunk"
)

func main() {
	// Create new Splunk client
	client := splunk.NewClient(
		nil,
		"https://{$AXIOM_SPLUNK_ENDPOINT}:8088/services/collector",
		"{your-token}",
		"{your-source}",
		"{your-sourcetype}",
		"{your-index}",
	)

	// Send a single event
	err := client.Log(
		map[string]interface{}{"msg": "axiom endpoints", "msg2": "endpoints"},
	)
	if err != nil {
		log.Fatal(err)
	}

	// Send an event with an explicit timestamp
	err = client.LogWithTime(
		time.Now(),
		map[string]interface{}{"msg": "axiom endpoints", "msg2": "endpoints"},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```
# Frequently asked questions
Source: https://axiom.co/docs/get-help/faq
Learn more about Axiom.
This page aims to offer a deeper understanding of Axiom. If you can’t find an answer to your questions, please feel free to [contact our team](https://axiom.co/contact).
## What is Axiom?
Axiom is a log management and analytics solution that reduces the cost and management overhead of logging as much data as you want.
With Axiom, organizations no longer need to choose between their data and their costs. Axiom has been built from the ground up to allow for highly efficient data ingestion and storage, and then a zero-to-infinite query scaling that allows you to query all your data, all the time.
Organizations use Axiom for continuous monitoring and observability, as well as an event store for running analytics and deriving insights from all their event data.
Axiom consists of a datastore and a user experience that work in tandem to provide a completely unique log-management and analytics experience.
## What are Axiom's deployment options?
Axiom offers two deployment options:
* **Axiom Cloud:** A fully managed cloud service with usage-based pricing.
* **Bring Your Own Cloud (BYOC):** Deploy Axiom in your own cloud environment with a predictable annual platform fee.
## Can I run Axiom in my own cloud/infrastructure?
Bring Your Own Cloud (BYOC) is an alternative deployment option and pricing plan. We deploy Axiom into your own AWS infrastructure, and you store and process data in your own environment.
## How do I choose between Axiom Cloud and BYOC?
Choose Axiom Cloud if you prefer quick setup, usage-based pricing, and don’t need to keep data in your own cloud environment.
Choose BYOC if you require data isolation, have compliance requirements necessitating data storage in your environment, can benefit from your cloud provider’s volume discounts, or have very large-scale workloads.
For more information, see [Pricing](https://axiom.co/pricing).
## What regions are available for Axiom Cloud deployments?
Axiom Cloud offers deployments in both US and EU regions. For more information, see [Regions](/reference/regions).
## Can I migrate from Axiom Cloud to BYOC later?
Yes, Axiom can support migration between deployment options. The Axiom team can assist with this process. If you expect to require a migration, we recommend reaching out to our team in advance.
## How is Axiom different than other logging solutions?
At Axiom, our goal is that no organization has to ignore or delete a single piece of data no matter its source: logs, events, frontend, backends, audits, etc.
We found that existing solutions would place restrictions on how much data can be collected either on purpose or as a side-effect of their architectures.
For example, the state of the art in logging is to run stateful clusters that need shared knowledge of ingestion and use a mixture of local SSD-based storage and remote object storage.
### Side-effects of legacy vendors
1. There is a non-trivial cost in increasing your data ingest as clusters need to be scaled and more SSD storage and IOPS need to be provided
2. The choice needs to be made between hot and cold data, and also what is archived. Now your data is in 2-3 different places and queries can be fast or slow depending on where the data is
The end result is needing to carefully consider all data that is ingested, and putting limits and/or sampling to control the DevOps and cost burden.
### The ways Axiom is different
1. Decoupled ingest and querying pipelines
2. Stateless ingest pipeline that requires minimal compute/memory to store as much as 1.5 TB/day per vCPU
3. Ingests all data into object storage, enabling the cheapest storage possible for all ingested data
4. Enables querying scale-out with cloud functions, requiring no constantly-running servers waiting for a query to be processed. Instead, enjoy zero-to-infinity querying instantly
### The benefits of Axiom’s approach
1. The most efficient ingestion pipeline for massive amounts of data
2. Store more data for less by exclusively using inexpensive object storage for all data
3. Query data that’s 10 milliseconds or 10 years old at any time
4. Reduce the total cost of ownership of your log management and analytics pipelines with simple scale and maintenance that Axiom provides
5. Free your organization to do more with its data
## How long can I retain data with Axiom?
The free forever Personal plan provides a generous 30 days of retention.
The Axiom Cloud and the Bring Your Own Cloud plans allow you to customize the data retention period to your needs, with the option for “Forever” retention so your organization has access to all its data, all the time.
For more information, see [Pricing](https://axiom.co/pricing).
## Can I try Axiom for free?
Yes. The Personal plan is free forever with a generous allowance. It is available to all customers.
With unlimited users included, the Axiom Cloud plan starting at \$25/month is a great choice for growing companies and for enterprise organizations who want to run a proof-of-concept.
For more information, see [Pricing](https://axiom.co/pricing).
## How is Axiom licensed?
The Axiom Cloud plan is billed on a monthly basis.
For the Bring Your Own Cloud plan, the platform fee is billed on an annual basis.
For more information, see [Pricing](https://axiom.co/pricing).
# Axiom tour
Source: https://axiom.co/docs/getting-started-guide/axiom-tour
Interactive demonstration of Axiom and its features
[Click here](https://axiom.navattic.com/d8e0yrj) to start the interactive demo in a new tab.
# Event data
Source: https://axiom.co/docs/getting-started-guide/event-data
This page explains the fundamentals of timestamped event data in Axiom.
Axiom’s mission is to operationalize every bit of event data in your organization.
Timestamped event data records every digital interaction between human, sensor, and machine, making it the atomic unit of activity for organizations. For this reason, every function in any business with digital activity can benefit from leveraging event data.
Each event is simply a structured record—composed of key-value pairs—that captures meaningful interactions or changes in state within a system. While these can appear in various forms, they usually contain the following:
* **Timestamp**: When the event occurred.
* **Attributes**: A set of key-value pairs offering details about the event context.
* **Metadata**: Contextual labels and IDs that connect related events.
## Uses of event data
Event data, understood as the atomic unit of digital activity, is the lifeblood of modern businesses. Leveraging the power of event data is essential in the following areas, among others:
* [Observability](/getting-started-guide/observability)
* Security
* Product analytics
* Business intelligence
* AI and machine learning
# Explore Axiom Playground
Source: https://axiom.co/docs/getting-started-guide/explore-axiom-playground
This page explains how to try out Axiom without registration with the [Axiom Playground](https://play.axiom.co/). It walks you through an example where you run a website and you want to keep an overview of the HTTP requests to this site with Axiom.
## 1. Explore sample data
1. Go to the [Axiom Playground](https://play.axiom.co/).
2. Click the **Datasets** tab at the top of the page.
3. In the list of datasets, click `sample-http-logs`.
This displays the fields in the sample dataset `sample-http-logs`. In Axiom, an individual piece of data is an event, and a dataset is a collection of similar events. In this example, an event is an HTTP request to your website, and the dataset holds incoming data about all these HTTP requests.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/datasets/sample-http-logs)
## 2. Display stream of incoming data
1. Click the **Stream** tab at the top of the page.
2. Click **sample-http-logs** in the list.
You see the data that Axiom receives in real time. In this example, this page displays the HTTP requests to your imaginary website.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/stream/sample-http-logs)
## 3. Analyze data
### Query data
1. Click the **Query** tab at the top of the page, and then click **Builder** in the top left. This enables you to query your data with a visual query builder.
2. In the **Dataset** section, select **sample-http-logs** from the list of datasets.
3. In the **Where** section, click **+**.
4. Write **status == "500"**, and then press **Enter**.
5. Click **Run**.
You see all the HTTP requests with the status 500. This is important to know because this status indicates an internal server error, meaning that the server has encountered a situation that it can’t handle.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'500'%22%7D)
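For reference, the **Run in Playground** link above uses the equivalent APL query:
```kusto
['sample-http-logs']
| where status == '500'
```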
### Run an APL query
1. Click the **Query** tab at the top of the page, and then click **APL** in the top left. This enables you to query your data with the Axiom Processing Language (APL). For more information, see [Introduction to APL](/apl/introduction).
2. In the text field, enter the following:
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time), status
```
3. Click **Run**.
You see the number of HTTP requests of each status over time.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20count\(\)%20by%20bin_auto\(_time\)%2C%20status%22%7D)
## 4. Visualize data
1. Click the **Dashboards** tab at the top of the page.
2. Click **HTTP logs** in the list.
You see a dashboard that displays important information about the HTTP requests to the website.
## What’s next
To try out Axiom with sample data, see [Quickstart using sample data](/getting-started-guide/quickstart-using-sample-data).
To check out Axiom with a sample app, see [Get started with example app](/getting-started-guide/get-started-example-app).
# Get started with example apps
Source: https://axiom.co/docs/getting-started-guide/get-started-example-app
This page explains how to try out Axiom with an example app that emits OpenTelemetry (OTel) trace data. There are many other ways you can send data to Axiom. For a full list, see [Send data](/send-data/methods).
Choose one of the following example apps:
To try out Axiom without registration, see [Explore Axiom Playground](/getting-started-guide/explore-axiom-playground).
To try out Axiom with sample data, see [Quickstart using sample data](/getting-started-guide/quickstart-using-sample-data).
# Get started
Source: https://axiom.co/docs/getting-started-guide/getting-started
This guide introduces you to the concepts behind working with Axiom and gives a short introduction to each of the high-level features.
Axiom enables you to make the most of your event data without compromises: all your data, all the time, for all possible needs. Say goodbye to data sampling, waiting times, and hefty fees.
This page explains how to start using Axiom and leverage the power of event data in your organization.
## 1. Send your data to Axiom
You can send data to Axiom in a variety of ways. Each individual piece of data is an event.
Events can be emitted from internal or third-party services, cloud functions, containers, virtual machines (VMs), or even scripts. Events follow the [JSON specification](https://www.json.org/json-en.html) for which field types are supported. For example, an event could look like this:
```json
{
"service": "api-http",
"severity": "error",
"duration": 231,
"customer_id": "ghj34g32poiu4",
"tags": ["aws-east-1", "zone-b"],
"metadata": {
"version": "3.1.2"
}
}
```
An event must belong to a dataset, which is a collection of similar events. You can have multiple datasets that help to segment your events to make them easier to query and visualize, and also aid in access control.
Axiom stores every event you send and makes it available to you for querying either by streaming logs in real-time, or by analyzing events to produce visualizations.
The underlying data store of Axiom is a time series database. This means every event is indexed with a timestamp specified at ingress or set automatically.
Axiom doesn’t sample your data on ingest or querying, unless you’ve expressly instructed it to.
## 2. Stream your data
Axiom makes it really easy to view your data as it’s being ingested live. This is also referred to as "Live Stream" or "Live Tail," and the result is having a terminal-like feel of being able to view all your events in real-time:
From the Stream tab, you can easily add filters to narrow down the results, save popular searches, and share them with your organization members. You can also hide or show specific fields.
Another useful feature of the Stream tab is showing only events in a particular time window. This could be the last N minutes or a more specific time range you specify manually. This feature is extremely useful when you need to closely inspect your data, allowing you to get a chronological view of every event in that time window.
## 3. Analyze your data
In Axiom, an individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data. The Datasets tab allows you to analyze fields within your datasets. For example:
* Determine field data types and names.
* Edit field properties.
* Gain insights about the underlying data using quick charts.
* Add virtual fields.
## 4. Explore your data
While viewing individual events can be very useful, at scale and for general monitoring and observability, it’s important to be able to quickly aggregate, filter, and segment your data.
The Query tab gives you various tools to extract insights from your data:
* Visualize aggregations with count, min, max, average, percentiles, heatmaps, and more.
* Filter events.
* Segment data with `group-by`.
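As a sketch of how these pieces combine, the following APL query filters, aggregates, and segments data in one pass. It assumes the `sample-http-logs` sample dataset and its `status` and `resp_body_size_bytes` fields used elsewhere in these docs:
```kusto
['sample-http-logs']
// Keep only server errors
| where status == '500'
// Aggregate and segment the results over time
| summarize count(), avg(resp_body_size_bytes) by bin_auto(_time), status
```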
## 5. Monitor for problems
Get alerted when there are problems with your data. For example:
* A queue size is larger than acceptable limits.
* Web containers take too long to respond.
* A specific customer starts using a new feature.
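For example, a monitor that watches for a spike in server errors might periodically run an aggregation like the following sketch, which assumes the `sample-http-logs` sample dataset:
```kusto
['sample-http-logs']
// Count HTTP 500 responses in 5-minute buckets
| where status == '500'
| summarize count() by bin(_time, 5m)
```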
## 6. Integrate with data shippers
Install and configure integrations with third-party data shippers to quickly get insights from your logs and services. Each integration sets up a background task that continuously synchronizes events into Axiom.
## 7. Customize your organization
As your use of Axiom widens, customize it for your organization’s needs. For example:
* Add users.
* Set up third-party authentication providers.
* Set up role-based access control.
* Create and manage API tokens.
# Glossary of key Axiom terms
Source: https://axiom.co/docs/getting-started-guide/glossary
The glossary explains the key concepts in Axiom.
[A](#a) [B](#b) [C](#c) [D](#d) [E](#e) [F](#f) G H I K [L](#l) [M](#m) [N](#n) [O](#o) [P](#p) [Q](#q) [R](#r) S [T](#t) W X Y Z
## A
### Anomaly monitor
Anomaly monitors allow you to aggregate your event data and compare the results of this aggregation to what can be considered normal for the query. When the results are too much above or below the value that Axiom expects based on the event history, the monitor enters the alert state. The monitor remains in the alert state until the results no longer deviate from the expected value. This can happen without the results returning to their previous level if they stabilize around a new value. An anomaly monitor sends you a notification each time it enters or exits the alert state.
For more information, see [Anomaly monitors](/monitor-data/anomaly-monitors).
### API
The Axiom API allows you to ingest structured data logs, handle queries, and manage your deployments.
For more information, see [Introduction to Axiom API](/restapi/introduction).
### API token
See [Tokens](#token).
### App
Axiom’s dedicated apps enrich your Axiom organization by integrating into popular external services and providing out-of-the-box features such as prebuilt dashboards.
For more information, see [Introduction to apps](/apps/introduction).
### Axiom
Axiom represents the next generation of business intelligence. Designed and built for the cloud, Axiom is an event platform for logs, traces, and all technical data.
Axiom efficiently ingests, stores, and queries vast amounts of event data from any source at a fraction of the cost. The Axiom platform is built for unmatched efficiency, scalability, and performance.
### Axiom Cloud
Axiom Cloud is a deployment option and pricing plan with a fully managed cloud service and usage-based pricing.
### Axiom Processing Language (APL)
The Axiom Processing Language (APL) is a query language that is perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
For more information, see [Introduction to APL](/apl/introduction).
## B
### Bring Your Own Cloud (BYOC)
BYOC is a deployment option and pricing plan. We deploy Axiom into your own AWS infrastructure, and you store and process data in your own environment.
## C
### CLI
Axiom’s command line interface (CLI) is an Axiom tool that lets you test, manage, and build your Axiom organizations by typing commands on the command-line. You can use the command line to ingest data, manage authentication state, and configure multiple organizations.
For more information, see [Introduction to CLI](/reference/cli).
## D
### Dashboard
Dashboards allow you to visualize collections of queries across multiple datasets in one place. Dashboards are easy to share, benefit from collaboration, and bring separate datasets together in a single view.
For more information, see [Introduction to dashboards](/dashboards/overview).
### Dashboard element
Dashboard elements are the different visual elements that you can include in your dashboard to display your data and other information. For example, you can track key metrics, logs, and traces, and monitor real-time data flow.
For more information, see [Introduction to dashboard elements](/dashboard-elements/overview).
### Dataset
Axiom’s datastore is tuned for the efficient collection, storage, and analysis of timestamped event data. An individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data.
For more information, see [Datasets](/reference/datasets).
### Destination
To transform and route data from an Axiom dataset to a destination, you need to set up a destination. This is where data is routed. Once you set up a destination, it can be used in any flow.
For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
## E
### Event
An event is a granular record capturing a specific action or interaction within a system, often represented as key-value pairs. It’s the smallest unit of information detailing what occurred, who or what was involved, and potentially when and where it took place. In Axiom’s context, events are timestamped records, originating from human, machine, or sensor interactions, providing a foundational data point that informs a broader view of activities across different business units, from product, through security, to marketing, and more.
For more information, see [Event data](/getting-started-guide/event-data).
## F
### Flow
Flow provides onward event processing, including filtering, shaping, and routing. Flow works after persisting data in Axiom’s highly efficient queryable store, and uses APL to define processing.
A flow consists of three elements:
* **Source:** This is the Axiom dataset used as the flow origin.
* **Transformation:** This is the APL query used to filter, shape, and enrich the events.
* **Destination:** This is where events are routed.
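As an illustration, a transformation is simply an APL query over the source dataset. A minimal sketch, assuming the `sample-http-logs` sample dataset, that keeps only server errors and reduces the events before routing them:
```kusto
['sample-http-logs']
// Keep only server errors
| where status == '500'
// Keep a reduced set of fields for the destination
| project _time, status, resp_body_size_bytes
```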
For more information, see [Introduction to Flow](/process-data/introduction).
## L
### Log
A log is a structured or semi-structured data record typically used to document actions or system states over time, primarily for monitoring, debugging, and auditing. Traditionally formatted as text entries with timestamps and message content, logs have evolved to include standardized key-value structures, making them easier to search, interpret, and correlate across distributed systems. In Axiom, logs represent historical records designed for consistent capture, storage, and collaborative analysis, allowing for real-time visibility and troubleshooting across services.
For more information, see [Axiom for observability](/getting-started-guide/observability).
## M
### Match monitor
Match monitors allow you to continuously filter your log data and send you matching events. Axiom sends a notification for each matching event. By default, the notification message contains the entire matching event in JSON format. When you define your match monitor using APL, you can control which event attributes to include in the notification message.
For more information, see [Match monitors](/monitor-data/match-monitors).
### Metric
A metric is a quantitative measurement collected at specific time intervals, reflecting the state or performance of a system or component. Metrics focus on numeric values, such as CPU usage or memory consumption, enabling aggregation, trend analysis, and alerting based on thresholds. Within Axiom, metrics are data points associated with timestamps, labels, and values, designed to monitor resource utilization or performance. Metrics enable predictive insights by identifying patterns over time, offering foresight into system health and potential issues before they escalate.
For more information, see [Axiom for observability](/getting-started-guide/observability).
### Monitor
A monitor is a background task that periodically runs a query that you define. For example, it counts the number of error messages in your logs over the previous 5 minutes. A notifier defines how Axiom notifies you about the monitor output. For example, Axiom can send you an email.
You can use the following types of monitor:
* [Anomaly monitors](#anomaly-monitor) aggregate event data over time and look for values that are unexpected based on the event history. When the results of the aggregation are too high or low compared to the expected value, Axiom sends you an alert.
* [Match monitors](#match-monitor) filter for key events and send them to you.
* [Threshold monitors](#threshold-monitor) aggregate event data over time. When the results of the aggregation cross a threshold, Axiom sends you an alert.
For more information, see [Introduction to monitors](/monitor-data/monitors).
## N
### Notifier
A notifier defines how Axiom notifies you about the output of a monitor. For example, Axiom can send you an email.
For more information, see [Introduction to notifiers](/monitor-data/notifiers-overview).
## O
### Observability
Observability is a principle in software engineering and systems monitoring that focuses on the ability to understand and diagnose the internal state of a system by examining the data it generates, such as logs, metrics, and traces. It goes beyond traditional monitoring by giving teams the power to pinpoint and resolve issues, optimize performance, and understand user behaviors across complex, interconnected services. Observability leverages various types of [event data](#event) to provide granular insights that span everything from simple log messages to multi-service transactions (traces) and performance metrics.
Traditionally, observability has been associated with three pillars:
* Logs capture individual events or errors.
* Metrics provide quantitative data over time, like CPU usage.
* Traces represent workflows across microservices.
However, modern observability expands on this by aggregating diverse data types from engineering, product, marketing, and security functions, all of which contribute to understanding the deeper “why” behind user interactions and system behaviors. This holistic view, in turn, enables real-time diagnostics, predictive analyses, and proactive issue resolution.
In essence, observability transforms raw event data into actionable insights, helping organizations not only to answer “what happened?” but also to delve into “why it happened” and “what might happen next.”
For more information, see [Axiom for observability](/getting-started-guide/observability).
## P
### Personal access token (PAT)
See [Tokens](#token).
### Playground
The Axiom Playground is an interactive sandbox environment where you can quickly try out Axiom’s capabilities.
To try out Axiom, go to the [Axiom Playground](https://play.axiom.co/).
## Q
### Query
In Axiom, a query is a specific, structured request used to get deeper insights into your data. It typically involves looking for information based on defined parameters like keywords, date ranges, or specific fields. The intent of a query is precision: to locate, analyze, or manipulate specific subsets of data within vast data structures, enhancing insights into various operational aspects or user behaviors.
Querying enables you to filter, manipulate, extend, and summarize your data.
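For example, a short APL query that filters, extends, and summarizes data might look like the following sketch, assuming the `sample-http-logs` sample dataset:
```kusto
['sample-http-logs']
| where status == '500'
| extend resp_body_size_kb = resp_body_size_bytes / 1024
| summarize avg(resp_body_size_kb) by bin_auto(_time)
```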
{/*
As opposed to [searching](#search) which relies on sampling, querying allows you to explore all your event data. For this reason, querying is the modern way of making sense of your event data.
*/}
### Query-hours
When you run queries, your usage of the Axiom platform is measured in query-hours. The unit of this measurement is GB-hours, which reflects the duration (measured in milliseconds) that serverless functions run to execute your query, multiplied by the amount of memory (in GB) allocated to that execution. For example, a query that keeps 1 GB of memory busy for a total of 36 seconds consumes roughly 0.01 GB-hours. This metric is important for monitoring and managing your usage against the monthly allowance included in your plan.
For more information, see [Query costs](/reference/query-hours).
## R
### Role-based access control (RBAC)
Role-based access control (RBAC) allows you to manage and restrict access to your data and resources efficiently.
For more information, see [Access](/reference/settings).
{/*
## S
### Search
Most observability solutions rely on search to seek information within event data. In contrast, Axiom’s approach is [query](#query). Unlike search that only gives you approximate results because it relies on sampling, a query is precise because it explores all your data. For this reason, querying is the modern way of making sense of your event data.
*/}
## T
### Threshold monitor
Threshold monitors allow you to periodically aggregate your event data and compare the results of this aggregation to a threshold that you define. When the results cross the threshold, the monitor enters the alert state. The monitor remains in the alert state until the results no longer cross the threshold. A threshold monitor sends you a notification each time it enters or exits the alert state.
For more information, see [Threshold monitors](/monitor-data/threshold-monitors).
### Token
You can use the Axiom API and CLI to programmatically ingest and query data, and manage settings and resources. For example, you can create new API tokens and change existing datasets with API requests. To prove that these requests come from you, you must include forms of authentication called tokens in your API requests. Axiom offers two types of tokens:
* API tokens let you control the actions that can be performed with the token. For example, you can specify that requests authenticated with a certain API token can only query data from a particular dataset.
* Personal access tokens (PATs) provide full control over your Axiom account. Requests authenticated with a PAT can perform every action you can perform in Axiom.
For more information, see [Tokens](/reference/tokens).
### Trace
A trace is a sequence of events that captures the path and flow of a single request as it navigates through multiple services or components within a distributed system. Utilizing trace IDs to group related spans (individual actions or operations within a request), traces enable visibility into the lifecycle of a request, illustrating how it progresses, where delays or errors may occur, and how components interact. By connecting each event in the request journey, traces provide insights into system performance, pinpointing bottlenecks and latency.
# Axiom for observability
Source: https://axiom.co/docs/getting-started-guide/observability
This page explains how Axiom helps you leverage timestamped event data for observability purposes.
Axiom helps you leverage the power of timestamped event data. A common use case of event data is observability (o11y) in the field of software engineering. Observability is the ability to explain what is happening inside a software system by observing it from the outside. It allows you to understand the behavior of systems based on their outputs such as telemetry data, which is a type of event data.
Software engineers most often work with timestamped event data in the form of logs or metrics. However, Axiom believes that event data reflects a much broader range of interactions, crossing boundaries from engineering to product management, security, and beyond. For a more general explanation of event data in Axiom, see [Events](/getting-started-guide/event-data).
## Types of event data in observability
Traditionally, observability has been associated with three pillars, each effectively a specialized view of event data:
* **Logs**: Logs record discrete events, such as error messages or access requests, typically associated with engineering or security.
* **Traces**: Traces track the path of requests through a system, capturing each step’s duration. By linking related spans within a trace, developers can identify bottlenecks and dependencies.
* **Metrics**: Metrics quantify state over time, recording data like CPU usage or user count at intervals. Product or engineering teams can then monitor and aggregate these values for performance insights.
In Axiom, these observability elements are stored as event data, allowing for fine-grained, efficient tracking across all three pillars.
## Logs and traces support
Axiom excels at collecting, storing, and analyzing timestamped event data.
For logs and traces, Axiom offers unparalleled efficiency and query performance. You can send logs and traces to Axiom from a wide range of popular sources. For more information, see [Send data to Axiom](/send-data/methods).
## Metrics support
For metrics data, Axiom is well-suited for event-level metrics that behave like logs, with each data point representing a discrete event.
For example, you have the following timestamped data in Axiom:
```json
{
"job_id": "train_123",
"user_name": "acme",
"timestamp": "2024-10-08T15:30:00Z",
"node_host": "worker-01",
"metric_name": "gpu_utilization",
"metric_value": 87.5,
"training_type": "image_classification"
}
```
You can easily query and analyze this type of metrics data in Axiom. The query below computes the average GPU utilization across nodes:
```kusto
dataset
| summarize avg(metric_value) by node_host, bin_auto(_time)
```
Axiom’s support for metrics data currently comes with the following limitations:
* Axiom doesn’t support pre-aggregated metrics such as scrape samples.
* Axiom isn’t optimized for high-dimensional metric time series with a very large number of metric/label combinations.
Support for these types of metrics data is coming soon in the first half of 2025.
# Quickstart using sample data
Source: https://axiom.co/docs/getting-started-guide/quickstart-using-sample-data
This page explains how to try out Axiom with sample data. It walks you through an example where you want to keep track of OpenTelemetry (OTel) traces with Axiom.
By following this page, you will:
1. Explore the structure of sample data.
2. Display a stream of incoming data.
3. Analyze the data.
4. Visualize the data by creating a simple dashboard.
5. Set up a monitor that alerts you about internal server errors.
To try out Axiom without registration, see [Explore Axiom Playground](/getting-started-guide/explore-axiom-playground).
## 1. Explore sample data
1. [Sign up for an Axiom account](https://app.axiom.co/register). All you need is an email address.
2. Click the **Datasets** tab at the top of the page.
3. In the list of datasets, click `otel-demo-traces`.
This displays the fields in the sample dataset `otel-demo-traces`.
## 2. Display stream of incoming data
1. Click the **Stream** tab at the top of the page.
2. Click **otel-demo-traces** in the list.
You see the data that Axiom receives in real time.
## 3. Analyze data
### Query data
1. Click the **Query** tab at the top of the page, and then click **Builder** in the top left. This enables you to query your data with a visual query builder.
2. In the **Dataset** section, select **otel-demo-traces** from the list of datasets.
3. In the **Where** section, click **+**.
4. Write **service.name == frontend**, and then press **Enter**.
5. Click **Run**.
You see all the traces for the Frontend service.
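For reference, the same filter expressed in APL (using the bracketed field syntax shown in the next step) looks like this:
```kusto
['otel-demo-traces']
| where ['service.name'] == 'frontend'
```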
### Run an APL query
1. Click the **Query** tab at the top of the page, and then click **APL** in the top left. This enables you to query your data with the Axiom Processing Language (APL). For more information, see [Introduction to APL](/apl/introduction).
2. In the text field, enter the following:
```kusto
['otel-demo-traces']
| where duration > 5ms
| summarize count() by bin_auto(_time), ['service.name']
```
3. Click **Run**.
You see the number of requests taking longer than 5 ms for each service over time.
## 4. Visualize data
1. Click the **Dashboards** tab at the top of the page.
2. Click **OpenTelemetry Traces (otel-demo-traces)** in the list.
You see a dashboard that displays important information about the OTel traces.
## 5. Create new dashboard
1. Click the **Dashboards** tab at the top of the page, and then click **New dashboard** on the right.
2. Name the dashboard and click **Create**.
3. Click **Add a chart**, and then click **Timeseries**.
4. In the **Dataset** section, select **otel-demo-traces**.
5. In the **Summarize** section, click **no group** to the right of **by**, and then select **service.name**.
6. Click **Save**.
You created a simple dashboard that displays the number of requests for each service over time.
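Under the hood, the chart you configured corresponds roughly to the following APL query, a sketch based on the query used earlier on this page:
```kusto
['otel-demo-traces']
| summarize count() by bin_auto(_time), ['service.name']
```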
## 6. Monitor data for issues
### Create notifier
1. Click the **Monitors** tab at the top of the page.
2. In the top left, click **Notifiers**, and then click **New notifier** on the top right.
3. In **Name**, enter **Slow requests notifier**.
4. In **Users**, enter your email address, and then click **+** on the right.
5. Click **Create**.
### Create monitor
1. In the top left, click **Monitors**, and then click **New monitor** on the top right.
2. Click **Threshold monitor**.
3. In **Check options**, enter `10000` as the value.
4. Click **+ Add notifier**, and then select **Email: Slow requests notifier**.
5. In the **APL** section, enter the following:
```kusto
['otel-demo-traces']
| where duration > 5ms
| summarize count() by bin(_time, 1m)
```
6. Click **Create**.
You created a monitor that automatically sends a notification to your email address if the number of requests taking longer than 5 ms is higher than 10,000 during a one-minute period.
## What's next
To check out Axiom with a sample app, see [Get started with example app](/getting-started-guide/get-started-example-app).
# Axiom Go adapter for Apex
Source: https://axiom.co/docs/guides/apex
This page explains how to send logs generated by the apex/log library to Axiom.
Use the adapter of the Axiom Go SDK to send logs generated by the [apex/log](https://github.com/apex/log) library to Axiom.
The Axiom Go SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-go).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Set up SDK
1. Install the Axiom Go SDK and configure your environment as explained in [Send data from Go app to Axiom](/guides/go).
2. In your Go app, import the `apex` package. It is imported as an `adapter` so that it doesn’t conflict with the `apex/log` package.
```go
import adapter "github.com/axiomhq/axiom-go/adapters/apex"
```
Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#New) function:
```go
handler, err := adapter.New(
adapter.SetDataset("DATASET_NAME"),
)
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Configure client
To configure the underlying client manually, choose one of the following:
* Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
* Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
```go
import (
	"github.com/axiomhq/axiom-go/axiom"
	adapter "github.com/axiomhq/axiom-go/adapters/apex"
)

// ...

handler, err := adapter.New(
	adapter.SetClientOptions(
		axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
	),
)
```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Close](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#Handler.Close). For more information, see the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/apex/main.go).
## Reference
For a full reference of the adapter’s functions, see the [Go Packages page](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex).
# Send data from Go app to Axiom
Source: https://axiom.co/docs/guides/go
This page explains how to send data from a Go app to Axiom.
To send data from a Go app to Axiom, use the Axiom Go SDK.
The Axiom Go SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-go).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
go get github.com/axiomhq/axiom-go/axiom
```
Import the package:
```go
import "github.com/axiomhq/axiom-go/axiom"
```
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
Alternatively, configure the client using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) passed to the `axiom.NewClient` function:
```go
client, err := axiom.NewClient(
axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
)
```
## Use client
Create and use a client in the following way:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/axiomhq/axiom-go/axiom"
	"github.com/axiomhq/axiom-go/axiom/ingest"
	"github.com/axiomhq/axiom-go/axiom/query"
)

func main() {
	ctx := context.Background()

	client, err := axiom.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	if _, err = client.IngestEvents(ctx, "my-dataset", []axiom.Event{
		{ingest.TimestampField: time.Now(), "foo": "bar"},
		{ingest.TimestampField: time.Now(), "bar": "foo"},
	}); err != nil {
		log.Fatal(err)
	}

	res, err := client.Query(ctx, "['my-dataset'] | where foo == 'bar' | limit 100")
	if err != nil {
		log.Fatal(err)
	} else if res.Status.RowsMatched == 0 {
		log.Fatal("No matches found")
	}

	rows := res.Tables[0].Rows()
	if err := rows.Range(ctx, func(_ context.Context, row query.Row) error {
		_, err := fmt.Println(row)
		return err
	}); err != nil {
		log.Fatal(err)
	}
}
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-go/tree/main/examples).
## Adapters
To use a logging package, see the following adapters:
* [Apex](/guides/apex)
* [Logrus](/guides/logrus)
* [Zap](/guides/zap)
# Send data from JavaScript app to Axiom
Source: https://axiom.co/docs/guides/javascript
This page explains how to send data from a JavaScript app to Axiom.
JavaScript is a versatile, high-level programming language primarily used for creating dynamic and interactive web content.
To send data from a JavaScript app to Axiom, use one of the following libraries of the Axiom JavaScript SDK:
* [@axiomhq/js](#use-axiomhq-js)
* [@axiomhq/logging](#use-axiomhq-logging)
The choice between these options depends on your individual requirements:
| Capabilities | @axiomhq/js | @axiomhq/logging |
| --------------------------------------------------- | ----------- | ---------------- |
| Send data to Axiom | Yes | Yes |
| Query data | Yes | No |
| Capture errors | Yes | No |
| Create annotations | Yes | No |
| Transports | No | Yes |
| Structured logging by default | No | Yes |
| Send data to multiple places from a single function | No | Yes |
The `@axiomhq/logging` library is a logging solution that also serves as the base for other libraries like `@axiomhq/react` and `@axiomhq/nextjs`.
The @axiomhq/js and @axiomhq/logging libraries are part of the Axiom JavaScript SDK, an open-source project that welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-js).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Use @axiomhq/js
### Install @axiomhq/js
In your terminal, go to the root folder of your JavaScript app and run the following command:
```shell
npm install @axiomhq/js
```
### Configure environment variables
Configure the environment variables in one of the following ways:
* Export the API token as `AXIOM_TOKEN`.
* Pass the API token to the constructor of the client:
```ts
import { Axiom } from '@axiomhq/js';
const axiom = new Axiom({
token: process.env.AXIOM_TOKEN,
});
```
* Install the [Axiom CLI](/reference/cli), and then run the following command:
```sh
eval $(axiom config export -f)
```
### Send data to Axiom
The following example sends data to Axiom:
```ts
axiom.ingest('DATASET_NAME', [{ foo: 'bar' }]);
await axiom.flush();
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
The client automatically batches events in the background. In most cases, call `flush()` only before your application exits.
### Query data
The following example queries data from Axiom:
```ts
const res = await axiom.query(`['DATASET_NAME'] | where foo == 'bar' | limit 100`);
console.log(res);
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples).
### Capture errors
To capture errors, pass a method `onError` to the client:
```ts
let client = new Axiom({
  token: '',
  ...,
  onError: (err) => {
    console.error('ERROR:', err);
  },
});
```
By default, `onError` is set to `console.error`.
### Create annotations
The following example creates an annotation:
```ts
import { annotations } from '@axiomhq/js';

const client = new annotations.Service({ token: process.env.AXIOM_TOKEN });

await client.create({
  type: 'deployment',
  datasets: ['DATASET_NAME'],
  title: 'New deployment',
  description: 'Deployed version 1.0.0',
});
```
## Use @axiomhq/logging
### Install @axiomhq/logging
In your terminal, go to the root folder of your JavaScript app and run the following command:
```bash
npm install @axiomhq/logging
```
### Send data to Axiom
The following example sends data to Axiom:
```ts
import { Logger, AxiomJSTransport, ConsoleTransport } from "@axiomhq/logging";
import { Axiom } from "@axiomhq/js";

const axiom = new Axiom({
  token: process.env.AXIOM_TOKEN,
});

const logger = new Logger({
  transports: [
    new AxiomJSTransport({
      axiom,
      dataset: process.env.AXIOM_DATASET,
    }),
    new ConsoleTransport(),
  ],
});

logger.info("Hello, world!");
```
#### Transports
The `@axiomhq/logging` library includes the following transports:
* `ConsoleTransport`: Logs to the console.
```ts
import { ConsoleTransport } from "@axiomhq/logging";

const transport = new ConsoleTransport({
  logLevel: "warn",
  prettyPrint: true,
});
```
* `AxiomJSTransport`: Sends logs to Axiom using the @axiomhq/js library.
```ts
import { Axiom } from "@axiomhq/js";
import { AxiomJSTransport } from "@axiomhq/logging";

const axiom = new Axiom({
  token: process.env.AXIOM_TOKEN,
});

const transport = new AxiomJSTransport({
  axiom,
  dataset: process.env.AXIOM_DATASET,
  logLevel: "warn",
});
```
* `ProxyTransport`: Sends logs to the [proxy server function](/send-data/nextjs#proxy-for-client-side-usage) that acts as a proxy between your application and Axiom. It’s particularly useful when your application runs on top of a server-enabled framework like Next.js or Remix.
```ts
import { ProxyTransport } from "@axiomhq/logging";

const transport = new ProxyTransport({
  url: "/proxy",
  logLevel: "warn",
  autoFlush: { durationMs: 1000 },
});
```
Alternatively, create your own transports by implementing the `Transport` interface:
```ts
import { Transport } from "@axiomhq/logging";

class MyTransport implements Transport {
  log(log: Transport['log']) {
    console.log(log);
  }

  flush() {
    console.log("Flushing logs");
  }
}
```
#### Logging levels
The `@axiomhq/logging` library includes the following logging levels:
* `debug`: Debug-level logs.
* `info`: Informational logs.
* `warn`: Warning logs.
* `error`: Error logs.
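For example, the following sketch emits one log per level on a configured `logger` instance like the one created above; the structured fields passed as the second argument are shown for illustration:
```ts
// Assumes a `logger` configured as in the earlier example; the field objects are illustrative.
logger.debug("Cache lookup", { key: "user:42" });
logger.info("User signed in", { userId: "42" });
logger.warn("Request took longer than expected", { durationMs: 1200 });
logger.error("Payment failed", { orderId: "A-1001" });
```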
#### Formatters
Formatters are used to change the content of a log before sending it to a transport. For example:
```ts
import { Logger, LogEvent } from "@axiomhq/logging";
const myCustomFormatter = (event: LogEvent) => {
const upperCaseKeys = {
...event,
fields: Object.fromEntries(
Object.entries(event.fields).map(([key, value]) => [key.toUpperCase(), value])
),
};
return upperCaseKeys;
};
const logger = new Logger({
formatters: [myCustomFormatter],
});
logger.info("Hello, world!");
```
## Related logging options
### Send data from JavaScript libraries and frameworks
To send data to Axiom from JavaScript libraries and frameworks, see the following:
* [Send data from React app](/send-data/react)
* [Send data from Next.js app](/send-data/nextjs)
### Send data from Node.js
While the Axiom JavaScript SDK works in both backend and browser environments, Axiom provides transports for some popular loggers:
* [Pino](/guides/pino)
* [Winston](/guides/winston)
# Axiom Go adapter for Logrus
Source: https://axiom.co/docs/guides/logrus
This page explains how to send logs generated by the sirupsen/logrus library to Axiom.
Use the adapter of the Axiom Go SDK to send logs generated by the [sirupsen/logrus](https://github.com/sirupsen/logrus) library to Axiom.
The Axiom Go SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-go).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Set up SDK
1. Install the Axiom Go SDK and configure your environment as explained in [Send data from Go app to Axiom](/guides/go).
2. In your Go app, import the adapter package under the `adapter` alias so that it doesn’t conflict with the `sirupsen/logrus` package:
```go
import adapter "github.com/axiomhq/axiom-go/adapters/logrus"
```
Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#New) function:
```go
hook, err := adapter.New(
adapter.SetDataset("DATASET_NAME"),
)
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Configure client
To configure the underlying client manually, choose one of the following:
* Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
* Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
```go
import (
"github.com/axiomhq/axiom-go/axiom"
adapter "github.com/axiomhq/axiom-go/adapters/logrus"
)
// ...
hook, err := adapter.New(
adapter.SetClientOptions(
axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
),
)
```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Close](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#Hook.Close). For more information, see the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/logrus/main.go).
## Reference
For a full reference of the adapter’s functions, see the [Go Packages page](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus).
# OpenTelemetry using Cloudflare Workers
Source: https://axiom.co/docs/guides/opentelemetry-cloudflare-workers
This guide explains how to configure a Cloudflare Workers app to send telemetry data to Axiom.
This guide demonstrates how to configure OpenTelemetry in Cloudflare Workers to send telemetry data to Axiom using the [OTel CF Worker package](https://github.com/evanderkoogh/otel-cf-workers).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* Create a Cloudflare account.
* [Install Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), the CLI tool for Cloudflare.
## Setting up your Cloudflare Workers environment
Create a new directory for your project and navigate into it:
```bash
mkdir my-axiom-worker && cd my-axiom-worker
```
Initialize a new Wrangler project using this command:
```bash
wrangler init --type="javascript"
```
## Cloudflare Workers Script Configuration (index.ts)
Configure and implement your Workers script by integrating OpenTelemetry with the `@microlabs/otel-cf-workers` package to send telemetry data to Axiom, as illustrated in the example `index.ts` below:
```js
// index.ts
import { trace } from '@opentelemetry/api';
import { instrument, ResolveConfigFn } from '@microlabs/otel-cf-workers';
export interface Env {
AXIOM_API_TOKEN: string,
AXIOM_DATASET: string
}
const handler = {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
await fetch('https://cloudflare.com');
const greeting = "Welcome to Axiom Cloudflare instrumentation";
trace.getActiveSpan()?.setAttribute('greeting', greeting);
ctx.waitUntil(fetch('https://workers.dev'));
return new Response(`${greeting}!`);
},
};
const config: ResolveConfigFn = (env: Env, _trigger) => {
return {
exporter: {
url: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
};
};
export default instrument(handler, config);
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
## Wrangler Configuration (`wrangler.toml`)
Configure **`wrangler.toml`** with your Cloudflare account details and set environment variables for the Axiom API token and dataset.
```toml
name = "my-axiom-worker"
type = "javascript"
account_id = "$YOUR_CLOUDFLARE_ACCOUNT_ID" # Replace with your actual Cloudflare account ID
workers_dev = true
compatibility_date = "2023-03-27"
compatibility_flags = ["nodejs_compat"]
main = "index.ts"
# Define environment variables here
[vars]
AXIOM_API_TOKEN = "API_TOKEN"
AXIOM_DATASET = "DATASET_NAME"
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Install Dependencies
Navigate to the root directory of your project and add `@microlabs/otel-cf-workers` and other OTel packages to the `package.json` file.
```json
{
"name": "my-axiom-worker",
"version": "1.0.0",
"description": "A template for kick-starting a Cloudflare Workers project",
"main": "index.ts",
"scripts": {
"start": "wrangler dev",
"deploy": "wrangler publish"
},
"dependencies": {
"@microlabs/otel-cf-workers": "^1.0.0-rc.20",
"@opentelemetry/api": "^1.6.0",
"@opentelemetry/core": "^1.17.1",
"@opentelemetry/exporter-trace-otlp-http": "^0.43.0",
"@opentelemetry/otlp-exporter-base": "^0.43.0",
"@opentelemetry/otlp-transformer": "^0.43.0",
"@opentelemetry/resources": "^1.17.1",
"@opentelemetry/sdk-trace-base": "^1.17.1",
"@opentelemetry/semantic-conventions": "^1.17.1",
"deepmerge": "^4.3.1",
"husky": "^8.0.3",
"lint-staged": "^15.0.2",
"ts-checked-fsm": "^1.1.0"
},
"devDependencies": {
"@changesets/cli": "^2.26.2",
"@cloudflare/workers-types": "^4.20231016.0",
"prettier": "^3.0.3",
"rimraf": "^4.4.1",
"typescript": "^5.2.2",
"wrangler": "2.13.0"
},
"private": true
}
```
Run `npm install` to install the packages. This command installs all the necessary packages listed in your `package.json` file.
## Running the instrumented app
To run your Cloudflare Workers app with OpenTelemetry instrumentation, ensure your API token and dataset are correctly set in your `wrangler.toml` file. As outlined in our `package.json` file, you have two primary scripts to manage your app’s lifecycle.
### In development mode
For local development and testing, you can start a local development server by running:
```bash
npm run start
```
This command runs `wrangler dev`, allowing you to preview and test your app locally.
### Deploying to production
Deploy your app to the Cloudflare Workers environment by running:
```bash
npm run deploy
```
This command runs **`wrangler publish`**, deploying your project to Cloudflare Workers.
### Alternative: Use Wrangler directly
If you prefer not to use **`npm`** commands or want more direct control over the deployment process, you can use Wrangler commands directly in your terminal.
For local development:
```bash
wrangler dev
```
For deploying to Cloudflare Workers:
```bash
wrangler deploy
```
## View your app in Cloudflare Workers
Once you've deployed your app using Wrangler, view and manage it through the Cloudflare dashboard. To see your Cloudflare Workers app, follow these steps:
* In your [Cloudflare dashboard](https://dash.cloudflare.com/), click **Workers & Pages** to access the Workers section. You see a list of your deployed apps.
* Locate your app by its name. For this tutorial, look for `my-axiom-worker`.
* Click your app’s name to view its details. Within the app’s page, select the **Triggers** tab to review the triggers associated with your app.
* Under the **Routes** section of the **Triggers** tab, you find the URL route assigned to your Worker. This is where your Cloudflare Worker responds to incoming requests. Visit the [Cloudflare Workers documentation](https://developers.cloudflare.com/workers/get-started/guide/) to learn how to configure routes.
## Observe the telemetry data in Axiom
As you interact with your app, traces are collected and exported to Axiom, allowing you to monitor, analyze, and gain insights into your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, offering a deeper understanding of your app’s performance and behavior.
**Working with Cloudflare Pages Functions:** Integration with OpenTelemetry is similar to Workers but uses the Cloudflare Dashboard for configuration, bypassing **`wrangler.toml`**. This simplifies setup through the Cloudflare dashboard web interface.
## Manual Instrumentation
Manual instrumentation requires adding code into your Worker’s script to create and manage spans around the code blocks you want to trace.
1. Initialize a tracer:
   Use the OpenTelemetry API (`@opentelemetry/api`) to create a tracer instance at the beginning of your script.
```js
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('your-service-name');
```
2. Create, start, and end spans:
   Manually start spans before the operations or events you want to trace and end them afterward to complete the tracing lifecycle.
```js
const span = tracer.startSpan('operationName');
try {
// Your operation code here
} finally {
span.end();
}
```
3. Annotate spans:
   Add important metadata to spans to provide additional context. This can include setting attributes or adding events within the span.
```js
span.setAttribute('key', 'value');
span.addEvent('eventName', { 'eventAttribute': 'value' });
```
## Automatic Instrumentation
Automatic instrumentation uses the **`@microlabs/otel-cf-workers`** package to automatically trace incoming requests and outbound fetch calls without manual span management.
1. Instrument your Worker:
Wrap your Cloudflare Workers script with the `instrument` function from the **`@microlabs/otel-cf-workers`** package. This automatically instruments incoming requests and outbound fetch calls.
```js
import { instrument } from '@microlabs/otel-cf-workers';
export default instrument(yourHandler, yourConfig);
```
2. Configure the exporter: Provide configuration details, including how to export telemetry data and service metadata to Axiom, as part of the `instrument` function call.
```js
const config = (env) => ({
exporter: {
url: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
});
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
After instrumenting your Worker script, the `@microlabs/otel-cf-workers` package takes care of tracing automatically.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ---------------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------- |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| | attributes.custom.user\_agent.original | Original user agent string, providing client software and OS. |
| | attributes.custom\["http.accepts"] | Accepted content types for the HTTP request. |
| | attributes.custom\["http.mime\_type"] | MIME type of the HTTP response. |
| | attributes.custom.http.wrote\_bytes | Number of bytes written in the HTTP response. |
| | attributes.http.request.method | HTTP request method used. |
| | attributes.http.response.status\_code | HTTP status code returned in response. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| | attributes.net.sock.peer.addr | Socket peer address, indicating the IP version used. |
| | attributes.net.sock.peer.port | Socket peer port number. |
| | attributes.custom.net.protocol.version | Protocol version used in the network interaction. |
| | attributes.network.protocol.name | Name of the network protocol used. |
| | attributes.network.protocol.version | Version of the network protocol used. |
| | attributes.server.address | Address of the server handling the request. |
| | attributes.url.full | Full URL accessed in the request. |
| | attributes.url.path | Path component of the URL accessed. |
| | attributes.url.query | Query component of the URL accessed. |
| | attributes.url.scheme | Scheme component of the URL accessed. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | scope.name | Name of the scope for the operation. |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| **Resource Attributes** | | |
| | resource.environment | Environment where the trace was captured, for example, production. |
| | resource.cloud.platform | Platform of the cloud provider, for example, cloudflare.workers. |
| | resource.cloud.provider | Name of the cloud provider, for example, cloudflare. |
| | resource.cloud.region | Cloud region where the service is located, for example, earth. |
| | resource.faas.max\_memory | Maximum memory allocated for the function as a service (FaaS). |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, for example, js. |
| | telemetry.sdk.name | Name of the telemetry SDK, for example, @microlabs/otel-workers-sdk. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
| **Custom Attributes** | | |
| | attributes.custom.greeting | Custom greeting message, for example, "Welcome to Axiom Cloudflare instrumentation." |
| | attributes.custom\["http.accepts"] | Specifies acceptable response formats for HTTP request. |
| | attributes.custom\["net.asn"] | Autonomous System Number representing the hosting entity. |
| | attributes.custom\["net.colo"] | Colocation center where the request was processed. |
| | attributes.custom\["net.country"] | Country where the request was processed. |
| | attributes.custom\["net.request\_priority"] | Priority of the request processing. |
| | attributes.custom\["net.tcp\_rtt"] | Round Trip Time of the TCP connection. |
| | attributes.custom\["net.tls\_cipher"] | TLS cipher suite used for the connection. |
| | attributes.custom\["net.tls\_version"] | Version of the TLS protocol used for the connection. |
| | attributes.faas.coldstart | Indicates if the function execution was a cold start. |
| | attributes.faas.invocation\_id | Unique identifier for the function invocation. |
| | attributes.faas.trigger | Trigger that initiated the function execution. |
### List of imported libraries
**`@microlabs/otel-cf-workers`**
This package is designed for integrating OpenTelemetry within Cloudflare Workers. It provides automatic instrumentation capabilities, making it easier to collect telemetry data from your Workers apps without extensive manual instrumentation. This package simplifies tracing HTTP requests and other asynchronous operations within Workers.
**`@opentelemetry/api`**
The core API for OpenTelemetry in JavaScript, providing the necessary interfaces and utilities for tracing, metrics, and context propagation. In the context of Cloudflare Workers, it allows developers to manually instrument custom spans, manipulate context, and access the active span if needed.
**`@opentelemetry/exporter-trace-otlp-http`**
This exporter enables your Cloudflare Workers app to send trace data over HTTP to any backend that supports the OTLP (OpenTelemetry Protocol), such as Axiom. Using OTLP ensures compatibility with a wide range of observability tools and standardizes the data export process.
**`@opentelemetry/otlp-exporter-base`**, **`@opentelemetry/otlp-transformer`**
These packages provide the foundational elements for OTLP exporters, including the transformation of telemetry data into the OTLP format and base classes for implementing OTLP exporters. They are important for ensuring that the data exported from Cloudflare Workers adheres to the OTLP specification.
**`@opentelemetry/resources`**
Defines the Resource, which represents the entity producing telemetry. In Cloudflare Workers, Resources can be used to describe the worker (for example, service name, version) and are attached to all exported telemetry, aiding in identifying data in backend systems.
# Send OpenTelemetry data from a Django app to Axiom
Source: https://axiom.co/docs/guides/opentelemetry-django
This guide explains how to send OpenTelemetry data from a Django app to Axiom using the Python OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Install Python version 3.7 or higher](https://www.python.org/downloads/).
## Install required dependencies
Install the necessary Python dependencies by running the following command in your terminal:
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
Alternatively, you can add these dependencies to your `requirements.txt` file:
```bash
django
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp-proto-http
opentelemetry-instrumentation-django
```
Then, install them using the command:
```bash
pip install -r requirements.txt
```
## Get started with a Django project
1. Create a new Django project if you don’t have one already:
```bash
django-admin startproject your_project_name
```
2. Go to your project directory:
```bash
cd your_project_name
```
3. Create a Django app:
```bash
python manage.py startapp your_app_name
```
## Set up OpenTelemetry Tracing
### Update `manage.py` to initialize tracing
This code initializes OpenTelemetry instrumentation for Django when the project is run. Adding `DjangoInstrumentor().instrument()` ensures that all incoming HTTP requests are automatically traced, which helps in monitoring the app’s performance and behavior without manually adding trace points in every view.
```py
# manage.py
#!/usr/bin/env python
import os
import sys
from opentelemetry.instrumentation.django import DjangoInstrumentor
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project_name.settings')
# Initialize OpenTelemetry instrumentation
DjangoInstrumentor().instrument()
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
```
### Create `exporter.py` for tracer configuration
This file configures the OpenTelemetry tracing provider and exporter. By setting up a `TracerProvider` and configuring the `OTLPSpanExporter`, you define how and where the trace data is sent. The `BatchSpanProcessor` is used to batch and send trace spans efficiently. The tracer created at the end is used throughout the app to create new spans.
```py
# exporter.py
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Define the service name resource
resource = Resource(attributes={
SERVICE_NAME: "your-service-name" # Replace with your actual service name
})
# Create a TracerProvider with the defined resource
provider = TracerProvider(resource=resource)
# Configure the OTLP/HTTP Span Exporter with necessary headers and endpoint
otlp_exporter = OTLPSpanExporter(
endpoint="https://AXIOM_DOMAIN/v1/traces",
headers={
"Authorization": "Bearer API_TOKEN", # Replace with your actual API token
"X-Axiom-Dataset": "DATASET_NAME" # Replace with your dataset name
}
)
# Create a BatchSpanProcessor with the OTLP exporter
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
# Set the TracerProvider as the global tracer provider
trace.set_tracer_provider(provider)
# Define a tracer for external use
tracer = trace.get_tracer("your-service-name")
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
### Use the tracer in your views
In this step, modify the Django views to use the tracer defined in `exporter.py`. By wrapping the view logic within `tracer.start_as_current_span`, you create spans that capture the execution of these views. This provides detailed insights into the performance of individual request handlers, helping to identify slow operations or errors.
```py
# views.py
from django.http import HttpResponse
from .exporter import tracer # Import the tracer
def roll_dice(request):
with tracer.start_as_current_span("roll_dice_span"):
# Your logic here
return HttpResponse("Dice rolled!")
def home(request):
with tracer.start_as_current_span("home_span"):
return HttpResponse("Welcome to the homepage!")
```
### Update `settings.py` for OpenTelemetry instrumentation
In your Django project’s `settings.py`, add the OpenTelemetry Django instrumentation. This setup automatically creates spans for HTTP requests handled by Django:
```py
# settings.py
from pathlib import Path
from opentelemetry.instrumentation.django import DjangoInstrumentor
DjangoInstrumentor().instrument()
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
```
### Update the app’s urls.py to include the views
Include your views in the URL routing by updating `urls.py`. Updating `urls.py` with these entries sets up the URL routing for the Django app. It connects the URL paths to the corresponding view functions. This ensures that when users visit the specified paths, the corresponding views are executed, and their spans are created and sent to Axiom for monitoring.
```python
# urls.py
from django.urls import path
from .views import roll_dice, home
urlpatterns = [
path('', home, name='home'),
path('rolldice/', roll_dice, name='roll_dice'),
]
```
## Run the project
Run the command to start the Django project:
```bash
python3 manage.py runserver
```
In your browser, go to `http://127.0.0.1:8000/rolldice` to interact with your Django app. Each time you load the page, the app displays a message and sends the collected traces to Axiom.
## Send data from an existing Django project
### Manual instrumentation
Manual instrumentation in Python with OpenTelemetry involves adding code to create and manage spans around the blocks of code you want to trace. This approach allows for precise control over the trace data.
1. Install necessary OpenTelemetry packages to enable manual tracing capabilities in your Django app.
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
2. Set up OpenTelemetry in your Django project to manually trace app activities.
```py
# otel_config.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_opentelemetry():
resource = Resource(attributes={"service.name": "your-django-app"})
trace.set_tracer_provider(TracerProvider(resource=resource))
otlp_exporter = OTLPSpanExporter(
endpoint="https://AXIOM_DOMAIN/v1/traces",
headers={"Authorization": "Bearer API_TOKEN", "X-Axiom-Dataset": "DATASET_NAME"}
)
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
return trace.get_tracer(__name__)
tracer = configure_opentelemetry()
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
3. Add the OpenTelemetry configuration to your Django settings to capture telemetry data upon app startup.
```py
# settings.py
from otel_config import configure_opentelemetry
configure_opentelemetry()
```
4. Manually instrument views to create custom spans that trace specific operations within your Django app.
```py
# views.py
from django.http import HttpResponse
from otel_config import tracer
def home_view(request):
with tracer.start_as_current_span("home_view") as span:
span.set_attribute("http.method", request.method)
span.set_attribute("http.url", request.build_absolute_uri())
response = HttpResponse("Welcome to the home page!")
span.set_attribute("http.status_code", response.status_code)
return response
```
5. Apply manual tracing to database operations by wrapping database cursor executions with OpenTelemetry spans.
```py
# db_tracing.py
from django.db import connections
from otel_config import tracer
class TracingCursorWrapper:
def __init__(self, cursor):
self.cursor = cursor
def execute(self, sql, params=None):
with tracer.start_as_current_span("database_query") as span:
span.set_attribute("db.statement", sql)
span.set_attribute("db.type", "sql")
return self.cursor.execute(sql, params)
def __getattr__(self, attr):
return getattr(self.cursor, attr)
def patch_database():
for connection in connections.all():
connection.cursor_wrapper = TracingCursorWrapper
# settings.py
from db_tracing import patch_database
patch_database()
```
### Automatic instrumentation
Automatic instrumentation in Django with OpenTelemetry simplifies the process of adding telemetry data to your app. It uses pre-built instrumentation packages that automatically instrument the frameworks and libraries your app uses.
1. Install required packages that support automatic instrumentation.
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
2. Automatically configure OpenTelemetry to trace Django app operations without manual span management.
```py
# otel_config.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_opentelemetry():
resource = Resource(attributes={"service.name": "your-django-app"})
trace.set_tracer_provider(TracerProvider(resource=resource))
otlp_exporter = OTLPSpanExporter(
endpoint="https://AXIOM_DOMAIN/v1/traces",
headers={"Authorization": "Bearer API_TOKEN", "X-Axiom-Dataset": "DATASET_NAME"}
)
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
DjangoInstrumentor().instrument()
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
3. Initialize OpenTelemetry in Django to capture telemetry data from all HTTP requests automatically.
```py
# settings.py
from otel_config import configure_opentelemetry
configure_opentelemetry()
```
4. Update `manage.py` to include OpenTelemetry initialization, ensuring that tracing is active before the Django app fully starts.
```py
#!/usr/bin/env python
import os
import sys
def main():
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project.settings')
from otel_config import configure_opentelemetry
configure_opentelemetry()
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError("Couldn't import Django.") from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
```
5. (Optional) Combine automatic and custom manual spans in Django views to enhance trace details for specific complex operations.
```py
# views.py
from opentelemetry import trace
tracer = trace.get_tracer(__name__)
def complex_view(request):
with tracer.start_as_current_span("complex_operation"):
result = perform_complex_operation()
return HttpResponse(result)
```
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------- | --------------------------------------- | ----------------------------------------------------------------------------------- |
| General Trace Information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| HTTP Attributes | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.url | Full URL accessed during the HTTP request. |
| User Agent | | |
| | attributes.http.user\_agent | User agent string, providing client software and OS. |
| Custom Attributes | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| Network Attributes | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| Operational Details | | |
| | duration | Time taken for the operation, typically in microseconds or milliseconds. |
| | kind | Type of span (for example, server, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| Scope and Instrumentation | | |
| | scope | Instrumentation scope (for example, opentelemetry.instrumentation.django). |
| Service Attributes | | |
| | service.name | Name of the service generating the trace, typically set as the app or service name. |
| Telemetry SDK Attributes | | |
| | telemetry.sdk.language | Programming language of the SDK used for telemetry, typically 'python' for Django. |
| | telemetry.sdk.name | Name of the telemetry SDK, for example, OpenTelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK used in the tracing setup. |
### List of imported libraries
The `exporter.py` file and other relevant parts of the Django OpenTelemetry setup import the following libraries:
### `exporter.py`
This module creates and manages trace data in your app. It creates spans and tracers which track the execution flow and performance of your app.
```py
from opentelemetry import trace
```
TracerProvider acts as a container for the configuration of your app’s tracing behavior. It allows you to define how spans are generated and processed, essentially serving as the central point for managing trace creation and propagation in your app.
```py
from opentelemetry.sdk.trace import TracerProvider
```
BatchSpanProcessor is responsible for batching spans before they’re exported. This is an important aspect of efficient trace data management as it aggregates multiple spans into fewer network requests, reducing the overhead on your app’s performance and the tracing backend.
```py
from opentelemetry.sdk.trace.export import BatchSpanProcessor
```
The Resource class is used to describe your app’s service attributes, such as its name, version, and environment. This contextual information is attached to the traces and helps in identifying and categorizing trace data, making it easier to filter and analyze in your monitoring setup.
```py
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
```
The OTLPSpanExporter is responsible for sending your app’s trace data to a backend that supports OTLP, such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
```py
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
```
### `manage.py`
The DjangoInstrumentor module is used to automatically instrument Django applications. It integrates OpenTelemetry with Django, enabling automatic creation of spans for incoming HTTP requests handled by Django, and simplifying the process of adding telemetry to your app.
```py
from opentelemetry.instrumentation.django import DjangoInstrumentor
```
### `views.py`
This import brings in the tracer instance defined in `exporter.py`, which is used to create spans for tracing the execution of Django views. By wrapping view logic within `tracer.start_as_current_span`, it captures detailed insights into the performance of individual request handlers.
```py
from .exporter import tracer
```
# OpenTelemetry using .NET
Source: https://axiom.co/docs/guides/opentelemetry-dotnet
This guide explains how to configure a .NET app using the .NET OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry provides a [unified approach to collecting telemetry data](https://opentelemetry.io/docs/languages/net/) from your .NET applications. This guide explains how to configure OpenTelemetry in a .NET application to send telemetry data to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* Install the .NET 6.0 SDK on your development machine.
* Use your existing .NET application or start with the sample provided in the `program.cs` below.
## Install dependencies
Run the following command in your terminal to install the necessary NuGet packages:
```bash
dotnet add package OpenTelemetry --version 1.7.0
dotnet add package OpenTelemetry.Exporter.Console --version 1.7.0
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol --version 1.7.0
dotnet add package OpenTelemetry.Extensions.Hosting --version 1.7.0
dotnet add package OpenTelemetry.Instrumentation.AspNetCore --version 1.7.1
dotnet add package OpenTelemetry.Instrumentation.Http --version 1.6.0-rc.1
```
Replace the `dotnet.csproj` file in your project with the following:
```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <!-- The OpenTelemetry package references added by the `dotnet add package` commands above are also recorded in this file. -->
</Project>
```
The `dotnet.csproj` file is important for defining your project’s settings, including target framework, nullable reference types, and package references. It informs the .NET SDK and build tools about the components and configurations your project requires.
## Core application
`program.cs` is the core of the .NET application. It uses ASP.NET to create a simple web server. The server has an endpoint `/rolldice` that returns a random number, simulating a basic API.
```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System;
using System.Globalization;
// Set up the web application builder
var builder = WebApplication.CreateBuilder(args);
// Configure OpenTelemetry for detailed tracing information
TracingConfiguration.ConfigureOpenTelemetry();
var app = builder.Build();
// Map the GET request for '/rolldice/{player?}' to a handler
app.MapGet("/rolldice/{player?}", (ILogger logger, string? player) =>
{
// Start a manual tracing activity
using var activity = TracingConfiguration.StartActivity("HandleRollDice");
// Call the RollDice function to get a dice roll result
var result = RollDice();
if (activity != null)
{
// Add detailed information to the tracing activity for debugging and monitoring
activity.SetTag("player.name", player ?? "anonymous"); // Tag the player’s name, default to 'anonymous' if not provided
activity.SetTag("dice.rollResult", result); // Tag the result of the dice roll
activity.SetTag("operation.success", true); // Flag the operation as successful
activity.SetTag("custom.attribute", "Additional detail here"); // Add a custom attribute for potential further detail
}
// Log the dice roll event
LogRollDice(logger, player, result);
// Return the dice roll result as a string
return result.ToString(CultureInfo.InvariantCulture);
});
// Start the web application
app.Run();
// Log function to log the result of a dice roll
void LogRollDice(ILogger logger, string? player, int result)
{
// Log message varies based on whether a player’s name is provided
if (string.IsNullOrEmpty(player))
{
// Log for an anonymous player
logger.LogInformation("Anonymous player is rolling the dice: {result}", result);
}
else
{
// Log for a named player
logger.LogInformation("{player} is rolling the dice: {result}", player, result);
}
}
// Function to roll a dice and return a random number between 1 and 6
int RollDice()
{
// Use the shared instance of Random for thread safety
return Random.Shared.Next(1, 7);
}
```
## Exporter
The `tracing.cs` file sets up the OpenTelemetry instrumentation. It configures the OTLP (OpenTelemetry Protocol) exporters for traces and initializes the ASP.NET SDK with automatic instrumentation capabilities.
```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using System;
using System.Diagnostics;
using System.Reflection;
// Class to configure OpenTelemetry tracing
public static class TracingConfiguration
{
// Declare an ActivitySource for creating tracing activities
private static readonly ActivitySource ActivitySource = new("MyCustomActivitySource");
// Configure OpenTelemetry with custom settings and instrumentation
public static void ConfigureOpenTelemetry()
{
// Retrieve the service name and version from the executing assembly metadata
var serviceName = Assembly.GetExecutingAssembly().GetName().Name ?? "UnknownService";
var serviceVersion = Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "UnknownVersion";
// Set up the tracer provider with various configurations
Sdk.CreateTracerProviderBuilder()
.SetResourceBuilder(
// Set resource attributes including service name and version
ResourceBuilder.CreateDefault().AddService(serviceName, serviceVersion: serviceVersion)
.AddAttributes(new[] { new KeyValuePair<string, object>("environment", "development") }) // Additional attributes
.AddTelemetrySdk() // Add telemetry SDK information to the traces
.AddEnvironmentVariableDetector()) // Detect resource attributes from environment variables
.AddSource(ActivitySource.Name) // Add the ActivitySource defined above
.AddAspNetCoreInstrumentation() // Add automatic instrumentation for ASP.NET Core
.AddHttpClientInstrumentation() // Add automatic instrumentation for HttpClient requests
.AddOtlpExporter(options => // Configure the OTLP exporter
{
options.Endpoint = new Uri("https://AXIOM_DOMAIN/v1/traces"); // Set the endpoint for the exporter
options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf; // Set the protocol
options.Headers = "Authorization=Bearer API_TOKEN, X-Axiom-Dataset=DATASET_NAME"; // Update API token and dataset
})
.Build(); // Build the tracer provider
}
// Method to start a new tracing activity with an optional activity kind
public static Activity? StartActivity(string activityName, ActivityKind kind = ActivityKind.Internal)
{
// Starts and returns a new activity if sampling allows it, otherwise returns null
return ActivitySource.StartActivity(activityName, kind);
}
}
```
Replace the value of the `serviceName` variable with the name of the service you want to trace. This is used for identifying and categorizing trace data, particularly in systems with multiple services.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
## Run the instrumented application
1. Run in local development mode using the development settings in `appsettings.development.json`. Ensure your Axiom API token and dataset name are correctly set in `tracing.cs`.
2. Before deploying, run in production mode by switching to `appsettings.json` for production settings. Ensure your Axiom API token and dataset name are correctly set in `tracing.cs`.
3. Run your application with `dotnet run`. Your application starts and you can interact with it by sending requests to the `/rolldice` endpoint.
For example, if you are using port `8080`, your application is accessible locally at `http://localhost:8080/rolldice`. This URL will direct your requests to the `/rolldice` endpoint of your server running on your local machine.
## Observe the telemetry data
As you interact with your application, traces are collected and exported to Axiom where you can monitor and analyze your application’s performance and behavior.
1. Log into your Axiom account and click the **Datasets** or **Stream** tab.
2. Select your dataset from the list.
3. From the list of fields, click on the **trace\_id**, to view your spans.
## Dynamic OpenTelemetry Traces dashboard
The data can then be further viewed and analyzed in the traces dashboard, providing insights into the performance and behavior of your application.
1. Log into your Axiom account, select **Dashboards**, and click on the traces dashboard named after your dataset.
2. View the dashboard which displays your total traces, incoming spans, average span duration, errors, slowest operations, and top 10 span errors across services.
## Send data from an existing .NET project
### Manual Instrumentation
Manual instrumentation involves adding code to create, configure, and manage telemetry data, such as traces and spans, providing control over what data is collected.
1. Initialize ActivitySource. Define an `ActivitySource` to create activities (spans) for tracing specific operations within your application.
```csharp
private static readonly ActivitySource MyActivitySource = new ActivitySource("MyActivitySourceName");
```
2. Start and stop activities. Manually start activities (spans) at the beginning of the operations you want to trace and stop them when the operations complete. You can add custom attributes to these activities for more detailed tracing.
```csharp
using var activity = MyActivitySource.StartActivity("MyOperationName");
activity?.SetTag("key", "value");
// Perform the operation here
activity?.Stop();
```
3. Add custom attributes. Enhance activities with custom attributes to provide additional context, making it easier to analyze telemetry data.
```csharp
activity?.SetTag("UserId", userId);
activity?.SetTag("OperationDetail", "Detail about the operation");
```
### Automatic Instrumentation
Automatic instrumentation uses the OpenTelemetry SDK and additional libraries to automatically generate telemetry data for certain operations, such as incoming HTTP requests and database queries.
1. Configure OpenTelemetry SDK. Use the OpenTelemetry SDK to configure automatic instrumentation in your application. This typically involves setting up a `TracerProvider` in your `program.cs` or startup configuration, which automatically captures telemetry data from supported libraries.
```csharp
Sdk.CreateTracerProviderBuilder()
.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("https://AXIOM_DOMAIN/v1/traces");
options.Headers = $"Authorization=Bearer API_TOKEN, X-Axiom-Dataset=DATASET_NAME";
})
.Build();
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
2. Install and configure additional OpenTelemetry instrumentation packages as needed, based on the technologies your application uses. For example, to automatically trace SQL database queries, you might add the corresponding database instrumentation package.
3. With automatic instrumentation set up, no further code changes are required for tracing basic operations. The OpenTelemetry SDK and its instrumentation packages handle the creation and management of traces for supported operations.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ----------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------ |
| **General Trace Information** | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| **HTTP Attributes** | | |
| | attributes.http.request.method | HTTP method used for the request. |
| | attributes.http.response.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.url.path | Path component of the URL accessed. |
| | attributes.url.scheme | Scheme component of the URL accessed. |
| | attributes.server.address | Address of the server handling the request. |
| | attributes.server.port | Port number on the server handling the request. |
| **Network Attributes** | | |
| | attributes.network.protocol.version | Version of the network protocol used. |
| **User Agent** | | |
| | attributes.user\_agent.original | Original user agent string, providing client software and OS. |
| **Custom Attributes** | | |
| | attributes.custom\["custom.attribute"] | Custom attribute provided in the trace. |
| | attributes.custom\["dice.rollResult"] | Result of a dice roll operation. |
| | attributes.custom\["operation.success"] | Indicates if the operation was successful. |
| | attributes.custom\["player.name"] | Name of the player in the operation. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (e.g., server, client, internal). |
| | name | Name of the span. |
| **Resource Attributes** | | |
| | resource.custom.environment | Environment where the trace was captured, e.g., development. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, e.g., dotnet. |
| | telemetry.sdk.name | Name of the telemetry SDK, e.g., opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK, e.g., 1.7.0. |
| **Service Attributes** | | |
| | service.instance.id | Unique identifier for the instance of the service. |
| | service.name | Name of the service generating the trace, e.g., dotnet. |
| | service.version | Version of the service generating the trace, e.g., 1.0.0.0. |
| **Scope Attributes** | | |
| | scope.name | Name of the scope for the operation, e.g., OpenTelemetry.Instrumentation.AspNetCore. |
| | scope.version | Version of the scope, e.g., 1.0.0.0. |
### List of imported libraries
### OpenTelemetry
This is the core SDK for OpenTelemetry in .NET. It provides the foundational tools needed to collect and manage telemetry data within your .NET applications. It’s the base upon which all other OpenTelemetry instrumentation and exporter packages build.
### OpenTelemetry.Exporter.Console
This package allows applications to export telemetry data to the console. It is primarily useful for development and testing purposes, offering a simple way to view the telemetry data your application generates in real time.
### OpenTelemetry.Exporter.OpenTelemetryProtocol
This package enables your application to export telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC or HTTP. It’s vital for sending data to observability platforms that support OTLP, ensuring your telemetry data can be easily analyzed and monitored across different systems.
### OpenTelemetry.Extensions.Hosting
Designed for .NET applications, this package integrates OpenTelemetry with the .NET Generic Host. It simplifies the process of configuring and managing the lifecycle of OpenTelemetry resources such as TracerProvider, making it easier to collect telemetry data in applications that use the hosting model.
### OpenTelemetry.Instrumentation.AspNetCore
This package is designed for instrumenting ASP.NET Core applications. It automatically collects telemetry data about incoming requests and responses. This is important for monitoring the performance and reliability of web applications and APIs built with ASP.NET Core.
### OpenTelemetry.Instrumentation.Http
This package provides automatic instrumentation for HTTP clients in .NET applications. It captures telemetry data about outbound HTTP requests, including details such as request and response headers, duration, success status, and more. It’s key for understanding external dependencies and interactions in your application.
# OpenTelemetry using Golang
Source: https://axiom.co/docs/guides/opentelemetry-go
This guide explains how to configure a Go app using the Go OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry offers a [single set of APIs and libraries](https://opentelemetry.io/docs/languages/go/instrumentation/) that standardize how you collect and transfer telemetry data. This guide focuses on setting up OpenTelemetry in a Go app to send traces to Axiom.
## Prerequisites
* Go 1.19 or higher: Ensure you have Go version 1.19 or higher installed in your environment.
* Go app: Use your own app written in Go or start with the provided `main.go` sample below.
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Installing Dependencies
First, run the following in your terminal to install the necessary Go packages:
```bash
go get go.opentelemetry.io/otel
go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
go get go.opentelemetry.io/otel/sdk/resource
go get go.opentelemetry.io/otel/sdk/trace
go get go.opentelemetry.io/otel/semconv/v1.24.0
go get go.opentelemetry.io/otel/trace
go get go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
go get go.opentelemetry.io/otel/propagation
```
This installs the OpenTelemetry Go SDK, the OTLP (OpenTelemetry Protocol) trace exporter, and other necessary packages for instrumentation and resource definition.
## Initializing a Go module and managing dependencies
Before installing the OpenTelemetry dependencies, ensure your Go project is properly initialized as a module and all dependencies are correctly managed. This step is important for resolving import issues and managing your project’s dependencies effectively.
### Initialize a Go module
If your project is not already initialized as a Go module, run the following command in your project’s root directory. This step creates a `go.mod` file which tracks your project’s dependencies.
```bash
go mod init PROJECT_NAME
```
Replace `PROJECT_NAME` with your project’s name or the GitHub repository path if you plan to push the code to GitHub. For example, `go mod init github.com/yourusername/yourprojectname`.
### Manage dependencies
After initializing your Go module, tidy up your project’s dependencies. This ensures that your `go.mod` file accurately reflects the packages your project depends on, including the correct versions of the OpenTelemetry libraries you'll be using.
Run the following command in your project’s root directory:
```bash
go mod tidy
```
This command will download the necessary dependencies and update your `go.mod` and `go.sum` files accordingly. It’s a good practice to run `go mod tidy` after adding new imports to your project or periodically to keep dependencies up to date.
## HTTP server configuration (main.go)
`main.go` is the entry point of the app. It invokes `SetupTracer` from `exporter.go` to set up the tracing exporter. It also sets up a basic HTTP server with OpenTelemetry instrumentation to demonstrate how telemetry data can be collected and exported in a simple web app context, and it demonstrates the usage of span links to establish relationships between spans across different traces.
```go
// main.go
package main
import (
"context"
"fmt"
"log"
"math/rand"
"net"
"net/http"
"os"
"os/signal"
"time"
// OpenTelemetry imports for tracing and observability.
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
)
// main function starts the application and handles run function errors.
func main() {
if err := run(); err != nil {
log.Fatalln(err)
}
}
// run sets up signal handling, tracer initialization, and starts an HTTP server.
func run() error {
// Creating a context that listens for the interrupt signal from the OS.
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
// Initializes tracing and returns a function to shut down OpenTelemetry cleanly.
otelShutdown, err := SetupTracer()
if err != nil {
return err
}
defer func() {
if shutdownErr := otelShutdown(ctx); shutdownErr != nil {
log.Printf("failed to shutdown OpenTelemetry: %v", shutdownErr) // Log fatal errors during server shutdown
}
}()
// Configuring the HTTP server settings.
srv := &http.Server{
Addr: ":8080", // Server address
BaseContext: func(_ net.Listener) context.Context { return ctx },
ReadTimeout: 5 * time.Second, // Server read timeout
WriteTimeout: 15 * time.Second, // Server write timeout
Handler: newHTTPHandler(), // HTTP handler
}
// Starting the HTTP server in a new goroutine.
go func() {
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
log.Fatalf("HTTP server ListenAndServe: %v", err)
}
}()
// Wait for interrupt signal to gracefully shut down the server with a timeout context.
<-ctx.Done()
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel() // Ensures cancel function is called on exit
if err := srv.Shutdown(shutdownCtx); err != nil {
log.Fatalf("HTTP server Shutdown: %v", err) // Log fatal errors during server shutdown
}
return nil
}
// newHTTPHandler configures the HTTP routes and integrates OpenTelemetry.
func newHTTPHandler() http.Handler {
mux := http.NewServeMux() // HTTP request multiplexer
// Wrapping the handler function with OpenTelemetry instrumentation.
handleFunc := func(pattern string, handlerFunc func(http.ResponseWriter, *http.Request)) {
handler := otelhttp.WithRouteTag(pattern, http.HandlerFunc(handlerFunc))
mux.Handle(pattern, handler) // Associate pattern with handler
}
// Registering route handlers with OpenTelemetry instrumentation
handleFunc("/rolldice", rolldice)
handleFunc("/roll_with_link", rollWithLink)
handler := otelhttp.NewHandler(mux, "/")
return handler
}
// rolldice handles the /rolldice route by generating a random dice roll.
func rolldice(w http.ResponseWriter, r *http.Request) {
_, span := otel.Tracer("example-tracer").Start(r.Context(), "rolldice")
defer span.End()
// Generating a random dice roll.
randGen := rand.New(rand.NewSource(time.Now().UnixNano()))
roll := 1 + randGen.Intn(6)
// Writing the dice roll to the response.
fmt.Fprintf(w, "Rolled a dice: %d\n", roll)
}
// rollWithLink handles the /roll_with_link route by creating a new span with a link to the parent span.
func rollWithLink(w http.ResponseWriter, r *http.Request) {
ctx, span := otel.Tracer("example-tracer").Start(r.Context(), "roll_with_link")
defer span.End()
/**
* Create a new span for rolldice with a link to the parent span.
* This link helps correlate events that are related but not directly a parent-child relationship.
*/
rollDiceCtx, rollDiceSpan := otel.Tracer("example-tracer").Start(ctx, "rolldice",
trace.WithLinks(trace.Link{
SpanContext: span.SpanContext(),
Attributes: nil,
}),
)
defer rollDiceSpan.End()
// Generating a random dice roll linked to the parent context.
randGen := rand.New(rand.NewSource(time.Now().UnixNano()))
roll := 1 + randGen.Intn(6)
// Writing the linked dice roll to the response.
fmt.Fprintf(w, "Dice roll result (with link): %d\n", roll)
// Use the rollDiceCtx if needed.
_ = rollDiceCtx
}
```
## Exporter configuration (exporter.go)
`exporter.go` is responsible for setting up the OpenTelemetry tracing exporter. It defines the resource attributes, initializes the tracer provider, and configures the OTLP (OpenTelemetry Protocol) exporter with the appropriate endpoint and headers, allowing your app to send telemetry data to Axiom.
```go
package main
import (
"context" // For managing request-scoped values, cancellation signals, and deadlines.
"crypto/tls" // For configuring TLS options, like certificates.
// OpenTelemetry imports for setting up tracing and exporting telemetry data.
"go.opentelemetry.io/otel" // Core OpenTelemetry APIs for managing tracers.
"go.opentelemetry.io/otel/attribute" // For creating and managing trace attributes.
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" // HTTP trace exporter for OpenTelemetry Protocol (OTLP).
"go.opentelemetry.io/otel/propagation" // For managing context propagation formats.
"go.opentelemetry.io/otel/sdk/resource" // For defining resources that describe an entity producing telemetry.
"go.opentelemetry.io/otel/sdk/trace" // For configuring tracing, like sampling and processors.
semconv "go.opentelemetry.io/otel/semconv/v1.24.0" // Semantic conventions for resource attributes.
)
const (
serviceName = "axiom-go-otel" // Name of the service for tracing.
serviceVersion = "0.1.0" // Version of the service.
otlpEndpoint = "AXIOM_DOMAIN" // OTLP collector endpoint.
bearerToken = "Bearer API_TOKEN" // Authorization token.
deploymentEnvironment = "production" // Deployment environment.
)
func SetupTracer() (func(context.Context) error, error) {
ctx := context.Background()
return InstallExportPipeline(ctx) // Setup and return the export pipeline for telemetry data.
}
func Resource() *resource.Resource {
// Defines resource with service name, version, and environment.
return resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
semconv.ServiceVersionKey.String(serviceVersion),
attribute.String("environment", deploymentEnvironment),
)
}
func InstallExportPipeline(ctx context.Context) (func(context.Context) error, error) {
// Sets up OTLP HTTP exporter with endpoint, headers, and TLS config.
exporter, err := otlptracehttp.New(ctx,
otlptracehttp.WithEndpoint(otlpEndpoint),
otlptracehttp.WithHeaders(map[string]string{
"Authorization": bearerToken,
"X-AXIOM-DATASET": "DATASET_NAME",
}),
otlptracehttp.WithTLSClientConfig(&tls.Config{}),
)
if err != nil {
return nil, err
}
// Configures the tracer provider with the exporter and resource.
tracerProvider := trace.NewTracerProvider(
trace.WithBatcher(exporter),
trace.WithResource(Resource()),
)
otel.SetTracerProvider(tracerProvider)
// Sets global propagator to W3C Trace Context and Baggage.
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
))
return tracerProvider.Shutdown, nil // Returns a function to shut down the tracer provider.
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
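For example, a minimal sketch of reading these values from environment variables at startup instead of hard-coding them in `exporter.go`. The variable names (`AXIOM_DOMAIN`, `AXIOM_API_TOKEN`, `AXIOM_DATASET`) and the helper function are illustrative, not part of the sample above:
```go
// Illustrative only: resolve the Axiom settings from the environment.
// The variable names below are examples, not a required convention.
package main

import "os"

func axiomSettingsFromEnv() (endpoint, bearer, dataset string) {
	endpoint = os.Getenv("AXIOM_DOMAIN")              // for example, api.axiom.co
	bearer = "Bearer " + os.Getenv("AXIOM_API_TOKEN") // API token with ingest permissions
	dataset = os.Getenv("AXIOM_DATASET")              // target dataset name
	return endpoint, bearer, dataset
}
```
You can then pass these values to `otlptracehttp.WithEndpoint` and `otlptracehttp.WithHeaders` instead of the constants.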
## Run the app
To run the app, execute both files together with `go run main.go exporter.go`. Once your app is running, traces collected by your app are exported to Axiom. The server starts on the specified port, and you can interact with it by sending requests to the `/rolldice` endpoint.
For example, if you use port `8080`, your app is accessible locally at `http://localhost:8080/rolldice`. This URL directs your requests to the `/rolldice` endpoint of the server running on your local machine.
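To generate a few traces, send requests to the instrumented endpoints, for example with `curl`:
```bash
# Each request produces spans that are exported to Axiom
curl http://localhost:8080/rolldice
curl http://localhost:8080/roll_with_link
```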
## Observe the telemetry data in Axiom
After deploying your app, you can log into your Axiom account to view and analyze the telemetry data. As you interact with your app, traces will be collected and exported to Axiom, where you can monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, providing insights into the performance and behavior of your app.
## Send data from an existing Golang project
### Manual Instrumentation
Manual instrumentation in Go involves managing spans within your code to track operations and events. This method offers precise control over what is instrumented and how spans are configured.
1. Initialize the tracer:
Use the OpenTelemetry API to obtain a tracer instance. This tracer will be used to start and manage spans.
```go
tracer := otel.Tracer("serviceName")
```
2. Create and manage spans:
Manually start spans before the operations you want to trace and ensure they are ended after the operations complete.
```go
ctx, span := tracer.Start(context.Background(), "operationName")
defer span.End()
// Perform the operation here
```
3. Annotate spans:
Enhance spans with additional information using attributes or events to provide more context about the traced operation.
```go
span.SetAttributes(attribute.String("key", "value"))
span.AddEvent("eventName", trace.WithAttributes(attribute.String("key", "value")))
```
### Automatic Instrumentation
Automatic instrumentation in Go uses libraries and integrations that automatically create spans for operations, simplifying the addition of observability to your app.
1. Instrumentation libraries:
Use `OpenTelemetry-contrib` libraries designed for automatic instrumentation of standard Go frameworks and libraries, such as `net/http`.
```go
import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
```
2. Wrap handlers and clients:
Automatically instrument HTTP servers and clients by wrapping them with OpenTelemetry’s instrumentation. For HTTP servers, wrap your handlers with `otelhttp.NewHandler`. A client-side sketch follows after this list.
```go
http.Handle("/path", otelhttp.NewHandler(handler, "operationName"))
```
3. Minimal code changes:
After setting up automatic instrumentation, no further changes are required for tracing standard operations. The instrumentation takes care of starting, managing, and ending spans.
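For the client side mentioned in step 2, here is a minimal sketch using `otelhttp.NewTransport` from the same contrib package. It assumes a `ctx` that carries the current span; the client variable and target URL are illustrative:
```go
// Wrapping the default transport makes every outbound request produce a client span.
client := &http.Client{
	Transport: otelhttp.NewTransport(http.DefaultTransport),
}
// Pass a context that carries the active span so the outbound call is correlated with it.
req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
resp, err := client.Do(req)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()
```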
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ---------------------------- | --------------------------------------- | ------------------------------------------------------------------- |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| | attributes.custom.user\_agent.original | Original user agent string, providing client software and OS. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| | attributes.net.sock.peer.addr | Socket peer address, indicating the IP version used. |
| | attributes.net.sock.peer.port | Socket peer port number. |
| | attributes.custom.net.protocol.version | Protocol version used in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| **Resource Attributes** | | |
| | resource.environment | Environment where the trace was captured, for example, production. |
| | attributes.custom.http.wrote\_bytes | Number of bytes written in the HTTP response. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK. |
| | telemetry.sdk.name | Name of the telemetry SDK. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
### List of imported libraries
### OpenTelemetry Go SDK
**`go.opentelemetry.io/otel`**
This is the core SDK for OpenTelemetry in Go. It provides the necessary tools to create and manage telemetry data (traces, metrics, and logs).
### OTLP Trace Exporter
**`go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`**
This package allows your app to export telemetry data over HTTP using the OpenTelemetry Protocol (OTLP). It’s important for sending data to Axiom or any other backend that supports OTLP.
### Resource and Trace Packages
**`go.opentelemetry.io/otel/sdk/resource`** and **`go.opentelemetry.io/otel/sdk/trace`**
These packages help define the properties of your telemetry data, such as service name and version, and manage trace data within your app.
### Semantic Conventions
**`go.opentelemetry.io/otel/semconv/v1.24.0`**
This package provides standardized schema URLs and attributes, ensuring consistency across different OpenTelemetry implementations.
### Tracing API
**`go.opentelemetry.io/otel/trace`**
This package offers the API for tracing. It enables you to create spans, record events, and manage context propagation in your app.
### HTTP Instrumentation
**`go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp`**
Used for instrumenting HTTP clients and servers. It automatically records data about HTTP requests and responses, which is essential for web apps.
### Propagators
**`go.opentelemetry.io/otel/propagation`**
This package provides the ability to propagate context and trace information across service boundaries.
# Send data from Java app using OpenTelemetry
Source: https://axiom.co/docs/guides/opentelemetry-java
This page explains how to configure a Java app using the Java OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry provides a unified approach to collecting telemetry data from your Java applications. This page demonstrates how to configure OpenTelemetry in a Java app to send telemetry data to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Install JDK 11](https://www.oracle.com/java/technologies/java-se-glance.html) or later.
* [Install Maven](https://maven.apache.org/download.cgi).
* Use your own app written in Java or the provided `DiceRollerApp.java` sample.
## Create project
To create a Java project, run the Maven archetype command in the terminal:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=MyProject -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```
This command creates a new project in a directory named `MyProject` with a standard directory structure.
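The quickstart archetype typically generates a layout like the following; the generated `App.java` and `AppTest.java` are placeholders that you can keep or remove:
```bash
MyProject/
├── pom.xml
└── src/
    ├── main/java/com/example/App.java
    └── test/java/com/example/AppTest.java
```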
## Create core app
`DiceRollerApp.java` is the core of the sample app. It simulates rolling a dice and demonstrates the usage of OpenTelemetry for tracing. The app includes two methods: one for a simple dice roll and another that demonstrates the usage of span links to establish relationships between spans across different traces.
Create the `DiceRollerApp.java` in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.util.Random;
public class DiceRollerApp {
private static final Tracer tracer;
static {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
tracer = openTelemetry.getTracer(DiceRollerApp.class.getName());
}
public static void main(String[] args) {
rollDice();
rollDiceWithLink();
}
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Rolled a dice: " + roll);
} finally {
span.end();
}
}
private static void rollDiceWithLink() {
Span parentSpan = tracer.spanBuilder("rollWithLink").startSpan();
try (Scope parentScope = parentSpan.makeCurrent()) {
Span childSpan = tracer.spanBuilder("rolldice")
.addLink(parentSpan.getSpanContext())
.startSpan();
try (Scope childScope = childSpan.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Dice roll result (with link): " + roll);
} finally {
childSpan.end();
}
} finally {
parentSpan.end();
}
}
}
```
## Configure OpenTelemetry
`OtelConfiguration.java` sets up the OpenTelemetry SDK and configures the exporter to send data to Axiom. It initializes the tracer provider, sets up the Axiom exporter, and configures the resource attributes.
Create the `OtelConfiguration.java` file in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import java.util.concurrent.TimeUnit;
public class OtelConfiguration {
private static final String SERVICE_NAME = "YOUR_SERVICE_NAME";
private static final String SERVICE_VERSION = "YOUR_SERVICE_VERSION";
private static final String OTLP_ENDPOINT = "https://AXIOM_DOMAIN/v1/traces";
private static final String BEARER_TOKEN = "Bearer API_TOKEN";
private static final String AXIOM_DATASET = "DATASET_NAME";
public static OpenTelemetry initializeOpenTelemetry() {
Resource resource = Resource.getDefault()
.merge(Resource.create(Attributes.of(
AttributeKey.stringKey("service.name"), SERVICE_NAME,
AttributeKey.stringKey("service.version"), SERVICE_VERSION
)));
OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
.setEndpoint(OTLP_ENDPOINT)
.addHeader("Authorization", BEARER_TOKEN)
.addHeader("X-Axiom-Dataset", AXIOM_DATASET)
.build();
SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(spanExporter)
.setScheduleDelay(100, TimeUnit.MILLISECONDS)
.build())
.setResource(resource)
.build();
OpenTelemetrySdk openTelemetry = OpenTelemetrySdk.builder()
.setTracerProvider(sdkTracerProvider)
.buildAndRegisterGlobal();
Runtime.getRuntime().addShutdownHook(new Thread(sdkTracerProvider::close));
return openTelemetry;
}
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
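For example, a minimal sketch that resolves these values from environment variables instead of hard-coding them in `OtelConfiguration.java`. The variable names (`AXIOM_DOMAIN`, `AXIOM_API_TOKEN`, `AXIOM_DATASET`) are illustrative:
```java
// Illustrative only: replace the hard-coded constants with environment lookups.
private static final String OTLP_ENDPOINT =
    "https://" + System.getenv("AXIOM_DOMAIN") + "/v1/traces";
private static final String BEARER_TOKEN =
    "Bearer " + System.getenv("AXIOM_API_TOKEN");
private static final String AXIOM_DATASET =
    System.getenv("AXIOM_DATASET");
```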
## Configure project
The `pom.xml` file defines the project structure and dependencies for Maven. It includes the necessary OpenTelemetry libraries and configures the build process.
Update the `pom.xml` file in the root of your project directory with the following content:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>axiom-otel-java</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <opentelemetry.version>1.18.0</opentelemetry.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-api</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-sdk</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-otlp</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <source>11</source>
          <target>11</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.0.0-M5</version>
        <configuration>
          <skipTests>true</skipTests>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.DiceRollerApp</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
## Run the instrumented app
To run your Java app with OpenTelemetry instrumentation, follow these steps:
1. Clean the project and download dependencies:
```bash
mvn clean
```
2. Compile the code:
```bash
mvn compile
```
3. Package the app:
```bash
mvn package
```
4. Run the app:
```bash
java -jar target/axiom-otel-java-1.0-SNAPSHOT.jar
```
The app executes the `rollDice()` and `rollDiceWithLink()` methods, generates telemetry data, and sends the data to Axiom.
## Observe telemetry data in Axiom
As the app runs, it sends traces to Axiom. To view the traces:
1. In Axiom, click the **Stream** tab.
2. Click your dataset.
Axiom provides a dynamic dashboard for visualizing and analyzing your OpenTelemetry traces. This dashboard offers insights into the performance and behavior of your app. To view the dashboard:
1. In Axiom, click the **Dashboards** tab.
2. Look for the OpenTelemetry traces dashboard or create a new one.
3. Customize the dashboard to show the event data and visualizations most relevant to the app.
## Send data from an existing Java project
### Manual instrumentation
Manual instrumentation gives fine-grained control over which parts of the app are traced and what information is included in the traces. It requires adding OpenTelemetry-specific code to the app.
Set up OpenTelemetry. Create a configuration class to initialize OpenTelemetry with necessary settings, exporters, and span processors.
```java
// OtelConfiguration.java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
public class OtelConfiguration {
public static OpenTelemetry initializeOpenTelemetry() {
OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
.setEndpoint("https://AXIOM_DOMAIN/v1/traces")
.addHeader("Authorization", "Bearer API_TOKEN")
.addHeader("X-Axiom-Dataset", "DATASET_NAME")
.build();
SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(spanExporter).build())
.build();
return OpenTelemetrySdk.builder()
.setTracerProvider(tracerProvider)
.buildAndRegisterGlobal();
}
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Spans represent units of work in the app. They have a start time and duration and can be nested.
```java
// DiceRollerApp.java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.util.Random;
public class DiceRollerApp {
private static final Tracer tracer;
static {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
tracer = openTelemetry.getTracer("com.example.DiceRollerApp");
}
public static void main(String[] args) {
Span mainSpan = tracer.spanBuilder("Main").startSpan();
try (Scope scope = mainSpan.makeCurrent()) {
rollDice();
} finally {
mainSpan.end();
}
}
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
// Simulate dice roll
int result = new Random().nextInt(6) + 1;
System.out.println("Rolled a dice: " + result);
} finally {
span.end();
}
}
}
```
Custom spans are manually managed to provide detailed insights into specific functions or methods within the app.
Spans can be annotated with attributes and events to provide more context about the operation being performed.
```java
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
span.setAttribute("roll.value", roll);
span.addEvent("Dice rolled");
System.out.println("Rolled a dice: " + roll);
} finally {
span.end();
}
}
```
Span links allow association of spans that aren’t in a parent-child relationship.
```java
private static void rollDiceWithLink() {
Span parentSpan = tracer.spanBuilder("rollWithLink").startSpan();
try (Scope parentScope = parentSpan.makeCurrent()) {
Span childSpan = tracer.spanBuilder("rolldice")
.addLink(parentSpan.getSpanContext())
.startSpan();
try (Scope childScope = childSpan.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Dice roll result (with link): " + roll);
} finally {
childSpan.end();
}
} finally {
parentSpan.end();
}
}
```
### Automatic instrumentation
Automatic instrumentation simplifies adding telemetry to a Java app by automatically capturing data from supported libraries and frameworks.
Ensure all necessary OpenTelemetry libraries are included in your Maven `pom.xml`.
```xml
<dependencies>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
    <version>{opentelemetry_version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>{opentelemetry_version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-instrumentation-httpclient</artifactId>
    <version>{instrumentation_version}</version>
  </dependency>
</dependencies>
```
Dependencies include the OpenTelemetry SDK and instrumentation libraries that automatically capture data from common Java libraries.
Implement an initialization class to configure the OpenTelemetry SDK along with auto-instrumentation for frameworks used by the app.
```java
// AutoInstrumentationSetup.java
package com.example;
import io.opentelemetry.instrumentation.httpclient.HttpClientInstrumentation;
import io.opentelemetry.api.OpenTelemetry;
public class AutoInstrumentationSetup {
public static void setup() {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
HttpClientInstrumentation.instrument(openTelemetry);
}
}
```
Auto-instrumentation is initialized early in the app lifecycle to ensure all relevant activities are automatically captured.
```java
// Main.java
package com.example;
public class Main {
public static void main(String[] args) {
AutoInstrumentationSetup.setup(); // Initialize OpenTelemetry auto-instrumentation
DiceRollerApp.main(args); // Start the application logic
}
}
```
## Reference
### List of OpenTelemetry trace fields
| Field category | Field name | Description |
| ------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------- |
| General trace information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| Operational details | | |
| | duration | Time taken for the operation, typically in microseconds or milliseconds. |
| | kind | Type of span. For example, `server`, `internal`. |
| | name | Name of the span, often a high-level title for the operation. |
| Scope and instrumentation | | |
| | scope.name | Instrumentation scope, typically the Java package or app component. For example, `com.example.DiceRollerApp`. |
| Service attributes | | |
| | service.name | Name of the service generating the trace. For example, `axiom-java-otel`. |
| | service.version | Version of the service generating the trace. For example, `0.1.0`. |
| Telemetry SDK attributes | | |
| | telemetry.sdk.language | Programming language of the SDK used for telemetry, typically `java`. |
| | telemetry.sdk.name | Name of the telemetry SDK. For example, `opentelemetry`. |
| | telemetry.sdk.version | Version of the telemetry SDK used in the tracing setup. For example, `1.18.0`. |
### List of imported libraries
The Java implementation of OpenTelemetry uses the following key libraries.
`io.opentelemetry:opentelemetry-api`
This package provides the core OpenTelemetry API for Java. It defines the interfaces and classes that developers use to instrument their apps manually. This includes the `Tracer`, `Span`, and `Context` classes, which are fundamental to creating and managing traces in your app. The API is designed to be stable and consistent, allowing developers to instrument their code without tying it to a specific implementation.
`io.opentelemetry:opentelemetry-sdk`
The opentelemetry-sdk package is the reference implementation of the OpenTelemetry API for Java. It provides the actual capability behind the API interfaces, including span creation, context propagation, and resource management. This SDK is highly configurable and extensible, allowing developers to customize how telemetry data is collected, processed, and exported. It’s the core component that brings OpenTelemetry to life in a Java app.
`io.opentelemetry:opentelemetry-exporter-otlp`
This package provides an exporter that sends telemetry data using the OpenTelemetry Protocol (OTLP). OTLP is the standard protocol for transmitting telemetry data in the OpenTelemetry ecosystem. This exporter allows Java applications to send their collected traces, metrics, and logs to any backend that supports OTLP, such as Axiom. The use of OTLP ensures broad compatibility and a standardized way of transmitting telemetry data across different systems and platforms.
`io.opentelemetry:opentelemetry-sdk-extension-autoconfigure`
This extension package provides auto-configuration capabilities for the OpenTelemetry SDK. It allows developers to configure the SDK using environment variables or system properties, making it easier to set up and deploy OpenTelemetry-instrumented applications in different environments. This is particularly useful for containerized applications or those running in cloud environments where configuration through environment variables is common.
`io.opentelemetry:opentelemetry-sdk-trace`
This package is part of the OpenTelemetry SDK and focuses specifically on tracing capability. It includes important classes like `SdkTracerProvider` and `BatchSpanProcessor`. The `SdkTracerProvider` is responsible for creating and managing tracers, while the `BatchSpanProcessor` efficiently processes and exports spans in batches, similar to its Node.js counterpart. This batching mechanism helps optimize the performance of trace data export in OpenTelemetry-instrumented Java applications.
`io.opentelemetry:opentelemetry-sdk-common`
This package provides common capability used across different parts of the OpenTelemetry SDK. It includes utilities for working with attributes, resources, and other shared concepts in OpenTelemetry. This package helps ensure consistency across the SDK and simplifies the implementation of cross-cutting concerns in telemetry data collection and processing.
# OpenTelemetry using Next.js
Source: https://axiom.co/docs/guides/opentelemetry-nextjs
This guide demonstrates how to configure OpenTelemetry in a Next.js app to send telemetry data to Axiom.
OpenTelemetry provides a standardized way to collect and export telemetry data from your Next.js apps. This guide walks you through the process of configuring OpenTelemetry in a Next.js app to send traces to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
* [Install Node.js version 14](https://nodejs.org/en/download/package-manager) or newer.
* An existing Next.js app. Alternatively, use the provided example project.
## Initial setup
For initial setup, choose one of the following options:
* Use the `@vercel/otel` package for easier setup.
* Set up your app without the `@vercel/otel` package.
### Initial setup with @vercel/otel
To use the `@vercel/otel` package for easier setup, run the following command to install the dependencies:
```bash
npm install @vercel/otel @opentelemetry/exporter-trace-otlp-http @opentelemetry/sdk-trace-node
```
Create an `instrumentation.ts` file in the root of your project with the following content:
```js
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { registerOTel } from '@vercel/otel';
export function register() {
registerOTel({
serviceName: 'nextjs-app',
spanProcessors: [
new SimpleSpanProcessor(
new OTLPTraceExporter({
url: `https://${process.env.AXIOM_DOMAIN}/v1/traces`,
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': `${process.env.DATASET_NAME}`,
},
})
),
],
});
}
```
Add the `API_TOKEN`, `DATASET_NAME`, and `AXIOM_DOMAIN` environment variables to your `.env` file. For example:
```bash
API_TOKEN=xaat-123
DATASET_NAME=my-dataset
AXIOM_DOMAIN=api.axiom.co
```
### Initial setup without @vercel/otel
To set up your app without the `@vercel/otel` package, run the following command to install the dependencies:
```bash
npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/resources @opentelemetry/semantic-conventions @opentelemetry/sdk-trace-node
```
Create an `instrumentation.ts` file in the root of your project with the following content:
```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
export function register() {
const sdk = new NodeSDK({
resource: new Resource({
[SEMRESATTRS_SERVICE_NAME]: 'nextjs-app',
}),
spanProcessor: new SimpleSpanProcessor(
new OTLPTraceExporter({
url: `https://${process.env.AXIOM_DOMAIN}/v1/traces`,
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': process.env.DATASET_NAME,
},
})
),
});
sdk.start();
}
```
Add the `API_TOKEN`, `DATASET_NAME`, and `AXIOM_DOMAIN` environment variables to your `.env` file. For example:
```bash
API_TOKEN=xaat-123
DATASET_NAME=my-dataset
AXIOM_DOMAIN=api.axiom.co
```
## Set up the Next.js environment
### layout.tsx
In the `src/app/layout.tsx` file, import and call the `register` function from the `instrumentation` module:
```js
import { register } from '../../instrumentation';
register();
export default function RootLayout({ children }: Readonly<{ children: React.ReactNode }>) {
return (
<html>
<body>{children}</body>
</html>
);
}
```
This file sets up the root layout for your Next.js app and initializes the OpenTelemetry instrumentation by calling the `register` function.
### route.ts
Create a `route.ts` file in `src/app/api/rolldice/` to handle HTTP GET requests to the `/rolldice` API endpoint:
```js
// src/app/api/rolldice/route.ts
import { NextResponse } from 'next/server';
function getRandomNumber(min: number, max: number): number {
return Math.floor(Math.random() * (max - min + 1) + min);
}
export async function GET() {
const diceRoll = getRandomNumber(1, 6);
return NextResponse.json(diceRoll.toString());
}
```
This file defines a route handler for the `/rolldice` endpoint, which returns a random number between 1 and 6.
### next.config.js
Configure the `next.config.js` file to enable instrumentation and resolve the `tls` module:
```js
module.exports = {
experimental: {
// Enable the instrumentation hook for collecting telemetry data
instrumentationHook: true,
},
webpack: (config, { isServer }) => {
if (!isServer) {
config.resolve.fallback = {
// Disable the 'tls' module on the client side
tls: false,
};
}
return config;
},
};
```
This configuration enables the instrumentation hook and resolves the `tls` module for the client-side build.
### tsconfig.json
Add the following options to your `tsconfig.json` file to ensure compatibility with OpenTelemetry and Next.js:
```json
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"plugins": [
{
"name": "next"
}
],
"paths": {
"@/*": ["./src/*"]
}
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
"exclude": ["node_modules"]
}
```
This file configures the TypeScript compiler options for your Next.js app.
## Project structure
After completing the steps above, the project structure of your Next.js app is the following:
```bash
my-nextjs-app/
├── src/
│ ├── app/
│ │ ├── api/
│ │ │ └── rolldice/
│ │ │ └── route.ts
│ │ ├── page.tsx
│ │ └── layout.tsx
│ └── ...
├── instrumentation.ts
├── next.config.js
├── tsconfig.json
└── ...
```
## Run the app and observe traces in Axiom
Use the following command to run your Next.js app with OpenTelemetry instrumentation in development mode:
```bash
npm run dev
```
This command starts the Next.js development server, and the OpenTelemetry instrumentation automatically collects traces. As you interact with your app, traces are sent to Axiom where you can monitor and analyze your app’s performance and behavior.
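For example, you can generate traces by calling the dice-rolling route with `curl`. This assumes the dev server runs on the default port 3000; with the app router, the `route.ts` file created above is served at `/api/rolldice`:
```bash
curl http://localhost:3000/api/rolldice
```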
In Axiom, go to the **Stream** tab and click your dataset. This page displays the traces sent to Axiom and lets you monitor and analyze your app’s performance and behavior.
Go to the **Dashboards** tab and click **OpenTelemetry Traces**. This pre-built traces dashboard provides further insights into the performance and behavior of your app.
## Send data from an existing Next.js project
### Manual instrumentation
Manual instrumentation allows you to create, configure, and manage spans and traces, providing detailed control over telemetry data collection at specific points within the app.
1. Set up and retrieve a tracer from the OpenTelemetry API. This tracer starts and manages spans within your app components or API routes.
```js
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('nextjs-app');
```
2. Manually start a span at the beginning of significant operations or transactions within your Next.js app and ensure you end it appropriately. This approach is for tracing specific custom events or operations not automatically captured by instrumentations.
```js
const span = tracer.startSpan('operationName');
try {
// Perform your operation here
} finally {
span.end();
}
```
3. Enhance the span with additional information such as user details or operation outcomes, which can provide deeper insights when analyzing telemetry data.
```js
span.setAttribute('user_id', userId);
span.setAttribute('operation_status', 'success');
```
### Automatic instrumentation
Automatic instrumentation uses the capabilities of OpenTelemetry to automatically capture telemetry data for standard operations such as HTTP requests and responses.
1. Use the OpenTelemetry Node SDK to configure your app to automatically instrument supported libraries and frameworks. Set up `NodeSDK` in an `instrumentation.ts` file in your project.
```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
export function register() {
const sdk = new NodeSDK({
resource: new Resource({ [SEMRESATTRS_SERVICE_NAME]: 'nextjs-app' }),
spanProcessor: new BatchSpanProcessor(
new OTLPTraceExporter({
url: `https://${process.env.AXIOM_DOMAIN}/v1/traces`,
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': `${process.env.DATASET_NAME}`,
},
})
),
});
sdk.start();
}
```
2. Include the necessary OpenTelemetry instrumentation packages to automatically capture telemetry from Node.js libraries like HTTP and any other middleware used by Next.js (see the sketch after this list).
3. Call the `register` function from the `instrumentation.ts` within your app startup file or before your app starts handling traffic to initialize the OpenTelemetry instrumentation.
```js
// In pages/_app.js or an equivalent entry point
import { register } from '../instrumentation';
register();
```
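For step 2, here is a minimal sketch of registering the auto-instrumentation meta-package with the `NodeSDK`. This mirrors the Node.js guide later in these docs; whether it covers all middleware in your stack is an assumption you should verify:
```js
// Sketch: extend the NodeSDK configuration with automatic instrumentations.
// Install first: npm install @opentelemetry/auto-instrumentations-node
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

const sdk = new NodeSDK({
  // ...exporter, resource, and span processor as shown in the previous step...
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
```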
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| --------------------------- | ------------------------------------- | ------------------------------------------------------------------ |
| General Trace Information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| HTTP Attributes | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.target | Specific target of the HTTP request. |
| Custom Attributes | | |
| | attributes.custom\["next.route"] | Custom attribute defining the Next.js route. |
| | attributes.custom\["next.rsc"] | Indicates if React Server Components are used. |
| | attributes.custom\["next.span\_name"] | Custom name of the span within Next.js context. |
| | attributes.custom\["next.span\_type"] | Type of the Next.js span, describing the operation context. |
| Resource Process Attributes | | |
| | resource.process.pid | Process ID of the Node.js app. |
| | resource.process.runtime.description | Description of the runtime environment. For example, Node.js. |
| | resource.process.runtime.name | Name of the runtime environment. For example, nodejs. |
| | resource.process.runtime.version | Version of the runtime environment. For example, 18.17.0. |
| | resource.process.executable.name | Executable name running the process. For example, next-server. |
| Resource Host Attributes | | |
| | resource.host.arch | Architecture of the host machine. For example, arm64. |
| | resource.host.name | Name of the host machine. For example, MacBook-Pro.local. |
| Operational Details | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| Scope Attributes | | |
| | scope.name | Name of the scope for the operation. For example, next.js. |
| | scope.version | Version of the scope. For example, 0.0.1. |
| Service Attributes | | |
| | service.name | Name of the service generating the trace. For example, nextjs-app. |
| Telemetry SDK Attributes | | |
| | telemetry.sdk.language | Language of the telemetry SDK. For example, nodejs. |
| | telemetry.sdk.name | Name of the telemetry SDK. For example, opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK. For example, 1.23.0. |
### List of imported libraries
`@opentelemetry/api`
The core API for OpenTelemetry in JavaScript, providing the necessary interfaces and utilities for tracing, metrics, and context propagation. In the context of Next.js, it allows developers to manually instrument custom spans, manipulate context, and access the active span if needed.
`@opentelemetry/exporter-trace-otlp-http`
This exporter enables your Next.js app to send trace data over HTTP to any backend that supports the OTLP (OpenTelemetry Protocol), such as Axiom. Using OTLP ensures compatibility with a wide range of observability tools and standardizes the data export process.
`@opentelemetry/resources`
This defines the Resource which represents the entity producing telemetry. In Next.js, Resources can be used to describe the app (for example, service name, version) and are attached to all exported telemetry, aiding in identifying data in backend systems.
`@opentelemetry/sdk-node`
The OpenTelemetry SDK for Node.js which provides a comprehensive set of tools for instrumenting Node.js apps. It includes automatic instrumentation for popular libraries and frameworks, as well as APIs for manual instrumentation. In the Next.js setup, it’s used to configure and initialize the OpenTelemetry SDK.
`@opentelemetry/semantic-conventions`
A set of standard attributes and conventions for describing resources, spans, and metrics in OpenTelemetry. By adhering to these conventions, your Next.js app’s telemetry data becomes more consistent and interoperable with other OpenTelemetry-compatible tools and systems.
`@vercel/otel`
A package provided by Vercel that simplifies the setup and configuration of OpenTelemetry for Next.js apps deployed on the Vercel platform. It abstracts away some of the boilerplate code and provides a more streamlined integration experience.
# OpenTelemetry using Node.js
Source: https://axiom.co/docs/guides/opentelemetry-nodejs
This guide demonstrates how to configure OpenTelemetry in a Node.js app to send telemetry data to Axiom.
OpenTelemetry provides a [unified approach to collecting telemetry data](https://opentelemetry.io/docs/languages/js/instrumentation/) from your Node.js and TypeScript apps. This guide demonstrates how to configure OpenTelemetry in a Node.js app to send telemetry data to Axiom using OpenTelemetry SDK.
## Prerequisites
To configure OpenTelemetry in a Node.js app for sending telemetry data to Axiom, certain prerequisites are necessary. These include:
* Node.js: version 14 or newer.
* Node.js app: Use your own app written in Node.js, or you can start with the provided **`app.ts`** sample.
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Core Application (app.ts)
`app.ts` is the core of the app. It uses Express.js to create a simple web server. The server has an endpoint `/rolldice` that returns a random number, simulating a basic API. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```js
/*app.ts*/
// Importing OpenTelemetry instrumentation for tracing
import './instrumentation';
import { trace, context } from '@opentelemetry/api';
// Importing Express.js: A minimal and flexible Node.js web app framework
import express from 'express';
// Setting up the server port: Use the PORT environment variable or default to 8080
const PORT = parseInt(process.env.PORT || '8080');
const app = express();
// Get the tracer from the global tracer provider
const tracer = trace.getTracer('node-traces');
/**
* Function to generate a random number between min (inclusive) and max (exclusive).
* @param min - The minimum number (inclusive).
* @param max - The maximum number (exclusive).
* @returns A random number between min and max.
*/
function getRandomNumber(min: number, max: number): number {
return Math.floor(Math.random() * (max - min) + min);
}
// Defining a route handler for '/rolldice' that returns a random dice roll
app.get('/rolldice', (req, res) => {
const span = trace.getSpan(context.active());
/**
* Spans can be created with zero or more Links to other Spans that are related.
* Links allow creating connections between different traces
*/
const rollDiceSpan = tracer.startSpan('roll_dice_span', {
links: span ? [{ context: span.spanContext() }] : [],
});
// Set the rollDiceSpan as the currently active span
context.with(trace.setSpan(context.active(), rollDiceSpan), () => {
const diceRoll = getRandomNumber(1, 6).toString();
res.send(diceRoll);
rollDiceSpan.end();
});
});
// Defining a route handler for '/roll_with_link' that creates a parent span and calls '/rolldice'
app.get('/roll_with_link', (req, res) => {
/**
* A common scenario is to correlate one or more traces with the current span.
* This can help in tracing and debugging complex interactions across different parts of the app.
*/
const parentSpan = tracer.startSpan('parent_span');
// Set the parentSpan as the currently active span
context.with(trace.setSpan(context.active(), parentSpan), () => {
const diceRoll = getRandomNumber(1, 6).toString();
res.send(`Dice roll result (with link): ${diceRoll}`);
parentSpan.end();
});
});
// Starting the server on the specified PORT and logging the listening message
app.listen(PORT, () => {
console.log(`Listening for requests on http://localhost:${PORT}`);
});
```
## Exporter (instrumentation.ts)
`instrumentation.ts` sets up the OpenTelemetry instrumentation. It configures the OTLP (OpenTelemetry Protocol) exporters for traces and initializes the Node SDK with automatic instrumentation capabilities.
```js
/*instrumentation.ts*/
// Importing necessary OpenTelemetry packages including the core SDK, auto-instrumentations, OTLP trace exporter, and batch span processor
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
// Initialize OTLP trace exporter with the endpoint URL and headers
const traceExporter = new OTLPTraceExporter({
url: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization': 'Bearer API_TOKEN',
'X-Axiom-Dataset': 'DATASET_NAME'
},
});
// Creating a resource to identify your service in traces
const resource = new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'node traces',
});
// Configuring the OpenTelemetry Node SDK
const sdk = new NodeSDK({
// Adding a BatchSpanProcessor to batch and send traces
spanProcessor: new BatchSpanProcessor(traceExporter),
// Registering the resource to the SDK
resource: resource,
// Adding auto-instrumentations to automatically collect trace data
instrumentations: [getNodeAutoInstrumentations()],
});
// Starting the OpenTelemetry SDK to begin collecting telemetry data
sdk.start();
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
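For example, a minimal sketch of building the exporter in `instrumentation.ts` from environment variables instead of hard-coded values. The variable names (`AXIOM_DOMAIN`, `AXIOM_API_TOKEN`, `AXIOM_DATASET`) are illustrative:
```js
// Illustrative only: read the Axiom settings from the environment.
const traceExporter = new OTLPTraceExporter({
  url: `https://${process.env.AXIOM_DOMAIN}/v1/traces`,
  headers: {
    'Authorization': `Bearer ${process.env.AXIOM_API_TOKEN}`,
    'X-Axiom-Dataset': process.env.AXIOM_DATASET,
  },
});
```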
## Installing the Dependencies
Navigate to the root directory of your project and run the following command to install the required dependencies:
```bash
npm install
```
This command installs all the necessary packages listed in the `package.json` file shown [below](/guides/opentelemetry-nodejs#setting-up-typescript-development-environment).
## Setting Up TypeScript Development Environment
To run the TypeScript app, you need to set up a TypeScript development environment. This includes adding a `package.json` file to manage your project’s dependencies and scripts, and a `tsconfig.json` file to manage TypeScript compiler options.
### Add `package.json`
Create a `package.json` file in the root of your project with the following content:
```json
{
"name": "typescript-traces",
"version": "1.0.0",
"description": "",
"main": "app.js",
"scripts": {
"build": "tsc",
"start": "ts-node app.ts",
"dev": "ts-node-dev --respawn app.ts"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@opentelemetry/api": "^1.6.0",
"@opentelemetry/api-logs": "^0.46.0",
"@opentelemetry/auto-instrumentations-node": "^0.39.4",
"@opentelemetry/exporter-metrics-otlp-http": "^0.45.0",
"@opentelemetry/exporter-metrics-otlp-proto": "^0.45.1",
"@opentelemetry/exporter-trace-otlp-http": "^0.45.0",
"@opentelemetry/sdk-logs": "^0.46.0",
"@opentelemetry/sdk-metrics": "^1.20.0",
"@opentelemetry/sdk-node": "^0.45.1",
"express": "^4.18.2"
},
"devDependencies": {
"@types/express": "^4.17.21",
"@types/node": "^16.18.71",
"ts-node": "^10.9.2",
"ts-node-dev": "^2.0.0",
"tsc-watch": "^4.6.2",
"typescript": "^4.9.5"
}
}
```
### Add `tsconfig.json`
Create a `tsconfig.json` file in the root of your project with the following content:
```json
{
"compilerOptions": {
"target": "es2016",
"module": "commonjs",
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"skipLibCheck": true
}
}
```
This configuration file specifies how the TypeScript compiler should transpile TypeScript files into JavaScript.
## Running the Instrumented Application
To run your Node.js app with OpenTelemetry instrumentation, make sure your API token and dataset name are set in the `instrumentation.ts` file.
### In Development Mode
For development purposes, especially when you need automatic restarts upon file changes, use:
```bash
npm run dev
```
This command starts the app with OpenTelemetry instrumentation in development mode using `ts-node-dev`. It sets up the exporter for tracing and restarts the server automatically whenever you make changes to the files.
### In Production Mode
To run the app in production mode, you need to first build the TypeScript files into JavaScript. Run the following command to build your application:
```bash
npm run build
```
This command compiles the TypeScript files to JavaScript based on the settings specified in `tsconfig.json`. Once the build process is complete, you can start your app in production mode with:
```bash
npm start
```
The server starts on the specified port, and you can interact with it by sending requests to the `/rolldice` endpoint.
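For example, assuming the default port `8080` from `app.ts`:
```bash
# Produces a basic trace
curl http://localhost:8080/rolldice
# Produces a trace whose spans are linked across routes
curl http://localhost:8080/roll_with_link
```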
## Observe the telemetry data in Axiom
As you interact with your app, traces will be collected and exported to Axiom, where you can monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, providing insights into the performance and behavior of your app.
## Send data from an existing Node project
### Manual Instrumentation
Manual instrumentation in Node.js requires adding code to create and manage spans around the code blocks you want to trace.
1. Initialize Tracer:
Import and configure a tracer in your Node.js app. Use the tracer configured in your instrumentation setup (instrumentation.ts).
```js
// Assuming OpenTelemetry SDK is already configured
const { trace } = require('@opentelemetry/api');
const tracer = trace.getTracer('example-tracer');
```
2. Create Spans:
Wrap the code blocks that you want to trace with spans. Start and end these spans within your code.
```js
const span = tracer.startSpan('operation_name');
try {
// Your code here
span.end();
} catch (error) {
span.recordException(error);
span.end();
}
```
3. Annotate Spans:
Add metadata and logs to your spans for the trace data.
```js
span.setAttribute('key', 'value');
span.addEvent('event name', { eventKey: 'eventValue' });
```
### Automatic Instrumentation
Automatic instrumentation in Node.js simplifies adding telemetry data to your app. It uses pre-built libraries to automatically instrument common frameworks and libraries.
1. Install Instrumentation Libraries:
Use OpenTelemetry packages that automatically instrument common Node.js frameworks and libraries.
```bash
npm install @opentelemetry/auto-instrumentations-node
```
2. Instrument your app:
Configure your app to use these libraries, which automatically generate spans for standard operations.
```js
// In your instrumentation setup (instrumentation.ts)
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const sdk = new NodeSDK({
// ... other configurations ...
instrumentations: [getNodeAutoInstrumentations()]
});
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------------- | --------------------------------------- | ------------------------------------------------------------ |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| **Resource Process Attributes** | | |
| | resource.process.command | Command line string used to start the process. |
| | resource.process.command\_args | List of command line arguments used in starting the process. |
| | resource.process.executable.name | Name of the executable running the process. |
| | resource.process.executable.path | Path to the executable running the process. |
| | resource.process.owner | Owner of the process. |
| | resource.process.pid | Process ID. |
| | resource.process.runtime.description | Description of the runtime environment. |
| | resource.process.runtime.name | Name of the runtime environment. |
| | resource.process.runtime.version | Version of the runtime environment. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK. |
| | telemetry.sdk.name | Name of the telemetry SDK. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
### List of imported libraries
The `instrumentation.ts` file imports the following libraries:
### **`@opentelemetry/sdk-node`**
This package is the core SDK for OpenTelemetry in Node.js. It provides the primary interface for configuring and initializing OpenTelemetry in a Node.js app. It includes functionalities for managing traces and context propagation. The SDK is designed to be extensible, allowing for custom configurations and integration with different telemetry backends like Axiom.
### **`@opentelemetry/auto-instrumentations-node`**
This package offers automatic instrumentation for Node.js apps. It simplifies the process of instrumenting various common Node.js libraries and frameworks. By using this package, developers can automatically collect telemetry data (such as traces) from their apps without needing to manually instrument each library or API call. This is important for apps with complex dependencies, as it ensures comprehensive and consistent telemetry collection across the app.
### **`@opentelemetry/exporter-trace-otlp-proto`**
The **`@opentelemetry/exporter-trace-otlp-proto`** package provides an exporter that sends trace data using the OpenTelemetry Protocol (OTLP). OTLP is the standard protocol for transmitting telemetry data in the OpenTelemetry ecosystem. This exporter allows Node.js apps to send their collected traces to any backend that supports OTLP, such as Axiom. The use of OTLP ensures broad compatibility and a standardized way of transmitting telemetry data.
### **`@opentelemetry/sdk-trace-base`**
Contained within this package is the **`BatchSpanProcessor`**, among other foundational elements for tracing in OpenTelemetry. The **`BatchSpanProcessor`** is a component that collects and processes spans (individual units of trace data). As the name suggests, it batches these spans before sending them to the configured exporter (in this case, the `OTLPTraceExporter`). This batching mechanism is efficient as it reduces the number of outbound requests by aggregating multiple spans into fewer batches. It helps in the performance and scalability of trace data export in an OpenTelemetry-instrumented app.
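As a rough sketch of how these pieces fit together (your `instrumentation.ts` already does this; the environment variable names below are placeholders for your own token and dataset):
```js
// Minimal sketch: batch spans and export them to Axiom over OTLP/HTTP.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-proto');

const exporter = new OTLPTraceExporter({
  url: 'https://AXIOM_DOMAIN/v1/traces',
  headers: {
    Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
    'X-Axiom-Dataset': process.env.AXIOM_DATASET,
  },
});

// The BatchSpanProcessor aggregates finished spans and hands them to the exporter in batches.
const sdk = new NodeSDK({
  spanProcessor: new BatchSpanProcessor(exporter),
});

sdk.start();
```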
# Send OpenTelemetry data from a Python app to Axiom
Source: https://axiom.co/docs/guides/opentelemetry-python
This guide explains how to send OpenTelemetry data from a Python app to Axiom using the [Python OpenTelemetry SDK](https://opentelemetry.io/docs/languages/python/instrumentation/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* Install Python version 3.7 or higher.
## Install required dependencies
To install the required Python dependencies, run the following code in your terminal:
```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-flask opentelemetry-exporter-otlp Flask
```
### Install dependencies with requirements file
Alternatively, if you use a `requirements.txt` file in your Python project, add these lines:
```txt
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-flask
opentelemetry-exporter-otlp
Flask
```
Then run the following code in your terminal to install dependencies:
```bash
pip install -r requirements.txt
```
## Create an `app.py` file
Create an `app.py` file with the following content. This file creates a basic HTTP server using Flask. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```python
# app.py
from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry import trace
from random import randint
import exporter
# Creating a Flask app instance
app = Flask(__name__)
# Automatically instruments Flask app to enable tracing
FlaskInstrumentor().instrument_app(app)
# Retrieving a tracer from the custom exporter
tracer = exporter.service1_tracer
@app.route("/rolldice")
def roll_dice(parent_span=None):
# Starting a new span for the dice roll. If a parent span is provided, link to its span context.
with tracer.start_as_current_span("roll_dice_span",
links=[trace.Link(parent_span.get_span_context())] if parent_span else None) as span:
# Spans can be created with zero or more Links to other Spans that are related.
# Links allow creating connections between different traces
return str(roll())
@app.route("/roll_with_link")
def roll_with_link():
# Starting a new 'parent_span' which may later link to other spans
with tracer.start_as_current_span("parent_span") as parent_span:
# A common scenario is to correlate one or more traces with the current span.
# This can help in tracing and debugging complex interactions across different parts of the app.
result = roll_dice(parent_span)
return f"Dice roll result (with link): {result}"
def roll():
# Function to generate a random number between 1 and 6
return randint(1, 6)
if __name__ == "__main__":
# Starting the Flask server on the specified PORT and enabling debug mode
app.run(port=8080, debug=True)
```
## Create an `exporter.py` file
Create an `exporter.py` file with the following content. This file establishes an OpenTelemetry configuration and sets up an exporter that sends trace data to Axiom.
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Define the service name resource for the tracer.
resource = Resource(attributes={
SERVICE_NAME: "NAME_OF_SERVICE" # Replace `NAME_OF_SERVICE` with the name of the service you want to trace.
})
# Create a TracerProvider with the defined resource for creating tracers.
provider = TracerProvider(resource=resource)
# Configure the OTLP/HTTP Span Exporter with Axiom headers and endpoint.
otlp_exporter = OTLPSpanExporter(
endpoint="https://AXIOM_DOMAIN/v1/traces",
headers={
"Authorization": "Bearer API_TOKEN",
"X-Axiom-Dataset": "DATASET_NAME"
}
)
# Create a BatchSpanProcessor with the OTLP exporter to batch and send trace spans.
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
# Set the TracerProvider as the global tracer provider.
trace.set_tracer_provider(provider)
# Define a tracer for external use in different parts of the app.
service1_tracer = trace.get_tracer("service1")
```
Replace `NAME_OF_SERVICE` with the name of the service you want to trace. This is important for identifying and categorizing trace data, particularly in systems with multiple services.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
For more information on the libraries imported by the `exporter.py` file, see the [Reference](#reference) below.
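To avoid hard-coding credentials in `exporter.py`, you can read these values from environment variables instead. A minimal sketch, assuming the variable names `AXIOM_API_TOKEN`, `AXIOM_DATASET`, and `AXIOM_DOMAIN` (pick whatever names suit your setup):
```python
import os
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Read Axiom settings from the environment instead of hard-coding them.
otlp_exporter = OTLPSpanExporter(
    endpoint=f"https://{os.environ.get('AXIOM_DOMAIN', 'api.axiom.co')}/v1/traces",
    headers={
        "Authorization": f"Bearer {os.environ['AXIOM_API_TOKEN']}",
        "X-Axiom-Dataset": os.environ["AXIOM_DATASET"],
    },
)
```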
## Run the app
Run the following code in your terminal to run the Python project:
```bash macOS/Linux
python3 app.py
```
```bash Windows
py -3 app.py
```
In your browser, go to `http://127.0.0.1:8080/rolldice` to interact with your Python app. Each time you load the page, the app displays a random number and sends the collected traces to Axiom.
## Observe the telemetry data in Axiom
In Axiom, go to the **Stream** tab and click your dataset. This page displays the traces sent to Axiom and enables you to monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
In Axiom, go to the **Dashboards** tab and click **OpenTelemetry Traces (python)**. This pre-built traces dashboard provides further insights into the performance and behavior of your app.
## Send data from an existing Python project
### Manual instrumentation
Manual instrumentation in Python with OpenTelemetry involves adding code to create and manage spans around the blocks of code you want to trace. This approach allows for precise control over the trace data.
1. Import and configure a tracer at the start of your main Python file. For example, use the tracer from the `exporter.py` configuration.
```python
import exporter
tracer = exporter.service1_tracer
```
2. Enclose the code blocks in your app that you want to trace within spans. Start and end these spans in your code.
```python
with tracer.start_as_current_span("operation_name"):
    # Code you want to trace
    pass
```
3. Add relevant metadata and logs to your spans to enrich the trace data, providing more context for your data.
```python
with tracer.start_as_current_span("operation_name") as span:
span.set_attribute("key", "value")
```
### Automatic instrumentation
Automatic instrumentation in Python with OpenTelemetry simplifies the process of adding telemetry data to your app. It uses pre-built libraries that automatically instrument the frameworks and libraries.
1. Install the OpenTelemetry packages designed for specific frameworks like Flask or Django.
```bash
pip install opentelemetry-instrumentation-flask
```
2. Configure your app to use these libraries that automatically generate spans for standard operations.
```python
from opentelemetry.instrumentation.flask import FlaskInstrumentor
# This assumes `app` is your Flask app.
FlaskInstrumentor().instrument_app(app)
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------- | --------------------------------------- | ------------------------------------------------------ |
| Unique Identifiers | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| Timestamps | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| HTTP Attributes | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| Network Attributes | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| Operational Details | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
### List of imported libraries
The `exporter.py` file imports the following libraries:
`from opentelemetry import trace`
This module creates and manages trace data in your app. It creates spans and tracers which track the execution flow and performance of your app.
`from opentelemetry.sdk.trace import TracerProvider`
`TracerProvider` acts as a container for the configuration of your app’s tracing behavior. It allows you to define how spans are generated and processed, essentially serving as the central point for managing trace creation and propagation in your app.
`from opentelemetry.sdk.trace.export import BatchSpanProcessor`
`BatchSpanProcessor` is responsible for batching spans before they are exported. This is an important aspect of efficient trace data management as it aggregates multiple spans into fewer network requests, reducing the overhead on your app’s performance and the tracing backend.
`from opentelemetry.sdk.resources import Resource, SERVICE_NAME`
The `Resource` class is used to describe your app’s service attributes, such as its name, version, and environment. This contextual information is attached to the traces and helps in identifying and categorizing trace data, making it easier to filter and analyze in your monitoring setup.
`from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter`
The `OTLPSpanExporter` is responsible for sending your app’s trace data to a backend that supports the OTLP such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
# Send OpenTelemetry data from a Ruby on Rails app to Axiom
Source: https://axiom.co/docs/guides/opentelemetry-ruby
This guide explains how to send OpenTelemetry data from a Ruby on Rails App to Axiom using the Ruby OpenTelemetry SDK.
This guide provides detailed steps on how to configure OpenTelemetry in a Ruby application to send telemetry data to Axiom using the [OpenTelemetry Ruby SDK](https://opentelemetry.io/docs/languages/ruby/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
* Install [Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails application
1. Create a new Rails app using the `rails new myapp` command.
2. Go to the app directory with the `cd myapp` command.
3. Open the `Gemfile` and add the following OpenTelemetry packages:
```ruby
gem 'opentelemetry-api'
gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-rails'
gem 'opentelemetry-instrumentation-http'
gem 'opentelemetry-instrumentation-active_record', require: false
gem 'opentelemetry-instrumentation-all'
```
Install the dependencies by running `bundle install`.
## Configure the OpenTelemetry exporter
In the `initializers` folder of your Rails app, create a new file called `opentelemetry.rb`, and then add the following OpenTelemetry exporter configuration:
```ruby
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'
OpenTelemetry::SDK.configure do |c|
c.service_name = 'ruby-traces' # Set your service name
c.use_all # Or specify individual instrumentation you need
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
OpenTelemetry::Exporter::OTLP::Exporter.new(
endpoint: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization' => 'Bearer API_TOKEN',
'X-AXIOM-DATASET' => 'DATASET_NAME'
}
)
)
)
end
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
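To keep the token out of your source code, you can read these values from environment variables in the initializer. A minimal sketch, assuming the variable names `AXIOM_API_TOKEN`, `AXIOM_DATASET`, and `AXIOM_DOMAIN`:
```ruby
# Read Axiom settings from the environment instead of hard-coding them.
OpenTelemetry::Exporter::OTLP::Exporter.new(
  endpoint: "https://#{ENV.fetch('AXIOM_DOMAIN', 'api.axiom.co')}/v1/traces",
  headers: {
    'Authorization' => "Bearer #{ENV.fetch('AXIOM_API_TOKEN')}",
    'X-AXIOM-DATASET' => ENV.fetch('AXIOM_DATASET')
  }
)
```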
## Run the instrumented application
Run your Ruby on Rails application with OpenTelemetry instrumentation.
### In development mode
Start the Rails server using the `rails server` command. The server will start on the default port (usually 3000), and you can access your application by visiting `http://localhost:3000` in your web browser.
As you interact with your application, OpenTelemetry automatically collects telemetry data and sends it to Axiom using the configured OTLP exporter.
### In production mode
For production, precompile assets and run migrations if necessary. Start the server with `RAILS_ENV=production bin/rails server`. This setup ensures your Ruby application is instrumented to send traces to Axiom, using OpenTelemetry for observability.
## Observe the telemetry data in Axiom
As you interact with your application, traces are collected and exported to Axiom, allowing you to monitor, analyze, and gain insights into your application’s performance and behavior.
1. In your Axiom account, click the **Datasets** or **Stream** tab.
2. Select your dataset from the list.
3. From the list of fields, click on the **trace\_id** to view your spans.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, offering a deeper understanding of your application’s performance and behavior.
1. In your Axiom account, select **Dashboards**, and click on the traces dashboard named after your dataset.
2. View the dashboard which displays your total traces, incoming spans, average span duration, errors, slowest operations, and top 10 span errors across services.
## Send data from an existing Ruby app
### Manual instrumentation
Manual instrumentation allows users to define and manage telemetry data collection points within their Ruby applications, providing granular control over what is traced.
1. Initialize Tracer. Use the OpenTelemetry API to obtain a tracer from the global tracer provider. This tracer will be used to start and manage spans.
```ruby
tracer = OpenTelemetry.tracer_provider.tracer('my-tracer')
```
2. Manually start a span at the beginning of the block of code you want to trace and end it when the operation completes. This is useful for gathering detailed data about specific operations.
```ruby
span = tracer.start_span('operation_name')
begin
# Perform operation
rescue => e
span.record_exception(e)
span.status = OpenTelemetry::Trace::Status.error("Operation failed")
ensure
span.finish
end
```
3. Enhance spans with custom attributes to provide additional context about the traced operations, helping in debugging and monitoring performance.
```ruby
span.set_attribute("user_id", user.id)
span.add_event("query_executed", attributes: { "query" => sql_query })
```
### Automatic instrumentation
Automatic instrumentation in Ruby uses OpenTelemetry’s libraries to automatically generate telemetry data for common operations, such as HTTP requests and database queries.
1. Set up the OpenTelemetry SDK with the necessary instrumentation libraries in your Ruby application. This typically involves modifying the Gemfile and adding an initializer to set up the SDK and auto-instrumentation.
```ruby
# In config/initializers/opentelemetry.rb
OpenTelemetry::SDK.configure do |c|
c.service_name = 'ruby-traces'
c.use_all # Automatically use all available instrumentation
end
```
2. Ensure your Gemfile includes gems for the automatic instrumentation of the frameworks and libraries your application uses.
```ruby
gem 'opentelemetry-instrumentation-rails'
gem 'opentelemetry-instrumentation-http'
gem 'opentelemetry-instrumentation-active_record'
```
After setting up, no additional manual changes are required for basic telemetry data collection. The instrumentation libraries handle the creation and management of telemetry data automatically.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------------- | ------------------------------------ | ------------------------------------------------------------- |
| **General Trace Information** | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| **HTTP Attributes** | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| **User Agent** | | |
| | attributes.http.user\_agent | User agent string, providing client software and OS. |
| **Custom Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom.identifier | Path to a file or identifier in the trace context. |
| | attributes.custom.layout | Layout used in the rendering process of a view or template. |
| **Resource Process Attributes** | | |
| | resource.process.command | Command line string used to start the process. |
| | resource.process.pid | Process ID. |
| | resource.process.runtime.description | Description of the runtime environment. |
| | resource.process.runtime.name | Name of the runtime environment. |
| | resource.process.runtime.version | Version of the runtime environment. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (e.g., server, client, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| **Code Attributes** | | |
| | attributes.code.function | Function or method being executed. |
| | attributes.code.namespace | Namespace or module that includes the function. |
| **Scope Attributes** | | |
| | scope.name | Name of the scope for the operation. |
| | scope.version | Version of the scope. |
| **Service Attributes** | | |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| | service.instance.id | Unique identifier for the instance of the service. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, e.g., ruby. |
| | telemetry.sdk.name | Name of the telemetry SDK, e.g., opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK, e.g., 1.4.1. |
### List of imported libraries
`gem 'opentelemetry-api'`
The `opentelemetry-api` gem provides the core OpenTelemetry API for Ruby. It defines the basic concepts and interfaces for distributed tracing, such as spans, tracers, and context propagation. This gem is essential for instrumenting your Ruby application with OpenTelemetry.
`gem 'opentelemetry-sdk'`
The `opentelemetry-sdk` gem is the OpenTelemetry SDK for Ruby. It provides the implementation of the OpenTelemetry API, including the tracer provider, span processors, and exporters. This gem is responsible for managing the lifecycle of spans and sending them to the specified backend.
`gem 'opentelemetry-exporter-otlp'`
The `opentelemetry-exporter-otlp` gem is an exporter that sends trace data to a backend that supports the OpenTelemetry Protocol (OTLP), such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP or gRPC, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
`gem 'opentelemetry-instrumentation-rails'`
The `opentelemetry-instrumentation-rails` gem provides automatic instrumentation for Ruby on Rails applications. It integrates with various aspects of a Rails application, such as controllers, views, and database queries, to capture relevant trace data without requiring manual instrumentation. This gem simplifies the process of adding tracing to your Rails application.
`gem 'opentelemetry-instrumentation-http'`
The `opentelemetry-instrumentation-http` gem provides automatic instrumentation for HTTP requests made using the `Net::HTTP` library. It captures trace data for outgoing HTTP requests, including request headers, response status, and timing information. This gem helps in tracing the external dependencies of your application.
`gem 'opentelemetry-instrumentation-active_record', require: false`
The `opentelemetry-instrumentation-active_record` gem provides automatic instrumentation for ActiveRecord, the Object-Relational Mapping (ORM) library used in Ruby on Rails. It captures trace data for database queries, including the SQL statements executed and their duration. This gem helps in identifying performance bottlenecks related to database interactions.
`gem 'opentelemetry-instrumentation-all'`
The `opentelemetry-instrumentation-all` gem is a meta-gem that includes all the available instrumentation libraries for OpenTelemetry in Ruby. It provides a convenient way to install and configure multiple instrumentation libraries at once, covering various aspects of your application, such as HTTP requests, database queries, and external libraries. This gem simplifies the setup process and ensures comprehensive tracing coverage for your Ruby application.
# Axiom transport for Pino logger
Source: https://axiom.co/docs/guides/pino
This page explains how to send data from a Node.js app to Axiom through Pino.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
npm install @axiomhq/pino
```
## Create Pino logger
The example below creates a Pino logger with Axiom configured:
```ts
import pino from 'pino';
const logger = pino(
{ level: 'info' },
pino.transport({
target: '@axiomhq/pino',
options: {
dataset: process.env.AXIOM_DATASET,
token: process.env.AXIOM_TOKEN,
},
}),
);
```
After setting up the Axiom transport for Pino, use the logger as usual:
```js
logger.info('Hello from Pino!');
```
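Pino’s standard API is unchanged, so you can also attach structured fields or bind fields to every line with a child logger. The field names below are only illustrative:
```js
// Attach structured fields to a single log line
logger.info({ userId: 42, action: 'login' }, 'user logged in');

// Bind fields to every line emitted by a child logger
const requestLogger = logger.child({ requestId: 'req-123' });
requestLogger.warn('payment retry scheduled');
```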
## Examples
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples/pino).
# Send data from Python app to Axiom
Source: https://axiom.co/docs/guides/python
This page explains how to send data from a Python app to Axiom.
To send data from a Python app to Axiom, use the Axiom Python SDK.
The Axiom Python SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-py).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
```shell Linux / MacOS
python3 -m pip install axiom-py
```
```shell Windows
py -m pip install axiom-py
```
```shell pip
pip3 install axiom-py
```
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
You can also configure the client using options passed to the client constructor:
```py
import axiom_py
client = axiom_py.Client("API_TOKEN")
```
## Use client
```py
import axiom_py
import rfc3339
from datetime import datetime,timedelta
client = axiom_py.Client()
client.ingest_events(
dataset="DATASET_NAME",
events=[
{"foo": "bar"},
{"bar": "baz"},
])
client.query(r"['DATASET_NAME'] | where foo == 'bar' | limit 100")
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/client_example.py).
## Example with `AxiomHandler`
The example below uses `AxiomHandler` to send logs from the `logging` module to Axiom:
```python
import axiom_py
from axiom_py.logging import AxiomHandler
import logging
def setup_logger():
client = axiom_py.Client()
handler = AxiomHandler(client, "DATASET_NAME")
logging.getLogger().addHandler(handler)
```
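After `setup_logger()` runs, records emitted through the standard `logging` module flow through the handler to Axiom. For example:
```python
setup_logger()
logging.getLogger(__name__).info("hello from the logging module")
```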
For a full example, see [GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/logger_example.py).
## Example with `structlog`
The example below uses [structlog](https://github.com/hynek/structlog) to send logs to Axiom:
```python
import structlog
from axiom_py import Client
from axiom_py.structlog import AxiomProcessor
def setup_logger():
client = Client()
structlog.configure(
processors=[
# ...
structlog.processors.add_log_level,
structlog.processors.TimeStamper(fmt="iso", key="_time"),
AxiomProcessor(client, "DATASET_NAME"),
# ...
]
)
```
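Once configured, logging through structlog works as usual, and each event also passes through the `AxiomProcessor`. For example (the extra field is only illustrative):
```python
log = structlog.get_logger()
log.info("hello from structlog", request_id="req-123")
```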
For a full example, see [GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/structlog_example.py).
# Send data from Rust app to Axiom
Source: https://axiom.co/docs/guides/rust
This page explains how to send data from a Rust app to Axiom.
To send data from a Rust app to Axiom, use the Axiom Rust SDK.
The Axiom Rust SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-rs).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
Add the following to your `Cargo.toml`:
```toml
[dependencies]
axiom-rs = "VERSION"
```
Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-rs/releases) page. For example, `0.11.0`.
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
## Use client
```rust
use axiom_rs::Client;
use serde_json::json;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build your client by providing an API token:
let client = Client::builder()
.with_token("API_TOKEN")
.build()?;
// Alternatively, auto-configure the client from the environment variable AXIOM_TOKEN:
let client = Client::new()?;
client.datasets().create("DATASET_NAME", "").await?;
client
.ingest(
"DATASET_NAME",
vec![json!({
"foo": "bar",
})],
)
.await?;
let res = client
.query(r#"['DATASET_NAME'] | where foo == "bar" | limit 100"#, None)
.await?;
println!("{:?}", res);
client.datasets().delete("DATASET_NAME").await?;
Ok(())
}
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-rs/tree/main/examples).
## Optional features
You can use the following [Cargo features](https://doc.rust-lang.org/stable/cargo/reference/features.html#the-features-section); the example after the list shows how to select them in `Cargo.toml`:
* `default-tls`: Provides TLS support to connect over HTTPS. Enabled by default.
* `native-tls`: Enables TLS functionality provided by `native-tls`.
* `rustls-tls`: Enables TLS functionality provided by `rustls`.
* `tokio`: Enables usage with the `tokio` runtime. Enabled by default.
* `async-std`: Enables usage with the `async-std` runtime.
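For example, to opt out of the defaults and enable specific features, select them in `Cargo.toml` like this (the combination shown is just an illustration):
```toml
[dependencies]
axiom-rs = { version = "VERSION", default-features = false, features = ["rustls-tls", "tokio"] }
```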
# Send logs from Apache Log4j to Axiom
Source: https://axiom.co/docs/guides/send-logs-from-apache-log4j
This guide explains how to configure Apache Log4j to send logs to Axiom
Log4j is a Java logging framework developed by the Apache Software Foundation and widely used in the Java community. This page covers how to get started with Log4j, configure it to forward log messages to Fluentd, and send logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Install JDK 11](https://www.oracle.com/java/technologies/java-se-glance.html) or later
* [Install Maven](https://maven.apache.org/download.cgi)
* [Install Fluentd](https://www.fluentd.org/download)
* [Install Docker](https://docs.docker.com/get-docker/)
## Configure Log4j
Log4j is a flexible and powerful logging framework for Java applications. To use Log4j in your project, add the necessary dependencies to your `pom.xml` file. The dependencies required for Log4j include `log4j-core`, `log4j-api`, and `log4j-slf4j2-impl` for logging capability, and `jackson-databind` for JSON support.
1. Create a new Maven project:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=log4j-axiom-test -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
cd log4j-axiom-test
```
2. Open the `pom.xml` file and replace its contents with the following:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>log4j-axiom-test</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>log4j-axiom-test</name>
  <url>http://maven.apache.org</url>

  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <log4j.version>2.19.0</log4j.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-slf4j2-impl</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.13.0</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.App</mainClass>
                </transformer>
              </transformers>
              <createDependencyReducedPom>false</createDependencyReducedPom>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
This `pom.xml` file includes the necessary Log4j dependencies and configures the Maven Shade plugin to create an executable JAR file.
3. Create a new file named `log4j2.xml` in your root directory and add the following content:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Send JSON-formatted log events to Fluentd on localhost:24224 -->
    <Socket name="Fluentd" host="localhost" port="24224" protocol="TCP">
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Socket>
    <!-- Also print log events to standard output -->
    <Console name="Console" target="SYSTEM_OUT">
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Fluentd"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```
This configuration sets up two appenders:
* A Socket appender that sends logs to Fluentd running on `localhost:24224`. It uses JSON format for the log messages, which makes the logs easier to parse and analyze later in Axiom.
* A Console appender that prints logs to the standard output.
## Set log level
Log4j supports various log levels, allowing you to control the verbosity of your logs. The main log levels, in order of increasing severity, are the following:
* `TRACE`: Fine-grained information for debugging.
* `DEBUG`: General debugging information.
* `INFO`: Informational messages.
* `WARN`: Indications of potential problems.
* `ERROR`: Error events that might still allow the app to continue running.
* `FATAL`: Severe error events that might cause the app to terminate.
In the configuration above, the root logger level is set to `INFO`, which means it logs messages at `INFO` level and above (`WARN`, `ERROR`, and `FATAL`).
To set log levels programmatically and see them in action, create a new file named `App.java` in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.Level;
import java.util.Random;
public class App {
// Define loggers for different purposes
private static final Logger logger = LogManager.getLogger(App.class);
private static final Logger securityLogger = LogManager.getLogger("SecurityLogger");
private static final Logger performanceLogger = LogManager.getLogger("PerformanceLogger");
public static void main(String[] args) {
// Configure logging levels programmatically
configureLogging();
Random random = new Random();
// Infinite loop to continuously generate log events
while (true) {
try {
// Simulate various logging scenarios
simulateUserActivity(random);
simulateDatabaseOperations(random);
simulateSecurityEvents(random);
simulatePerformanceMetrics(random);
// Simulate a critical error with 10% probability
if (random.nextInt(10) == 0) {
throw new RuntimeException("Simulated critical error");
}
Thread.sleep(1000); // Sleep for 1 second
} catch (InterruptedException e) {
logger.warn("Sleep interrupted", e);
} catch (Exception e) {
logger.error("Critical error occurred", e);
} finally {
// Clear thread context after each iteration
ThreadContext.clearAll();
}
}
}
private static void configureLogging() {
// Set root logger level to DEBUG
Configurator.setRootLevel(Level.DEBUG);
// Set custom logger levels
Configurator.setLevel("SecurityLogger", Level.INFO);
Configurator.setLevel("PerformanceLogger", Level.TRACE);
}
// Simulate user activities and log them
private static void simulateUserActivity(Random random) {
String[] users = {"Alice", "Bob", "Charlie", "David"};
String[] actions = {"login", "logout", "view_profile", "update_settings"};
String user = users[random.nextInt(users.length)];
String action = actions[random.nextInt(actions.length)];
// Add user and action to thread context
ThreadContext.put("user", user);
ThreadContext.put("action", action);
// Log different user actions with appropriate levels
switch (action) {
case "login":
logger.info("User logged in successfully");
break;
case "logout":
logger.info("User logged out");
break;
case "view_profile":
logger.debug("User viewed their profile");
break;
case "update_settings":
logger.info("User updated their settings");
break;
}
}
// Simulate database operations and log them
private static void simulateDatabaseOperations(Random random) {
String[] operations = {"select", "insert", "update", "delete"};
String operation = operations[random.nextInt(operations.length)];
long duration = random.nextInt(1000);
// Add operation and duration to thread context
ThreadContext.put("operation", operation);
ThreadContext.put("duration", String.valueOf(duration));
// Log slow database operations as warnings
if (duration > 500) {
logger.warn("Slow database operation detected");
} else {
logger.debug("Database operation completed");
}
// Simulate database connection loss with 5% probability
if (random.nextInt(20) == 0) {
logger.error("Database connection lost", new SQLException("Connection timed out"));
}
}
// Simulate security events and log them
private static void simulateSecurityEvents(Random random) {
String[] events = {"failed_login", "password_change", "role_change", "suspicious_activity"};
String event = events[random.nextInt(events.length)];
ThreadContext.put("security_event", event);
// Log different security events with appropriate levels
switch (event) {
case "failed_login":
securityLogger.warn("Failed login attempt");
break;
case "password_change":
securityLogger.info("User changed their password");
break;
case "role_change":
securityLogger.info("User role was modified");
break;
case "suspicious_activity":
securityLogger.error("Suspicious activity detected", new SecurityException("Potential breach attempt"));
break;
}
}
// Simulate performance metrics and log them
private static void simulatePerformanceMetrics(Random random) {
String[] metrics = {"cpu_usage", "memory_usage", "disk_io", "network_latency"};
String metric = metrics[random.nextInt(metrics.length)];
double value = random.nextDouble() * 100;
// Add metric and value to thread context
ThreadContext.put("metric", metric);
ThreadContext.put("value", String.format("%.2f", value));
// Log high resource usage as warnings
if (value > 80) {
performanceLogger.warn("High resource usage detected");
} else {
performanceLogger.trace("Performance metric recorded");
}
}
// Custom exception classes for simulating errors
private static class SQLException extends Exception {
public SQLException(String message) {
super(message);
}
}
private static class SecurityException extends Exception {
public SecurityException(String message) {
super(message);
}
}
}
```
This class demonstrates the use of different log levels and also shows how to add context to your logs using `ThreadContext`.
## Forward log messages to Fluentd
Fluentd is a popular open-source data collector used to forward logs from Log4j to Axiom. The Log4j configuration is already set up to send logs to Fluentd using the Socket appender. Fluentd acts as a unified logging layer, allowing you to collect, process, and forward logs from various sources to different destinations.
### Configure the Fluentd.conf file
To configure Fluentd, create a configuration file. Create a new file named `fluentd.conf` in your project root directory with the following content:
```xml
<source>
  @type forward
  bind 0.0.0.0
  port 24224
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key timeMillis
      time_type string
      time_format %Q
    </pattern>
  </parse>
</source>
<filter **>
  @type record_transformer
  <record>
    tag java.log4j
  </record>
</filter>
<match **>
  @type http
  endpoint https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest
  headers {"Authorization":"Bearer API_TOKEN"}
  json_array true
  <buffer>
    @type memory
    flush_interval 5s
    chunk_limit_size 5m
    total_limit_size 10m
  </buffer>
  <format>
    @type json
  </format>
</match>
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
This configuration does the following:
1. Sets up a forward input plugin to receive logs from Log4j.
2. Adds a `java.log4j` tag to all logs.
3. Forwards the logs to Axiom using the HTTP output plugin.
### Create the Dockerfile
To simplify the deployment of the Java app and Fluentd, use Docker. Create a new file named `Dockerfile` in your project root directory with the following content:
```dockerfile
# Build stage
FROM maven:3.8.1-openjdk-11-slim AS build
WORKDIR /usr/src/app
COPY pom.xml .
COPY src ./src
COPY log4j2.xml .
RUN mvn clean package
# Runtime stage
FROM openjdk:11-jre-slim
WORKDIR /usr/src/app
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ruby \
ruby-dev \
build-essential && \
gem install fluentd --no-document && \
fluent-gem install fluent-plugin-multi-format-parser && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY --from=build /usr/src/app/target/log4j-axiom-test-1.0-SNAPSHOT.jar .
COPY fluentd.conf /etc/fluent/fluent.conf
COPY log4j2.xml .
# Create startup script
RUN echo '#!/bin/sh\n\
fluentd -c /etc/fluent/fluent.conf &\n\
sleep 5\n\
java -Dlog4j.configurationFile=log4j2.xml -jar log4j-axiom-test-1.0-SNAPSHOT.jar\n'\
> /usr/src/app/start.sh && chmod +x /usr/src/app/start.sh
EXPOSE 24224
CMD ["/usr/src/app/start.sh"]
```
This Dockerfile does the following:
1. Builds the Java app.
2. Sets up a runtime environment with Java and Fluentd.
3. Copies the necessary files and configurations.
4. Creates a startup script to run both Fluentd and the Java app.
### Build and run the Dockerfile
1. To build the Docker image, run the following command in your project root directory:
```bash
docker build -t log4j-axiom-test .
```
2. Run the container with the following:
```bash
docker run -p 24224:24224 log4j-axiom-test
```
This command starts the container, running both Fluentd and your Java app.
## View logs in Axiom
Now that your app is running and sending logs to Axiom, you can view them in the Axiom dashboard. Log in to your Axiom account and go to the dataset you specified in the Fluentd configuration.
Logs appear in real-time, with various log levels and context information added.
## Log4j logging best practices
* Use appropriate log levels: Reserve ERROR and FATAL for serious issues, use WARN for potential problems, and INFO for general app flow.
* Include context: Add relevant information to your logs using `ThreadContext` or by including important variables in your log messages.
* Use structured logging: Log in JSON format to make the logs easier to parse and analyze later using [APL](https://axiom.co/docs/apl/introduction).
* Log actionable information: Include enough detail in your logs to understand and potentially reproduce issues.
* Use parameterized logging: Instead of string concatenation, use Log4j’s support for parameterized messages to improve performance (see the example after this list).
* Configure appenders appropriately: Use asynchronous appenders for better performance in high-throughput scenarios.
* Regularly review and maintain your logs: Periodically check your logging configuration and the logs themselves to ensure they’re providing value.
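As an example of the parameterized style mentioned above, Log4j only renders the `{}` placeholders if the message is actually logged, so you avoid building strings that are later discarded (the variables are illustrative):
```java
// Preferred: placeholders are rendered only if INFO is enabled
logger.info("User {} performed action {}", user, action);

// Avoid: the string is always built, even when the level filters the message out
logger.info("User " + user + " performed action " + action);
```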
# Send logs from a .NET app
Source: https://axiom.co/docs/guides/send-logs-from-dotnet
This guide explains how to set up and configure logging in a .NET application, and how to send logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Install the .NET SDK](https://dotnet.microsoft.com/download).
## Option 1: Using HTTP Client
### Create a new .NET project
Create a new .NET project. In your terminal, go to the directory where you want to create your project. Run the following command to create a new console app named `AxiomLogs`.
```bash
dotnet new console -n AxiomLogs
```
### Install packages
Install the packages for your project. Use the `Microsoft.AspNet.WebApi.Client` package to make HTTP requests to the Axiom API. Run the following command to install the package:
```bash
dotnet add package Microsoft.AspNet.WebApi.Client
```
### Configure the Axiom logger
Create a class to handle logging to Axiom. Create a new file named `AxiomLogger.cs` in your project directory with the following content:
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
public static class AxiomLogger
{
public static async Task LogToAxiom(string message, string logLevel)
{
// Create an instance of HttpClient to make HTTP requests
var client = new HttpClient();
// Specify the Axiom dataset name and construct the API endpoint URL
var datasetName = "DATASET_NAME";
var axiomDomain = "AXIOM_DOMAIN";
var axiomUri = $"https://{axiomDomain}/v1/datasets/{datasetName}/ingest";
// Replace with your Axiom API token
var apiToken = "API_TOKEN"; // Ensure your API token is correct
// Create an array of log entries, including the timestamp, message, and log level
var logEntries = new[]
{
new
{
timestamp = DateTime.UtcNow.ToString("o"),
message = message,
level = logLevel
}
};
// Serialize the log entries to JSON format using System.Text.Json.JsonSerializer
var content = new StringContent(System.Text.Json.JsonSerializer.Serialize(logEntries), Encoding.UTF8, "application/json");
// Set the authorization header with the Axiom API token
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", apiToken);
// Make a POST request to the Axiom API endpoint with the serialized log entries
var response = await client.PostAsync(axiomUri, content);
// Check the response status code
if (!response.IsSuccessStatusCode)
{
// If the response is not successful, print the error details
var responseBody = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Failed to send log: {response.StatusCode}\n{responseBody}");
}
else
{
// If the response is successful, print "Log sent successfully."
Console.WriteLine("Log sent successfully.");
}
}
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
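Instead of hard-coding the token in `AxiomLogger.cs`, you can read it from an environment variable. A minimal sketch, assuming the variable is named `AXIOM_API_TOKEN`:
```csharp
// Falls back to an empty string if the variable is not set
var apiToken = Environment.GetEnvironmentVariable("AXIOM_API_TOKEN") ?? string.Empty;
```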
### Configure the main program
Now that the Axiom logger is in place, update the main program to use it. Open the `Program.cs` file and replace its contents with the following code:
```csharp
using System;
using System.Threading.Tasks;
class Program
{
static async Task Main(string[] args)
{
// Log the application startup event with an "INFO" log level
await AxiomLogger.LogToAxiom("Application started", "INFO");
// Call the SimulateOperations method to simulate various application operations
await SimulateOperations();
// Log the .NET runtime version information with an "INFO" log level
await AxiomLogger.LogToAxiom($"CLR version: {Environment.Version}", "INFO");
// Log the application shutdown event with an "INFO" log level
await AxiomLogger.LogToAxiom("Application shutting down", "INFO");
}
static async Task SimulateOperations()
{
// Log the start of operations with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Starting operations", "DEBUG");
// Log the database connection event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Connecting to database", "DEBUG");
await Task.Delay(500); // Simulated delay
// Log the successful database connection with an "INFO" log level
await AxiomLogger.LogToAxiom("Connected to database successfully", "INFO");
// Log the user data retrieval event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Retrieving user data", "DEBUG");
await Task.Delay(1000);
// Log the number of retrieved user records with an "INFO" log level
await AxiomLogger.LogToAxiom("Retrieved 100 user records", "INFO");
// Log the user preference update event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Updating user preferences", "DEBUG");
await Task.Delay(800);
// Log the successful user preference update with an "INFO" log level
await AxiomLogger.LogToAxiom("Updated user preferences successfully", "INFO");
try
{
// Log the payment processing event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Processing payments", "DEBUG");
await Task.Delay(1500);
// Intentionally throw an exception to demonstrate error logging
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
// Log the payment processing failure with an "ERROR" log level
await AxiomLogger.LogToAxiom($"Payment processing failed: {ex.Message}", "ERROR");
}
// Log the email notification sending event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Sending email notifications", "DEBUG");
await Task.Delay(1200);
// Log the number of sent email notifications with an "INFO" log level
await AxiomLogger.LogToAxiom("Sent 50 email notifications", "INFO");
// Log the high memory usage detection with a "WARN" log level
await AxiomLogger.LogToAxiom("Detected high memory usage", "WARN");
await Task.Delay(500);
// Log the memory usage normalization with an "INFO" log level
await AxiomLogger.LogToAxiom("Memory usage normalized", "INFO");
// Log the completion of operations with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Operations completed", "DEBUG");
}
}
```
This code simulates various app operations and logs messages at different levels (DEBUG, INFO, WARN, ERROR) to Axiom.
### Project file configuration
Ensure your `axiomlogs.csproj` file is configured with the package reference. The file should look like this:
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <!-- The exact version is recorded here by `dotnet add package Microsoft.AspNet.WebApi.Client` -->
    <PackageReference Include="Microsoft.AspNet.WebApi.Client" Version="*" />
  </ItemGroup>

</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You see the log messages being sent to Axiom, and the console displays `Log sent successfully.` for each log entry.
## Option 2: Using Serilog
### Install Serilog packages
Add Serilog and the necessary extensions to your project. You need the `Serilog`, `Serilog.Sinks.Console`, `Serilog.Sinks.Http`, `Serilog.Formatting.Elasticsearch`, and `Microsoft.Extensions.Configuration` packages.
```bash
dotnet add package Serilog
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Sinks.Http
dotnet add package Serilog.Formatting.Elasticsearch
dotnet add package Microsoft.Extensions.Configuration
```
### Configure Serilog
In your `Program.cs` or a startup configuration file, set up Serilog to use the HTTP sink. Configure the sink to point to the Axiom ingestion API endpoint.
```csharp
using Serilog;
using Serilog.Formatting.Elasticsearch;
using Serilog.Sinks.Http;
using System.Net.Http;
using System.Net.Http.Headers;
using System.IO;
using Microsoft.Extensions.Configuration;
public class AxiomConfig
{
public const string DatasetName = "DATASET_NAME";
public const string ApiToken = "API_TOKEN";
public const string ApiUrl = "https://AXIOM_DOMAIN/v1/datasets";
}
public class AxiomHttpClient : IHttpClient
{
private readonly HttpClient _httpClient;
public AxiomHttpClient()
{
_httpClient = new HttpClient();
_httpClient.DefaultRequestHeaders.Authorization =
new AuthenticationHeaderValue("Bearer", AxiomConfig.ApiToken);
}
public void Configure(IConfiguration configuration)
{
}
public async Task<HttpResponseMessage> PostAsync(string requestUri, Stream contentStream, CancellationToken cancellationToken = default)
{
var content = new StreamContent(contentStream);
content.Headers.Add("Content-Type", "application/json");
return await _httpClient.PostAsync(requestUri, content, cancellationToken).ConfigureAwait(false);
}
public void Dispose()
{
_httpClient?.Dispose();
}
}
public class Program
{
public static async Task Main(string[] args)
{
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Debug()
.WriteTo.Console()
.WriteTo.Http(
requestUri: $"{AxiomConfig.ApiUrl}/{AxiomConfig.DatasetName}/ingest",
queueLimitBytes: null,
textFormatter: new ElasticsearchJsonFormatter(renderMessageTemplate: false, inlineFields: true),
httpClient: new AxiomHttpClient()
)
.CreateLogger();
try
{
Log.Information("Application started on .NET 8");
await SimulateOperations();
Log.Information($"Runtime version: {Environment.Version}");
Log.Information("Application shutting down");
}
catch (Exception ex)
{
Log.Fatal(ex, "Application terminated unexpectedly");
}
finally
{
await Log.CloseAndFlushAsync();
}
}
static async Task SimulateOperations()
{
Log.Debug("Starting operations");
Log.Debug("Connecting to database");
await Task.Delay(500);
Log.Information("Connected to database successfully");
Log.Debug("Retrieving user data");
await Task.Delay(1000);
Log.Information("Retrieved 100 user records");
Log.Debug("Updating user preferences");
await Task.Delay(800);
Log.Information("Updated user preferences successfully");
try
{
Log.Debug("Processing payments");
await Task.Delay(1500);
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
Log.Error(ex, "Payment processing failed: {ErrorMessage}", ex.Message);
}
Log.Debug("Sending email notifications");
await Task.Delay(1200);
Log.Information("Sent 50 email notifications");
Log.Warning("Detected high memory usage: {UsagePercentage}%", 85);
await Task.Delay(500);
Log.Information("Memory usage normalized");
Log.Debug("Operations completed");
}
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
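For example, here's a minimal sketch of reading these values from the environment instead of hard-coding them in `AxiomConfig`. The environment variable names `AXIOM_API_TOKEN`, `AXIOM_DATASET`, and `AXIOM_DOMAIN` are assumptions; use whichever names fit your setup.
```csharp
public static class AxiomConfig
{
    // const fields can't hold runtime values, so these are static readonly instead.
    // The environment variable names below are hypothetical; adjust them to your setup.
    public static readonly string DatasetName =
        Environment.GetEnvironmentVariable("AXIOM_DATASET") ?? "DATASET_NAME";
    public static readonly string ApiToken =
        Environment.GetEnvironmentVariable("AXIOM_API_TOKEN") ?? string.Empty;
    public static readonly string ApiUrl =
        $"https://{Environment.GetEnvironmentVariable("AXIOM_DOMAIN") ?? "api.axiom.co"}/v1/datasets";
}
```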
### Project file configuration
Ensure your `axiomlogs.csproj` file is configured with the package references. The file should look like this:
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <AssemblyName>SerilogApp</AssemblyName>
    <RootNamespace>SerilogApp</RootNamespace>
  </PropertyGroup>

  <!-- Package references added with 'dotnet add package' (Serilog, Serilog.Sinks.Console,
       Serilog.Sinks.Http, Serilog.Formatting.Elasticsearch, Microsoft.Extensions.Configuration)
       appear in an ItemGroup here -->

</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You see the log messages being sent to Axiom.
## Option 3: Using NLog
### Install NLog Packages
You need the `NLog` package, the `NLog.Web.AspNetCore` integration, and the `NLog.Targets.Http` extension for HTTP targets.
```bash
dotnet add package NLog
dotnet add package NLog.Web.AspNetCore
dotnet add package NLog.Targets.Http
```
### Configure NLog
Set up NLog by creating an `NLog.config` file or configuring it programmatically. Here is an example configuration for `NLog` using an HTTP target:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<!-- Example NLog.config. The HTTP target comes from the NLog.Targets.Http package;
     the attribute names below follow that package and may need adjusting to the version you install. -->
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="NLog.Targets.Http" />
  </extensions>
  <targets>
    <target name="axiom"
            xsi:type="HTTP"
            URL="https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest"
            Authorization="Bearer API_TOKEN"
            ContentType="application/json">
      <layout xsi:type="JsonLayout" includeAllProperties="true">
        <attribute name="time" layout="${longdate}" />
        <attribute name="level" layout="${level:upperCase=true}" />
        <attribute name="message" layout="${message}" />
      </layout>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="axiom" />
  </rules>
</nlog>
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
### Configure the main program
Update the main program to use `NLog`. In your `Program.cs` file:
```csharp
using NLog;
using NLog.Web;
class Program
{
// Load the NLog configuration once and share the logger across the class
private static readonly Logger logger = NLogBuilder.ConfigureNLog("NLog.config").GetCurrentClassLogger();
static async Task Main(string[] args)
{
logger.Info("Application started");
await SimulateOperations();
logger.Info($"CLR version: {Environment.Version}");
logger.Info("Application shutting down");
}
static async Task SimulateOperations()
{
logger.Debug("Starting operations");
logger.Debug("Connecting to database");
await Task.Delay(500); // Simulated delay
logger.Info("Connected to database successfully");
logger.Debug("Retrieving user data");
await Task.Delay(1000);
logger.Info("Retrieved 100 user records");
logger.Debug("Updating user preferences");
await Task.Delay(800);
logger.Info("Updated user preferences successfully");
try
{
logger.Debug("Processing payments");
await Task.Delay(1500);
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
logger.Error($"Payment processing failed: {ex.Message}");
}
logger.Debug("Sending email notifications");
await Task.Delay(1200);
logger.Info("Sent 50 email notifications");
logger.Warn("Detected high memory usage");
await Task.Delay(500);
logger.Info("Memory usage normalized");
logger.Debug("Operations completed");
}
}
```
### Project file configuration
Ensure your `axiomlogs.csproj` file is configured with the package references. The file should look like this:
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <!-- Package references added with 'dotnet add package' (NLog, NLog.Web.AspNetCore, NLog.Targets.Http)
       appear in an ItemGroup here -->

</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You should see the log messages being sent to Axiom.
## Best practices for logging
To make your logging more effective, consider the following best practices:
* Include relevant information such as user IDs, request details, and system state in your log messages to provide context when investigating issues.
* Use different log levels (DEBUG, INFO, WARN, ERROR) to categorize the severity and importance of log messages. This allows you to filter and analyze logs more effectively.
* Use structured logging formats like JSON to make it easier to parse and analyze log data. See the sketch after this list for an example.
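For example, with the Serilog setup from Option 2, a structured log line with named properties (the property names here are only illustrative) ends up as separate, queryable fields in Axiom:
```csharp
using Serilog;

// {OrderId}, {UserId}, and {ElapsedMs} become individual JSON fields rather than one opaque string.
Log.Information("Order {OrderId} placed by user {UserId} in {ElapsedMs} ms", 4711, "user-42", 125);
```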
## Conclusion
This guide covers the steps to send logs from a C# .NET app to Axiom. By following these instructions and adhering to logging best practices, you can effectively monitor your app, diagnose issues, and gain valuable insights into its behavior.
# Send logs from Laravel to Axiom
Source: https://axiom.co/docs/guides/send-logs-from-laravel
This guide demonstrates how to configure logging in a Laravel app to send logs to Axiom
This guide explains integrating Axiom as a logging solution in a Laravel app. Using Axiom’s capabilities with a custom log channel, you can efficiently send your app’s logs to Axiom for storage, analysis, and monitoring. This integration uses Monolog, Laravel’s underlying logging library, to create a custom logging handler that forwards logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- PHP development [environment](https://www.php.net/manual/en/install.php)
- [Composer](https://laravel.com/docs/11.x/installation) installed on your system
- Laravel app setup
## Installation
### Create a Laravel project
Create a new Laravel project:
```bash
composer create-project --prefer-dist laravel/laravel laravel-axiom-logger
```
## Exploring the logging config file
In your Laravel project, the `config` directory contains several configuration files that control how different parts of your app work, such as how it connects to the database, manages sessions, and handles caching. Among these files, **`logging.php`** defines how your app logs activities and errors. It lets you specify where your logs go: a file, a cloud service, or another destination. The configuration file below includes the Axiom logging setup.
```bash
code config/logging.php
```
```php
<?php

use Monolog\Handler\NullHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Processor\PsrLogMessageProcessor;

return [
'default' => env('LOG_CHANNEL', 'stack'),
'deprecations' => [
'channel' => env('LOG_DEPRECATIONS_CHANNEL', 'null'),
'trace' => false,
],
'channels' => [
'stack' => [
'driver' => 'stack',
'channels' => ['single'],
'ignore_exceptions' => false,
],
'single' => [
'driver' => 'single',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'axiom' => [
'driver' => 'monolog',
'handler' => App\Logging\AxiomHandler::class,
'level' => env('LOG_LEVEL', 'debug'),
'with' => [
'apiToken' => env('AXIOM_API_TOKEN'),
'dataset' => env('AXIOM_DATASET'),
'axiomUrl' => env('AXIOM_URL'),
],
],
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'days' => 14,
'replace_placeholders' => true,
],
'stderr' => [
'driver' => 'monolog',
'level' => env('LOG_LEVEL', 'debug'),
'handler' => StreamHandler::class,
'formatter' => env('LOG_STDERR_FORMATTER'),
'with' => [
'stream' => 'php://stderr',
],
'processors' => [PsrLogMessageProcessor::class],
],
'syslog' => [
'driver' => 'syslog',
'level' => env('LOG_LEVEL', 'debug'),
'facility' => LOG_USER,
'replace_placeholders' => true,
],
'errorlog' => [
'driver' => 'errorlog',
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'null' => [
'driver' => 'monolog',
'handler' => NullHandler::class,
],
'emergency' => [
'path' => storage_path('logs/laravel.log'),
],
],
];
```
At the start of the `logging.php` file in your Laravel project, you'll find imports for Monolog handlers like `NullHandler` and `StreamHandler`. This shows that Laravel uses Monolog under the hood, which gives you access to a wide range of log handlers and destinations.
### Default log channel
The `default` configuration specifies the primary channel Laravel uses for logging. In this setup, it's set through the **`.env`** file with the **`LOG_CHANNEL`** variable, which you set to **`axiom`**. This means that, by default, log messages are sent to the Axiom channel, using the custom handler you define to forward logs to your dataset.
```bash
LOG_CHANNEL=axiom
AXIOM_API_TOKEN=API_TOKEN
AXIOM_DATASET=DATASET_NAME
AXIOM_URL=AXIOM_DOMAIN
LOG_LEVEL=debug
LOG_DEPRECATIONS_CHANNEL=null
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
### Deprecations log channel
The `deprecations` channel is configured to handle logs about deprecated features in PHP and libraries, helping you prepare for updates. By default, it’s set to ignore these warnings, but you can adjust this to direct deprecation logs to a specific channel if needed.
```php
'deprecations' => [
'channel' => env('LOG_DEPRECATIONS_CHANNEL', 'null'),
'trace' => false,
],
```
### Configuration log channel
The heart of the `logging.php` file lies within the **`channels`** array where you define all available logging channels. The configuration highlights channels like **`single`**, **`axiom`**, and **`daily`**, each serving different logging purposes:
```php
'single' => [
'driver' => 'single',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'axiom' => [
'driver' => 'monolog',
'handler' => App\Logging\AxiomHandler::class,
'level' => env('LOG_LEVEL', 'debug'),
'with' => [
'apiToken' => env('AXIOM_API_TOKEN'),
'dataset' => env('AXIOM_DATASET'),
'axiomUrl' => env('AXIOM_URL'),
],
],
```
* **Single**: Designed for simplicity, the **`single`** channel writes logs to a single file. It’s a straightforward solution for tracking logs without needing complex log management strategies.
* **Axiom**: The custom **`axiom`** channel sends logs to your specified Axiom dataset, providing advanced log management capabilities. This integration enables powerful log analysis and monitoring, supporting better insights into your app’s performance and issues.
* **Daily**: This channel rotates logs daily, keeping your log files manageable and making it easier to navigate log entries over time.
Each channel can be customized further, such as adjusting the log level to control the verbosity of logs captured. The **`LOG_LEVEL`** environment variable sets this, defaulting to **`debug`** for capturing detailed log information.
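For example, raising the minimum level in `.env` (shown here with a hypothetical production value) makes every channel that reads `LOG_LEVEL`, including the `axiom` channel, drop `debug` messages:
```bash
LOG_LEVEL=info
```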
## Getting started with log levels in Laravel
Laravel lets you choose from eight different levels of importance for your log messages, just like a list of warnings from very serious to just for info. Here’s what each level means, starting with the most severe:
* **EMERGENCY**: Your app is broken and needs immediate attention.
* **ALERT**: Similar to `EMERGENCY`, but less severe.
* **CRITICAL**: Critical errors within the main parts of your app.
* **ERROR**: Error conditions in your app.
* **WARNING**: Something unusual happened that may need to be addressed later.
* **NOTICE**: Important information, but not a warning or error.
* **INFO**: General updates about what your app is doing.
* **DEBUG**: Detailed debugging messages.
Not every situation fits into one of these levels. For example, in an online store, you might use **INFO** to log when someone buys something and **ERROR** if a payment doesn’t go through because of a problem.
Here’s a simple way to log messages at each level in Laravel:
```php
use Illuminate\Support\Facades\Log;
Log::debug("Checking details.");
Log::info("User logged in.");
Log::notice("User tried a feature.");
Log::warning("Feature might not work as expected.");
Log::error("Feature failed to load.");
Log::critical("Major issue with the app.");
Log::alert("Immediate action needed.");
Log::emergency("The app is down.");
```
Output:
```php
[2023-09-01 00:00:00] local.DEBUG: Checking details.
[2023-09-01 00:00:00] local.INFO: User logged in.
[2023-09-01 00:00:00] local.NOTICE: User tried a feature.
[2023-09-01 00:00:00] local.WARNING: Feature might not work as expected.
[2023-09-01 00:00:00] local.ERROR: Feature failed to load.
[2023-09-01 00:00:00] local.CRITICAL: Major issue with the app.
[2023-09-01 00:00:00] local.ALERT: Immediate action needed.
[2023-09-01 00:00:00] local.EMERGENCY: The app is down.
```
## Creating the custom logger class
In this section, we explain how to create the custom logger class that sends your Laravel app’s logs to Axiom. This class, named `AxiomHandler`, extends Monolog’s **`AbstractProcessingHandler`**, giving you a structured way to handle log messages and forward them to Axiom.
* **Initializing cURL**: The **`initializeCurl`** method sets up a cURL handle to communicate with Axiom’s API. It prepares the request with the appropriate headers, including the authorization header that uses your Axiom API token, and sets the content type to **`application/json`**.
* **Handling errors**: If there’s an error during the cURL request, it’s logged to PHP’s error log. This helps in diagnosing issues with log forwarding without disrupting your app’s normal operations.
* **Formatting logs**: Lastly, the **`getDefaultFormatter`** method specifies the log message format. By default, it uses Monolog’s **`JsonFormatter`** to ensure log messages are JSON encoded, making them easy to parse and analyze in Axiom.
```php
<?php

namespace App\Logging;

use Monolog\Formatter\FormatterInterface;
use Monolog\Handler\AbstractProcessingHandler;
use Monolog\Level;
use Monolog\LogRecord;

class AxiomHandler extends AbstractProcessingHandler
{
private string $apiToken;
private string $dataset;
private string $axiomUrl;

public function __construct(string $apiToken, string $dataset, string $axiomUrl, $level = Level::Debug, bool $bubble = true)
{
parent::__construct($level, $bubble);
$this->apiToken = $apiToken;
$this->dataset = $dataset;
$this->axiomUrl = $axiomUrl;
}
private function initializeCurl(): \CurlHandle
{
$endpoint = "https://{$this->axiomUrl}/v1/datasets/{$this->dataset}/ingest";
$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Authorization: Bearer ' . $this->apiToken,
'Content-Type: application/json',
]);
return $ch;
}
protected function write(LogRecord $record): void
{
$ch = $this->initializeCurl();
$data = [
'message' => $record->message,
'context' => $record->context,
'level' => $record->level->getName(),
'channel' => $record->channel,
'extra' => $record->extra,
];
$payload = json_encode([$data]);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_exec($ch);
if (curl_errno($ch)) {
// Optionally log the curl error to PHP error log
error_log('Curl error: ' . curl_error($ch));
}
curl_close($ch);
}
protected function getDefaultFormatter(): FormatterInterface
{
return new \Monolog\Formatter\JsonFormatter();
}
}
```
## Creating the test controller
In this section, we verify that your custom Axiom logger is properly set up and functioning within your Laravel app. To do this, we create a simple test controller with a method that sends log messages through the Axiom channel. We then define a route that triggers this logging action, so you can test the logger by visiting a specific URL in your browser or by using a tool like cURL.
Create a new controller called `TestController` within your `app/Http/Controllers` directory. In this controller, add a method named `logTest`. This method uses Laravel’s logging to send test log messages to your Axiom dataset. Here’s how you set it up:
```php
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Log;
use Monolog\Logger;

class TestController extends Controller
{
public function logTest()
{
// Custom processor that attaches the authenticated user's ID (or 'guest') to every record
$customProcessor = function ($record) {
$record->extra['user_id'] = auth()->check() ? auth()->user()->id : 'guest';
return $record;
};
// Get the Monolog instance for the 'axiom' channel and push the custom processor
$logger = Log::channel('axiom')->getLogger();
if ($logger instanceof Logger) {
$logger->pushProcessor($customProcessor);
}
Log::channel('axiom')->debug("Checking details.", ['action' => 'detailCheck', 'status' => 'initiated']);
Log::channel('axiom')->info("User logged in.", ['user_id' => 'exampleUserId', 'method' => 'standardLogin']);
Log::channel('axiom')->info("User tried a feature.", ['feature' => 'experimentalFeatureX', 'status' => 'trial']);
Log::channel('axiom')->warning("Feature might not work as expected.", ['feature' => 'experimentalFeature', 'warning' => 'betaStage']);
Log::channel('axiom')->warning("Feature failed to load.", ['feature' => 'featureY', 'error_code' => 500]);
Log::channel('axiom')->error("Major issue with the app.", ['system' => 'paymentProcessing', 'error' => 'serviceUnavailable']);
Log::channel('axiom')->warning("Immediate action needed.", ['issue' => 'security', 'level' => 'high']);
Log::channel('axiom')->error("The app is down.", ['system' => 'entireApplication', 'status' => 'offline']);
return 'Log messages sent to Axiom';
}
}
```
This method targets the `axiom` channel, which you previously configured to forward logs to your Axiom account. The log messages should then appear in your Axiom dataset, confirming that the logger is working as expected.
## Registering the route
Next, you need to make this test accessible via a web route. Open your `routes/web.php` file and add a new route that points to the **`logTest`** method in your **`TestController`**. This enables you to trigger the log message by visiting a specific URL in your web browser.
```php
<?php

use App\Http\Controllers\TestController;
use Illuminate\Support\Facades\Route;

// The /test-log path is only an example; use any path you prefer
Route::get('/test-log', [TestController::class, 'logTest']);
```
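To trigger the log messages, start the local development server and request the route. The `/test-log` path matches the example route above; adjust the URL if you chose a different path.
```bash
php artisan serve
curl http://localhost:8000/test-log
```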
## Conclusion
This guide has introduced you to integrating Axiom for logging in Laravel apps. You've learned how to create a custom logger, configure log channels, and understand the significance of log levels. With this knowledge, you’re set to track errors and analyze log data effectively using Axiom.
# Send logs from a Ruby on Rails application using Faraday
Source: https://axiom.co/docs/guides/send-logs-from-ruby-on-rails
This guide provides step-by-step instructions on how to send logs from a Ruby on Rails application to Axiom using the Faraday library.
This guide provides step-by-step instructions on how to send logs from a Ruby on Rails application to Axiom using the Faraday library. By following this guide, you configure your Rails app to send logs to Axiom, allowing you to monitor and analyze your application logs effectively.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
- Install [Ruby on Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails application
1. Create a new Rails app using the `rails new myapp` command.
2. Navigate to the app directory: `cd myapp`
## Setting up the Gemfile
Open the `Gemfile` in your Rails app, and then add the following gems:
```ruby
gem 'faraday'
gem 'dotenv-rails', groups: [:development, :test]
```
Install the dependencies by running `bundle install`.
## Create and configure the Axiom logger
1. Create a new file named `axiom_logger.rb` in the `app/services` directory of your Rails app.
2. Add the following code to `axiom_logger.rb`:
```ruby
# app/services/axiom_logger.rb
require 'faraday'
require 'json'
class AxiomLogger
def self.send_log(log_data)
dataset_name = "DATASET_NAME"
axiom_ingest_api_url = "https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest"
ingest_token = "API_TOKEN"
conn = Faraday.new(url: axiom_ingest_api_url) do |faraday|
faraday.request :url_encoded
faraday.adapter Faraday.default_adapter
end
wrapped_log_data = [log_data]
response = conn.post do |req|
req.headers['Content-Type'] = 'application/json'
req.headers['Authorization'] = "Bearer #{ingest_token}"
req.body = wrapped_log_data.to_json
end
puts "AxiomLogger Response status: #{response.status}, body: #{response.body}"
if response.status != 200
Rails.logger.error "Failed to send log to Axiom: #{response.body}"
end
end
end
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
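As a sketch, you can read these values from environment variables instead of hard-coding them in `AxiomLogger`. The variable names `AXIOM_API_TOKEN`, `AXIOM_DATASET`, and `AXIOM_DOMAIN` are assumptions; `dotenv-rails` from the Gemfile loads them from a `.env` file in development.
```ruby
# Hypothetical environment variable names; adjust them to your own setup.
dataset_name = ENV.fetch("AXIOM_DATASET", "DATASET_NAME")
ingest_token = ENV.fetch("AXIOM_API_TOKEN")
axiom_ingest_api_url = "https://#{ENV.fetch('AXIOM_DOMAIN', 'api.axiom.co')}/v1/datasets/#{dataset_name}/ingest"
```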
## Test with the Axiom logger
1. Create a new file named `axiom_logger_test.rb` in the `config/initializers` directory.
2. Add the following code to `axiom_logger_test.rb`:
```ruby
# config/initializers/axiom_logger_test.rb
Rails.application.config.after_initialize do
puts "Sending test logs to Axiom using Ruby on Rails Faraday..."
# Info logs
AxiomLogger.send_log({ message: "Application started successfully", level: "info", service: "initializer" })
AxiomLogger.send_log({ message: "User authentication successful", level: "info", service: "auth" })
AxiomLogger.send_log({ message: "Data fetched from external API", level: "info", service: "external_api" })
AxiomLogger.send_log({ message: "Email notification sent", level: "info", service: "email" })
# Warn logs
AxiomLogger.send_log({ message: "API request took longer than expected", level: "warn", service: "external_api", duration: 1500 })
AxiomLogger.send_log({ message: "User authentication token expiring soon", level: "warn", service: "auth", user_id: 123 })
AxiomLogger.send_log({ message: "Low disk space warning", level: "warn", service: "system", disk_usage: "85%" })
AxiomLogger.send_log({ message: "Non-critical configuration issue detected", level: "warn", service: "config" })
# Error logs
AxiomLogger.send_log({ message: "Database connection error", level: "error", service: "database", error: "Timeout" })
AxiomLogger.send_log({ message: "Failed to process payment", level: "error", service: "payment", user_id: 456, error: "Invalid card" })
AxiomLogger.send_log({ message: "Unhandled exception occurred", level: "error", service: "application", exception: "NoMethodError" })
AxiomLogger.send_log({ message: "Third-party API returned an error", level: "error", service: "integration", status_code: 500 })
# Debug logs
AxiomLogger.send_log({ message: "Request parameters", level: "debug", service: "api", params: { page: 1, limit: 20 } })
AxiomLogger.send_log({ message: "Response headers", level: "debug", service: "api", headers: { "Content-Type" => "application/json" } })
AxiomLogger.send_log({ message: "User object details", level: "debug", service: "user", user: { id: 789, name: "Axiom Observability", email: "support@axiom.co" } })
AxiomLogger.send_log({ message: "Cache hit for key", level: "debug", service: "cache", key: "popular_products" })
end
```
Each log entry includes a message, level, service, and additional relevant data.
* Info logs:
* Application started successfully
* User authentication successful
* Data fetched from external API
* Email notification sent
* Warn logs:
* API request took longer than expected (including duration)
* User authentication token expiring soon (including user ID)
* Low disk space warning (including disk usage percentage)
* Non-critical configuration issue detected
* Error logs:
* Database connection error (including error message)
* Failed to process payment (including user ID and error message)
* Unhandled exception occurred (including exception type)
* Third-party API returned an error (including status code)
* Debug logs:
* Request parameters (including parameter values)
* Response headers (including header key-value pairs)
* User object details (including user attributes)
* Cache hit for key (including cache key)
Adjust the log messages, services, and additional data according to your application’s specific requirements and context.
## Create the `log.rake` tasks
1. Create a new directory named `tasks` in the `lib` directory of your Rails app.
2. Create a new file named `log.rake` in the `lib/tasks` directory.
3. Add the following code to `log.rake`:
```ruby
# lib/tasks/log.rake
namespace :log do
desc "Send a test log to Axiom"
task send_test_log: :environment do
log_data = { message: "Hello, Axiom from Rake!", level: "info", service: "rake_task" }
AxiomLogger.send_log(log_data)
puts "Test log sent to Axiom."
end
end
```
This code defines a Rake task that sends a test log to Axiom when invoked.
## View logs in Axiom
1. Start your Rails server by running `rails server`.
2. Go to `http://localhost:3000` to trigger the test log from the initializer.
3. Run the Rake task to send another test log by executing `rails log:send_test_log` in your terminal.
4. In Axiom, go to the Stream tab, and then select the dataset where you send the logs.
5. You see the test logs appear, allowing you to view and analyze your event data coming from your Ruby on Rails application.
## Conclusion
You have successfully set up your Ruby on Rails application to send logs to Axiom using the Faraday library. With this configuration, you can centralize your application logs and use Axiom’s powerful features like [APL](/apl/introduction) for log querying, monitoring, and observing various log levels and types effectively.
# Axiom transport for Winston logger
Source: https://axiom.co/docs/guides/winston
This page explains how to send data from a Node.js app to Axiom through Winston.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
npm install @axiomhq/winston
```
## Import the Axiom transport for Winston
```js
import { WinstonTransport as AxiomTransport } from '@axiomhq/winston';
```
## Create a Winston logger instance
```js
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
defaultMeta: { service: 'user-service' },
transports: [
// You can pass an option here. If you don’t, the transport is configured automatically
// using environment variables like `AXIOM_DATASET` and `AXIOM_TOKEN`
new AxiomTransport({
dataset: 'DATASET_NAME',
token: 'API_TOKEN',
}),
],
});
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
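As a sketch, you can also read these values from environment variables instead of hard-coding them. The names `AXIOM_DATASET` and `AXIOM_TOKEN` match the variables mentioned in the comment above, which the transport also picks up automatically when you pass no options.
```js
import winston from 'winston';
import { WinstonTransport as AxiomTransport } from '@axiomhq/winston';

// Hypothetical environment-variable-based setup; set AXIOM_DATASET and AXIOM_TOKEN before running.
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new AxiomTransport({
      dataset: process.env.AXIOM_DATASET,
      token: process.env.AXIOM_TOKEN,
    }),
  ],
});
```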
After setting up the Axiom transport for Winston, use the logger as usual:
```js
logger.log({
level: 'info',
message: 'Logger successfully setup',
});
```
### Error, exception, and rejection handling
To log errors, use the [`winston.format.errors`](https://github.com/winstonjs/logform#errors) formatter. For example:
```ts
import winston from 'winston';
import { WinstonTransport as AxiomTransport } from '@axiomhq/winston';
const { combine, errors, json } = winston.format;
const axiomTransport = new AxiomTransport({ ... });
const logger = winston.createLogger({
// 8<----snip----
format: combine(errors({ stack: true }), json()),
// 8<----snip----
});
```
To automatically log exceptions and rejections, add the Axiom transport to the [`exceptionHandlers`](https://github.com/winstonjs/winston#exceptions) and [`rejectionHandlers`](https://github.com/winstonjs/winston#rejections). For example:
```ts
import winston from 'winston';
import { WinstonTransport as AxiomTransport } from '@axiomhq/winston';
const axiomTransport = new AxiomTransport({ ... });
const logger = winston.createLogger({
// 8<----snip----
transports: [axiomTransport],
exceptionHandlers: [axiomTransport],
rejectionHandlers: [axiomTransport],
// 8<----snip----
});
```
Running on Edge runtime isn’t supported.
## Examples
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples/winston).
# Axiom adapter for Zap
Source: https://axiom.co/docs/guides/zap
This page explains how to send logs generated by the uber-go/zap library to Axiom.
Use the adapter of the Axiom Go SDK to send logs generated by the [uber-go/zap](https://github.com/uber-go/zap) library to Axiom.
The Axiom Go SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-go).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Set up SDK
1. Install the Axiom Go SDK and configure your environment as explained in [Send data from Go app to Axiom](/guides/go).
2. In your Go app, import the `zap` package. It is imported as an `adapter` so that it doesn’t conflict with the `uber-go/zap` package.
```go
import adapter "github.com/axiomhq/axiom-go/adapters/zap"
```
Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#New) function:
```go
core, err := adapter.New(
adapter.SetDataset("DATASET_NAME"),
)
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Configure client
To configure the underlying client manually, choose one of the following:
* Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
* Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
```go
import (
"github.com/axiomhq/axiom-go/axiom"
adapter "github.com/axiomhq/axiom-go/adapters/zap"
)
// ...
core, err := adapter.New(
adapter.SetClientOptions(
axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
),
)
```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Sync](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#WriteSyncer.Sync). For more information, see the [zap documentation](https://pkg.go.dev/go.uber.org/zap/zapcore#WriteSyncer) and the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/zap/main.go).
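For example, here's a minimal sketch that builds a zap logger from the adapter's core and flushes the buffer on shutdown. It assumes the environment variables from [Send data from Go app to Axiom](/guides/go) are set and keeps error handling short for brevity.
```go
package main

import (
	"log"

	adapter "github.com/axiomhq/axiom-go/adapters/zap"
	"go.uber.org/zap"
)

func main() {
	// Create the Axiom core for zap; credentials come from the environment.
	core, err := adapter.New(
		adapter.SetDataset("DATASET_NAME"),
	)
	if err != nil {
		log.Fatal(err)
	}

	logger := zap.New(core)
	// Sync flushes any buffered events to Axiom before the process exits.
	defer logger.Sync()

	logger.Info("This is awesome!", zap.String("mood", "hyped"))
}
```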
## Reference
For a full reference of the adapter’s functions, see the [Go Packages page](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap).
# What is Axiom?
Source: https://axiom.co/docs/introduction
Axiom is a data platform designed to efficiently collect, store, and analyze event and telemetry data at massive scale. At its core, Axiom combines a high-performance, proprietary data store with an intelligent Console—helping teams reach actionable insights from their data faster.
Trusted by 30,000+ organizations—from high-growth startups to global enterprises.
## Components
Axiom consists of two fundamental components:
### EventDB
Robust, cost-effective, and scalable datastore specifically optimized for timestamped event data. Built from the ground up to handle the vast volumes and high velocity of event ingestion, EventDB ensures:
* **Scalable data loading:** Events are ingested seamlessly without complex middleware, scaling linearly with no single points of failure.
* **Extreme compression:** Tuned storage format compresses data 25-50x, significantly reducing storage costs and ensuring data remains queryable at any time.
* **Serverless querying:** Axiom spins up ephemeral, serverless runtimes on-demand to execute queries efficiently, minimizing idle compute resources and costs.
Learn more about Axiom’s architecture in the [architecture](/platform-overview/architecture) page.
### Console
Intuitive web app built for exploration, visualization, and monitoring of your data.
* **Real-time exploration:** Effortlessly query and visualize data streams in real-time, providing instant clarity on operational and business conditions.
* **Dynamic visualizations:** Generate insightful visualizations, from straightforward counts to sophisticated aggregations, tailored specifically to your needs.
* **Robust monitoring:** Set up threshold-based and anomaly driven alerts, ensuring proactive visibility into potential issues.
## Why choose Axiom?
* **Cost-efficiency:** Axiom dramatically lowers data ingestion and storage costs compared to traditional observability and logging solutions.
* **Flexible insights:** Real-time query capabilities and an increasingly intelligent UI help pinpoint issues and opportunities without sampling.
* **AI engineering:** Axiom provides specialized features designed explicitly for AI engineering workflows, allowing teams to confidently build, deploy, and optimize AI capabilities.
## Getting started
* Learn more about Axiom’s [features](/platform-overview/features).
* Explore the interactive demo [playground](https://play.axiom.co/).
* Create your own [organization](https://app.axiom.co/register).
# Acceptable use policy
Source: https://axiom.co/docs/legal/acceptable-use-policy
Guidelines and restrictions for appropriate use of Axiom services and platform.
{/* vale off */}
This Axiom Acceptable Use Policy ("AUP") explains the policies that govern your access and use of the Service offered by Axiom to You. You must use the Service in a lawful manner that is consistent with Axiom’s published AUP. The examples provided by Axiom are a non-exhaustive list of prohibited conduct. Axiom reserves the right to modify this AUP at any time by posting an updated version of the AUP on Axiom’s website. Changes to the policy are deemed effective upon posting and You are responsible for monitoring the website for changes to the policy. Use of any Axiom Service by You is deemed an acceptance of the most current version of the AUP. If the terms of this AUP are violated, Axiom reserves the right to immediately suspend or terminate Your use of the Service.
**Responsibility for Content**\
You are solely responsible for the content and data you use with the Axiom Service. Axiom does not monitor such content for illegal activity. You shall comply with all applicable laws and regulations. Any transmission, storage or distribution, or any use that directly facilitates a violation of applicable law or regulation is strictly prohibited.
**Inappropriate and Illegal Content**\
You may not use, distribute or instruct others to use the Axiom Services for illegal or harmful conduct. The following is a non-exhaustive list of prohibited conduct:
* **Illegal Activities:** Use of content or data in connection with the Axiom Service that is illegal, harassing, defamatory, libelous, indecent, obscene, pornographic, promotes online gambling, or otherwise objectionable.
* **Harmful Content:** Use of content or data in connection with the Axiom Service that is harmful to Axiom’s Service, network or other users of the network including, but not limited to, viruses, Trojan horses, time bombs, or any other computer programming that may damage, interfere with program data or information.
* **Infringing Content:** Use of content or data in connection with the Axiom Service that infringes or misappropriates the intellectual property or property right of others including, but not limited to, material protected by copyright, trademark, patent, or trade secret.
* **Export Violations:** Use of the Axiom Service in violation of U.S. and International export laws, regulations and rules.
**Email, Spam and Usenet**\
Axiom explicitly prohibits you from using the Axiom Service to send unsolicited messages to users without their permission. The following is a non-exhaustive list of prohibited conduct.
* Using the Axiom Service to send unsolicited bulk email, commercial advertising, informational announcements or promotional material (“spam”).
* Using the Axiom Service to post email messages that are excessive or intended to harass others.
* Using the Axiom Service in a manner that assumes the identity of a user without the user’s explicit permission.
**Security Violations**\
You may not use the Axiom Service to violate the security or integrity of the Axiom Service or Axiom’s network. The following is a non-exhaustive list of prohibited conduct:
* Unauthorized access of networks, data, servers, or databases without obtaining permission.
* Any attempt to test the vulnerability of a network or system, probe or scan or to breach security or authentication measures without permission.
* Any attempt to interfere with, disrupt or disable Service to any user, or network, including, but not limited to, via means of attempts to overload the system, crashing, mail bombing, denial of service attacks, or flooding techniques.
* The forging of any TCP/IP packet header, or using a computer system or program that intentionally conceals the source of information.
* Any usage or attempted usage of the Service You are not authorized to use.
* Any attempt or action designed to circumvent or alter any use limitations, including but not limited to, storage restrictions or billing measurement on the Services.
**Reporting Violations**\
It is your responsibility to report all known or suspected violations of this AUP. Axiom reserves the right, but is not obligated, to investigate any violation or misuse of the Services. Axiom reserves the right to immediately block access, suspend or terminate your access to or use of the Axiom Service where Axiom deems, in its sole discretion, that you have violated the terms of this AUP. Axiom will attempt to provide you notice of possible suspension of Services, but reserves the right to immediately suspend or terminate your access to or use of any Axiom Service where such use would have an adverse effect on Axiom, the Axiom Service, the Axiom network, or any Axiom customer. Axiom shall not be liable for any damages arising out of the violation of this AUP and you shall indemnify Axiom for any and all damages that may arise from a violation of this AUP by You.
# Cookie Policy
Source: https://axiom.co/docs/legal/cookies
How we use cookies and similar tracking technologies on our websites.
{/* vale off */}
## What are cookies?
Cookies are small data files that are placed on your computer when you visit a site. Cookies serve different purposes, like helping us understand how our site is being used, letting you navigate between pages efficiently, remembering your preferences, and generally improving your browsing experience. Cookies can also help ensure advertising you see online is more relevant to you and your interests.
## Who places cookies on my device?
Cookies set by the site you visit are called "first party cookies". Cookies set by parties other than us are called "third party cookies". Third party cookies enable third party features or functionality within the site, such as site analytics, advertising and social media features. The parties that set these third party cookies can recognize your computer or device both when it visits the site in question and also when it visits certain other sites and/or mobile apps. We do not control how these third parties use your information, which is subject to their own privacy policies. See below for details on use of third party cookies and similar technologies with our sites and app.
## How long will cookies stay on my device?
The length of time a cookie will stay on your device depends on whether it is a "persistent" or "session" cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay on your browsing device after you have finished browsing until they expire or are deleted.
## What other tracking technologies should I know about?
Cookies are not the only way to track visitors to a site or app. Companies use tiny graphics files with unique identifiers called beacons (and also "pixels" or "clear gifs") to recognize when someone visits its sites. These technologies often depend on cookies to function properly, and so disabling cookies may impair their functioning.
## What types of cookies and similar tracking technologies does Axiom use?
We use cookies and other tracking technologies in the following categories described below.
### Essential cookies
These cookies are essential to provide you with services available through our websites and to enable you to use some of their features. Without these cookies, the services that you have asked for cannot be provided, and we only use these cookies to provide you with those services.
**Who serves these cookies:** Axiom, Inc.
**How to refuse:** Because these cookies are strictly necessary to deliver the Websites to you, you cannot refuse them. You can, however, block or delete them by changing your browser settings, as described below under the heading "Your choices".
### Functionality cookies
These cookies allow our websites to remember choices you make when you use them. The purpose of these cookies is to provide you with a more personal experience and to avoid you having to re-select your preferences every time you visit our websites.
**Who serves these cookies:** Axiom, Inc.
**How to refuse:** To refuse these cookies, please follow the instructions below under the heading "Your choices".
### Analytics and performance cookies
These cookies are used to collect information about traffic to our websites and how users use our websites. The information gathered may include the number of visitors to our websites, the websites that referred them to our websites, the pages they visited on our websites, what time of day they visited our websites, whether they have visited our websites before, and other similar information. We use this information to help operate our websites more efficiently, to gather broad demographic information, monitor the level of activity on our websites, and improve the websites.
**Who serves these cookies:**
**Segment** (Twilio, Inc.)
The subsite, [https://app.axiom.co](https://app.axiom.co) (a subsite of axiom.co) uses Segment to help analyze how users use the site. The tool does not use cookies; however, user data is shared with Segment. In addition to your name and email address, your IP may be transmitted to Segment (though never stored there). This information is then used to evaluate the use of the service as well as compute statistical reports on website activity to help us build a better product.
You can find more information about Segment's privacy policy here: [https://www.twilio.com/en-us/legal/privacy](https://www.twilio.com/en-us/legal/privacy). If you wish to not share usage information with Twilio/Segment, please let us know at [privacy@axiom.co](mailto:privacy@axiom.co)
**Mixpanel**
The subsite, [https://app.axiom.co](https://app.axiom.co) (a subsite of axiom.co) uses Mixpanel to help analyze how users use the site. The tool does not use cookies; however, user data is shared with Mixpanel. In addition to your name and email address, your IP may be transmitted to Mixpanel (though never stored there). This information is then used to evaluate the use of the service as well as compute statistical reports on website activity to help us build a better product.
You can find more information about Mixpanel's privacy policy here: [https://mixpanel.com/legal/privacy-policy](https://mixpanel.com/legal/privacy-policy). If you wish to not share usage information with Mixpanel, please let us know at [privacy@axiom.co](mailto:privacy@axiom.co)
**Hubspot** (Hubspot, Inc.)
The utilization of HubSpot's services enables the collection, storage, and analysis of user interaction data, enhancing our capacity to provide tailored content, optimize user experience, and conduct comprehensive website performance evaluations. It is imperative to note that all data captured through HubSpot's cookie tracking technology is processed and stored in strict adherence to applicable data protection laws and regulations, ensuring the utmost level of data integrity and security. Users retain the right to opt-out of cookie tracking at any given time, a provision that can be executed through the designated privacy settings available on our website. By continuing to navigate our website, users express their informed consent to our use of HubSpot for cookie tracking purposes, acknowledging the vital role it plays in our ongoing efforts to refine and personalize the user experience.
**Koala** (Konfetti, Inc)
The Services use cookies and similar technologies such as pixel tags, web beacons, clear GIFs and JavaScript (collectively, "Cookies") to enable our servers to recognize your web browser, tell us how and when you visit and use our Services, analyze trends, learn about our user base and operate and improve our Services. Cookies are small pieces of data– usually text files – placed on your computer, tablet, phone or similar device when you use that device to access our Services. We may also supplement the information we collect from you with information received from third parties, including third parties that have placed their own Cookies on your device(s). Please note that because of our use of Cookies, the Services do not support "Do Not Track" requests sent from a browser at this time.
**X (Formerly Twitter)** (X, Inc. (Previously Twitter))
Pixels are small amounts of code placed on a web page, in a web-enabled app, or an email. We use pixels, some of which we provide to advertisers to place on their web properties, to learn whether you've interacted with specific web or email content — as many services do. This helps us measure and improve our services and personalize your experience, including the ads you see.
**Linkedin** (Linkedin, Inc.)
Pixels are small amounts of code placed on a web page, in a web-enabled app, or an email. We use pixels, some of which we provide to advertisers to place on their web properties, to learn whether you've interacted with specific web or email content — as many services do. This helps us measure and improve our services and personalize your experience, including the ads you see.
**Google Tag Manager, Google Analytics** (Google, LLC.)
Pixels are small amounts of code placed on a web page, in a web-enabled app, or an email. We use pixels, some of which we provide to advertisers to place on their web properties, to learn whether you've interacted with specific web or email content — as many services do. This helps us measure and improve our services and personalize your experience, including the ads you see.
**How to refuse:** To refuse these cookies, please follow the instructions below under the heading "Your choices". Alternatively, please click on the relevant opt-out link below.
You can control these cookies as described in the Your choices section below. The third parties who serve cookies listed in the table above may use other third parties to place cookies, but any such indirect placement of cookies is out of our control. You should review the privacy policies of the third parties listed in the table above to find out more information about their use of cookies.
## Your choices
You can access cookie preferences by using the "Cookie settings" button above. However, most browsers let you remove or reject cookies. To do this, follow the instructions in your browser settings. Many browsers accept cookies by default until you change your settings. Please note that if you set your browser to disable cookies, parts of the site may not work properly. For more information about cookies, including how to see what cookies have been set on your computer or mobile device and how to manage and delete them, visit [www.allaboutcookies.org](http://www.allaboutcookies.org).
## Changes
Information about the cookies we use may be updated from time to time, so please check back on a regular basis for any changes.
## Questions
If you have any questions about this Cookie Policy, please contact us by email at [privacy@axiom.co](mailto:privacy@axiom.co).
# Data processing addendum
Source: https://axiom.co/docs/legal/data-processing
Legal agreement governing data processing.
{/* vale off */}
This Data Processing Addendum (“Addendum”) is incorporated into and forms part of the Axiom Terms of Service or other applicable agreement governing your use of Axiom’s Services (the “Agreement”).
This Addendum is intended to ensure compliance with all Applicable Data Protection Laws, including the European Union General Data Protection Regulation (EU) 2016/679 ("GDPR"), the UK GDPR, the Swiss FADP, and the California Consumer Privacy Act ("CCPA"), among others.
**By accessing or using the Axiom Services, or otherwise indicating your acceptance of the Agreement, you agree to the terms of this Addendum on behalf of yourself and any affiliated entities you represent.**
This Addendum applies to Axiom Inc.’s (“Axiom”) Processing of Personal Data in connection with the Services and is effective as of the date you first access or use the Services after the date of publication of this Addendum (the “Effective Date”). This Addendum does not require a signature to be valid or enforceable and is deemed to be mutually agreed upon and entered into by you and Axiom through your acceptance of the Agreement.
For a copy of the signed Addendum for recordkeeping purposes, or if you require a countersigned version due to internal policies, please contact Axiom at [privacy@axiom.co](mailto:privacy@axiom.co).
1\. Definitions
Capitalized terms that are used but not defined in this Addendum have the meanings given in the Agreement.
a. “**Affiliate**” means any entity that directly or indirectly controls, is controlled by, or is under common control with the subject entity. “Control” for purposes of this definition, means direct or indirect ownership or control of more than 50% of the voting interest of the subject entity.
b. “**Applicable Data Protection Laws**” means, with respect to a party, all privacy, data protection and information security-related laws and regulations applicable to such party’s Processing of Personal Data.
c. “**Customer Data**” means Your Data (as defined in the Agreement) that constitutes Personal Data.
d. “**Data Subject**” means the identified or identifiable natural person who is the subject of Personal Data.
e. “**Processing**” means any operation or set of operations which is performed on Personal Data or on sets of Personal Data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.
f. “**Personal Data**” means “personal data”, “personal information”, “personally identifiable information” or similar information defined in and governed by Applicable Data Protection Laws.
g. “**Security Incident**” means any confirmed unauthorized or unlawful breach of security that leads to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of or access to Customer Data being Processed by Axiom. Security Incidents do not include unsuccessful attempts or activities that do not compromise the security of Personal Data, including unsuccessful log-in
attempts, pings, port scans, denial of service attacks or other network attacks on firewalls or networked systems.
h. “**Subprocessor**” means any third party authorized by Axiom to Process any Customer Data.
i. “**Usage Data**” means aggregate and other usage data that is not Your Data. This Addendum applies to Usage Data to the extent Usage Data constitutes Personal Data.
2\. General; Termination
a. This Addendum forms part of the Agreement and except as expressly set forth in this Addendum, the Agreement remains unchanged and in full force and effect. If there is any conflict between this Addendum and the Agreement, this Addendum will govern.
b. Any liabilities arising under this Addendum are subject to the limitations of liability in the Agreement.
c. This Addendum will be governed by and construed in accordance with governing law and jurisdiction provisions in the Agreement, unless required otherwise by Applicable Data Protection Laws.
d. This Addendum will automatically terminate upon expiration or termination of the Agreement.
3\. Relationship of the Parties
a. Axiom as Processor. The parties acknowledge and agree that with regard to the Processing of Customer Data, Customer acts as a controller and Axiom is a processor. Axiom will process Customer Data in accordance with Customer’s instructions as outlined in Section 5 (Role and Scope of Processing).
b. Axiom as Controller. To the extent that any Usage Data (as defined in the Agreement) is considered Personal Data, Axiom is the controller with respect to such data and will Process such data in accordance with its Privacy Policy.
4\. Compliance with Law. Each party will comply with its obligations under Applicable Data Protection Laws with respect to its Processing of Customer Data.
5\. Role and Scope of the Processing
a. Customer Instructions. Axiom will Process Customer Data only in accordance with Customer’s instructions. By entering into the Agreement, Customer instructs Axiom to Process Customer Data to provide the Services and pursuant to any other written instructions given by Customer and acknowledged in writing by Axiom as constituting instructions for purposes of this Addendum. Customer acknowledges and agrees that such instruction authorizes Axiom to Process Customer Data (a) to perform its obligations and exercise its rights under the Agreement; and (b) to perform its legal obligations and to establish, exercise or defend legal claims in respect of the Agreement.
6\. Subprocessing
a. Customer specifically authorizes Axiom to use its Affiliates as Subprocessors, and generally authorizes Axiom to engage Subprocessors to Process Customer Data. In such instances, Axiom:
(i) will enter into a written agreement with each Subprocessor, imposing data protection obligations substantially similar to those set out in this Addendum; and
(ii) remains liable for compliance with the obligations of this Addendum and for any acts or omissions of the Subprocessor that cause Axiom to breach any of its obligations under this Addendum.
b. A list of Axiom’s Subprocessors, including their functions and locations, is available at [https://axiom.co/sub-processors](https://axiom.co/sub-processors), and may be updated by Axiom from time to time in accordance with this Addendum.
c. When any new Subprocessor is engaged, Axiom will notify Customer of the engagement, which notice may be given by updating the Subprocessor Page and/or via a message through email or the Service. Axiom will give such notice at least ten (10) calendar days before the new Subprocessor Processes any Customer Data, except that if Axiom reasonably believes engaging a new Subprocessor on an expedited basis is necessary to protect the confidentiality, integrity or availability of the Customer Data or avoid material disruption to the Services, Axiom will give such notice as soon as reasonably practicable. If, within five (5) calendar days after such notice, Customer notifies Axiom in writing that Customer objects to Axiom’s appointment of a new Subprocessor based on reasonable data protection concerns, the parties will discuss such concerns in good faith and whether they can be resolved. If the parties are not able to mutually agree to a resolution of such concerns, Customer, as its sole and exclusive remedy, may terminate the Agreement for convenience with no refunds and Customer will remain liable to pay any committed fees in an order form, order, statement of work or other similar ordering document.
7\. Security
a. Security Measures. Axiom will implement and maintain technical and organizational security measures designed to protect Customer Data from Security Incidents and to preserve the security and confidentiality of the Customer Data, in accordance with Axiom’s security standards referenced in the Agreement (“**Security Measures**”).
b. Customer Responsibility.
(i) Customer is responsible for reviewing the information made available by Axiom relating to data security and making an independent determination as to whether the Services meet Customer’s requirements and legal obligations under Applicable Data Protection Laws. Customer acknowledges that the Security Measures may be updated from time to time upon reasonable notice to Customer to reflect process improvements or changing practices (but the modifications will not materially decrease Axiom’s obligations as compared to those reflected in such terms as of the Effective Date).
(ii) Customer agrees that, without limitation of Axiom’s obligations under this Section 7, Customer is solely responsible for its use of the Services, including (a) making appropriate use of the Services to ensure a level of security appropriate to the risk in respect of the Customer Data; (b) securing the account authentication credentials, systems and devices Customer uses to access the Services; (c) securing Customer’s systems and devices that it uses with the Services; and (d) maintaining its own backups of Customer Data.
c. Security Incident. Upon becoming aware of a confirmed Security Incident, Axiom will notify Customer without undue delay unless prohibited by applicable law. A delay in giving such notice requested by law enforcement and/or in light of Axiom’s legitimate needs to investigate or remediate the matter before providing notice will not constitute an undue delay. Such notices will describe, to the extent possible, details of the Security Incident, including steps taken to mitigate the
potential risks and steps Axiom recommends Customer take to address the Security Incident. Without prejudice to Axiom’s obligations under this Section 7.c., Customer is solely responsible for complying with Security Incident notification laws applicable to Customer and fulfilling any third-party notification obligations related to any Security Incidents. Axiom’s notification of or response to a Security Incident under this Section 7.c. will not be construed as an acknowledgement by Axiom of any fault or liability with respect to the Security Incident.
8\. Audits and Reviews of Compliance. The parties acknowledge that Customer must be able to assess Axiom’s compliance with its obligations under Applicable Data Protection Law and this Addendum, insofar as Axiom is acting as a processor on behalf of Customer.
a. Axiom’s Audit Program. Axiom uses external auditors to verify the adequacy of its security measures with respect to its processing of Customer Data and is seeking SOC2 Type 2 compliance. Such audits are performed at least once annually at Axiom’s expense by independent third-party security professionals at Axiom’s selection and result in the generation of a confidential audit report (“**Audit Report**”).
b. Customer Audit. Upon Customer’s written request at reasonable intervals, and subject to reasonable confidentiality controls, Axiom will make available to Customer a copy of Axiom’s most recent Audit Report. Customer agrees that any audit rights granted by Applicable Data Protection Laws will be satisfied by these Audit Reports. To the extent that Axiom’s provision of an Audit Report does not provide sufficient information for Customer to verify Axiom’s compliance with this Addendum or Customer is required to respond to a regulatory authority audit, Customer agrees to a mutually agreed-upon audit plan with Axiom that: (a) ensures the use of an independent third party; (b) provides notice to Axiom in a timely fashion; (c) requests access only during business hours; (d) accepts billing to Customer at Axiom’s then-current rates; (e) occurs no more than once annually; (f) restricts findings to only Customer Data relevant to Customer; and (g) obligates Customer, to the extent permitted by law or regulation, to keep confidential any information gathered that, by its nature, should be confidential.
9\. Impact Assessments and Consultations. Axiom will provide reasonable cooperation to Customer in connection with any data protection impact assessment (at Customer’s expense only if such reasonable cooperation will require Axiom to assign significant resources to that effort) or consultations with regulatory authorities that may be required in accordance with Applicable Data Protection Laws.
10\. Data Subject Requests. Axiom will upon Customer’s request (and at Customer’s expense) provide Customer with such assistance as it may reasonably require to comply with its obligations under Applicable Data Protection Laws to respond to requests from individuals to exercise their rights under Applicable Data Protection Laws (e.g., rights of data access, rectification, erasure, restriction, portability and objection) in cases where Customer cannot reasonably fulfill such requests independently by using the self-service functionality of the Services. If Axiom receives a request from a Data Subject in relation to their Customer Data, Axiom will advise the Data Subject to submit their request to Customer, and Customer will be responsible for responding to any such request.
11\. Return or Deletion of Customer Data
a. Axiom will, within sixty (60) days after request by Customer following the termination or expiration of the Agreement, delete all Customer Data from Axiom’s systems.
b. Notwithstanding the foregoing, Customer understands that Axiom may retain Customer Data if required by law, and such data will remain subject to the requirements of this Addendum.
12\. International Provisions
a. Processing Location. Customer Data may be processed in the United States or the EU. Transfer Mechanisms (e.g., Standard Contractual Clauses) will apply as needed. Data hosting locations are managed with the same security measures and protocols as defined herein.
b. Jurisdiction Specific Terms. To the extent that Axiom Processes Customer Data originating from and protected by Applicable Data Protection Laws in one of the Jurisdictions listed in Schedule 4 (Jurisdiction Specific Terms), then the terms specified therein with respect to the applicable jurisdiction(s) will apply in addition to the terms of this Addendum.
c. Cross Border Data Transfer Mechanism. To the extent that Customer’s use of the Services requires an onward transfer mechanism to lawfully transfer personal data from a jurisdiction (i.e., the European Economic Area (“**EEA**”), the United Kingdom (“**UK**”), Switzerland or any other jurisdiction listed in Schedule 3) to Axiom located outside of that jurisdiction (a “**Transfer Mechanism**”), the terms and conditions of Schedule 3 (Cross Border Transfer Mechanisms) will apply.
**This Data Processing Addendum is effective and binding upon your use of the Services as described above.**
**SCHEDULE 1**
**SUBJECT MATTER & DETAILS OF PROCESSING**
1\. Nature and Purpose of the Processing. Axiom will process Personal Data as necessary to provide the Services under the Agreement. Axiom does not sell Customer Data (or end user information within such Customer Data) and does not share such end users’ information with third parties for compensation or for those third parties’ own business interests.
a. Customer Data. Axiom will process Customer Data as a processor in accordance with Customer’s instructions as outlined in Section 5.a (Customer Instructions) of this Addendum.
b. Usage Data. Axiom will process Usage Data as a controller for the purposes outlined in Section 3.b (Axiom as Controller) of this Addendum.
2\. Processing Activities.
a. Customer Data. Customer Data will be subject to the following basic processing activities: the provision of Services.
b. Usage Data. Personal Data contained in Usage Data will be subject to the following processing activities by Axiom: Axiom may use Usage Data to operate, improve and support the Services and for other lawful business practices, such as analytics, benchmarking and reporting.
3\. Duration of the Processing. The period for which Personal Data will be retained and the criteria used to determine that period is as follows:
a. Customer Data. Prior to the termination of the Agreement, Axiom will process stored Customer Data for the purpose of providing the Services until Customer elects to delete such Customer Data via the Axiom Services or in accordance with the Agreement.
b. Usage Data. Upon termination of the Agreement, Axiom may retain, use and disclose Usage Data for the purposes set forth above in Section 2.b (Usage Data) of this Schedule 1, subject to the confidentiality obligations set forth in the Agreement. Axiom will anonymize or delete Personal Data contained within Usage Data when Axiom no longer requires it for the purpose set forth in Section 2.b (Usage Data) of this Schedule 1.
4\. Categories of Data Subjects.
a. Customer Data. Customer’s customers, employees, suppliers and end-users.
b. Usage Data. Customer’s authorized users with access to an Axiom account, customers, suppliers and end-users.
5\. Categories of Personal Data.
a. Customer Data. The categories of Customer Data are such categories as Customer is authorized to ingest into the Services under the Agreement.
b. Usage Data. Axiom processes Personal Data within Usage Data.
6\. Sensitive Data or Special Categories of Data.
a. Customer Data. Customers are prohibited from including sensitive data or special categories of data in Customer Data.
b. Usage Data. Sensitive Data is not contained in Usage Data.
**SCHEDULE 2**
**TECHNICAL & ORGANIZATIONAL SECURITY MEASURES**
Where applicable, this Schedule 2 will serve as Annex II to the Standard Contractual Clauses. The following describes Axiom’s technical and organizational security measures in more detail.
Technical and Organizational Security Measures:
1\. Measures of pseudonymization and encryption of personal data.
Axiom maintains Customer Data in an encrypted format at rest using Advanced Encryption Standard and in transit using TLS.
2\. Measures for ensuring ongoing confidentiality, integrity, availability, and resilience of processing systems and services.
Axiom's customer agreements contain strict confidentiality obligations. Additionally, Axiom requires every downstream Subprocessor to sign confidentiality provisions that are substantially similar to those contained in Axiom's customer agreements. The infrastructure for the Axiom Services spans multiple fault-independent AWS availability zones in geographic regions physically separated from one another, supported by various tools and processes to maintain high availability of services.
3\. Measures for ensuring the ability to restore availability and access to Personal Data in a timely manner in the event of a physical or technical incident.
Axiom performs regular backups of Customer Data, which is hosted in AWS data centers. Backups are retained redundantly across multiple availability zones and encrypted in-transit and at-rest using Advanced Encryption Standard (AES-256).
4\. Processes for regular testing, assessing and evaluating the effectiveness of technical and organizational measures in order to ensure the security of processing.
Axiom maintains a risk-based assessment security program. The framework for Axiom’s security program includes administrative, organizational, technical, and physical safeguards reasonably designed to protect the Services and confidentiality, integrity, and availability of Customer Data. Axiom’s security program is intended to be appropriate to the nature of the Services and the size and complexity of Axiom’s business operations. Axiom has a separate and dedicated security team that manages Axiom’s security program. This team facilitates and supports independent audits and assessments performed by third-parties to provide independent feedback on the operating effectiveness of the information security program.
5\. Measures for user identification and authorization.
Axiom personnel are required to use unique user access credentials and passwords for authorization. Axiom follows the principles of least privilege through role-based and time-based access models when provisioning system access. Axiom personnel are authorized to access Customer Data based on their job function, role and responsibilities, and such access requires approval prior to access provisioning. Access is promptly removed upon role change or termination.
6\. Measures for the protection of data during transmission.
Customer Data is encrypted in transit between Customer and the Axiom Service using TLS.
7\. Measures for the protection of data during storage.
Customer Data is stored encrypted using the Advanced Encryption Standard.
8\. Measures for ensuring physical security of locations at which personal data are processed.
Axiom headquarters and office spaces have a physical security program that manages visitors, building entrances, CCTVs (closed circuit televisions), and overall office security. All employees, contractors, and visitors are required to identify themselves and have unique access tokens. Physical security controls are inherited from our co-working office provider.
The Services operate on Amazon Web Services (“**AWS**”) and are protected by the security and environmental controls of AWS.
Detailed information about AWS security is available at:
\- [https://aws.amazon.com/security/](https://aws.amazon.com/security/) and
\- [http://aws.amazon.com/security/sharing-the-security-responsibility/](http://aws.amazon.com/security/sharing-the-security-responsibility/).
For AWS SOC Reports, please see:
\- [https://aws.amazon.com/compliance/soc-faqs/](https://aws.amazon.com/compliance/soc-faqs/)
9\. Measures for ensuring events logging.
Axiom monitors access to applications, tools, and resources that process or store Customer Data, including cloud services. Monitoring of security logs is centralized by the security team. Log activities are investigated when necessary and escalated appropriately.
10\. Measures for ensuring systems configuration, including default configuration.
Axiom applies Secure Software Development Lifecycle (Secure SDLC) standards to perform numerous security-related activities for the Services across different phases of the product creation lifecycle from requirements gathering and product design all the way through product deployment. These activities include, but are not limited to, the performance of (a) internal security reviews before new Services are deployed; (b) bi-annual penetration testing by independent third parties; and (c) threat models for new Services to detect any potential security threats and vulnerabilities.
Axiom adheres to a change management process to administer changes to the production environment for the Services, including changes to its underlying software, applications, and systems. Monitors are in place to notify the security team of changes made to critical infrastructure and services that do not adhere to the change management processes.
11\. Measures for internal IT and IT security governance and management.
Axiom maintains a risk-based assessment security program. The framework for Axiom’s security program includes administrative, organizational, technical, and physical safeguards reasonably designed to protect the Services and confidentiality, integrity, and availability of Customer Data. Axiom’s security program is intended to be appropriate to the nature of the Services and the size and complexity of Axiom’s business operations. Axiom has a separate and dedicated Information Security team that manages Axiom’s security program. This team facilitates and supports independent audits and assessments performed by third parties. Axiom’s security framework is based on the ISO 27001 Information Security Management System and includes programs covering: Policies and Procedures, Asset Management, Access Management, Cryptography, Physical Security, Operations Security, Communications Security, Business Continuity Security, People Security, Product Security, Cloud and Network Infrastructure Security, Security Compliance, Third-Party Security, Vulnerability Management, and Security Monitoring and Incident Response. Security is managed at the highest levels of the company, with security and technology leadership meeting with executive management regularly to discuss issues and coordinate company-wide security initiatives. Information security policies and standards are reviewed and approved by management at least annually and are made available to all Axiom employees for their reference.
12\. Measures for certifications/assurance of processes and products.
Axiom engages independent third parties to audit and assess its security program against various frameworks, including ISO 27001 and SOC 2 Type 2, and to perform bi-annual application penetration testing.
13\. Measures for ensuring data minimization.
Axiom Customers unilaterally determine what Customer Data they route through the Axiom Services and how the Services are configured. As such, Axiom operates on a shared responsibility model. Axiom provides tools within the Services that give Customers control over exactly what data enters the platform and enable Customers to block data at the Source level. Additionally, Axiom allows Customers to delete and suppress Customer Data on demand.
14\. Measures for ensuring data quality.
Axiom has a three-fold approach for ensuring data quality. These measures include: (i) unit testing to ensure the quality of logic used to make API calls, (ii) volume testing to ensure the code is able to scale, and (iii) daily end-to-end testing to ensure that the input values match expected values. Axiom applies these measures across the board, both to ensure the quality of any Usage Data that Axiom collects and to ensure that the Axiom Platform is operating in accordance with the documentation.
Each Axiom Customer chooses what Customer Data they route through the Axiom Services and how the Services are configured. As such, Axiom operates on a shared responsibility model. Axiom ensures that data quality is maintained from the time a Customer sends Customer Data into the Services and until that Customer Data leaves Axiom to flow to a downstream destination.
15\. Measures for ensuring limited data retention.
Axiom Customers unilaterally determine what Customer Data they route through the Axiom Services and how the Services are configured. As such, Axiom operates on a shared responsibility model. If a Customer is unable to delete Customer Data via the self-service functionality of the Services, then Axiom deletes Customer Data upon the Customer's written request, within the timeframe specified in the Data Protection Addendum and in accordance with Applicable Data Protection Law.
16\. Measures for ensuring accountability.
Axiom has adopted measures for ensuring accountability, such as implementing data protection policies across the business, publishing Axiom's Information Security Policy (available at [https://axiom.co/security](https://axiom.co/security)), maintaining documentation of processing activities, and recording and reporting Security Incidents involving Personal Data. Axiom conducts regular third-party audits to ensure compliance with our privacy and security standards.
17\. Measures for allowing data portability and ensuring erasure.
Axiom's Customers have direct relationships with their end users and are responsible for responding to requests from their end users who wish to exercise their rights under Applicable Data Protection Laws. Axiom has functionality that allows Customers to delete and suppress Customer Data. Axiom specifies in the Data Protection Addendum that it will provide such assistance as Customer may reasonably require to comply with Customer's obligations under Applicable Data Protection Laws to respond to requests from individuals to exercise their rights under Applicable Data Protection Laws (e.g., rights of data access, rectification, erasure, restriction, portability and objection). If Axiom receives a request from a Data Subject in relation to their Customer Data, Axiom will advise the Data Subject to submit their request to Customer, and Customer will be responsible for responding to any such request.
18\. For transfers to \[sub]-processors, also describe the specific technical and organizational measures to be taken by the \[sub]-processor to be able to provide assistance to the controller and, for transfers from a processor to a \[sub]-processor, to the data exporter.
When Axiom engages a sub-processor under this Addendum, Axiom and the sub-processor enter into an agreement with data protection terms substantially similar to those contained herein. Each sub-processor agreement must ensure that Axiom is able to meet its obligations to Customer. In addition to implementing technical and organizational measures to protect personal data, sub-processors must a) notify Axiom in the event of a Security Incident so Axiom may notify Customer; b) delete data when instructed by Axiom in accordance with Customer’s instructions to Axiom; c) not engage additional sub-processors without authorization; d) not change the location where data is processed; and e) not process data in a manner which conflicts with Customer’s instructions to Axiom.
**SCHEDULE 3**
**CROSS BORDER DATA TRANSFER MECHANISM**
1\. **Definitions**
a. **“Standard Contractual Clauses”** means, depending on the circumstances unique to any particular Customer, any of the following:
(i) UK Standard Contractual Clauses; and (ii) 2021 Standard Contractual Clauses.
b. “**UK Standard Contractual Clauses**” means:
(i) Standard Contractual Clauses for data controller to data processor transfers approved by the European Commission in decision 2010/87/EU (“**UK Controller to Processor SCCs**”); and
(ii) Standard Contractual Clauses for data controller to data controller transfers approved by the European Commission in decision 2004/915/EC (“**UK Controller to Controller SCCs**”).
c. "**2021 Standard Contractual Clauses**" means the Standard Contractual Clauses approved by the European Commission in decision 2021/914.
**2. UK Standard Contractual Clauses**. For data transfers from the United Kingdom that are subject to the UK Standard Contractual Clauses, the UK Standard Contractual Clauses will be deemed entered into (and incorporated into this Addendum by reference) and completed as follows:
a. The UK Controller to Processor SCCs will apply where Axiom is processing Customer Data. The illustrative indemnification clause will not apply. Schedule 1 serves as Appendix 1 of the UK Controller to Processor SCCs. Schedule 2 serves as Appendix 2 of the UK Controller to Processor SCCs.
b. The UK Controller to Controller SCCs will apply where Axiom is processing Usage Data. In Clause II(h), Axiom will process personal data in accordance with the data processing principles set forth in Annex A of the UK Controller to Controller SCCs. The illustrative commercial clause will not apply. Schedule 1 serves as Annex B of the UK Controller to Controller SCCs. Personal Data transferred under these clauses may only be disclosed to the following categories of recipients: i) Axiom’s employees, agents, Affiliates, advisors and independent contractors with a reasonable business purpose for needing such personal data; ii) Axiom vendors that, in their performance of their obligations to Axiom, must process such personal data acting on behalf of and according to instructions from Axiom; and iii) any person (natural or legal) or organisation to whom Axiom may be required by applicable law or regulation to disclose personal data, including law enforcement authorities, central and local government.
**3. The 2021 Standard Contractual Clauses**. For data transfers from the European Economic Area, the UK, and Switzerland that are subject to the 2021 Standard Contractual Clauses, the 2021 Standard Contractual Clauses will apply in the following manner:
a. Module One (Controller to Controller) will apply where Customer is a controller of Usage Data and Axiom is a controller of Usage Data.
b. Module Two (Controller to Processor) will apply where Customer is a controller of Customer Data and Axiom is a processor of Customer Data;
c. For each Module, where applicable:
(i) in Clause 7, the option docking clause will not apply;
(ii) in Clause 9, Option 2 will apply, and the time period for prior notice of sub-processor changes will be as set forth in Section 6 (Subprocessing) of this Addendum;
(iii) in Clause 11, the optional language will not apply;
(iv) in Clause 17 (Option 1), the 2021 Standard Contractual Clauses will be governed by Irish law;
(v) in Clause 18(b), disputes will be resolved before the courts of Ireland;
(vi) In Annex I, Part A:
Data Exporter: Customer and authorized Affiliates of Customer.
Contact Details: Customer’s account owner email address, or to the email address(es) for which Customer elects to receive privacy communications.
Data Exporter Role: The Data Exporter’s role is outlined in Section 3 of this Addendum Schedule.
Signature & Date: By entering into the Agreement, Data Exporter is deemed to have signed these Standard Contractual Clauses incorporated herein, including their Annexes, as of the Effective Date of the Agreement.
Data Importer: Axiom Inc.
Contact Details: Axiom Privacy Team – [privacy@axiom.co](mailto:privacy@axiom.co)
Data Importer Role: The Data Importer’s role is outlined in Section 3 of this Addendum Schedule.
Signature & Date: By entering into the Agreement, Data Importer is deemed to have signed these Standard Contractual Clauses, incorporated herein, including their Annexes, as of the Effective Date of the Agreement.
(vii) In Annex I, Part B:
The categories of data subjects are described in Schedule 1, Section 4.
The sensitive data transferred is described in Schedule 1, Section 6.
The frequency of the transfer is on a continuous basis for the duration of the Agreement.
The nature of the processing is described in Schedule 1, Section 1.
The purpose of the processing is described in Schedule 1, Section 1.
The period of the processing is described in Schedule 1, Section 3.
For transfers to sub-processors, the subject matter, nature, and duration of the processing is outlined at [https://trust.axiom.co/subprocessors](https://trust.axiom.co/subprocessors).
(viii) In Annex I, Part C: The Irish Data Protection Commission will be the competent supervisory authority.
(ix) Schedule 2 serves as Annex II of the Standard Contractual Clauses.
4\. As to the specific modules, the parties agree that the following modules apply, as the circumstances of the transfer require:
Controller-Controller - Module One
Controller-Processor - Module Two
5\. To the extent there is any conflict between the Standard Contractual Clauses and any other terms in this Addendum, including Schedule 4 (Jurisdiction Specific Terms), the provisions of the Standard Contractual Clauses will prevail.
**SCHEDULE 4**
**JURISDICTION SPECIFIC TERMS**
1\. California
a. The definition of “**Applicable Data Protection Law**” includes the California Consumer Privacy Act (“**CCPA**”).
b. The terms “**business**”, “**commercial purpose**”, “**service provider**”, “**sell**” and “**personal information**” have the meanings given in the CCPA.
c. With respect to Customer Data, Axiom is a service provider under the CCPA.
d. Axiom will not (a) sell Customer Data; (b) retain, use or disclose any Customer Data for any purpose other than for the specific purpose of providing the Services, including retaining, using or disclosing the Customer Data for a commercial purpose other than providing the Services; or (c) retain, use or disclose the Customer Data outside of the direct business relationship between Axiom and Customer.
e. The parties acknowledge and agree that the Processing of Customer Data authorized by Customer’s instructions described in Section 5 of this Addendum is integral to and encompassed by Axiom’s provision of the Services and the direct business relationship between the parties.
f. Notwithstanding anything in the Agreement or any Order Form entered in connection therewith, the parties acknowledge and agree that Axiom’s access to Customer Data does not constitute part of the consideration exchanged by the parties in respect of the Agreement.
g. To the extent that any Usage Data (as defined in the Agreement) is considered Personal Data, if and when Axiom is subject to the CCPA, Axiom is the business under the CCPA with respect to such data and will Process such data in accordance with its Privacy Policy. As of October 1, 2021, Axiom is not subject to the CCPA as a business.
2\. EEA
a. The definition of “**Applicable Data Protection Laws**” includes the General Data Protection Regulation (EU 2016/679) (“**GDPR**”).
b. When Axiom engages a Subprocessor under Section 6 (Subprocessing), it will:
(i) require any appointed Subprocessor to protect Customer Data to the standard required by Applicable Data Protection Laws, such as including the same data protection obligations referred to in Article 28(3) of the GDPR, in particular providing sufficient guarantees to implement appropriate technical and organizational measures in such a manner that the processing will meet the requirements of the GDPR; and
(ii) require any appointed Subprocessor to agree in writing to only process data in a country that the European Union has declared to have an “adequate” level of protection; or to only process data on terms equivalent to the Standard Contractual Clauses.
c. GDPR Penalties. Notwithstanding anything to the contrary in this Addendum or in the Agreement (including, without limitation, either party’s indemnification obligations), neither party will be responsible for any GDPR fines issued or levied under Article 83 of the GDPR against the other party by a regulatory authority or governmental body in connection with such other party’s violation of the GDPR.
3\. Switzerland
a. The definition of “Applicable Data Protection Laws” includes the Swiss Federal Act on Data Protection.
b. When Axiom engages a Subprocessor under Section 6 (Subprocessing), it will:
(i) require any appointed Subprocessor to protect Customer Data to the standard required by Applicable Data Protection Laws, such as including the same data protection obligations referred to in Article 28(3) of the GDPR, in particular providing sufficient guarantees to implement appropriate technical and organizational measures in such a manner that the processing will meet the requirements of the GDPR; and
(ii) require any appointed Subprocessor to agree in writing to only process data in a country that the European Union has declared to have an “adequate” level of protection; or to only process data on terms equivalent to the Standard Contractual Clauses.
4\. United Kingdom
a. References in this Addendum to GDPR will to that extent be deemed to be references to the corresponding laws of the United Kingdom (including the UK GDPR and Data Protection Act 2018).
b. When Axiom engages a Subprocessor under Section 6 (Subprocessing), it will:
(i) require any appointed Subprocessor to protect Customer Data to the standard required by Applicable Data Protection Laws, such as including the same data protection obligations referred to in Article 28(3) of the GDPR, in particular providing sufficient guarantees to implement appropriate technical and organizational measures in such a manner that the processing will meet the requirements of the GDPR; and
(ii) require any appointed Subprocessor to agree in writing to only process data in a country that the European Union has declared to have an “adequate” level of protection; or to only process data on terms equivalent to the Standard Contractual Clauses.
# HIPAA anti-retaliation policy
Source: https://axiom.co/docs/legal/hipaa
Protection for individuals who report HIPAA violations or participate in compliance activities.
{/* vale off */}
Title II of the Federal Health Insurance Portability and Accountability Act (42 USC 1320d to 1320d-8, and Section 264 of Public Law 104-191), and its accompanying Privacy Regulations, 45 CFR Parts 160 and 164, require that "covered entities," as defined by the HIPAA Privacy Regulations, refrain from any retaliatory acts targeted toward those who file complaints or otherwise report HIPAA violations or infractions. The purpose of this policy is to clearly state the position of Axiom.co on intimidation and retaliation. This policy applies to all workforce, volunteers, and management of Axiom.co.
Under no circumstances shall Axiom.co intimidate, threaten, coerce, discriminate against, or take other retaliatory action against any individual for:
1. The exercise of rights guaranteed under HIPAA, including the filing of a HIPAA complaint against Axiom.co;
2. The filing of a HIPAA complaint with the Secretary of HHS;
3. Testifying, assisting, or participating in a HIPAA investigation, compliance review, proceeding, or hearing;
4. Opposing any act or practice that is counter to the HIPAA regulations, provided the individual or person has a good faith belief that the practice opposed is unlawful, and the manner of the opposition is reasonable and does not involve a disclosure of PHI in violation of HIPAA.
No retaliatory action against an individual or group involved in filing HIPAA complaints or otherwise reporting infractions will be tolerated.
Under no circumstances shall Axiom.co require any member(s) of its workforce, volunteers, or management to waive their rights under HIPAA.
All allegations of HIPAA retaliation against individuals will be reviewed and investigated by Axiom.co in a timely manner.
# Privacy policy
Source: https://axiom.co/docs/legal/privacy
Learn how we collect, use, and protect your personal information and data.
{/* vale off */}
This Privacy Policy explains how Axiom, Inc. ("Axiom", "us", "we" or "our") collects, uses, shares, and otherwise processes personal information in connection with our websites (the "Sites") and online services.
## Personal information we collect
**Information you provide to us.** Personal information that you may provide through the Sites, or when you otherwise communicate with us, includes:
* **Contact information,** such as your first and last name, phone number, email address and company name.
* **Feedback and correspondence,** such as information you provide when you report a problem with the Sites or otherwise correspond with us.
* **Marketing information,** such as your preferences for receiving marketing communications and details about how you engage with marketing communications.
**Information from third party sites.** Our Sites include interfaces that allow you to connect with third party sites, such as when you create an account on our Sites by logging in to your Google account. If you connect to a third party site through the Sites, you authorize us to access, use and store the information that you agreed the third party site could provide to us based on your settings on that third party site. We will access, use and store that information in accordance with this Privacy Policy. You can revoke our access to the information you provide in this way at any time by amending the appropriate settings from within your account settings on the applicable third party site.
**Information Collected Automatically.** We may automatically log information about you and your computer or mobile device when you access our Sites. For example, we may log your operating system name and version, manufacturer and model, device identifier, browser type, screen resolution, the website you visited before browsing to our Sites, pages you viewed, how long you spent on a page, access times, general location information such as city, state or geographic area, and information about your use of and actions on our Sites. We collect this information about you using cookies. Please refer to the [Cookies and Similar Technologies](/legal/cookies) section for more details.
To make our Sites more useful to you, our servers (which may be hosted by a third party service provider) collect information from you, including your browser type, operating system, Internet Protocol ("IP") address (a number that is automatically assigned to your computer when you use the Internet, which may vary from session to session), domain name, and/or a date/time stamp for your visit.
## How we use your personal information
We use your personal information for the following purposes and as otherwise described to you in this Privacy Policy or at the time of collection:
**To provide you with our products and services.** We use your personal information to register you and/or your company for an account and to communicate with you regarding Axiom and our Sites, including by sending you announcements, updates, and support and administrative messages.
**To operate the Sites.** We use your personal information to:
* operate, maintain, administer and improve the Sites
* improve the quality of experience when you interact with our Sites;
* better understand your needs and interests, and personalize your experience with the Sites; and
* respond to your service-related requests, questions and feedback.
**For research and development.** We use information automatically collected and other information to analyze trends, administer the Sites, analyze users' movements around the Sites, gather demographic information about our user base as a whole and improve the Sites.
**To send you surveys and marketing communications.** We may send you surveys, promotions or other marketing communications but you may opt out of receiving them as described in the Email communications section below.
**To create anonymous data.** We may create aggregated and other anonymous data from our users' information. We make personal information into anonymous data by removing information that makes the data personally identifiable. We may use this anonymous data and share it with third parties for our lawful business purposes.
**For compliance, fraud prevention and safety.** We may use your personal information as we believe appropriate to (a) investigate violations of and enforce our Terms of Use; (b) protect our, your or others' rights, privacy, safety or property (including by making and defending legal claims); and (c) protect, investigate and deter against fraudulent, harmful, unauthorized, unethical or illegal activity.
**For compliance with law.** We may use your personal information as we believe appropriate to (a) comply with applicable laws, lawful requests and legal process, such as to respond to subpoenas or requests from government authorities; and (b) where permitted by law in connection with a legal investigation.
**With your consent.** In some cases we may ask for your consent to collect, use or share your personal information, such as when required by law or our agreements with third parties.
## How we share your personal information
We do not share your personal information with third parties without your consent, except in the following circumstances:
**Service providers.** We may share your personal information with third party companies and individuals as needed for them to provide us with services in support of the Sites (such as hosting and storage, website analytics, email delivery, marketing, database management services and legal and other professional advice). These third parties will be given limited access to your personal information that is reasonably necessary for them to provide their services.
**Compliance, fraud prevention, safety.** We may disclose your personal information as we believe appropriate to government or law enforcement officials or private parties for the purposes described above under the following sections: For compliance, fraud prevention and safety and For compliance with law.
**Business transfers.** We may sell, transfer or otherwise share some or all of our business or assets, including your personal information, in connection with a business deal (or potential business deal) such as a corporate divestiture, merger, consolidation, acquisition, reorganization or sale of assets, or in the event of bankruptcy or dissolution.
## Your choices
In addition to your choices in connection with cookies (see Cookies and Similar Technologies), you have several choices regarding use of information with respect to our Sites:
**Email communications.** We may periodically send you information and emails that directly promote the use of our Sites. When you receive newsletters or promotional communications from us, you may indicate a preference to stop receiving further communications from us and you will have the opportunity to "opt-out" by following the unsubscribe instructions provided in the e-mail you receive or by contacting us directly at [contact@axiom.co](mailto:contact@axiom.co). You may continue to receive service-related and other non-marketing emails.
**Changes to your personal information.** If your personal information changes during your relationship with us, you may update your personal information by emailing us at [privacy@axiom.co](mailto:privacy@axiom.co). You may also request deletion of your personal information by us, and we will use commercially reasonable efforts to honor your request, but please note that we may be required to keep such information and not delete it (or to keep this information for a certain time, in which case we will comply with your deletion request only after we have fulfilled such requirements). When we delete any information, it will be deleted from the active database, but may remain in our archives. We may also retain your information for fraud or similar purposes.
**Choosing not to share your personal information.** Where we are required by law to collect your personal information, or where we need your personal information in order to provide the Sites to you, if you do not provide this information when requested (or you later ask to delete it), we may not be able to provide you with the Sites. We will tell you what information you must provide to receive the Sites by designating it as required or through other appropriate means.
We may allow service providers and other third parties to use cookies and other tracking technologies to track your browsing activity over time and across the Sites and third party websites. For more details, see our Cookie Policy. Some Internet browsers may be configured to send "Do Not Track" signals to the online services that you visit. We currently do not respond to "Do Not Track" or similar signals. To find out more about "Do Not Track," please visit [http://www.allaboutdnt.com](http://www.allaboutdnt.com).
## Other important privacy information
**Security.** The security of your personal information is important to us. We take a number of organizational, technical and physical measures designed to protect the personal information we collect. However, security risk is inherent in all internet and information technologies and we cannot guarantee the absolute security of your personal information. We will comply with applicable laws and regulations requiring that we notify you in the event your personal information is compromised as a result of a breach of our security measures.
**Third party sites and services.** Our Sites may contain links to third party websites. When you click on a link to any other website or location, you will leave our Sites and go to another site and another entity may collect personal and/or other information from you. We have no control over, do not review, and cannot be responsible for, these outside websites or their content. Please be aware that the terms of this Privacy Policy do not apply to these outside websites or content, or to any collection of your personal information after you click on links to such outside websites. We encourage you to read the privacy policies of every website you visit. The links to third party websites or locations are for your convenience and do not signify our endorsement of such third parties or their products, content or websites.
**International Transfers.** Axiom is headquartered in the United States, and your personal information may be collected, used and stored in the United States or other locations outside of your state, province, country or other governmental jurisdiction, where privacy laws may not be as protective as those in your jurisdiction.
**Changes to this Privacy Policy.** We reserve the right to modify this Privacy Policy at any time. If we make changes to this Privacy Policy we will post them on the Sites and indicate the effective date of the change. If we make material changes to this Privacy Policy we will notify you by email or through the Sites.
## How to contact us
**Axiom, Inc.**\
1390 Market Street\
Suite 200\
San Francisco, CA 94102\
[privacy@axiom.co](mailto:privacy@axiom.co)
## Notice to European users
The following applies to individuals in the European Economic Area.
**Controller**. Axiom is the controller of your personal information covered by this Privacy Policy for purposes of European data protection legislation.
**Legal bases for processing**. The legal bases of our processing of your personal information are described in the table below.
| Processing purpose | Legal basis |
| --- | --- |
| To provide you with our products and services | You have entered a contract with us and we need to use your personal information to provide services you have requested or take steps that you request prior to providing services. |
| To provide you with our products and services<br />For research and development<br />To send you surveys and marketing communications<br />For compliance, fraud prevention and safety | These processing activities constitute our legitimate interests. We consider and balance the potential impact on your rights before we process your personal information for our legitimate interests. We do not use your personal information for activities where our interests are overridden by the impact on you (unless we have your consent or are otherwise required or permitted to by law). |
| For compliance with law | Processing is necessary to comply with our legal obligations. |
| With your consent | Processing is based on your consent. Where we rely on your consent you have the right to withdraw it anytime in the manner indicated in the Sites. |
| To share your personal information as described in this Privacy Policy | This sharing constitutes our legitimate interests, and in some cases may be necessary to comply with our legal obligations. |
## Retention
Generally, we retain your personal information for as long as necessary to process your personal information as described in this Privacy Policy unless a longer retention period is required or permitted by law.
## Your rights
European data protection laws give you certain rights regarding your personal information. You may ask us to take the following actions in relation to your personal information that we hold:
* **Access.** Provide you with information about our processing of your personal information and give you access to your personal information.
* **Correct.** Update or correct inaccuracies in your personal information.
* **Delete.** Delete your personal information.
* **Transfer.** Transfer a machine-readable copy of your personal information to you or a third party of your choice.
* **Restrict.** Restrict the processing of your personal information.
* **Object.** Object to our reliance on our legitimate interests as the basis of our processing of your personal information that impacts your rights.
You may submit these requests by email to [privacy@axiom.co](mailto:privacy@axiom.co) or our postal address provided above. We may request specific information from you to help us confirm your identity and process your request. Applicable law may require or permit us to decline your request. If we decline your request, we will tell you why, subject to legal restrictions. If you would like to submit a complaint about our use of your personal information or response to your requests regarding your personal information, you may contact us or submit a complaint to the data protection regulator in your jurisdiction. You can find your data protection regulator here.
## Personal data export
If we export your personal information from the European Economic Area to a country outside of it and are required to apply additional safeguards to that personal information under European data protection legislation, we will do so. Such safeguards may include applying the European Commission model contracts for the transfer of personal data to third countries described here. Please contact us for further information about any such transfers or the specific safeguards applied.
# Terms of service
Source: https://axiom.co/docs/legal/terms-of-service
The terms and conditions governing your use of our services and platform.
{/* vale off */}
PLEASE READ THESE TERMS OF SERVICE CAREFULLY BEFORE USING THE SERVICE OFFERED BY AXIOM, INC. (“**AXIOM**”). BY MUTUALLY EXECUTING ONE OR MORE SERVICE ORDERS WITH AXIOM WHICH REFERENCE THESE TERMS (EACH, A “**SERVICE** **ORDER**”) OR BY ACCESSING OR USING THE SERVICES IN ANY MANNER, YOU (“**YOU**” OR “**CUSTOMER**”) AGREE TO BE BOUND BY THESE TERMS (TOGETHER WITH THE APPLICABLE SERVICE DESCRIPTION, THE “**AGREEMENT**”) TO THE EXCLUSION OF ALL OTHER TERMS. YOU REPRESENT AND WARRANT THAT YOU HAVE THE AUTHORITY TO ENTER INTO THIS AGREEMENT; IF YOU ARE ENTERING INTO THIS AGREEMENT ON BEHALF OF AN ORGANIZATION OR ENTITY, REFERENCES TO “CUSTOMER” AND “YOU” IN THIS AGREEMENT, REFER TO THAT ORGANIZATION OR ENTITY. IF YOU DO NOT AGREE TO ALL OF THE FOLLOWING, YOU MAY NOT USE OR ACCESS THE SERVICES IN ANY MANNER. IF THE TERMS OF THIS AGREEMENT ARE CONSIDERED AN OFFER, ACCEPTANCE IS EXPRESSLY LIMITED TO SUCH TERMS.
**1 SCOPE OF SERVICE AND RESTRICTIONS**
**1.1 Access to and Scope of Service**. Subject to Axiom’s receipt of the applicable Fees with respect to the service specified in the corresponding Service Description (the “**Service**”), Axiom will use commercially reasonable efforts to make the Service available to Customer as set forth in this Agreement. Subject to Customer’s compliance with the terms and conditions of the Agreement, Customer may access and use the Service as specified in the Service Description and the applicable Supplemental Terms. “**Service Description**” means the applicable use limitations, fees, use period, Supplemental Terms and related limitations with respect to the applicable Service. The applicable Service Description is available on the Axiom website, presented in connection with billing or invoicing, or as specified in a Service Order.
**1.2 Supplemental Terms**. The Service is available in a number of versions, packages and implementation types (each an “**Offering**”). Each Offering may be subject to additional requirements, use limitation and associated specifics with respect to the operation and use of such Offering (the “**Supplemental Terms**”). The Supplemental Terms will be specified in the applicable Service Description.
**1.3 Trials.** If Customer is accessing or making use of the Service on a trial basis or no-fee basis as identified in the corresponding Service Description (the “**Trial**”), Customer may use the Service during the Trial provided that (a) such access or use does not exceed the scope of the corresponding Service Description; (b) Customer acknowledges and agrees that the Trial is made available on an “as-is” basis without support, warranty, or indemnification; and (c) Axiom shall have no liability or obligation with respect to any Trial.
**1.4 Restrictions**. Customer will use the Service only in accordance with all applicable laws, including, but not limited to, rules and regulations related to data and personally identifiable information. Customer agrees not to, and will not allow any third party to: (i) remove or otherwise alter any proprietary notices or labels from the Service or any portion thereof; (ii) reverse engineer, decompile, disassemble, or otherwise attempt to discover the underlying structure, ideas, or algorithms of the Service or any software used to provide or make the Service available; (iii) rent, resell or otherwise allow any third party access to or use of the Service; (iv) use or access the Service in any manner inconsistent with the applicable Service Description; or (v) use or access the Service in any manner inconsistent with the Axiom Acceptable Use Policy available at the following URL: [https://axiom.co/docs/legal/acceptable-use-policy](https://axiom.co/docs/legal/acceptable-use-policy) (the “**AUP**”).
**1.5 Ownership**. Axiom retains all right, title, and interest in and to: the Service, Documentation, Axiom Confidential Information; any improvements to and derivative works of the same; Axiom Templates; and all other intellectual property created, used, provided, or made available by Axiom under or in connection with the Service (collectively, “**Axiom IP**”). Customer may from time to time provide suggestions, comments, or other feedback to Axiom with respect to the Service or Documentation (“**Feedback**”). Customer shall, and hereby does, grant to Axiom a nonexclusive, worldwide, perpetual, irrevocable, transferable, sublicensable, royalty-free, fully paid-up license to use the Feedback for any purpose.
**1.6 Customer Data**. Customer is solely responsible for Customer Data including, but not limited to: (a) compliance with all applicable laws and this Agreement; (b) any claims relating to Customer Data; and (c) any claims that Customer Data infringes, misappropriates, or otherwise violates the rights of any third party. Customer acknowledges and agrees that Customer Data may be irretrievably deleted after fifteen (15) days following a termination or expiration of this Agreement. Customer authorizes Axiom to use Customer Data as necessary to provide the Service to Customer. For purposes of this Agreement, “**Customer Data**” shall mean any data, information or other material provided, uploaded, or submitted by Customer to the Service in the course of using the Service. Customer shall retain all right, title and interest in and to the Customer Data, including all intellectual property rights therein.
**1.7 Telemetry Data**. Axiom may collect or otherwise receive Telemetry Data and use the same in connection with improvements to the Service, and to monitor Customer’s compliance with the Service Description and the terms of this Agreement. “**Telemetry Data**” shall mean data and information generated by or collected by Axiom regarding Customer’s use of the Service, the health and performance of the Service, and related information, excluding any Customer Data. Telemetry Data will be held in aggregated and anonymized format and will not identify Customer, or reveal any Customer Confidential Information.
**1.8 Personal Data.** Customer acknowledges and agrees that the exchange of personal information subject to applicable personal data laws or regulations (“PII”) is not required to make use of the Service, and to the extent Customer transfers, submits, or otherwise makes PII available to Axiom or the Service, Customer agrees to Axiom’s Data Processing Agreement available at the following URL: [https://axiom.co/docs/legal/data-processing](https://axiom.co/docs/legal/data-processing) (the “DPA”).
**1.9 Support**. To the extent Axiom support is specified in the applicable Service Description, Axiom will use commercially reasonable efforts to provide support for the Service according to the Axiom Support Policy available at the following URL: [https://axiom.co/support](https://axiom.co/support) (“**Axiom Support**”).
**1.10 Service Suspension**. Axiom may suspend Customer’s access to or use of the Service as follows: (a) immediately if Axiom reasonably believes Customer’s use of the Service may pose a security risk to or may adversely impact the Service; (b) immediately if Customer becomes insolvent, has ceased to operate in the ordinary course, made an assignment for the benefit of creditors, or becomes the subject of any bankruptcy, reorganization, liquidation, dissolution or similar proceeding; (c) following thirty (30) days written notice if Customer is in breach of this Agreement or any Service Description (and has not cured such breach, if curable, within the thirty (30) days of such notice); or (d) if Customer has failed to pay Axiom the Fees with respect to the Service. If any amount owing by Customer is thirty (30) or more days overdue (or ten (10) or more days overdue in the case of invoices to be paid by credit card), Axiom may, without limiting any rights and remedies, accelerate Customer’s unpaid fee obligations to become immediately due and payable, and suspend the provision of the Service to Customer until the overdue amounts are paid in full.
**2 FEES, TAXES, AND AUTHORIZED RESELLERS**
**2.1 Fees and Invoicing Terms**. Customer shall pay to Axiom the fees as set forth in each applicable Service Description according to the billing frequency and method stated therein or otherwise presented to Customer (the “**Fees**”). Customer shall provide accurate and updated billing contact information. If Fees are not received by Axiom by the due date, then without limiting Axiom’s rights or remedies: (a) those charges may accrue late interest at the rate of 1.5% of the outstanding balance per month, or the maximum rate permitted by law, whichever is lower, and (b) Axiom may condition future renewals and Service Descriptions on different payment terms.
**2.2 Taxes**. Any and all payments made by Customer in accordance with this Agreement are exclusive of any taxes that might be assessed by any jurisdiction. Customer shall pay or reimburse Axiom for all value-added, sales, use, property, and similar taxes; all customs duties, import fees, stamp duties, license fees and similar charges; and all other mandatory payments to government agencies of whatever kind, except taxes imposed on the net or gross income of Axiom. All amounts payable to Axiom under this Agreement shall be without set-off and without deduction of any taxes, levies, imposts, charges, withholdings or duties of any nature which may be levied or imposed, including without limitation, value added tax, customs duty and withholding tax. To the extent Customer is required by the local taxing authority to withhold value added tax or a similar withholding tax, Customer agrees to true-up Fees payable to Axiom to account for such withholding.
**2.3 Authorized Resellers**. Customer may purchase subscriptions to the Service through a third party authorized in writing by Axiom to resell subscriptions to the Service (an “**Authorized Reseller**”). Service subscriptions resold to Customer by an Authorized Reseller (each a “**Resale Transaction**”) are subject to the terms and conditions of this Agreement, other than Sections 2.1, and 2.2. Customer will pay the Authorized Reseller the applicable fees according to the payment terms, fees, refund rights (if any), and associated commercial terms determined by and between Customer and the corresponding Authorized Reseller.
**3 TERM AND TERMINATION**
**3.1 Term**. The term of this Agreement shall commence on the Effective Date and unless terminated earlier according to this Section 3, will end on the last day of the term specified in the Service Description (the “**Term**”). Unless otherwise specified in the Service Description or the applicable Service Order, each Service Description will renew automatically at the end of the applicable term (each such renewal, a “**Service Renewal**”), unless either party provides to the other advance written notice with respect to non-renewal at least thirty (30) days prior to the end of the then current term. Customer acknowledges and agrees that each Service Renewal shall be subject to the then-current, on-demand standard use rates, unless otherwise specified in the applicable Service Description.
**3.2 Termination**. This Agreement and the Service Descriptions hereunder may be terminated: (a) by either party if the other has materially breached this Agreement, within thirty (30) calendar days after written notice of such breach to the other party if the breach is remediable or immediately upon notice if the breach is not remediable; or (b) by Axiom upon written notice to Customer if Customer (i) has made or attempted to make any assignment for the benefit of its creditors or any compositions with creditors, (ii) has any action or proceedings under any bankruptcy or insolvency laws taken by or against it which have not been dismissed within sixty (60) days, (iii) has effected a compulsory or voluntary liquidation or dissolution, or (iv) has undergone the occurrence of any event analogous to any of the foregoing under the law of any jurisdiction.
**3.3 Effect of Termination**. Upon any expiration or termination of this Agreement, Customer shall (i) immediately cease use of the Service, and (ii) return all Axiom Confidential Information and other materials and information provided by Axiom. Any termination or expiration shall not relieve Customer of its obligation to pay all Fees accruing prior to termination. If the Agreement is terminated due to Section 3.2 (a), Customer shall pay to Axiom all Fees set forth in the corresponding Service Description(s).
**3.4 Survival.** The following provisions will survive termination of this Agreement: Sections 1.4, 1.5, 1.7, 2.1, 2.2, 3, 4, 5, 6.3, 7, and 8.
**4 CONFIDENTIALITY**
During the term of this Agreement, either party may provide the other party with confidential and/or proprietary materials and information (“**Confidential Information**”). All materials and information provided by the disclosing party and identified at the time of disclosure as “Confidential” or bearing a similar legend, and all other information that the receiving party reasonably should have known was the Confidential Information of the disclosing party, shall be considered Confidential Information. This Agreement is Confidential Information, and all pricing terms are Axiom Confidential Information. The receiving party shall maintain the confidentiality of the Confidential Information and will not disclose such information to any third party without the prior written consent of the disclosing party. The receiving party will only use the Confidential Information internally for the purposes contemplated hereunder. The obligations in this Section 4 shall not apply to any information that: (a) is made generally available to the public without breach of this Agreement, (b) is developed by the receiving party independently from and without reference to the Confidential Information, (c) is disclosed to the receiving party by a third party without restriction, or (d) was in the receiving party’s lawful possession prior to the disclosure and was not obtained by the receiving party either directly or indirectly from the disclosing party. The receiving party may disclose Confidential Information as required by law or court order; provided that, the receiving party provides the disclosing party with prompt written notice thereof and uses the receiving party’s best efforts to limit disclosure. At any time, upon the disclosing party’s written request, the receiving party shall return to the disclosing party all disclosing party’s Confidential Information in its possession, including, without limitation, all copies and extracts thereof.
**5 INDEMNIFICATION**
**5.1 Indemnification by Customer**. Customer will defend, indemnify, and hold Axiom, its affiliates, suppliers and licensors harmless and each of their respective officers, directors, employees and representatives from and against any claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys’ fees) arising out of or relating to any third party claim with respect to: (a) Customer Data; (b) breach of this Agreement or violation of applicable law by Customer; or (c) alleged infringement or misappropriation of a third party’s intellectual property rights resulting from Customer Data.
**5.2 Indemnification by Axiom**. Axiom will defend, indemnify, and hold Customer harmless from and against any third-party claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys’ fees) arising from claims by a third party that Customer’s use of the Service directly infringes or misappropriates a third party’s intellectual property rights (an “**Infringement Claim**”). Notwithstanding anything to the contrary, Axiom shall have no obligation to indemnify or reimburse Customer with respect to any Infringement Claim to the extent arising from: (a) the combination of any Customer Data with the Service; (b) the combination of any products or services, other than those provided by Axiom to Customer under this Agreement, with the Service; or (c) non-discretionary designs or specifications provided to Axiom by Customer that caused such Infringement Claim. Customer agrees to reimburse Axiom for any and all damages, losses, costs and expenses incurred as a result of any of the foregoing actions.
**5.3 Notice of Claim and Indemnity Procedure**. In the event of a claim for which a party seeks indemnity or reimbursement under this Section 5 (each an “**Indemnified Party**”) and as conditions of the indemnity, the Indemnified Party shall: (a) notify the indemnifying party in writing as soon as practicable, but in no event later than thirty (30) days after receipt of such claim, together with such further information as is necessary for the indemnifying party to evaluate such claim; and (b) allow the indemnifying party to assume full control of the defense of the claim, including retaining counsel of its own choosing. Upon the assumption by the indemnifying party of the defense of a claim with counsel of its choosing, the indemnifying party will not be liable for the fees and expenses of additional counsel retained by any Indemnified Party. The Indemnified Party shall cooperate with the indemnifying party in the defense of any such claim. Notwithstanding the foregoing provisions, the indemnifying party shall have no obligation to indemnify or reimburse for any losses, damages, costs, disbursements, expenses, settlement liability of a claim or other sums paid by any Indemnified Party voluntarily, and without the indemnifying party’s prior written consent, to settle a claim. Subject to the maximum liability set forth in Section 7, the provisions of this Section 5 constitute the entire understanding of the parties regarding each party’s respective liability under this Section 5, including but not limited to Infringement Claims (including related claims for breach of warranty) and each party’s sole obligation to indemnify and reimburse any Indemnified Party.
**6 WARRANTY**
**6.1 Warranty.** The Service, when used by Customer in accordance with the provisions of this Agreement and in compliance with the Service documentation published by Axiom (the “**Documentation**”), will conform in material respects to the Documentation during the period of use specified in the applicable Service Description.
**6.2 Exclusive Remedies.** Customer shall report to Axiom, pursuant to the notice provision of this Agreement, any breach of the warranties set forth in this Section 6. In the event of a breach of warranty by Axiom under this Agreement, Customer’s sole and exclusive remedy, and Axiom’s entire liability, shall be prompt correction of any material non-conformance in order to minimize any material adverse effect on Customer’s business.
**6.3 Disclaimer of Warranty**. Axiom does not represent or warrant that the operation of the Service (or any portion thereof) will be uninterrupted or error free, or that the Service (or any portion thereof) will operate in combination with other hardware, software, systems, or data not provided by Axiom, except as expressly specified in the applicable Documentation. CUSTOMER ACKNOWLEDGES THAT, EXCEPT AS EXPRESSLY SET FORTH IN SECTION 6.1, AXIOM MAKES NO EXPRESS OR IMPLIED REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE SERVICE OR SERVICES, OR THEIR CONDITION. AXIOM IS FURNISHING THE WARRANTIES SET FORTH IN SECTION 6.1 IN LIEU OF, AND AXIOM HEREBY EXPRESSLY EXCLUDES, ANY AND ALL OTHER EXPRESS OR IMPLIED REPRESENTATIONS OR WARRANTIES, WHETHER UNDER COMMON LAW, STATUTE OR OTHERWISE, INCLUDING WITHOUT LIMITATION ANY AND ALL WARRANTIES AS TO MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, NON-INFRINGEMENT OF THIRD-PARTY RIGHTS.
**7 LIMITATIONS OF LIABILITY**
IN NO EVENT SHALL AXIOM BE LIABLE FOR ANY LOST PROFITS, BUSINESS INTERRUPTION, REPLACEMENT SERVICE OR OTHER SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, OR INDIRECT DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THEORY OF LIABILITY. AXIOM’S LIABILITY FOR ALL CLAIMS ARISING UNDER THIS AGREEMENT, WHETHER IN CONTRACT, TORT OR OTHERWISE, SHALL NOT EXCEED THE AMOUNT OF FEES PAID OR PAYABLE BY CUSTOMER UNDER THE APPLICABLE SERVICE DESCRIPTION DURING THE TWELVE (12) MONTH PERIOD PRECEDING THE CLAIM.
**8 MISCELLANEOUS**
**8.1 Export Control**. Customer hereby certifies that Customer will comply with all current applicable export control laws applicable to Axiom Confidential Information, Customer’s use of the Service, and Customer Data. Customer agrees to defend, indemnify and hold Axiom harmless from any liability for Customer’s violation of any applicable export control laws.
**8.2 Compliance with Laws.** Customer shall comply with all applicable laws and regulations in its use of the Service and with respect to Customer Data, including without limitation the unlawful gathering or collecting, or assisting in the gathering or collecting of information in violation of any privacy laws or regulations. Customer shall, at its own expense, defend, indemnify and hold harmless Axiom from and against any and all claims, losses, liabilities, damages, judgments, government or federal sanctions, costs and expenses (including attorneys’ fees) incurred by Axiom arising from any claim or assertion by any third party of violation of privacy laws or regulations by Customer or any of its agents, officers, directors or employees.
**8.3 Assignment**. Neither party may transfer and assign its rights and obligations under this Agreement without the prior written consent of the other party. Notwithstanding the foregoing, Axiom may transfer and assign its rights under this Agreement without consent from the other party in connection with a change in control, acquisition or sale of all or substantially all of its assets.
**8.4 Force Majeure**. Neither party shall be responsible for failure or delay in performance caused by events beyond its reasonable control, including but not limited to, acts of God, Internet outage, terrorism, war, fires, earthquakes and other disasters (each a “**Force Majeure**”). Notwithstanding the foregoing: (i) Customer shall be liable for payment obligations for Service rendered; and (ii) if a Force Majeure continues for more than thirty (30) days, either party may terminate this Agreement upon written notice to the other party.
**8.5 Notice**. All notices between the parties shall be in writing and shall be deemed to have been given if personally delivered or sent by registered or certified mail (return receipt), or by recognized courier service.
**8.6 No Agency**. Both parties agree that no agency, partnership, joint venture, or employment is created as a result of this Agreement. Customer does not have any authority of any kind to bind Axiom.
**8.7 Governing Law**. This Agreement and all matters relating to this Agreement shall be construed in accordance with and controlled by the laws of the State of California, without reference to its conflict of law principles. The parties agree to submit to the non-exclusive jurisdiction and venue of the courts located in Santa Clara, California and hereby waive any objections to the jurisdiction and venue of such courts.
**8.8 Entire Agreement**. This Agreement is the complete and exclusive statement of the mutual understanding of the parties and supersedes and cancels all previous written and oral agreements, communications, and other understandings relating to the subject matter of this Agreement, and all waivers and modifications must be in a writing signed by both parties, except as otherwise provided herein. Any term or provision of this Agreement held to be illegal or unenforceable shall be, to the fullest extent possible, interpreted so as to be construed as valid, but in any event the validity or enforceability of the remainder hereof shall not be affected. In the event of a conflict between this Agreement, the Service Description, or Supplemental Terms, the terms of this Agreement shall control.
# Terms of use
Source: https://axiom.co/docs/legal/terms-of-use
Legal agreement governing your use of the axiom.co website.
These Terms of Use constitute a legally binding agreement made between you, whether personally or on behalf of an entity ("**you**") and Axiom, Inc., its affiliates or agents ("**Company**", "**we**", or "**our"**) concerning your access to and use of the axiom.co website as well as any other media form, media channel, mobile website or mobile application related, linked, or otherwise connected thereto (collectively, the "**Site**"). In the event that you are registering for a user account, evaluating our hosted / platform as a service-based service or buying paid-for services from Company, the Terms of Service [https://axiom.co/docs/legal/terms-of-service](/legal/terms-of-service) apply and shall govern your use of such services. We are registered in Delaware, United States and have our registered office at Axiom, Inc. 1390 Market Street, Suite 200, San Francisco, CA 94102. You agree that by accessing the Site, you have read, understood, and agreed to be bound by all of these Terms of Use. IF YOU DO NOT AGREE WITH ALL OF THESE TERMS OF USE, THEN YOU ARE EXPRESSLY PROHIBITED FROM USING THE SITE AND YOU MUST DISCONTINUE USE IMMEDIATELY.\
\
We reserve the right, in our sole discretion, to make changes or modifications to these Terms of Use from time to time. We will alert you about any changes by updating the “Last updated” date of these Terms of Use, and you waive any right to receive specific notice of each such change. Please ensure that you check the applicable Terms every time you use our Site so that you understand which Terms apply. You will be subject to, and will be deemed to have been made aware of and to have accepted, the changes in any revised Terms of Use by your continued use of the Site after the date such revised Terms of Use are posted.\
\
The information provided on the Site is not intended for distribution to or use by any person or entity in any jurisdiction or country where such distribution or use would be contrary to law or regulation or which would subject us to any registration requirement within such jurisdiction or country. Accordingly, those persons who choose to access the Site from other locations do so on their own initiative and are solely responsible for compliance with local laws, if and to the extent local laws are applicable.\
\
The Site is not tailored to comply with industry-specific regulations (Health Insurance Portability and Accountability Act (HIPAA), Federal Information Security Management Act (FISMA), etc.), so if your interactions would be subject to such laws, you may not use this Site. You may not use the Site in a way that would violate the Gramm-Leach-Bliley Act (GLBA).\
\
The Site is intended for users who are at least 18 years old. Persons under the age of 18 are not permitted to use or register for the Site.
Unless otherwise expressly indicated herein, the Site is our proprietary property and all source code, databases, functionality, software, website designs, audio, video, text, photographs, and graphics on the Site (collectively, the “**Content**”) and the trademarks, service marks, and logos contained therein (the “**Marks**”) are owned or controlled by us or licensed to us, and are protected by copyright and trademark laws and various other intellectual property rights and unfair competition laws of the United States, international copyright laws, and international conventions. The Content and the Marks are provided on the Site “AS IS” for your information and personal use only. Except as expressly provided in these Terms of Use, no part of the Site and no Content or Marks may be copied, reproduced, aggregated, republished, uploaded, posted, publicly displayed, encoded, translated, transmitted, distributed, sold, licensed, or otherwise exploited for any commercial purpose whatsoever, without our express prior written permission.\
\
Provided that you are eligible to use the Site, you are granted a limited license to access and use the Site and to download or print a copy of any portion of the Content to which you have properly gained access solely for your personal, non-commercial use. We reserve all rights not expressly granted to you in and to the Site, the Content and the Marks.\
\
By using the Site, you represent and warrant that: (1) you have the legal capacity and you agree to comply with these Terms of Use; (2) you are not a minor in the jurisdiction in which you reside; (3) you will not use the Site for any illegal or unauthorized purpose; and (4) your use of the Site will not violate any applicable law or regulation.
If you provide any information that is untrue, inaccurate, not current, or incomplete, or otherwise violate these Terms of Use, we have the right to suspend or terminate your account and refuse any and all current or future use of the Site (or any portion thereof).\
\
You may not access or use the Site for any purpose other than that for which we make the Site available.\
\
**As a user of the Site, you agree not to:**
1. Systematically retrieve data or other content from the Site to create or compile, directly or indirectly, a collection, compilation, database, or directory without written permission from us.
2. Trick, defraud, or mislead us and other users, especially in any attempt to learn sensitive account information such as user passwords.
3. Circumvent, disable, or otherwise interfere with security-related features of the Site, including features that prevent or restrict the use or copying of any Content or enforce limitations on the use of the Site and/or the Content contained therein.
4. Disparage, tarnish, or otherwise harm, in our opinion, us and/or the Site.
5. Use any information obtained from the Site in order to harass, abuse, or harm another person.
6. Make improper use of our support services or submit false reports of abuse or misconduct.
7. Use the Site in a manner inconsistent with any applicable laws or regulations.
8. Engage in unauthorized framing of or linking to the Site.
9. Upload or transmit (or attempt to upload or to transmit) viruses, Trojan horses, or other material, including excessive use of capital letters and spamming (continuous posting of repetitive text), that interferes with any party’s uninterrupted use and enjoyment of the Site or modifies, impairs, disrupts, alters, or interferes with the use, features, functions, operation, or maintenance of the Site.
10. Engage in any automated use of the system, such as using scripts to send comments or messages, or using any data mining, robots, or similar data gathering and extraction tools.
11. Delete the copyright or other proprietary rights notice from any Content (including Third Party Content as defined below).
12. Attempt to impersonate another user or person or use the username of another user.
13. Upload or transmit (or attempt to upload or to transmit) any material that acts as a passive or active information collection or transmission mechanism, including without limitation, clear graphics interchange formats (“gifs”), 1×1 pixels, web bugs, cookies, or other similar devices (sometimes referred to as “spyware” or “passive collection mechanisms” or “pcms”).
14. Interfere with, disrupt, or create an undue burden on the Site or the networks or services connected to the Site.
15. Harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Site to you.
16. Attempt to bypass any measures of the Site designed to prevent or restrict access to the Site, or any portion of the Site.
17. Copy or adapt the Site’s software, including but not limited to Flash, PHP, HTML, JavaScript, or other code.
18. Except as permitted by applicable law, decipher, decompile, disassemble, or reverse engineer any of the software comprising or in any way making up a part of the Site.
19. Except as may be the result of standard search engine or Internet browser usage, use, launch, develop, or distribute any automated system, including without limitation, any spider, robot, cheat utility, scraper, or offline reader that accesses the Site, or use or launch any unauthorized script or other software.
20. Use a buying agent or purchasing agent to make purchases on the Site.
21. Make any unauthorized use of the Site, including collecting usernames and/or email addresses of users by electronic or other means for the purpose of sending unsolicited email, or creating user accounts by automated means or under false pretenses.
22. Use the Site as part of any effort to compete with us or otherwise use the Site and/or the Content for any revenue-generating endeavor or commercial enterprise.
23. Use the Site to advertise or offer to sell goods and services, except as may be authorized by us.
Any use of the Site in violation of the foregoing violates these Terms of Use and may result in, among other things, termination or suspension of your rights to use the Site.\
\
You agree that we may access, store, process, and use any information and personal data that you provide following the terms of our Privacy Policy and your choices related thereto (including settings).\
\
By submitting suggestions or other feedback regarding the Site, you agree that we can use and share such feedback for any purpose without compensation to you.\
\
We do not assert any ownership over your Contributions, except to the extent that content included within such Contributions is considered a Submission as defined below. We are not liable for any statements or representations in your Contributions provided by you in any area on the Site. You are solely responsible for your Contributions to the Site and you expressly agree to exonerate us from any and all responsibility and to refrain from any legal action against us regarding your Contributions.\
\
You acknowledge and agree that any questions, comments, suggestions, ideas, feedback, or other information regarding the Site ("**Submissions**") provided by you to us are non-confidential. As between us, Company shall own exclusive rights, including all intellectual property rights, and shall be entitled to the unrestricted use, modification and dissemination of these Submissions for any lawful purpose, commercial or otherwise, without acknowledgment or compensation to you. Further, and to the extent that any of Your Submissions may be subject to copyright protection, You hereby grant to Company and to any recipients of software distributed by Company, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and any derivative works therefrom. You hereby waive all moral rights to any such Submissions, and you hereby warrant that any such Submissions are original with you or that you have the right to submit such Submissions. You agree there shall be no recourse against us for any alleged or actual infringement or misappropriation of any proprietary right in your Submissions.\
\
The Site may contain (or you may be sent via the Site) links to other websites ("**Third-Party Websites**") as well as articles, photographs, text, graphics, pictures, designs, music, sound, video, information, applications, software, and other content or items belonging to or originating from third parties ("**Third-Party Content**"). Such Third-Party Websites and Third-Party Content are not investigated, monitored, or checked for accuracy, appropriateness, or completeness by us, and we are not responsible for any Third-Party Websites accessed through the Site or any Third-Party Content posted on, available through, or installed from the Site, nor any content, accuracy, offensiveness, opinions, reliability, privacy practices, or other policies of or contained in the Third-Party Websites or the Third-Party Content. Inclusion of, linking to, or permitting the use or installation of any Third-Party Websites or any Third-Party Content does not imply approval or endorsement by us. If you decide to leave the Site and access the Third-Party Websites or to use or install any Third-Party Content, you do so at your own risk, and these Terms of Use no longer govern your use of such Third-Party Websites. You should review the applicable terms and policies, including privacy and data gathering practices, of any website to which you navigate from the Site or relating to any applications you use or install from the Site. Any purchases you make through Third-Party Websites will be through other websites and from other companies, and we take no responsibility whatsoever in relation to such purchases which are exclusively between you and the applicable third party. You agree and acknowledge that we do not endorse the products or services offered on Third-Party Websites and you shall hold us harmless from any harm caused by your purchase of such products or services. Additionally, you shall hold us harmless from any losses sustained by you or harm caused to you relating to or resulting in any way from any Third-Party Content or any contact with Third-Party Websites.\
\
We reserve the right, but not the obligation, to: (1) monitor the Site for violations of these Terms of Use; (2) take appropriate legal action against anyone who, in our sole discretion, violates the law or these Terms of Use, including without limitation, reporting such user to law enforcement authorities; (3) in our sole discretion and without limitation, refuse, restrict access to, limit the availability of, or disable (to the extent technologically feasible) any of your Contributions or any portion thereof; (4) in our sole discretion and without limitation, notice, or liability, to remove from the Site or otherwise disable all files and content that are excessive in size or are in any way burdensome to our systems; and (5) otherwise manage the Site in a manner designed to protect our rights and property and to facilitate the proper functioning of the Site.\
\
By using the Site, you agree to be bound by our Privacy Policy posted on the Site, which is incorporated into these Terms of Use. Please be advised the Site is hosted in the United States. If you access the Site from any other region of the world with laws or other requirements governing personal data collection, use, or disclosure that differ from applicable laws in the United States, then through your continued use of the Site, you are transferring your data to the United States, and you agree to have your data transferred to and processed in the United States.\
\
These Terms of Use shall remain in full force and effect while you use the Site. WITHOUT LIMITING ANY OTHER PROVISION OF THESE TERMS OF USE, WE RESERVE THE RIGHT TO, IN OUR SOLE DISCRETION AND WITHOUT NOTICE OR LIABILITY, DENY ACCESS TO AND USE OF THE SITE (INCLUDING BLOCKING CERTAIN IP ADDRESSES), TO ANY PERSON FOR ANY REASON OR FOR NO REASON, INCLUDING WITHOUT LIMITATION FOR BREACH OF ANY REPRESENTATION, WARRANTY, OR COVENANT CONTAINED IN THESE TERMS OF USE OR OF ANY APPLICABLE LAW OR REGULATION. WE MAY TERMINATE YOUR USE OR PARTICIPATION IN THE SITE OR DELETE ANY CONTENT OR INFORMATION THAT YOU POSTED AT ANY TIME, WITHOUT WARNING, IN OUR SOLE DISCRETION.\
\
If we terminate or suspend your account for any reason, you are prohibited from registering and creating a new account under your name, a fake or borrowed name, or the name of any third party, even if you may be acting on behalf of the third party. In addition to terminating or suspending your account, we reserve the right to take appropriate legal action, including without limitation pursuing civil, criminal, and injunctive redress.\
\
We reserve the right to change, modify, or remove the contents of the Site at any time or for any reason at our sole discretion without notice. However, we have no obligation to update any information on our Site. We also reserve the right to modify or discontinue all or part of the Site without notice at any time. We will not be liable to you or any third party for any modification, price change, suspension, or discontinuance of the Site.\
\
We cannot guarantee the Site will be available at all times. We may experience hardware, software, or other problems or need to perform maintenance related to the Site, resulting in interruptions, delays, or errors. We reserve the right to change, revise, update, suspend, discontinue, or otherwise modify the Site at any time or for any reason without notice to you. You agree that we have no liability whatsoever for any loss, damage, or inconvenience caused by your inability to access or use the Site during any downtime or discontinuance of the Site. Nothing in these Terms of Use will be construed to obligate us to maintain and support the Site or to supply any corrections, updates, or releases in connection therewith.\
\
These Terms of Use and your use of the Site are governed by and construed in accordance with the laws of the State of California, without regard to its conflict of law principles.\
\
**Disclaimer and Limitation of Our Liability**
There may be information on the Site that contains typographical errors, inaccuracies, or omissions, including descriptions, pricing, availability, and various other information. We reserve the right to correct any errors, inaccuracies, or omissions and to change or update the information on the Site at any time, without prior notice.\
\
THE SITE IS PROVIDED ON AN AS-IS AND AS-AVAILABLE BASIS. YOU AGREE THAT YOUR USE OF THE SITE AND OUR SERVICES WILL BE AT YOUR SOLE RISK. TO THE FULLEST EXTENT PERMITTED BY LAW, WE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, IN CONNECTION WITH THE SITE AND YOUR USE THEREOF, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WE MAKE NO WARRANTIES OR REPRESENTATIONS ABOUT THE ACCURACY OR COMPLETENESS OF THE SITE’S CONTENT OR THE CONTENT OF ANY WEBSITES LINKED TO THE SITE AND WE WILL ASSUME NO LIABILITY OR RESPONSIBILITY FOR ANY (1) ERRORS, MISTAKES, OR INACCURACIES OF CONTENT AND MATERIALS, (2) PERSONAL INJURY OR PROPERTY DAMAGE, OF ANY NATURE WHATSOEVER, RESULTING FROM YOUR ACCESS TO AND USE OF THE SITE, (3) ANY UNAUTHORIZED ACCESS TO OR USE OF OUR SECURE SERVERS AND/OR ANY AND ALL PERSONAL INFORMATION AND/OR FINANCIAL INFORMATION STORED THEREIN, (4) ANY INTERRUPTION OR CESSATION OF TRANSMISSION TO OR FROM THE SITE, (5) ANY BUGS, VIRUSES, TROJAN HORSES, OR THE LIKE WHICH MAY BE TRANSMITTED TO OR THROUGH THE SITE BY ANY THIRD PARTY, AND/OR (6) ANY ERRORS OR OMISSIONS IN ANY CONTENT AND MATERIALS OR FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF THE USE OF ANY CONTENT POSTED, TRANSMITTED, OR OTHERWISE MADE AVAILABLE VIA THE SITE. WE DO NOT WARRANT, ENDORSE, GUARANTEE, OR ASSUME RESPONSIBILITY FOR ANY PRODUCT OR SERVICE ADVERTISED OR OFFERED BY A THIRD PARTY THROUGH THE SITE, ANY HYPERLINKED WEBSITE, OR ANY WEBSITE OR MOBILE APPLICATION FEATURED IN ANY BANNER OR OTHER ADVERTISING, AND WE WILL NOT BE A PARTY TO OR IN ANY WAY BE RESPONSIBLE FOR MONITORING ANY TRANSACTION BETWEEN YOU AND ANY THIRD-PARTY PROVIDERS OF PRODUCTS OR SERVICES.\
\
IN NO EVENT WILL WE OR OUR DIRECTORS, EMPLOYEES, OR AGENTS BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, SPECIAL, OR PUNITIVE DAMAGES, INCLUDING LOST PROFIT, LOST REVENUE, LOSS OF DATA, OR OTHER DAMAGES ARISING FROM YOUR USE OF THE SITE, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. NOTWITHSTANDING ANYTHING TO THE CONTRARY CONTAINED HEREIN, OUR LIABILITY TO YOU FOR ANY CAUSE WHATSOEVER AND REGARDLESS OF THE FORM OF THE ACTION, WILL AT ALL TIMES BE LIMITED TO ONE THOUSAND US DOLLARS (\$1,000.00). CERTAIN US STATE LAWS AND INTERNATIONAL LAWS DO NOT ALLOW LIMITATIONS ON IMPLIED WARRANTIES OR THE EXCLUSION OR LIMITATION OF CERTAIN DAMAGES. IF THESE LAWS APPLY TO YOU, SOME OR ALL OF THE ABOVE DISCLAIMERS OR LIMITATIONS MAY NOT APPLY TO YOU, AND YOU MAY HAVE ADDITIONAL RIGHTS.\
\
You agree to defend, indemnify, and hold us harmless from and against any loss, damage, liability, claim, or demand, including reasonable attorneys’ fees and expenses, made by any third party due to or arising out of: (1) use of the Site; (2) breach of these Terms of Use; (3) any breach of your representations and warranties set forth in these Terms of Use; (4) your violation of the rights of a third party; or (5) any overt harmful act toward any other user of the Site with whom you connected via the Site. Notwithstanding the foregoing, we reserve the right, at your expense, to assume the exclusive defense and control of any matter for which you are required to indemnify us, and you agree to cooperate, at your expense, with our defense of such claims. We will use reasonable efforts to notify you of any such claim, action, or proceeding which is subject to this indemnification upon becoming aware of it.\
\
We will maintain certain data that you transmit to the Site for the purpose of managing the performance of the Site, as well as data relating to your use of the Site. Although we perform regular routine backups of data, you are solely responsible for all data that you transmit or that relates to any activity you have undertaken using the Site. You agree that we shall have no liability to you for any loss or corruption of any such data, and you hereby waive any right of action against us arising from any such loss or corruption of such data.\
\
Visiting the Site, sending us emails, and completing online forms constitute electronic communications. You consent to receive electronic communications, and you agree that all agreements, notices, disclosures, and other communications we provide to you electronically, via email and on the Site, satisfy any legal requirement that such communication be in writing. YOU HEREBY AGREE TO THE USE OF ELECTRONIC SIGNATURES, CONTRACTS, ORDERS, AND OTHER RECORDS, AND TO ELECTRONIC DELIVERY OF NOTICES, POLICIES, AND RECORDS OF TRANSACTIONS INITIATED OR COMPLETED BY US OR VIA THE SITE. You hereby waive any rights or requirements under any statutes, regulations, rules, ordinances, or other laws in any jurisdiction which require an original signature or delivery or retention of non-electronic records, or to payments or the granting of credits by any means other than electronic means to the extent applicable.
If any complaint with us is not satisfactorily resolved, you can contact the Complaint Assistance Unit of the Division of Consumer Services of the California Department of Consumer Affairs in writing at 1625 North Market Blvd., Suite N 112, Sacramento, California 95834 or by telephone at (800) 952-5210 or (916) 445-1254.\
\
These Terms of Use and any policies or operating rules posted by us on the Site or in respect to the Site constitute the entire agreement and understanding between you and us. Our failure to exercise or enforce any right or provision of these Terms of Use shall not operate as a waiver of such right or provision. These Terms of Use operate to the fullest extent permissible by law. We may assign any or all of our rights and obligations to others at any time. If any provision or part of a provision of these Terms of Use is determined to be unlawful, void, or unenforceable, that provision or part of the provision is deemed severable from these Terms of Use and does not affect the validity and enforceability of any remaining provisions. There is no joint venture, partnership, employment or agency relationship created between you and us as a result of these Terms of Use or use of the Site. You agree that these Terms of Use will not be construed against us by virtue of having drafted them. You hereby waive any and all defenses you may have based on the electronic form of these Terms of Use and the lack of signing by the parties hereto to execute these Terms of Use.\
\
In order to resolve a complaint regarding the Site or to receive further information regarding use of the Site, please contact us at:
Axiom, Inc.\
1390 Market Street, Suite 200, San Francisco, CA 94102\
[legal@axiom.co](mailto:legal@axiom.co)
# List of docs pages
Source: https://axiom.co/docs/llms/llms
Access the [list of docs pages](https://axiom.co/docs/llms.txt) in Markdown format and pass it to your LLM.
# Full docs
Source: https://axiom.co/docs/llms/llms-full
Access the [full documentation](https://axiom.co/docs/llms-full.txt) in Markdown format and pass it to your LLM.
# Interpret Axiom docs with LLMs
Source: https://axiom.co/docs/llms/llms-overview
This page explains how to interpret Axiom docs with LLMs.
## AI chat
To get an answer about Axiom from the documentation’s built-in AI assistant, press Cmd/Ctrl + K, and then type your question.
## Interpret individual pages
To interpret individual pages of the documentation with an LLM, click the icon in the top right, and then choose one of the following options:
* Copy the page in Markdown format and manually pass it to LLMs
* View the page in Markdown format
* Open the page in ChatGPT
* Open the page in Claude
## Interpret the full docs
To make it easy for LLMs to interpret the documentation, Axiom offers the following based on the [/llms.txt standard](https://llmstxt.org/):
* [List of docs pages](https://axiom.co/docs/llms.txt)
* [Full documentation](https://axiom.co/docs/llms-full.txt)
Access one of the above documents and pass it to your LLM to start interpreting the Axiom documentation.
# Anomaly monitors
Source: https://axiom.co/docs/monitor-data/anomaly-monitors
This page explains how to create anomaly monitors.
Anomaly monitors allow you to aggregate your event data and compare the results of this aggregation to what can be considered normal for the query. When the results are too far above or below the value that Axiom expects based on the event history, the monitor enters the alert state. The monitor remains in the alert state until the results no longer deviate from the expected value. This can happen without the results returning to their previous level if they stabilize around a new value. An anomaly monitor sends you a notification each time it enters or exits the alert state.
## Create anomaly monitor
To create an anomaly monitor, follow these steps:
1. Click the **Monitors** tab, and then click **New monitor**.
2. Click **Anomaly monitor**.
3. Name your monitor and add a description.
4. Configure the monitor using the following options:
* The comparison operator is the rule to apply when comparing the results to the expected value. The possible values are **above**, **below**, and **above or below**.
* The tolerance factor controls the sensitivity of the monitor. Axiom combines the tolerance factor with a measure of how much the results of your query tend to vary, and uses them to determine how much deviation from the expected value to tolerate before triggering the monitor. The higher the tolerance factor, the wider the tolerated range of deviation. When the results of the aggregation stay within this range, the monitor doesn’t trigger. When the results of the aggregation cross this range, the monitor triggers. The tolerance factor can be any positive numeric value.
* The frequency is how often the monitor runs. This is a positive integer number of minutes.
* The range is the time range for your query. This is a positive integer number of minutes. A longer time range allows the anomaly monitor to consider a larger number of datapoints when calculating the expected value.
* **Alert on no data** triggers the monitor when your query doesn’t return any data. Your query returns no data if no events match your filters and an aggregation used in the query is undefined, for example, when you take the average of a field that isn’t present in any matching events.
* You can group by attributes when defining your query. By default, your monitor enters the alert state if any of the values returned for the group-by attributes deviate from the expected value, and remains in the alert state until none of the values returned deviates from the expected value. To trigger the monitor separately for each group that deviates from the expected value, enable **Notify by group**. At most one trigger notification is sent per monitor run. This option only has an effect if the monitor’s query groups by a non-time field.
* Toggle **Require seasonality** to compare the results to seasonal patterns in your data. For example, your query produces a time series that increases at the same time each morning. Without accounting for seasonality, the monitor compares to recent results only. By toggling **Require seasonality**, the monitor compares the results to the same time of the previous day or week and only triggers if the results deviate from the expected seasonal pattern.
5. Click **Add notifier**, and then select the notifiers that define how you want to receive notifications for this monitor. For more information, see [Notifiers](#notifiers).
6. To define your query, use one of the following options:
* To use the visual query builder, click **Simple query builder**. Click **Visualize** to select an aggregation method, and then click **Run query** to preview the results.
* To use Axiom Processing Language (APL), click **Advanced query language**. Write a query where the final clause uses the `summarize` operator and bins the results by `_time`, and then click **Run query** to preview the results. For more information, see [Introduction to APL](/apl/introduction).
In the preview, the boundary where the monitor triggers is displayed as a dashed line. Where there isn’t enough data to compute a boundary, the chart is grayed out. If the monitor preview shows that it alerts when you don’t want it to, try increasing the tolerance. Conversely, try decreasing the tolerance if the monitor preview shows that it doesn’t alert when you want it to.
7. Click **Create**.
You have created an anomaly monitor. Axiom alerts you when the results from your query are too high or too low compared to what’s expected based on the event history.
In the chart, the red dotted line displays the tolerance range around the expected value over time. When the results of the query cross this range, the monitor triggers.
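For reference, a minimal query that could back an anomaly monitor looks like the following. This is a sketch only; it assumes a hypothetical dataset named `['sample_dataset']`. The final clause uses the `summarize` operator and bins the results by `_time`, as anomaly monitors require:

```apl
['sample_dataset']
| summarize count() by bin(_time, 1m)
```

Axiom compares each per-minute count to the value it expects from the event history and triggers the monitor when the count deviates from that expectation by more than the tolerated range.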
## Examples
For real-world use cases, see [Monitor examples](/monitor-data/monitor-examples).
# Configure monitors
Source: https://axiom.co/docs/monitor-data/configure-monitors
This page explains how to configure monitors.
## Change monitors
To change an existing monitor:
1. Click the Monitors tab.
2. Click the monitor in the list that you want to change.
3. In the top right, click **Edit monitor**.
4. Make changes to the monitor.
5. Click **Save**.
## Disable monitors
Disable a monitor to prevent it from running for a specific amount of time.
To disable a monitor:
1. Click the Monitors tab.
2. Click the monitor in the list that you want to disable.
3. In the top right, click **Disable monitor**.
4. Select the time period for which you want to disable the monitor.
5. Click **Disable**.
Axiom automatically enables the monitor after the time period you specified.
## Enable monitors
To enable a monitor:
1. Click the Monitors tab.
2. Click the monitor in the list that you want to enable.
3. In the top right, click **Enable monitor**.
4. Click **Enable**.
## Delete monitors
To delete a monitor:
1. Click the Monitors tab.
2. Click the monitor in the list that you want to delete.
3. In the top right, click the icon.
4. Click **Delete**.
# Configure notifiers
Source: https://axiom.co/docs/monitor-data/configure-notifiers
This page explains how to configure notifiers.
## Disable notifiers
Disable a notifier to prevent it from sending notifications for a specific amount of time.
To disable a notifier:
1. Click the Monitors tab.
2. In the left, click **Notifiers**.
3. Click the notifier in the list that you want to disable.
4. In the top right, click **Disable notifier**.
5. Select the time period for which you want to disable the notifier.
6. Click **Disable**.
Axiom automatically enables the notifier after the time period you specified.
## Enable notifiers
To enable a notifier:
1. Click the Monitors tab.
2. In the left, click **Notifiers**.
3. Click the notifier in the list that you want to enable.
4. In the top right, click **Enable notifier**.
5. Click **Enable**.
## Delete notifiers
To delete a notifier:
1. Click the Monitors tab.
2. In the left, click **Notifiers**.
3. Click the notifier in the list that you want to delete.
4. In the top right, click the icon.
5. Click **Delete**.
# Custom webhook notifier
Source: https://axiom.co/docs/monitor-data/custom-webhook-notifier
This page explains how to create and configure a custom webhook notifier.
Use a custom webhook notifier to connect your monitors to internal or external services. The webhook URL receives a POST request with a content type of `application/json` together with any other headers you specify.
To create a custom webhook notifier, follow these steps:
1. Click the **Monitors** tab, and then click **Manage notifiers** on the right.
2. Click **New notifier** on the top right.
3. Name your notifier.
4. Click **Custom webhook**.
5. In **Webhook URL**, enter the URL where you want to send the POST request.
6. Optional: To customize the content of your webhook, use the [Go template syntax](https://pkg.go.dev/text/template) to interact with these variables:
* `.Action` has value `Open` when the notification corresponds to a match monitor matching or a threshold monitor triggering, and has value `Closed` when the notification corresponds to a threshold monitor resolving.
* `.MonitorID` is the unique identifier for the monitor associated with the notification.
* `.Body` is the message body associated with the notification. When the notification corresponds to a match monitor, this is the matching event data. When the notification corresponds to a threshold monitor, this provides information about the value that gave rise to the monitor triggering or resolving.
* `.Description` is the description of the monitor associated with the notification.
* `.QueryEndTime` is the end time applied in the monitor query that gave rise to the notification.
* `.QueryStartTime` is the start time applied in the monitor query that gave rise to the notification.
* `.Timestamp` is the time the notification was generated.
* `.Title` is the name of the monitor associated with the notification.
* `.Value` is the value that gave rise to the monitor triggering or resolving. It’s only applicable if the notification corresponds to a threshold monitor.
* `.MatchedEvent` is a JSON object that represents the event that matched the criteria of the monitor. It’s only applicable if the notification corresponds to a match monitor.
* `.GroupKeys` and `.GroupValues` are JSON arrays that contain the keys and the values returned by the group-by attributes of your query. They are only applicable if the APL query of the monitor groups by a non-time field.
You can fully customize the content of the webhook to match the requirements of your environment.
7. Optional: Add headers to the POST request sent to the webhook URL.
8. Click **Create**.
## Examples
The example below is the default template for a custom webhook notification:
```json
{
  "action": "{{.Action}}",
  "event": {
    "monitorID": "{{.MonitorID}}",
    "body": "{{.Body}}",
    "description": "{{.Description}}",
    "queryEndTime": "{{.QueryEndTime}}",
    "queryStartTime": "{{.QueryStartTime}}",
    "timestamp": "{{.Timestamp}}",
    "title": "{{.Title}}",
    "value": {{.Value}},
    "matchedEvent": {{jsonObject .MatchedEvent}},
    "groupKeys": {{jsonArray .GroupKeys}},
    "groupValues": {{jsonArray .GroupValues}}
  }
}
```
Using the template above, the body of a POST request sent to the webhook URL for a threshold monitor triggering looks like the following:
```json
{
  "action": "Open",
  "event": {
    "monitorID": "CabI3w142069etTgd0",
    "body": "Current value of 57347 is above or equal to the threshold value of 0",
    "description": "",
    "queryEndTime": "2024-06-28 14:55:57.631364493 +0000 UTC",
    "queryStartTime": "2024-06-28 14:45:57.631364493 +0000 UTC",
    "timestamp": "2024-06-28 14:55:57 +0000 UTC",
    "title": "Axiom Monitor Test Triggered",
    "value": 57347,
    "matchedEvent": null,
    "groupKeys": null,
    "groupValues": null
  }
}
```
The example template below formats the webhook message to match the [expectations of incident.io](https://api-docs.incident.io/tag/Alert-Events-V2/) using the monitor ID as the `deduplication_key`.
```json
{
  "title": "{{.Title}}",
  "description": "{{.Body}}",
  "deduplication_key": "{{.MonitorID}}",
  "status": "{{ if eq .Action "Open" }}firing{{ else }}resolved{{ end }}",
  "metadata": {
    "description": "{{.Description}}",
    "value": {{.Value}}
  },
  "source_url": "https://app.axiom.co/{your-org-id-here}/monitors/{{.MonitorID}}"
}
```
# Discord notifier
Source: https://axiom.co/docs/monitor-data/discord-notifier
This page explains how to create and configure a Discord notifier.
Use a Discord notifier to notify specific channels in your Discord server.
To create a Discord notifier, choose one of the following methods:
* [Create Discord notifier with a token](#create-discord-notifier-with-token)
* [Create Discord notifier with a webhook URL](#create-discord-notifier-with-webhook)
## Create Discord notifier with token
In Discord, create a token and get the channel ID:
1. Go to [Discord.dev](https://discord.com/developers/applications) and create a new application.
2. Click **Bot > Add Bot > Reset Token** to get your Discord token.
3. Click **OAuth2 > URL Generator**, check the Bot scope and the Send Messages permission.
4. Open the generated URL to add the bot to your server.
5. Click **User Settings > Advanced**, and then enable developer mode.
6. Right-click a channel, and then click **Copy ID**.
7. Ensure the Discord bot has the permissions required to access the channel.
In Axiom:
1. Click the **Monitors** tab, and then click **Manage notifiers** on the right.
2. Click **New notifier** on the top right.
3. Name your notifier.
4. Click **Discord**.
5. Enter the token you have previously generated and the channel ID.
6. Click **Create**.
## Create Discord notifier with webhook
1. In Discord, generate a webhook. For more information, see the [Discord documentation](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks).
2. In Axiom, click the **Monitors** tab, and then click **Manage notifiers** on the right.
3. Click **New notifier** on the top right.
4. Name your notifier.
5. Click **Discord Webhook**.
6. Enter the webhook URL you have previously generated.
7. Click **Create**.
# Email notifier
Source: https://axiom.co/docs/monitor-data/email-notifier
This page explains how to create and configure an email notifier.
To create an email notifier, follow these steps:
1. Click the **Monitors** tab, and then click **Manage notifiers** on the right.
2. Click **New notifier** on the top right.
3. Name your notifier.
4. Click **Email**.
5. In the **Users** section, add the email addresses where you want to send notifications, and then click **+** on the right.
6. Click **Create**.
# Match monitors
Source: https://axiom.co/docs/monitor-data/match-monitors
This page explains how to create match monitors.
Match monitors allow you to continuously filter your log data and send you matching events. Axiom sends a notification for each matching event. By default, the notification message contains the entire matching event in JSON format. When you define your match monitor using APL, you can control which event attributes to include in the notification message.
Axiom recommends using match monitors for alerting purposes only. A match monitor can send up to 10 notifications per minute and up to 500 notifications per day. A notification can usually include events up to 0.1 MB, but the maximum size can be smaller depending on the type of the notifier.
## Create match monitor
To create a match monitor, follow these steps:
1. Click the **Monitors** tab, and then click **New monitor**.
2. Click **Match monitor**.
3. Name your monitor and add a description.
4. Click **Add notifier**, and then select the notifiers that define how you want to receive notifications for this monitor. For more information, see [Notifiers](#notifiers).
5. To define your query, use one of the following options:
* To use the visual query builder, click **Simple query builder**. Select the filters, and then click **Run query** to preview the recent events that match your filters. To preview matching events over a specific period, select the time range.
* To use Axiom Processing Language (APL), click **Advanced query language**. Write a query using the `where` operator to filter for events, and then click **Run query** to preview the results. To transform matching events before sending them to you, use the `extend` and the `project` operators. Don’t use aggregations in your query. For more information, see [Introduction to APL](/apl/introduction).
6. When the preview displays the events that you want to match, click **Create**. You cannot create a match monitor if more than 500 events match your query within the past 24 hours.
You have created a match monitor, and Axiom alerts you about every event that matches the filters you set. Each notification contains the event details as shown in the preview.
If you define your query using APL, you can use the following limited set of tabular operators:
* [extend](/apl/tabular-operators/extend-operator)
* [extend-valid](/apl/tabular-operators/extend-valid-operator)
* [parse](/apl/tabular-operators/parse-operator)
* parse-kv
* [project](/apl/tabular-operators/project-operator)
* [project-away](/apl/tabular-operators/project-away-operator)
* [project-keep](/apl/tabular-operators/project-keep-operator)
* project-rename
* [project-reorder](/apl/tabular-operators/project-reorder-operator)
* [where](/apl/tabular-operators/where-operator)
This restriction only applies to tabular operators.
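For example, the following sketch of a match monitor query stays within this set of operators. It assumes a dataset named `['sample_dataset']` with `status.code` and `error.message` fields, as used in [Monitor examples](/monitor-data/monitor-examples); adjust the names to your own data. The query filters for error events, derives a compact summary field, and keeps only a few attributes in the notification message:
```apl
['sample_dataset']
| where ['status.code'] == 'ERROR'
// Derive a compact summary field for the notification message
| extend summary = strcat('Error: ', ['error.message'])
// Keep only the attributes you want to receive
| project _time, summary, ['error.message']
```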
## Examples
For real-world use cases, see [Monitor examples](/monitor-data/monitor-examples).
# Microsoft Teams notifier
Source: https://axiom.co/docs/monitor-data/microsoft-teams-notifier
This page explains how to create and configure a Microsoft Teams notifier.
Use a Microsoft Teams notifier to send a notification to a specific channel in your Microsoft Teams instance.
To create a Microsoft Teams notifier, follow these steps:
1. In Microsoft Teams, generate an incoming webhook. For more information, see the [Microsoft documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook).
2. In Axiom, click the **Monitors** tab, and then click **Manage notifiers** on the right.
3. Click **New notifier** on the top right.
4. Name your notifier.
5. Click **Microsoft Teams**.
6. Enter the webhook URL you have previously generated.
7. Click **Create**.
# Monitor examples
Source: https://axiom.co/docs/monitor-data/monitor-examples
This page presents example monitor configurations for some common alerting use cases.
## Notify on all occurrences of error
To receive a notification on all occurrences of an error, create a match monitor where the filter conditions match the events reporting the error.
To receive only certain attributes in the notification message, use the `project` operator.
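For example, the following minimal sketch uses the `['sample_dataset']` dataset and the illustrative field names from the other examples on this page. It matches every error event and sends only the timestamp and the error message in each notification:
```apl
['sample_dataset']
| where ['status.code'] == 'ERROR'
// Only these attributes appear in the notification message
| project _time, ['error.message']
```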
## Notify when error rate above threshold
To receive a notification when the error rate exceeds a threshold, [create a threshold monitor](/monitor-data/threshold-monitors) with an APL query that identifies the rate of error messages.
For example, logs in your dataset `['sample_dataset']` have a `status.code` attribute that takes the value `ERROR` when a log is about an error. In this case, the following example query tracks the error rate every minute:
```apl
['sample_dataset']
| extend is_error = case(['status.code'] == 'ERROR', 1, 0)
| summarize avg(is_error) by bin(_time, 1m)
```
Other options:
* To trigger the monitor when the error rate is above or equal to 0.01, set the threshold value to 0.01 and the comparison operator to **above or equal**.
* To run the monitor every 5 minutes, set the frequency to 5.
* To keep the monitor in the alert state until 10 minutes have passed with the per-minute error rate remaining below your threshold value, set the range to 10.
## Notify when number of error messages above threshold
To receive a notification when the number of error messages of a given type exceeds a threshold, create a threshold monitor with an APL query that counts the different error messages.
For example, logs in your dataset `['sample_dataset']` have an `error.message` attribute. In this case, the following example query counts errors by type every 5 minutes:
```apl
['sample_dataset']
| summarize count() by ['error.message'], bin(_time, 5m)
```
Other options:
* To trigger the monitor when the count is above or equal to 10 for any individual message type, set the threshold to 10 and the comparison operator to **above or equal**.
* To run the monitor every 5 minutes, set the frequency to 5.
* To run the query with a range of 10 minutes, set the range to 10.
By default, the monitor enters the alert state when any of the counts returned by the query cross the threshold, and remains in the alert state until no counts cross the threshold. To alert separately for each message value instead, enable **Notify by group**.
## Notify when response times spike
To receive a notification whenever your response times spike without having to rely on a single threshold, [create an anomaly monitor](/monitor-data/anomaly-monitors) with an APL query that tracks your median response time.
For example, you have a dataset `['my_traces']` of trace data with the following:
* Route information is in the `route` field.
* Duration information is in the `duration` field.
* For top-level spans, the `parent_span_id` field is empty.
The following query gives median response times by route in one-minute intervals:
```apl
['my_traces']
| where isempty(parent_span_id)
| summarize percentile(duration, 50) by ['route'], bin(_time, 1m)
```
Other options:
* To only trigger the monitor when response times are unusually high for a route, set the comparison operator to **above**.
* To run the monitor every 5 minutes, set the frequency to 5.
* To consider the previous 30 minutes of data when determining what sort of variation is expected for median response times for a route, set the range to 30.
* To notify separately for each route, enable **Notify by group**.
# Monitors
Source: https://axiom.co/docs/monitor-data/monitors
This section introduces monitors and explains how you can use them to generate automated alerts from your event data.
A monitor is a background task that periodically runs a query that you define. For example, it counts the number of error messages in your logs over the previous 5 minutes. A notifier defines how Axiom notifies you about the monitor output. For example, Axiom can send you an email.
You can use the following types of monitor:
* [Anomaly monitors](/monitor-data/anomaly-monitors) aggregate event data over time and look for values that are unexpected based on the event history. When the results of the aggregation are too high or low compared to the expected value, Axiom sends you an alert.
* [Match monitors](/monitor-data/match-monitors) filter for key events and send them to you.
* [Threshold monitors](/monitor-data/threshold-monitors) aggregate event data over time. When the results of the aggregation cross a threshold, Axiom sends you an alert.
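For example, a threshold monitor that counts error messages over the previous 5 minutes might run a query along the following lines. This is only a sketch; the dataset and field names are illustrative and depend on your data:
```apl
['sample_dataset']
| where ['status.code'] == 'ERROR'
// Count matching events in 5-minute buckets
| summarize count() by bin(_time, 5m)
```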
# Notifiers
Source: https://axiom.co/docs/monitor-data/notifiers-overview
This section introduces notifiers and explains how you can use them to generate automated alerts from your event data.
A monitor is a background task that periodically runs a query that you define. For example, it counts the number of error messages in your logs over the previous 5 minutes. A notifier defines how Axiom notifies you about the monitor output. For example, Axiom can send you an email.
By adding a notifier to a monitor, you receive a notification with the following message:
* When a match monitor matches an event, the message contains the full event if you created the monitor using the simple query builder, or the output of the APL query if you created the monitor using APL.
* When a threshold monitor changes state, the message includes a relevant value from the query results. If you enable **Notify by group**, the notification message also contains the relevant group value.
Choose one of the following to learn more about a type of notifier:
# Opsgenie notifier
Source: https://axiom.co/docs/monitor-data/opsgenie-notifier
This page explains how to create and configure an Opsgenie notifier.
Use an Opsgenie notifier to use all the incident management features of Opsgenie with Axiom.
To create an Opsgenie notifier, follow these steps:
1. In Opsgenie, create an API integration. For more information, see the [Opsgenie documentation](https://support.atlassian.com/opsgenie/docs/create-a-default-api-integration/).
2. Click the **Monitors** tab, and then click **Manage notifiers** on the right.
3. Click **New notifier** on the top right.
4. Name your notifier.
5. Click **Opsgenie**.
6. Enter the API key you have previously generated.
7. Select the region of your Opsgenie instance.
8. Click **Create**.
# PagerDuty notifier
Source: https://axiom.co/docs/monitor-data/pagerduty
This page explains how to create and configure a PagerDuty notifier.
Use a PagerDuty notifier to use all the incident management features of PagerDuty with Axiom.
## Benefits of using PagerDuty with Axiom
* Increase the performance and availability of your apps and services.
* Gain specific insights into your backend, apps, and workloads by running PagerDuty in tandem with Axiom.
* Detect critical issues before any disruption happens to your resources: Axiom automatically opens and closes PagerDuty incidents.
* Obtain a deep understanding of the root cause of issues by visualizing the data in Axiom.
Axiom creates PagerDuty events for critical issues, disruptions, vulnerabilities, or workload downtime on a service created in PagerDuty. The alert on the Axiom side is linked to the PagerDuty event, which allows Axiom to automatically close the incident when the alert is resolved. This ensures that no duplicate events are created on the PagerDuty side for the corresponding alerts on the Axiom side.
### Prerequisites
* Ensure you have [Admin base role](https://support.pagerduty.com/docs/user-roles) in PagerDuty.
## Create PagerDuty notifier
To create a PagerDuty notifier, follow these steps:
1. In PagerDuty, create a new service named **Axiom** with the default settings and an Events API V2 integration. Copy the integration key. For more information, see the [PagerDuty documentation](https://support.pagerduty.com/main/docs/services-and-integrations#create-a-service).
2. In Axiom, click the **Monitors** tab, and then click **Manage notifiers** on the right.
3. Click **New notifier** on the top right.
4. Name your notifier.
5. Click **PagerDuty**.
6. Enter the integration key you have previously generated.
7. Click **Create**.
You can now add your PagerDuty notifier to a specific monitor in Axiom. If any incident happens on your monitor, Axiom notifies you on the PagerDuty Service Activity dashboard.
# Slack notifier
Source: https://axiom.co/docs/monitor-data/slack-notifier
This page explains how to create and configure a Slack notifier.
Use a Slack notifier to notify specific channels in your Slack organization.
To create a Slack notifier, follow these steps:
1. In Slack, generate an incoming webhook. For more information, see the [Slack documentation](https://api.slack.com/messaging/webhooks).
2. In Axiom, click the **Monitors** tab, and then click **Manage notifiers** on the right.
3. Click **New notifier** on the top right.
4. Name your notifier.
5. Click **Slack**.
6. Enter the webhook URL you have previously generated.
7. Click **Create**.
# Threshold monitors
Source: https://axiom.co/docs/monitor-data/threshold-monitors
This page explains how to create and configure threshold monitors.
Threshold monitors allow you to periodically aggregate your event data and compare the results of this aggregation to a threshold that you define. When the results cross the threshold, the monitor enters the alert state. The monitor remains in the alert state until the results no longer cross the threshold. A threshold monitor sends you a notification each time it enters or exits the alert state.
## Create threshold monitor
To create a threshold monitor, follow these steps:
1. Click the **Monitors** tab, and then click **New monitor**.
2. Click **Threshold monitor**.
3. Name your monitor and add a description.
4. Configure the monitor using the following options:
* The threshold is the value to compare the results of the query to. This can be any numeric value.
* The comparison operator is the rule to apply when comparing the results to the threshold. The possible values are **above**, **above or equal**, **below**, and **below or equal**.
* The frequency is how often the monitor runs. This is a positive integer number of minutes.
* The range is the time range for your query. This is a positive integer number of minutes. The end time is the time the monitor runs.
* **Alert on no data** triggers the monitor when your query doesn’t return any data. Your query returns no data if no events match your filters and an aggregation used in the query is undefined. For example, this happens when you take the average of a field that isn’t present in any matching events.
* You can group by attributes when defining your query. By default, your monitor enters the alert state if any of the values returned for the group-by attributes cross the threshold, and remains in the alert state until none of the values returned cross the threshold. To trigger the monitor separately for each group that crosses the threshold, enable **Notify by group**. At most one trigger notification is sent per monitor run. This option only has an effect if the monitor’s query groups by a non-time field.
5. Click **Add notifier**, and then select the notifiers that define how you want to receive notifications for this monitor. For more information, see [Notifiers](#notifiers).
6. To define your query, use one of the following options:
* To use the visual query builder, click **Simple query builder**. Click **Visualize** to select an aggregation method, and then click **Run query** to preview the results in a chart. The monitor enters the alert state if any points on the chart cross the threshold. Optionally, use filters to specify which events to aggregate, and group by fields to split the aggregation across the values of these fields.
* To use Axiom Processing Language (APL), click **Advanced query language**. Write a query where the final clause uses the `summarize` operator, and then click **Run query** to preview the results. For more information, see [Introduction to APL](/apl/introduction). If your query returns a chart, the monitor enters the alert state if any points on the chart cross the threshold. If your query returns a table, the monitor enters the alert state if any numeric values in the table cross the threshold. If your query uses the `bin_auto` function, Axiom displays a warning. To ensure that the monitor preview gives an accurate picture of future performance, use `bin` rather than `bin_auto`.
7. Click **Create**.
You have created a threshold monitor, and Axiom alerts you when the results from your query cross the threshold.
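As an illustration, the following sketch is a valid threshold monitor query. The dataset and field names are placeholders; the final clause uses `summarize`, `bin` is used instead of `bin_auto`, and the query groups by a non-time field so that **Notify by group** can alert separately for each group:
```apl
['http-logs']
| summarize avg(['req_duration_ms']) by ['route'], bin(_time, 1m)
```
With a threshold of, for example, 500 and the comparison operator set to **above**, the monitor enters the alert state whenever the average duration for any route exceeds 500 in a one-minute bucket.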
## Examples
For real-world use cases, see [Monitor examples](/monitor-data/monitor-examples).
# View monitor status
Source: https://axiom.co/docs/monitor-data/view-monitor-status
This page explains how to view the status of monitors.
To view the status of a monitor:
1. Click the **Monitors** tab.
2. Click the monitor in the list whose status you want to view.
The monitor status page provides an overview of the monitor’s current status and history.
## View recent activity and history of runs
On the left, you see the recent activity and the history of the monitor runs:
* The `_time` field displays the time of the monitor run.
* The `Status` field displays the status of the monitor.
* The `Range from` and `Range to` fields display the time range used in the monitor run.
You can change the time range of this overview in the top right corner.
## View information about monitor configuration
On the right, you see information about the monitor’s configuration.
* Current status
* Monitor type
* Query the monitor periodically runs
* Configuration details
* Notifiers attached to the monitor
* Metadata such as name and description
## Check recent viewers of monitor status
The status page displays the initials of the users who have recently looked at the monitor. To check which users have recently viewed the status page of monitors, hold the pointer over the initials in the top right of the page.
For example, this can be useful if you want to know who has recently seen that a monitor was triggered, so that you can start a conversation with them to understand what’s happening.
# Architecture
Source: https://axiom.co/docs/platform-overview/architecture
Technical deep-dive into Axiom's distributed architecture.
You don't need to understand any of the following material to get massive value from Axiom. As a fully managed data platform, Axiom just works. This technical deep-dive is intended for curious minds wondering: Why is Axiom different?
Axiom routes ingestion requests through a distributed edge layer to a cluster of specialized services that process and store data in a proprietary columnar format optimized for event data. Query requests are executed by ephemeral, serverless workers that operate directly on compressed data stored in object storage.
## Ingestion architecture
Data flows through a multi-layered ingestion system designed for high throughput and reliability:
**Regional edge layer**: HTTPS ingestion requests are received by regional edge proxies positioned to meet data jurisdiction requirements. These proxies handle protocol translation, authentication, and initial data validation. The edge layer supports multiple input formats (JSON, CSV, compressed streams) and can buffer data during downstream issues.
**High-availability routing**: The system provides intelligent routing to healthy database nodes using real-time health monitoring. When primary ingestion paths fail, requests are automatically routed to available nodes or queued in a backlog system that processes data when systems recover.
**Streaming pipeline**: Raw events are parsed, validated, and transformed in streaming fashion. Field limits and schema validation occur during this phase.
**Write-ahead logging**: All ingested data is durably written to a distributed write-ahead log before being processed. This ensures zero data loss even during system failures and supports concurrent writes across multiple ingestion nodes.
## Storage architecture
Axiom's storage layer is built around a custom columnar format that achieves extreme compression ratios:
**Columnar organization**: Events are decomposed into columns and stored using specialized encodings optimized for each data type. String columns use dictionary encoding, numeric columns use various compression schemes, and boolean columns use bitmap compression.
**Block-based storage**: Data is organized into immutable blocks that are written once and read many times. Each block contains:
* Column metadata and statistics
* Compressed column data in a proprietary format
* Separate time indexes for temporal queries
* Field schemas and type information
**Compression pipeline**: Data flows through multiple compression stages:
1. **Ingestion compression**: Real-time compression during ingestion (25-50% reduction)
2. **Block compression**: Columnar compression within storage blocks (10-20x additional compression)
3. **Compaction compression**: Background compaction further optimizes storage (additional 2-5x compression)
**Object storage integration**: Blocks are stored in object storage (S3) with intelligent partitioning strategies that distribute load and avoid hot-spotting. The system supports multiple storage tiers and automatic lifecycle management.
## Query architecture
Axiom executes queries using a serverless architecture that spins up compute resources on-demand:
**Query compilation**: The APL (Axiom Processing Language) query is parsed, optimized, and compiled into an execution plan. The compiler performs predicate pushdown, projection optimization, and identifies which blocks need to be read.
**Serverless workers**: Query execution occurs in ephemeral workers optimized through "Fusion queries", a system that runs parallel queries inside a single worker to reduce costs and leave more resources for large queries. Workers download only the necessary column data from object storage, enabling efficient resource utilization. Multiple workers can process different blocks in parallel.
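To make this concrete, consider an APL query like the following sketch, where the dataset and field names are purely illustrative:
```apl
['http-logs']
| where _time > ago(1h) and ['status'] == '500'
| project _time, ['uri'], ['status']
```
The time filter and the equality predicate give the compiler what it needs for predicate pushdown, so blocks whose time range or column statistics can’t match are skipped, and the projection means workers download only the `_time`, `uri`, and `status` columns from object storage.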
**Block-level parallelism**: Each query spawns multiple workers that process different blocks concurrently. Workers read compressed column data directly from object storage, decompress it in memory, and execute the query.
**Result aggregation**: Worker results are streamed back and aggregated by a coordinator process. Large result sets are automatically spilled to object storage and streamed to clients via signed URLs.
**Intelligent caching**: Query results are cached in object storage with intelligent cache keys that account for time ranges and query patterns. Cache hits dramatically reduce query latency for repeated queries.
## Compaction system
A background compaction system continuously optimizes storage efficiency:
**Automatic compaction**: The compaction scheduler identifies blocks that can be merged based on size, age, and access patterns. Small blocks are combined into larger "superblocks" that provide better compression ratios and query performance.
**Multiple strategies**: The system supports several compaction algorithms:
* **Default**: General-purpose compaction with optimal compression
* **Clustered**: Groups data by common field values for better locality
* **Fieldspace**: Optimizes for specific field access patterns
* **Concat**: Simple concatenation for append-heavy workloads
**Compression optimization**: During compaction, data is recompressed using more aggressive algorithms and column-specific optimizations that aren't feasible during real-time ingestion.
## System architecture
The overall system is composed of specialized microservices:
**Core services**: Handle authentication, billing, dataset management, and API routing. These services are stateless and horizontally scalable.
**Database layer**: The core database engine processes ingestion, manages storage, and coordinates query execution. It supports multiple deployment modes and automatic failover.
**Orchestration layer**: Manages distributed operations, monitors system health, and coordinates background processes like compaction and maintenance.
**Edge services**: Handle real-time data ingestion, protocol translation, and provide regional data collection points.
## Why this architecture wins
**Cost efficiency**: Serverless query execution means you only pay for compute during active queries. Extreme compression (25-50x) dramatically reduces storage costs compared to traditional row-based systems.
**Operational simplicity**: The system is designed to be self-managing. Automatic compaction, intelligent caching, and distributed coordination eliminate operational overhead.
**Elastic scale**: Each component scales independently. Ingestion scales with edge capacity, storage scales with object storage, and query capacity scales with serverless workers.
**Fault tolerance**: Write-ahead logging, distributed routing, and automatic failover ensure high availability. The system gracefully handles node failures and storage outages.
**Real-time performance**: Despite the distributed architecture, the system maintains sub-second query performance through intelligent caching, predicate pushdown, and columnar storage optimizations.
This architecture enables Axiom to ingest millions of events per second while maintaining sub-second query latency at a fraction of the cost of traditional logging and observability solutions.
# Features
Source: https://axiom.co/docs/platform-overview/features
Comprehensive overview of Axiom's components, features, and capabilities across the platform.
| Component | Sub-Component | Feature | Description |
| :-------------------------- | :--------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Data Platform** | Deployment | Axiom Cloud | Axiom hosts and manages all infrastructure in its own cloud. |
| | | | |
| **EventDB** | - | - | Foundation of Axiom’s platform for ingesting, storing, and querying timestamped event data at scale. |
| **EventDB** | Ingest | - | Ingest pipeline that is coordination-free, durable by default, and scales linearly without requiring Kafka or other heavy middleware. |
| **EventDB** | Storage | - | Custom block-based format on object storage, with extreme compression (average 25×, up to 50× for more structured events) and efficient metadata. |
| **EventDB** | Query | - | Serverless ephemeral runtimes that spin up on demand to process queries, powered by the Axiom Processing Language (APL). |
| **EventDB** | Query | [APL (Axiom Processing Language)](/apl/introduction) | Powerful query language supporting filtering, aggregations, transformations, and specialized operators. |
| **EventDB** | Query | [Virtual fields](/query-data/virtual-fields) | Ability to derive new values from data in real-time during queries without pre-structuring or transforming data during ingestion. |
| | | | |
| **Console** | - | - | Web UI for data management, querying, dashboarding, monitoring, and user administration. |
| **Console** | Query | [Simple Query Builder](/query-data/explore) | Guided interface to quickly filter and group data. |
| **Console** | Query | [Advanced Query Builder](/query-data/explore) | A full APL-based environment for complex aggregations, transformations, and correlations. |
| **Console** | Query | [Visualizations](/query-data/visualizations) | Charts, graphs, and other visual components to make sense of query results. |
| **Console** | Query | [Trace waterfall](/query-data/traces) | Dedicated interface for analyzing distributed traces. |
| **Console** | Stream | [Live tailing](/query-data/stream) | Real-time streaming view of incoming logs and events. |
| **Console** | Dashboards | - | Combine multiple visual elements (charts, tables, logs, etc.) onto a single page. |
| **Console** | Dashboards | [Elements](/dashboard-elements/overview) | Various chart types, log streams, notes, and more to tailor each dashboard. |
| **Console** | Dashboards | [Annotations](/query-data/annotate-charts) | Mark points in time or highlight events directly in your dashboards. |
| **Console** | Monitors | [Threshold monitors](/monitor-data/threshold-monitors) | Checks if aggregated values exceed or fall below a predefined threshold (e.g., error counts > 100). |
| **Console** | Monitors | [Match monitors](/monitor-data/match-monitors) | Triggers on specific log patterns or conditions for each event. |
| **Console** | Monitors | [Anomaly monitors](/monitor-data/anomaly-monitors) | Learns from historical data to detect unexpected deviations or spikes. |
| **Console** | Alerting | [Notifiers](/monitor-data/notifiers-overview) (Webhooks, Email, Slack, and more) | Sends notifications through various channels including email, chat platforms, incident management systems, and custom webhook integrations. |
| **Console** | Governance | [Dataset management](/reference/datasets) | Data retention controls, trim functionality, field vacuuming, and dataset sharing capabilities. |
| **Console** | Governance | [Role-Based Access Control (RBAC)](/reference/settings) | Fine-grained permissions at the organization and dataset levels. |
| **Console** | Governance | [Audit logs](/reference/audit-log) | Track every change and action taken by users within Axiom. |
| | | | |
| **Integrations** | - | - | A wide range of official and community tools to collect logs, metrics, and traces and send them into Axiom. |
| **Integrations** | Popular connectors | [OpenTelemetry](/send-data/opentelemetry), [Vector](/send-data/vector), [Cribl](/send-data/cribl), and [more](/send-data/methods) | Common shippers and pipeline tools for bridging data sources to Axiom. |
| **Integrations** | AWS | [CloudWatch Forwarder](/send-data/cloudwatch), [Lambda Extension](/send-data/aws-lambda), [Kinesis](/send-data/aws-firehose), [S3](/send-data/aws-s3) | Official AWS-based ingestion solutions. |
| **Integrations** | Language libraries | [Go](/guides/go), [Python](/guides/python), [Node.js](/guides/opentelemetry-nodejs), [Java](/guides/opentelemetry-java), [Ruby](/guides/opentelemetry-ruby), [Rust](/guides/rust), [.NET](/guides/opentelemetry-dotnet) | SDKs to send logs and metrics directly from applications. |
| | | | |
| **APIs and CLI** | - | - | Programmatic and command-line interfaces for ingesting, querying, and managing Axiom resources. |
| **APIs and CLI** | REST API | [Endpoints](/restapi/introduction) | Programmatic interfaces for ingesting data, running queries, retrieving results, and managing annotations and tokens. |
| **APIs and CLI** | REST API | [API tokens and Personal Access Tokens](/reference/tokens) | Authentication mechanisms for API access with basic and advanced tokens, plus personal tokens. |
| **APIs and CLI** | CLI | Auth and dataset management | Create datasets, set tokens, or update config from a shell. |
| **APIs and CLI** | CLI | [Query from terminal](/restapi/query) | Execute queries in APL or simpler filters directly in a command-line session. |
| **APIs and CLI** | [Terraform Provider](https://registry.terraform.io/providers/axiomhq/axiom/latest) | - | Terraform provider for programmatically creating and managing Axiom resources including datasets, notifiers, monitors, and users. |
| | | | |
| **Security and Compliance** | - | - | Axiom's data protection measures and compliance with major privacy/security frameworks. |
| **Security and Compliance** | [Compliance](/platform-overview/security) | SOC 2 Type II, GDPR, CCPA, HIPAA | Meets industry standards for data handling and privacy. |
## Related links
* Explore Axiom’s interactive demo [Playground](https://play.axiom.co/) to try these features.
* Check Axiom’s [roadmap](/platform-overview/roadmap) for upcoming features.
* [Contact](https://axiom.co/contact) Axiom if you have a feature request.
# Roadmap
Source: https://axiom.co/docs/platform-overview/roadmap
A high-level overview of Axiom’s current product development themes.
## Focus areas
Axiom continuously ships improvements to its core platform. Beyond these changes, Axiom’s current product development is guided by two parallel focus areas:
1. Transforming the core data platform into an intelligent assistant
2. Building dedicated tools for teams creating the next generation of AI-powered software.
[Contact](https://axiom.co/contact) the Axiom team to request more details about specific roadmap items or to join early access programs.
### Intelligent assistant
Axiom’s Console is evolving from a powerful analysis tool into an intelligent assistant that dramatically accelerates time-to-insight. By augmenting human-led investigations with AI, Axiom helps teams move from reactive troubleshooting to proactive action.
Learn more in the [Intelligence](/console/intelligence) docs.
### Confident AI engineering
As teams incorporate more AI capabilities into their own products, they need confidence that those features are performing as expected. Axiom is building a dedicated suite of tools to bring the rigor of observability to the entire AI development lifecycle.
Learn more in the [AI engineering](/ai-engineering/overview) docs.
### Platform excellence and scale
Supporting ambitious builders requires a rock-solid and scalable foundation. Axiom continues to invest heavily in core performance, reliability, and capabilities of the Axiom platform to ensure it can handle the most demanding workloads.
* **Full observability coverage**: Axiom is working to make **Metrics** a generally available, first-class citizen within the data platform, completing the "three pillars of observability" and providing a unified platform for all your telemetry.
* **Global architecture**: Axiom is being re-architected for global scale and efficiency. This includes building out a multi-region edge infrastructure and overhauling storage to support millions of datasets, ensuring high availability and low latency.
## Feature states
Each feature of Axiom is in one of the following states:
* **In development:** Axiom is actively building this feature. It’s not available yet but it’s progressing towards Preview.
* **Private preview:** An early access feature available to selected customers which helps Axiom validate the feature with trusted partners.
* **Public preview:** The feature is available for everyone to try but may have some rough edges. Axiom is gathering feedback before making it GA.
* **Generally available (GA):** The feature is fully released, production-ready, and supported. Feel free to use it in your workflows.
* **Planned end of life:** The feature is scheduled to be retired. It’s still working but you should start migrating to alternative solutions.
* **End of life:** The feature is no longer available or supported. Axiom has sunset it in favor of newer solutions.
Private and public preview features are experimental, are not guaranteed to work as expected, and may return unexpected query results. Please consider the risk you run when you use preview features against production workloads.
Current private preview features:
* [Flow](/process-data/introduction)
Current public preview features:
* [Cursor-based pagination](/restapi/pagination)
* [`externaldata` operator](/apl/tabular-operators/externaldata-operator)
* [`join` operator](/apl/tabular-operators/join-operator)
# Security
Source: https://axiom.co/docs/platform-overview/security
This page summarizes what Axiom does to ensure the highest standards of information security and data protection.
## Compliance
Axiom complies with key standards and regulations.
### ISO 27001
Axiom’s ISO 27001 certification indicates that we have established a robust system to manage information security risks concerning the data we control or process.
### SOC2 Type II
Axiom’s SOC 2 Type II certification proves that we have strict security measures in place to protect customer data.
If you’re on the Axiom Cloud or the Bring Your Own Cloud plan, you can request a report that outlines the technical and legal details under a non-disclosure agreement (NDA). For more information, see the [Axiom Trust Center](https://trust.axiom.co/).
### General Data Protection Regulation (GDPR)
Axiom complies with GDPR and its core principles including data minimization and rights of the data subject.
### California Consumer Privacy Act (CCPA)
Axiom complies with CCPA and its core principles including transparency on data collection, processing and storage. You can request a Data Processing Addendum that outlines the technical and legal details.
### Health Insurance Portability and Accountability Act (HIPAA)
Axiom complies with HIPAA and its core principles. HIPAA compliance means that Axiom can enter into Business Associate Agreements (BAAs) with healthcare providers, insurers, pharma and health research firms, and service providers who work with protected health information (PHI).
If you’re on the Axiom Cloud or the Bring Your Own Cloud plan, you can request a Business Associate Agreement (BAA). For more information, see the [Axiom Trust Center](https://trust.axiom.co/).
## Compliance and Axiom AI
Features powered by Axiom AI allow you to get insights from your data faster. These features are powered by leading foundation models through trusted enterprise providers including Amazon Bedrock and Google Gemini. Your inputs and outputs are never used to train generative models.
AI features are turned on by default for most customers. You can turn them on or off anytime for the whole organization, for example, for regulatory and compliance reasons.
To turn Axiom AI on or off:
1. Click **Settings > General**.
2. Click **Turn on Axiom AI** or **Turn off Axiom AI**.
## Comprehensive security measures
Axiom employs a multi-faceted approach to ensure data security, covering encryption, penetration testing, infrastructure security, and organizational measures.
### Data encryption
Data at Axiom is encrypted both at rest and in transit. Our encryption practices align with industry standards and are regularly audited to ensure the highest level of security.
Data is stored at rest in Amazon Web Services (AWS) infrastructure and encrypted through technologies offered by AWS using AES-256 encryption. The same high level of security is provided for data in transit, using AES-256 encryption and TLS to secure network traffic.
### Penetration testing
Axiom performs regular vulnerability scans and annual penetration tests to proactively identify and mitigate potential security threats.
### System protection
Axiom systems are segmented into separate networks and protected through restrictive firewalls. Network access to production environments is tightly restricted. Monitors are in place to ensure that service delivery matches SLA requirements.
### Resilience against system failure
Axiom maintains daily encrypted backups and full system replication of production platforms across multiple availability zones to ensure business continuity and resilience against system failures. Axiom periodically tests restoration capabilities to ensure your data is always protected and accessible.
### Organizational security practices
Axiom’s commitment to security extends beyond technological measures to include comprehensive organizational practices. Axiom employees receive regular security training and follow stringent security requirements like encryption of storage and two-factor authentication.
Axiom supports secure, centralized user authentication through SAML-based SSO (Security Assertion Markup Language-based single sign-on). This makes it easy to keep access grants up-to-date with support for the industry-standard SCIM protocol. Axiom supports flows initiated by both the service provider and the identity provider (SP-initiated and IdP-initiated flows).
Axiom enables you to take control over access to your data and features within Axiom through role-based permissions.
Axiom provides searchable audit logs that give you comprehensive tracking of all activity in your Axiom organization to meet even the most stringent compliance requirements.
SAML-based SSO, role-based access control (RBAC), and full access to the audit log are available as add-ons if you’re on the Axiom Cloud plan, and they are included by default on the Bring Your Own Cloud plan. For more information on upgrading, see the [Plan page](https://app.axiom.co/settings/plan) in your Axiom settings.
## Sub-processors
Axiom works with a limited number of trusted sub-processors. For a full list, see [Sub-processors](https://trust.axiom.co/subprocessors). Axiom regularly reviews all third parties to ensure they meet our high standards for security.
## Report vulnerabilities
Axiom takes all reports seriously and has a responsible disclosure process. Please submit vulnerabilities by email to [security@axiom.co](mailto:security@axiom.co).
# Amazon S3 destination
Source: https://axiom.co/docs/process-data/destinations/amazon-s3
This page explains how to set up an Amazon S3 destination.
[Amazon S3](https://aws.amazon.com/s3/) (Simple Storage Service) is a scalable, secure, and highly durable cloud storage solution for storing and retrieving data.
To set up an Amazon S3 destination:
1. In AWS, ensure the AWS Policy contains the statements required to perform a `PutObject` operation. For more information, see the AWS documentation on [policies and permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html), [access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html), and the [`PutObject` operation](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
2. In Axiom, create an Amazon S3 destination. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
3. Configure the following:
* **Access key ID**. For more information on access keys, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
* **Secret access key**. For more information on access keys, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
* In **Region**, select the bucket region. For more information on bucket properties, see the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/view-bucket-properties.html).
* In **Bucket**, enter the bucket name. For more information on bucket properties, see the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/view-bucket-properties.html).
* Optional: In **Format**, specify the format in which Axiom sends data to the destination.
# Axiom destination
Source: https://axiom.co/docs/process-data/destinations/axiom
This page explains how to set up an Axiom destination.
Use Axiom destinations to process and route data from one Axiom dataset (source dataset) to another (destination dataset).
To set up an Axiom destination:
1. Create a destination dataset in Axiom where you want to route data.
2. Create an Axiom API token with permissions to update the destination dataset.
3. Create an Axiom destination. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
4. Configure the following:
* In **Dataset**, enter the name of the destination dataset.
* In **API Token**, enter the Axiom API token.
* In **Region**, select the region that your organization uses. For more information, see [Regions](/reference/regions). Optional: Select **Custom URL** and specify a custom URL.
## Billing for Axiom destinations
If you route data to an Axiom destination using Flow, Axiom bills the receiving organization for the data ingest.
# Azure Blob destination
Source: https://axiom.co/docs/process-data/destinations/azure-blob
This page explains how to set up an Azure Blob destination.
[Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs) is Microsoft’s cloud object storage solution optimized for storing unstructured data such as documents, media files, and backups at a massive scale.
To set up an Azure Blob destination:
1. In Azure, create a service principal account with authorization to perform a `Put Blob` operation. For more information, see the Azure documentation on [creating a service principal](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal) and on [authorizing a `Put Blob` operation](https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob?tabs=microsoft-entra-id#authorization).
2. In Axiom, create an Azure Blob destination. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
3. Configure the following:
* In **URL**, enter the path to the storage account.
* In **Format**, specify the format in which Axiom sends data to the destination.
* In **Directory (tenant) ID**, enter the directory (tenant) ID. For more information on getting the directory (tenant) ID, see the [Azure documentation](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal#sign-in-to-the-application).
* In **Application ID**, enter the app ID. For more information on getting the app ID, see the [Azure documentation](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal#sign-in-to-the-application).
* In **Application secret**, enter the app secret. For more information on creating a client secret, see the [Azure documentation](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal#option-3-create-a-new-client-secret).
# Elastic Bulk destination
Source: https://axiom.co/docs/process-data/destinations/elastic-bulk
This page explains how to set up an Elastic Bulk destination.
[Elastic Bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) enables efficient indexing or deletion of large volumes of documents in Elasticsearch, reducing latency by bundling multiple operations into a single request.
To set up an Elastic Bulk destination:
1. In Elastic, ensure your account has the index privileges to use the create action. For more information, see the [Elastic documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#docs-bulk-api-prereqs).
2. In Axiom, create an Elastic Bulk destination. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
3. Configure the following:
* In **URL**, enter the path to the Elastic Bulk API where you want to route data. For example, enter `https://api.elastic-cloud.com/` if you use Elastic Cloud.
* In **Index**, enter the Elastic index.
* In **Username** and **Password**, enter your Elastic login credentials.
# Google Cloud Storage destination
Source: https://axiom.co/docs/process-data/destinations/gcs
This page explains how to set up a Google Cloud Storage destination.
[Google Cloud Storage](https://cloud.google.com/storage) is a scalable, secure, and durable object storage service for unstructured data.
To configure a Google Cloud Storage destination:
1. In Google Cloud Storage, create a service account. For more information, see the [Google documentation](https://developers.google.com/workspace/guides/create-credentials#create_a_service_account).
2. Create credentials for the service account in JSON format. For more information, see the [Google documentation](https://developers.google.com/workspace/guides/create-credentials#create_credentials_for_a_service_account).
3. In Axiom, create a Google Cloud Storage destination. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
4. Configure the following:
* In **Bucket**, enter the bucket name. For more information on retrieving bucket metadata in Google Cloud Storage, see the [Google documentation](https://cloud.google.com/storage/docs/getting-bucket-metadata).
* In **Credentials JSON**, enter the credentials you have previously created for the service account.
* Optional: In **Format**, specify the format in which Axiom sends data to the destination.
# HTTP destination
Source: https://axiom.co/docs/process-data/destinations/http
This page explains how to set up an HTTP destination.
HTTP destinations use HTTP requests to route data to web apps or services.
To configure an HTTP destination:
* In **URL**, enter the path to the HTTP destination where you want to route data.
* Optional: In **Format**, specify the format in which Axiom sends data to the destination.
* Optional: In **Headers**, specify any headers you want Axiom to send to the destination.
* In **Authorization type**, select one of the following options to authorize requests to the HTTP destination:
* Select **None** if the destination doesn’t require authorization to receive data.
* Select **Authorization header** to authorize requests to the destination with the `Authorization` request header, and specify the value of the request header. For example, `Basic 123`. For more information, see the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization).
* Select **Basic** to authorize requests to the destination with username and password, and specify your login credentials.
# Manage destinations
Source: https://axiom.co/docs/process-data/destinations/manage-destinations
This page explains how to manage Flow destinations.
Flow is currently in private preview. To try it out, [sign up for a free account](https://app.axiom.co/flows).
To transform and route data from an Axiom dataset, you first need to set up a destination where Axiom routes the data. Once you set up a destination, you can use it in any flow configuration.
To set up a destination:
1. Click the [Flows](https://app.axiom.co/flows) tab. Axiom displays the list of flow configurations you have created.
2. On the left, click **Destinations**, and then click **New destination**.
3. Name the destination.
4. In **Destination type**, select the destination type.
5. Configure the destination. For more information on each destination type, see the following:
* [Amazon S3](/process-data/destinations/amazon-s3)
* [Axiom](/process-data/destinations/axiom)
* [Azure Blob](/process-data/destinations/azure-blob)
* [Elastic Bulk](/process-data/destinations/elastic-bulk)
* [Google Cloud Storage](/process-data/destinations/gcs)
* [HTTP](/process-data/destinations/http)
* [OpenTelemetry Traces](/process-data/destinations/opentelemetry)
* [Splunk](/process-data/destinations/splunk)
* [S3-compatible storage](/process-data/destinations/s3-compatible)
6. At the bottom right, click **Save**.
# OpenTelemetry Traces destination
Source: https://axiom.co/docs/process-data/destinations/opentelemetry
This page explains how to set up an OpenTelemetry Traces destination.
[OpenTelemetry](https://opentelemetry.io/) provides a standardized way to collect, process, and visualize distributed tracing data, enabling you to understand the performance and dependencies of complex applications.
To set up an OpenTelemetry Traces destination:
1. Create an OpenTelemetry Traces destination in Axiom. For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
2. In **URL**, enter the path to the OpenTelemetry destination where you want to route data.
3. In **Format**, specify the format in which Axiom sends data to the destination.
4. Optional: In **Headers**, specify any headers you want Axiom to send to the destination.
# S3-compatible storage destination
Source: https://axiom.co/docs/process-data/destinations/s3-compatible
This page explains how to set up an S3-compatible storage destination.
S3-compatible storage refers to third-party storage systems that implement Amazon S3’s APIs, enabling seamless interoperability with tools and applications built for S3. For example, [MinIO](https://min.io/), [Wasabi](https://wasabi.com/), or [Backblaze](https://www.backblaze.com/).
To configure an S3-compatible storage destination:
* **Access key ID**. For more information on access keys in Amazon S3, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
* **Secret access key**. For more information on access keys in Amazon S3, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
* In **Bucket**, enter the bucket name. For more information on bucket properties in Amazon S3, see the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/view-bucket-properties.html).
* In **Hostname**, specify the hostname.
* In **Region**, select the bucket region. For more information on bucket properties in Amazon S3, see the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/view-bucket-properties.html).
* Optional: In **Format**, specify the format in which Axiom sends data to the destination.
# Splunk destination
Source: https://axiom.co/docs/process-data/destinations/splunk
This page explains how to set up a Splunk destination.
[Splunk](https://www.splunk.com/) is a data analytics platform designed for searching, monitoring, and analyzing machine-generated data to provide real-time insights and operational intelligence.
To configure a Splunk destination:
* In **URL**, enter the path to the Splunk destination where you want to route data.
* Optional: In **Headers**, specify any headers you want Axiom to send to the destination.
* In **Authorization type**, select one of the following options to authorize requests to the Splunk destination:
* Select **None** if the destination doesn’t require authorization to receive data.
* Select **Authorization header** to authorize requests to the destination with the `Authorization` request header, and specify the value of the request header. For example, `Basic 123`. For more information, see the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization).
* Select **Basic** to authorize requests to the destination with username and password, and specify your login credentials.
# Configure Flow
Source: https://axiom.co/docs/process-data/flows
This page explains how to set up a flow to filter, shape, and route data from an Axiom dataset to a destination.
Flow is currently in private preview. To try it out, [sign up for a free preview](https://app.axiom.co/flows).
A flow is a way to filter, shape, and route data from an Axiom dataset to a destination that you choose. This page explains how to set up a flow.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* Set up a destination. For more information, see [Destinations](/process-data/destinations).
## Set up a flow configuration
To set up a flow configuration:
1. Click the [Flows](https://app.axiom.co/flows) tab. Axiom displays the list of flow configurations you have created.
2. In the top right, click **New configuration**.
3. In the **Source** section, specify the source dataset and the transformation in an APL query. For example, the following APL query selects events from a `cloudflare-logpush` dataset and reduces them by removing a set of fields, before enriching with a new field.
```kusto
['cloudflare-logpush']
| where QueryName == "app.axiom.co."
// Reduce events by dropping unimportant fields
| project-away ['@app']*
// Enrich events with additional context
| extend ['@origin'] = "_axiom"
```
If you only specify the name of the dataset in the query, Axiom routes all events to the destination.
In the APL query, you can only use filters (such as `where` and `search`) and column transformations (such as `project` and `extend`).
4. Click **Preview** to check whether the query you specified transforms your data as desired. The **Input event** section displays the original data stored in Axiom. The **Output event** section displays the transformed data that Axiom sends to the destination. The original data in the Axiom dataset isn’t affected by the transformation.
5. In the **Destination** section, click **Add a destination**, and then select an existing destination where you want to route data or click **Create new destination**.
6. In the top right, click **Create**.
After creating a flow configuration, create a one-time flow.
## Create one-time flow
One-time flows are one-off operations that process past data for a specific time range and route the output to a destination.
1. Click the **Flows** tab. Axiom displays the list of flow configurations you have created. Select the flow configuration that you want to use for creating a one-time flow.
2. In the top right, click **Create flow** and select **One-time flow**.
3. Specify the time range for events you want to process.
4. Click **Create flow**.
As a result, Axiom runs the query on the source data for the specified time range and routes the results of the query to the destination.
# Introduction to Flow
Source: https://axiom.co/docs/process-data/introduction
This section explains how to use Axiom’s Flow feature to filter, shape, and route event data.
Flow provides onward event processing, including filtering, shaping, and routing. Flow works after persisting data in Axiom’s highly efficient queryable store, and uses [APL](/apl/introduction) to define processing.
Flow is currently in private preview. To try it out, [sign up for a free account](https://app.axiom.co/flows).
## Elements of a flow
A flow consists of three elements:
* **Source**. This is the Axiom dataset used as the flow origin.
* **Transformation**. This is the APL query used to filter, shape, and enrich the events.
* **Destination**. This is where events are routed.
To get started with Flow, see [Configure Flow](/process-data/flows).
For more information on the measures Axiom takes to protect sensitive data, see [Data security in Flow](/process-data/security).
# Data security in Flow
Source: https://axiom.co/docs/process-data/security
This page explains the measures Axiom takes to protect sensitive data in Flow.
When you use flows, Axiom takes the following measures to protect sensitive data such as private keys:
* **Encrypted storage**: Credentials are encrypted at rest in the database. Axiom uses strong, industry-standard encryption methods and follows best practices.
* **Per-entry encryption**: Each credential is encrypted individually with its own unique key. This limits the potential impact if any single key is compromised.
* **Secure transit**: Credentials are encrypted in transit between your browser/client and the Axiom API using TLS 1.2 or 1.3.
* **Internal encryption**: Credentials remain encrypted within Axiom’s internal network.
* **Memory handling**: When credentials are briefly held in memory (for example, when delivering payloads), Axiom relies on cloud infrastructure security guarantees and proper memory management techniques, including garbage collection.
* **Contextual encryption**: Different uses of the same credentials use different encryption contexts. This adds an extra layer of protection.
* **Role-based access**: Axiom uses role-based access control for key management without keeping any master keys that can decrypt customer data.
These measures ensure that accessing usable credentials is extremely difficult even in the highly unlikely event of a data breach. The individual encryption of each entry means that even if one is compromised, the others remain secure.
For more information on Axiom’s security posture, see [Security](https://axiom.co/security).
# Annotate dashboard elements
Source: https://axiom.co/docs/query-data/annotate-charts
This page explains how to use annotations to add context to your dashboard elements.
Annotating charts lets you add context to your charts. For example, use annotations to mark the time of the following:
* Deployments
* Server outages
* Incidents
* Feature flags
This adds context to the trends displayed in your charts and makes it easier to investigate issues in your app or system.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete annotations.
## Create annotations
Create annotations in one of the following ways:
* [Use a GitHub Action](#create-annotations-with-github-actions)
* [Send a request to the Axiom API](#create-annotations-with-axiom-api)
If you use the Axiom Vercel integration, annotations are automatically created for deployments.
Axiom automatically creates an annotation if a monitor triggers.
### Create annotations with GitHub Actions
You can configure GitHub Actions using YAML syntax. For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#create-an-example-workflow).
To create an annotation when a deployment happens in GitHub, add the following to the end of your GitHub Action file:
```yml
- name: Add annotation in Axiom when a deployment happens
  uses: axiomhq/annotation-action@v0.1.0
  with:
    axiomToken: ${{ secrets.API_TOKEN }}
    datasets: DATASET_NAME
    type: "production-release"
    time: "2024-01-01T00:00:00Z" # optional, defaults to now
    endTime: "2024-01-01T01:00:00Z" # optional, defaults to null
    title: "Production deployment" # optional
    description: "Commit ${{ github.event.head_commit.message }}" # optional
    url: "https://example.com" # optional, defaults to job URL
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Customize the other fields of the code above such as the title, the description, and the URL.
This creates an annotation in Axiom each time you deploy in GitHub.
### Create annotations using Axiom API
To create an annotation using the Axiom API, use the following API request:
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v2/annotations' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"time": "2024-03-18T08:39:28.382Z",
"type": "deploy",
"datasets": ["DATASET_NAME"],
"title": "Production deployment",
"description": "Deploy new feature to the sales form",
"url": "https://example.com"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Customize the other fields of the code above such as the title, the description, and the URL. For more information on the allowed fields, see [Annotation object](#annotation-object).
**Example response**
```json
{
"datasets": ["my-dataset"],
"description": "Deploy new feature to the sales form",
"id": "ann_123",
"time": "2024-03-18T08:39:28.382Z",
"title": "Production deployment",
"type": "deploy",
"url": "https://example.com"
}
```
The API response from Axiom contains an `id` field. This is the annotation ID that you can later use to change or delete the annotation.
## Get information about annotations
To get information about all annotations in your org, use the following API request:
```bash
curl -X 'GET' 'https://AXIOM_DOMAIN/v2/annotations' \
-H 'Authorization: Bearer API_TOKEN'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Use the following parameters in the endpoint URL to filter for a specific time interval and dataset:
* `start` is an ISO timestamp that specifies the beginning of the time interval.
* `end` is an ISO timestamp that specifies the end of the time interval.
* `datasets` is the list of datasets whose annotations you want to get information about. Separate datasets by commas, for example `datasets=my-dataset1,my-dataset2`.
The example below gets information about annotations that mark events between March 16 and March 19, 2024, and that are associated with the dataset `my-dataset`:
```bash
curl -X 'GET' 'https://AXIOM_DOMAIN/v2/annotations?start=2024-03-16T00:00:00.000Z&end=2024-03-19T23:59:59.999Z&datasets=my-dataset' \
-H 'Authorization: Bearer API_TOKEN'
```
**Example response**
```json
[
{
"datasets": ["my-dataset"],
"description": "Deploy new feature to the navigation component",
"id": "ann_234",
"time": "2024-03-17T01:15:45.232Z",
"title": "Production deployment",
"type": "deploy",
"url": "https://example.com"
},
{
"datasets": ["my-dataset"],
"description": "Deploy new feature to the sales form",
"id": "ann_123",
"time": "2024-03-18T08:39:28.382Z",
"title": "Production deployment",
"type": "deploy",
"url": "https://example.com"
}
]
```
The API response from Axiom contains an `id` field. This is the annotation ID that you can later use to change or delete the annotation. For more information on the other fields, see [Annotation object](#annotation-object).
To get information about a specific annotation, use the following API request:
```bash
curl -X 'GET' 'https://AXIOM_DOMAIN/v2/annotations/ANNOTATION_ID' \
-H 'Authorization: Bearer API_TOKEN'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `ANNOTATION_ID` with the ID of the annotation.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
**Example response**
```json
{
"datasets": ["my-dataset"],
"description": "Deploy new feature to the sales form",
"id": "ann_123",
"time": "2024-03-18T08:39:28.382Z",
"title": "Production deployment",
"type": "deploy",
"url": "https://example.com"
}
```
For more information on these fields, see [Annotation object](#annotation-object).
## Change annotations
To change an existing annotation, use the following API request:
```bash
curl -X 'PUT' 'https://AXIOM_DOMAIN/v2/annotations/ANNOTATION_ID' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"endTime": "2024-03-18T08:49:28.382Z"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `ANNOTATION_ID` with the ID of the annotation. For more information about how to determine the annotation ID, see [Get information about annotations](#get-information-about-annotations).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
In the payload, specify the properties of the annotation that you want to change. The example above adds an `endTime` field to the annotation created above. For more information on the allowed fields, see [Annotation object](#annotation-object).
**Example response**
```json
{
"datasets": ["my-dataset"],
"description": "Deploy new feature to the sales form",
"id": "ann_123",
"time": "2024-03-18T08:39:28.382Z",
"title": "Production deployment",
"type": "deploy",
"url": "https://example.com",
"endTime": "2024-03-18T08:49:28.382Z"
}
```
## Delete annotations
To delete an existing annotation, use the following API request:
```bash
curl -X 'DELETE' 'https://AXIOM_DOMAIN/v2/annotations/ANNOTATION_ID' \
-H 'Authorization: Bearer API_TOKEN'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `ANNOTATION_ID` with the ID of the annotation. For more information about how to determine the annotation ID, see [Get information about annotations](#get-information-about-annotations).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
## Annotation object
Annotations are represented as objects with the following fields:
* `datasets` is the list of dataset names for which the annotation appears on charts.
* `id` is the unique ID of the annotation.
* `description` is an explanation of the event the annotation marks on the charts.
* `time` is an ISO timestamp value that specifies the time the annotation marks on the charts.
* `title` is a summary of the annotation that appears on the charts.
* `type` is the type of the event marked by the annotation. For example, a production deployment.
* `url` is the URL relevant to the event marked by the annotation. For example, a link to a GitHub pull request.
* Optional: `endTime` is an ISO timestamp value that specifies the end time of the annotation.
## Show and hide annotations on dashboards
To show and hide annotations on a dashboard, follow these steps:
1. Go to the dashboard where you see annotations. For example, the prebuilt Vercel dashboard automatically shows annotations about deployments.
2. Click **Toggle annotations**.
3. Select the datasets whose annotations you want to display on the charts.
## Example use case
The example below demonstrates how annotations help you troubleshoot issues in your app or system. Your monitor alerts you about rising form submission errors. You explore this trend to see when it started. Right before the errors started rising, you see an annotation about the deployment of a new feature to the form. You hypothesize that the deployment caused the errors and decide to investigate the code changes it introduced.
### Create annotation
Use the following API request to create an annotation:
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v2/annotations' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"time": "2024-03-18T08:39:28.382Z",
"type": "deploy",
"datasets": ["my-dataset"],
"title": "Production deployment",
"description": "Deploy new feature to the sales form",
"url": "https://example.com"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
### Create a monitor
In this example, you set up a monitor that alerts you when the number of form submission errors rises. For more information on creating a monitor, see [Monitoring and Notifiers](/monitor-data/monitors).
### Explore trends
Suppose your monitor sends you a notification about rising form submission errors.
You decide to investigate and run a query to display the number of form submission errors over time. Ensure you select a time range that includes the annotation.
You get a chart similar to the example below displaying form submission errors and annotations about the time of important events such as deployments.
### Inspect issue
1. From the chart, you see that the number of errors started to rise after the deployment of a new feature to the sales form. This correlation allows you to form the hypothesis that the errors might be caused by the deployment.
2. You decide to investigate the deployment by clicking on the link associated with the annotation. The link takes you to the GitHub pull request.
3. You inspect the code changes in depth and discover the cause of the errors.
4. You quickly fix the issue in another deployment.
# Analyze data
Source: https://axiom.co/docs/query-data/datasets
This page explains how to use the Datasets tab in Axiom.
The Datasets tab allows you to gain a better understanding of the fields you have in your datasets.
In Axiom, an individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data. The Datasets tab provides you with information about each field within your datasets.
## Datasets overview
When you open the Datasets tab, you see the list of datasets. To explore the fields in a dataset, select the dataset from the list on the left.
When you select a dataset, Axiom displays the list of fields within the dataset on the left. The field types are the following:
* String
* Number
* Boolean
* Array
* [Virtual fields](#virtual-fields)
This view flattens field names with dot notation. This means that the event `{"foo": { "bar": "baz" }}` appears as `foo.bar`. Field names containing periods (`.`) are folded.
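In APL queries elsewhere in these docs, such flattened or dotted field names are referenced with bracket notation. As a minimal sketch (using the `sample-http-logs` playground dataset referenced on other pages):
```kusto
['sample-http-logs']
// Dotted field names are wrapped in ['...'] when referenced in APL
| project ['geo.country'], ['geo.city']
| limit 10
```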
On the right, you see the following:
* [Views](#views)
* [Saved queries](#saved-queries)
* [Query history](#query-history)
### Edit field
To edit a field:
1. Go to the Datasets tab.
2. Select the dataset that contains the field.
3. Find the field in the list, and then click it.
4. Edit the following:
* Field description.
* Field unit. This is only available for number field types.
* Hidden. This means that the field is still present in the underlying Axiom database, but it doesn’t appear in the Axiom UI. Use this option if you sent the field to Axiom by mistake or you don’t want to use it anymore in Axiom.
## Quick charts
Quick charts allow fast charting of fields depending on their field type. For example, for number fields, choose one of the following to quickly visualize:
* Percentiles
* Averages
* Histograms
## Virtual fields
Virtual fields are powerful expressions that run on every event during a query to create new fields. The virtual fields are calculated from the events in the query using an APL expression. They’re similar to tools like derived columns in other products but super-charged with an expressive interpreter and with the flexibility to add, edit, or remove them at any time.
To manage a dataset’s virtual fields, click in the toolbar.
For more information, see [Virtual fields](/query-data/virtual-fields).
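As a rough illustration, the sketch below shows the kind of per-event computation a virtual field can encapsulate. It assumes the `sample-http-logs` playground dataset and its `req_duration_ms` field, which appear elsewhere in these docs:
```kusto
['sample-http-logs']
// Derive a new value from an existing field for every event, similar to a virtual field
| extend req_duration_s = req_duration_ms / 1000
| project _time, req_duration_s
```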
## Map fields
Map fields are a special type of field that can hold a collection of nested key-value pairs within a single field. You can think of the content of a map field as a JSON object. The Datasets tab enables you to create map fields, and view unused and removed map fields. For more information, see [Map fields](/apl/data-types/map-fields#create-map-fields).
## Views
Views allow you to apply commonly-used filters and transformations to your dataset. The result is a view that you can use similarly to how you use datasets. The concept of a view in Axiom is similar to the concept of a virtual table in a database.
For more information, see [Views](/query-data/views).
## Queries
Every query has a unique ID that you can save and share with your team members. The Datasets tab allows you to do the following:
* Star a query so that you and your team members can easily find it in the future.
* Browse previous queries and find a past query.
### Saved queries
To find and run previously saved queries:
1. Select a dataset.
2. Optional: In the top right of the **Saved queries** section, select whether to display your queries or your team’s queries.
3. Find the query in the list, and then click it to run the query.
### Query history
To find and run recent queries:
1. Select a dataset.
2. Optional: In the top right of the **Query history** section, select whether to display your queries or your team’s queries.
3. Find the query in the list, and then click it to run the query.
# Query data with Axiom
Source: https://axiom.co/docs/query-data/explore
Learn how to filter, manipulate, extend, and summarize your data.
The Query tab provides you with robust computation and processing power to get deeper insights into your data. It enables you to filter, manipulate, extend, and summarize your data.
To query your data, go to the Query tab and choose one of the following options:
* [Generate a query using AI based on a natural-language prompt](#generate-query-using-ai)
* [Create a query using the visual query builder](#create-query-using-visual-query-builder)
* [Create a query using Axiom Processing Language (APL)](#create-query-using-apl)
You can easily switch between these methods at any point when creating the query.
## Generate query using AI
Explain what you want to infer from your data in your own words and Axiom AI generates a valid APL query for you.
1. Go to the Query tab.
2. Click **APL**, and then click in the query editor.
3. Press Cmd/Ctrl + K.
4. Type what you want to infer from your data in your own words using natural language, and then click **Generate**. For example, type `Show me the most common status responses in HTTP logs.`
5. Axiom’s AI generates the APL query based on your prompt and gives you the following options:
* Click **Accept** to update the editor with the generated query and change the generated query before running it. Any previous input in the query editor is lost.
* Click **Accept and run** to update the editor with the generated query and run it immediately. Any previous input in the query editor is lost.
* Click **Reject** to go back to your previous input in the query editor and close the query generator.
### Iterate over prompt history
Axiom saves the prompts you type in the query generator. To find one of your previous prompts and generate an APL query for it:
1. Go to the Query tab.
2. Click **APL**, and then click in the query editor.
3. Press Cmd/Ctrl + K.
4. Cycle through your history using the arrow keys to find the prompt.
5. Click **Generate**.
## Create query using visual query builder
1. In the top left, click **Builder**.
2. From the list, select the dataset that you want to query.
3. Optional: In the **Where** section, create filters to narrow down the query results.
4. Optional: In the **Summarize** section, select a way to visualize the query results.
5. Optional: In the **More** section, specify additional options such as sorting the results or limiting the number of displayed events.
6. Select the time range.
7. Click **Run**.
While the query runs, the status bar gives you continuous updates about the number of rows examined, matched, and returned.
See below for more information about each of these steps.
### Add filters
Use the **Where** section to filter the results to specific events. For example, you can filter for events that originate in a specific geolocation, such as France.
To add a filter:
1. Click **+** in the **Where** section.
2. Select the field where you want to filter for values. For example, `geo.country`.
3. Select the logical operator of the filter. These are different for each field type. For example, you can use **starts-with** for string fields and **>=** for number fields. In this example, select `==` for an exact match.
4. Specify the value for which you want to filter. In this example, enter `France`.
When you run the query, the results only show events matching the criteria you specified for the filter.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20%5B'geo.country'%5D%20%3D~%20'France'%22%7D)
### Add multiple filters
You can add multiple filters and combine them with AND/OR operators. For example, you can filter for events that originate in France or Germany.
To add and combine multiple filters:
1. Add a filter for France as explained in [Add filters](#add-filters).
2. Add a filter for Germany as explained in [Add filters](#add-filters).
3. Click **and** that appears between the two filters, and then select **or**.
The query results display events that originate in France or Germany.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20\(%5B'geo.country'%5D%20%3D~%20'France'%20or%20%5B'geo.country'%5D%20%3D~%20'Germany'\)%22%7D)
You can add groups of filters using the **New Group** element.
Axiom supports AND/OR operators at the top level and one level deep.
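For reference, the APL behind such a combination looks roughly like the sketch below. The `status` filter is an illustrative assumption; the playground dataset stores HTTP status codes as strings:
```kusto
['sample-http-logs']
// One OR group nested inside the top-level AND
| where (['geo.country'] =~ 'France' or ['geo.country'] =~ 'Germany') and status == '500'
```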
### Add visualizations
Axiom provides powerful visualizations that display the output of aggregate functions across your dataset. The **Summarize** section provides you with several ways to visualize the query results. For example, the `count` visualization displays the number of events matching your query over time. Some visualizations require an argument such as a field or other parameters.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20bin_auto\(_time\)%22%7D)
For more information about visualizations, see [Visualize data](/query-data/visualizations).
### Segment data
When visualizing data, segment data into specific groups to see more clearly how the data behaves. For example, to see how many events originate in each geolocation, select the `count` visualization and group by `geo.country`.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20bin_auto\(_time\)%2C%20%5B'geo.country'%5D%22%7D)
### More options
In the **More** section, specify the following additional options:
* By default, Axiom automatically chooses the best ordering for the query results. To specify the sorting order manually, click **Sort by**, and then select the field according to which you want to sort the results.
* To limit the number of events the query returns, click **Limit**, and then specify the maximum number of returned events.
* Specify whether to display or hide open intervals.
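These options roughly correspond to APL operators. A minimal sketch of sorting and limiting results (using the `sample-http-logs` playground dataset):
```kusto
['sample-http-logs']
// Sort newest first and cap the number of returned events
| sort by _time desc
| limit 100
```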
### Select time range
When you select the time range of a query, you specify the time interval where you want to look for events.
To select the time range, follow these steps:
1. In the top left, click **Time range**.
2. Choose one of the following options:
* Use the **Quick range** items to quickly select popular time ranges.
* Use the **Custom start/end date** fields to select specific times.
### Special fields
Axiom creates the following two fields automatically for a new dataset:
* `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
* `_sysTime` is the time when you ingested the data.
In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.
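For example, the sketch below surfaces events with a noticeable gap between the two timestamps. It assumes datetime arithmetic and timespan literals behave as in the Kusto dialect APL is based on:
```kusto
['sample-http-logs']
// Events ingested more than one minute after their event timestamp
| where _sysTime - _time > 1m
| project _time, _sysTime
```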
## Create query using APL
APL is a data processing language that supports filtering, extending, and summarizing data. For more information, see [Introduction to APL](/apl/introduction).
Some APL queries are explained below. The pipe symbol `|` separates the operations as they flow from left to right, and top to bottom.
APL is case-sensitive for everything: dataset names, field names, operators, functions, etc.
Use double forward slashes (`//`) for comments.
### APL count operator
The below query returns the number of events from the `sample-http-logs` dataset.
```kusto
['sample-http-logs']
| summarize count()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%22%7D)
### APL limit operator
The `limit` operator returns a random subset of rows from a dataset up to the specified number of rows. This query returns a thousand rows from `sample-http-logs` randomly chosen by APL.
```kusto
['sample-http-logs']
| limit 1000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20limit%201000%22%7D)
### APL summarize operator
The `summarize` operator produces a table that aggregates the content of the dataset. This query returns a chart of the `avg(req_duration_ms)`, and a table of `geo.city` and `avg(req_duration_ms)` of the `sample-http-logs` dataset from the time range of 2 days and time interval of 4 hours.
```kusto
['sample-http-logs']
| where _time > ago(2d)
| summarize avg(req_duration_ms) by _time=bin(_time, 4h), ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20_time%20%3E%20ago\(2d\)%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20_time%3Dbin\(_time%2C%204h\)%2C%20%5B'geo.city'%5D%22%7D)
## Query results
The results view adapts to the query. This means that it adds and removes components as necessary to give you the best experience. The toolbar is always visible and gives details on the currently running or last-run query. The other components are explained below.
### Query results without visualizations
When you run a query on a dataset without specifying a visualization, Axiom displays a table with the raw query results.
#### View event details
To view the details for an event, click the event in the table.
To configure the event details view, select one of the following in the top right corner:
* Click **Navigate up** or **Navigate down** to display the details of the next or previous event.
* Click **Fit panel to results** or **Fit panel to viewport height** to change the height of the event details view.
In the event details view, click **More** for additional options:
* **View in context** opens the event in the stream of other events in the Stream tab.
* **Copy link to event**
* **Copy JSON**
* **Show nulls** or **Hide nulls** toggles whether to display fields with null values.
#### Select displayed fields
To select the fields to be highlighted or displayed in the table, click **Toggle fields panel**, and then click the fields in the list.
Select **Single column for event** to highlight the selected fields below the raw data for each event. Alternatively, select **Column for each field** to display each selected field in a different column without showing the raw event data. In this view, you can resize the width of columns by dragging the borders.
#### Configure table options
To configure the table options, click , and then select one of the following:
* Select **Wrap lines** to keep the whole table within the viewport and avoid horizontal scrolling.
* Select **Show timestamp** to display the time field.
* Select **Show event** to display the raw event data in a single column and highlight the selected fields below the raw data for each event. Alternatively, clear **Show event** to display each selected field in a different column without showing the raw event data. In this view, you can resize the width of columns by dragging the borders.
* Select **Hide nulls** to hide empty data points.
#### Event timeline
Axiom can also display an event timeline about the distribution of events across the selected time range. In the event timeline, each bar represents the number of events matched within that specific time interval. Holding the pointer over a bar reveals a blue line marking the total events and shows when those events occurred in that particular time range. To display the event timeline, click , and then click **Show chart**.
### Query results with visualizations
When you run a query with visualizations, Axiom displays all the visualizations that you add to the query. Hold the pointer over charts to get extra detail on each result set.
Below the charts, Axiom displays a table with the totals from each of the aggregate functions for the visualizations you specify.
If the query includes group-by clauses, there is a row for each group. Hold the pointer over a group row to highlight the group’s data on time series charts. Select the checkboxes on the left to display data only for the selected rows.
#### Configure chart options
Click to access the following options for each chart:
* In **Values**, specify how to treat missing or undefined values.
* In **Variant**, specify the chart type. Select from area, bar, or line charts.
* In **Y-Axis**, specify the scale of the vertical axis. Select from linear or log scales.
* In **Annotations**, specify the types of annotations to display in the chart.
For more information on each option, see [Configure dashboard elements](/dashboard-elements/configure).
#### Merge charts
When you run a query that produces several visualizations, Axiom displays the charts separately. For example:
```kusto
['sample-http-logs']
| summarize percentiles_array(req_duration_ms, 50, 90, 95) by status, bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2050%2C%2090%2C%2095\)%20by%20status%2C%20bin_auto\(_time\)%22%7D)
To merge the separately displayed charts into a single chart, click , and then select **Merge charts**.
#### Compare time periods
On time series charts, holding the pointer over a specific time shows the same marker on similar charts for easy comparison.
When you run a query with a time series visualization, you can use the **Compare period** menu to select a historical time against which to compare the results of your time range. For example, to compare the last hour’s average response time to the same time yesterday, select `1 hr` in the time range menu, and then select `-1 day` from the **Compare period** menu. The dotted line represents results from the base date, and the totals table includes the comparative totals.
### Highlight time range
In the event timeline, line charts, and heat maps, you can drag the pointer over the chart to highlight a specific time range, and then choose one of the following:
* **Zoom** enlarges the section of the chart you highlighted.
* **Show events** displays events in the selected time range in the event details view.
The time range of your query automatically updates to match what you selected.
### Search within query results
To quickly search for an expression and highlight its occurrences within the query results:
1. In the query results view, press Cmd/Ctrl + F.
2. Type the expression that you want to search for. Axiom automatically highlights the matches and jumps to the first match.
3. Press Enter repeatedly to go forward in the list of matches, and press Shift + Enter to go backward.
Axiom’s search overrides the browser’s native search. Axiom’s search is more powerful because it highlights matching entries in all results returned by the query (while still respecting automatic limits). In contrast, the browser’s search can only highlight matching entries in the events rendered on your screen.
## Save and export query
You can save and export the query and its results to use them in other contexts.
### Save query
Save a query so that you and your team members can easily find it in the future. A saved query only includes the APL query itself, not the query results. You can later [find saved queries](/query-data/datasets#saved-queries) in the Datasets tab.
### Create new saved query
1. Click **Save** in the top bar.
2. Axiom AI generates a descriptive name based on the query. Accept it or edit it to fit your needs.
3. Click **Save**.
### Replace previously saved query
1. Click **Save** in the top bar.
2. Click **Replace previously saved query**.
3. Select the existing saved query that you want to overwrite.
4. Click **Save**.
### Export query
To export a query and its results, click **More** in the top bar to access the following options:
* **Add to dashboard** lets you create dashboard elements based on the query and add them to a dashboard. For more information, see [Create dashboard elements](/dashboard-elements/create).
* **Create new monitor** lets you create a monitor based on the query. For more information, see [Monitors](/monitor-data/monitors).
* **Copy link with relative time** copies a link to the query where the time range is relative to the time when you open the link. For example, if the time range of the query is the last 30 minutes, using the link shows query results for the 30-minute time range before opening the link.
* **Copy link with absolute time** copies a link to the query where the time range is fixed to the same time range that you see in the query when you create the link. This link shows query results for the same time range, irrespective of when you open the link.
* **Copy as JSON** lets you copy the query results to your clipboard in JSON format.
* **Download as JSON** lets you download the query results in JSON format.
* **Copy as CSV** lets you copy the query results to your clipboard in CSV format.
* **Download as CSV** lets you download the query results in CSV format.
# Create dashboards with filters
Source: https://axiom.co/docs/query-data/filters
This page explains how to create dashboards with filters that let you choose the data you want to display.
Filters let you choose the data you want to display in your dashboard. This page explains how to create and configure dashboards with filters.
Try out all the examples explained on this page in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Filter types
You can use two types of filter in your dashboards:
* Search filters let you enter any text, filter for data that matches the text input, and then narrow down the results displayed by the charts in the dashboard. For example, you enter **Mac OS**, filter for results that contain this string in the user agent field, and then only display the corresponding results in the charts.
* Select filters let you choose one option from a list of options, filter for data that matches the chosen option, and then narrow down the results displayed by the charts in the dashboard. For example, you choose **France** from the list of countries, filter for results that match the chosen geographical origin, and then only display the corresponding results in the charts.
## Use dashboards with filters
To see different filters in action, check out the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA). The search filter on the top right lets you search for a specific phrase in the user agent field to only display HTTP requests from a specific user agent. The select filters on the top left let you choose country and city to only display HTTP requests from a specific geographical origin.
In each chart on your dashboard, you can use all, some, or none of the filters to narrow down the data displayed in the chart. For example, in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA), the charts Popular data centers and Popular countries aren’t affected by your choices in the select filters. You choose to use a filter in a chart by [referencing the unique ID of the filter in the chart query](#reference-filters-in-chart-query) as explained later on this page.
Filters can be interdependent. For example, in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA), the values you can choose in the city filter depend on your choice in the country filter. You make a filter dependent on another by [referencing the unique ID of the filter](#create-select-filters) as explained later on this page.
For each filter, you define a unique ID when you create the filter. When you create multiple filters, all of them must have a different ID. You can later use this ID to reference the filter in dashboard charts and other filters.
Filters are visually displayed in your dashboard in a filter bar that you can create and move like any other chart. You can add different types of filter to a single filter bar. A filter bar can contain a maximum of one search filter and any number of select filters.
## Create search filters
1. In the empty dashboard, click **Add element**.
2. In **Chart type**, select **Filter bar**.
3. In **Filter type**, select **Search**.
4. In **Filter name**, enter the placeholder text you want to display in your search filter.
5. Specify a unique filter ID that you later use to reference the filter. For example, `user_agent_filter`.
Try out this filter in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA).
## Create select filters
1. In the empty dashboard, click **Add element**.
2. In **Chart type**, select **Filter bar**.
3. In **Filter type**, select **Select**.
4. In **Filter name**, enter the text you want to display above the select filter.
5. Specify a unique filter ID that you later use to reference the filter. For example, `country_filter`.
6. In the **Value** section, define the list of options to choose from in the select filter as key-value pairs. Axiom displays the key in the list of options in the filter dropdown, and uses the value to filter your data. For example, the key `France` is displayed in the list of options, and the value `FR` is used to filter data in your charts. Define the key-value pairs in one of the following ways:
* Choose **List** to manually define a static list of options. Enter the options as a list of key-value pairs.
* Choose **Query** to define a dynamic list of options. In this case, Axiom determines the list of options displayed in the filter dynamically based on an APL query. The results of the APL query must contain two fields which Axiom interprets as key-value pairs. Use the `project` command to create key-value fields from any output.
The value in the key-value pairs must be a string. To use number or Boolean fields, convert their values to strings using [`tostring()`](/apl/scalar-functions/conversion-functions#tostring\(\)), as shown in the sketch after the example below.
The example APL query below uses the distinct values in the `geo.country` field to populate the list of options. It projects these values as both the key and the value and sorts them in alphabetical order.
```kusto
['sample-http-logs']
| distinct ['geo.country']
| project key=['geo.country'] , value=['geo.country']
| sort by key asc
```
See this filter in action in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA).
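If the field you want to use for options isn’t a string, a sketch like the following converts its values. The `status` field is used purely for illustration; adjust the field name to your data:
```kusto
['sample-http-logs']
| distinct status
// Convert values to strings so they can serve as filter keys and values
| project key = tostring(status), value = tostring(status)
| sort by key asc
```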
### Create dependent select filters
Sometimes it makes sense that filters depend on each other. For example, in one filter you select the country, and in the other filter the city. In this case, the list of options in the city filter depends on your choice in the country filter.
To create a filter that depends on another filter, follow these steps:
1. Create a filter. In this example, the ID of the independent filter is `country_filter`.
2. Create a dependent select filter. In this example, the ID of the dependent select filter is `city_filter`. The dependent filter must be a select filter.
3. In the dependent filter, use `declare query_parameters` at the beginning of your query to reference the independent filter’s ID. For example, `declare query_parameters (country_filter:string = "")`. This lets you use `country_filter` as a parameter in your query even though it doesn’t exist in your data. For more information, see [Declare query parameters](#declare-query-parameters).
4. Use the `country_filter` parameter to filter results in the dependent filter’s query.
The example APL query below defines the dependent filter. It uses the value of the independent filter with the ID `country_filter` to determine the list of options in the dependent filter. Based on the selected country, the APL query uses the distinct values in the `geo.city` field to populate the list of options. It projects these values as both the key and the value and sorts them in alphabetical order.
```kusto
declare query_parameters (country_filter:string = "");
['sample-http-logs']
| where isnotempty(['geo.country']) and isnotempty(['geo.city'])
| where ['geo.country'] == country_filter
| summarize count() by ['geo.city']
| project key = ['geo.city'], value = ['geo.city']
| sort by key asc
```
Check out this filter in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA).
## Reference filters in chart queries
After creating a filter, specify how you want to use the value chosen in the filter. Include the filter in the APL query of each chart where you want to use the filter to narrow down results. To do so, use `declare query_parameters` at the beginning of the chart’s APL query to reference the filter’s ID. For example, `declare query_parameters (country_filter:string = "")`. This lets you use `country_filter` as a parameter in the chart’s query even though it doesn’t exist in your data. For more information, see [Declare query parameters](#declare-query-parameters).
The APL query below defines a statistic chart where the data displayed depends on your choice in the filter with the ID `country_filter`. For example, if you choose **France** in the filter, the chart only displays the number of HTTP requests from this geographical origin.
```kusto
declare query_parameters (country_filter:string = "");
['sample-http-logs']
| where isempty(country_filter) or ['geo.country'] == country_filter
| summarize count() by bin_auto(_time)
```
## Combine filters
You can combine several filters of different types in a chart’s query. For example, the APL query below defines a statistic chart where the data displayed depends on three filters:
* A select filter that lists countries.
* A select filter that lists cities within the chosen country.
* A search filter that lets you search in the `user_agent` field.
```kusto
declare query_parameters (country_filter:string = "",
city_filter:string = "",
user_agent_filter:string = "");
['sample-http-logs']
| where isempty(country_filter) or ['geo.country'] == country_filter
| where isempty(city_filter) or ['geo.city'] == city_filter
| where isempty(user_agent_filter) or user_agent contains user_agent_filter
| summarize count() by bin_auto(_time)
```
See this filter in action in the Total requests chart in the [HTTP logs dashboard of the Axiom Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/gZXp8KNJy68q7yGsuA).
## Declare query parameters
Use `declare query_parameters` at the beginning of an APL query to reference a filter’s ID. For example, `declare query_parameters (country_filter:string = "")`. This lets you use `country_filter` as a parameter in the chart’s query even though it doesn’t exist in your data.
The `declare query_parameters` statement defines the data type of the parameter. In the case of filters, the data type is always string.
## Choose default option in select filter
The default option of a select filter is the option chosen when the dashboard loads. In most cases, this means that no filter is applied. When you create the filter, Axiom automatically adds this option as the first item in the list, with the key **All** and an empty value. To choose another default value, reorder the list of options.
## Handle empty values
The examples on this page assume that you use the default setting where the **All** key means an empty value, and the empty value in a filter means that the data isn’t filtered in the chart. The example chart queries above handle this empty (null) value in the `where` clause. For example, `where isempty(country_filter) or ['geo.country'] == country_filter` means that if no option is chosen in the country filter, `isempty(country_filter)` is true and the data isn’t filtered. If any other option is chosen with a non-null value, the chart only displays data where the `geo.country` field’s value is the same as the value chosen in the filter.
# Stream data with Axiom
Source: https://axiom.co/docs/query-data/stream
The Stream tab enables you to process and analyze high volumes of high-velocity data from a variety of sources in real time.
The Stream tab allows you to inspect individual events and watch as they’re ingested live.
It can be incredibly useful to be able to live-stream events as they’re ingested to know what’s going on in the context of the entire system. Like a supercharged terminal, the Stream tab in Axiom allows you to view streams of events, filter them to only see important information, and finally inspect each individual event.
This section introduces the Stream tab and its components that unlock powerful insights from your data.
## Choose a dataset
The default view shows which datasets are available, as well as some recent starred queries in case you want to jump directly into a stream.
Select a dataset from the list of datasets to continue.
## Event stream
Upon selecting a dataset, you are immediately taken to the live event stream for that dataset.
You can click an event to be taken to the event details slide-out.
On this slide-out, you can copy individual field values, or copy the entire event as JSON.
You can view and copy the raw data.
## Filter data
The Stream tab provides access to a powerful filter builder right on the toolbar.
For more information, see the [filters documentation](/dashboard-elements/create#filters).
## Time range selection
The stream has two time modes:
* Live stream (default)
* Time range
Live stream continuously checks for new events and presents them in the stream.
Time range only shows events that fall between a specific start and end date. This can be useful when investigating an issue. The time range menu has options to quickly choose common time ranges, or you can enter a specific range for your search.
When you are ready to return to live streaming, click the button in the toolbar. Click the button again to pause the stream.
## View settings
The Stream tab is customizable via the view settings menu.
Options include:
* Text size used in the stream
* Wrap lines
* Highlight severity (this is automatically extracted from the event)
* Show the raw event details
* Fields to display in their own column
## Starred queries
The starred queries slide-out is activated via the toolbar.
For more information, see [Saved queries](/query-data/datasets#saved-queries).
## Highlight severity
The Stream tab allows you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors.
To highlight the severity of log entries:
1. Specify the log level in the data you send to Axiom. For more information, see [Requirements for log level fields](/reference/field-restrictions#requirements-for-log-level-fields).
2. In the Stream tab, click in the top right, and then select **Highlight severity**.
As a result, Axiom automatically searches for the words `warn` and `error` in the keys of the fields mentioned in Step 1, and then displays warnings in orange and errors in red.
# Explore traces
Source: https://axiom.co/docs/query-data/traces
Learn how to observe how requests propagate through your distributed systems, understand the interactions between microservices, and trace the life of the request through your app’s architecture.
Distributed tracing in Axiom allows you to observe how requests propagate through your distributed systems. This could involve a user request going through several microservices, and resources until the requested information is retrieved and returned. By tracing these requests, you’re able to understand the interactions between these microservices, pinpoint issues, understand latency, and trace the life of the request through your app’s architecture.
### Traces and spans
A trace is a representation of a single operation or transaction as it moves through a system. A trace is made up of multiple spans.
A span represents a logical unit of work in the system with a start and end time. For example, an HTTP request handling process might be a span. Each span includes metadata like unique identifiers (`trace_id` and `span_id`), start and end times, parent-child relationships with other spans, and optional events, logs, or other details to help describe the span’s operation.
### Trace schema overview
| Field | Type | Description |
| ---------------- | -------- | -------------------------------------------------------- |
| `trace_id` | String | Unique identifier for a trace |
| `span_id` | String | Unique identifier for a span within a trace |
| `parent_span_id` | String | Identifier of the parent span |
| `name` | String | Name of the span for example, the operation |
| `kind` | String | Type of the span (for example, client, server, producer) |
| `duration` | Timespan | Duration of the span |
| `error` | Boolean | Whether this span contains an error |
| `status.code` | String | Status of the span (for example, null, OK, error) |
| `status.message` | String | Status message of the span |
| `attributes` | Object | Key-value pairs providing additional metadata |
| `events` | Array | Timestamped events associated with the span |
| `links` | Array | Links to related spans or external resources |
| `resource` | Object | Information about the source of the span |
This guide explains how you can use Axiom to analyze and interrogate your trace data from simple overviews to complex queries.
## Browse traces with the OpenTelemetry app
The Axiom OpenTelemetry app automatically detects any OpenTelemetry trace data flowing into your datasets and publishes an OpenTelemetry Traces dashboard to help you browse your trace data.
The OpenTelemetry Traces dashboard expects the following fields to be present: `duration`, `kind`, `name`, `parent_span_id`, `service.name`, `span_id`, and `trace_id`.
### Navigate the app
* Use the **Filter Bar** at the top of the app to narrow the charts to a specific service or operation.
* Use the **Search Input** to find a trace ID in the selected time period.
* Use the **Slowest Operations** chart to identify performance issues across services and traces.
* Use the **Top Errors** list to quickly identify the worst-offending causes of errors.
* Use the **Results** table to get an overview and navigate between services, operations, and traces.
### View a trace
Click a trace ID in the results table to show the waterfall view. This view allows you to see that span in the context of the entire trace from start to finish.
### Customize the app
To customize the app, use the fork button to create an editable duplicate for you and your team.
## Query traces
In Axiom, trace events are just like any other events inside datasets. This means they’re directly queryable in the UI. While this can be a powerful experience, there are some important details to consider before querying:
* Directly aggregating upon the `duration` field produces aggregate values across every span in the dataset. This is usually not the desired outcome when you want to inspect a service’s performance or robustness.
* For request, rate, and duration aggregations, it’s best to only include the root span using `isnull(parent_span_id)`.
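For example, a minimal sketch that restricts aggregation to root spans, using the `otel-demo-traces` playground dataset from the examples further down this page:
```kusto
['otel-demo-traces']
// Keep only root spans so the aggregates reflect whole requests
| where isnull(parent_span_id)
| summarize count(), avg(duration) by bin_auto(_time)
```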
## Waterfall view of traces
To see how spans in a trace are related to each other, explore the trace in a waterfall view. In this view, each span in the trace is correlated with its parent and child spans.
### Traces in OpenTelemetry Traces dashboard
To explore spans within a trace using the OpenTelemetry Traces app, follow these steps:
1. Click the `Dashboards` tab.
2. Click `OpenTelemetry Traces`.
3. In the `Slowest Operations` chart, click the service that contains the trace.
4. In the list of trace IDs, click the trace you want to explore.
5. Explore how spans within the trace are related to each other in the waterfall view. To reveal additional options such as collapsing and expanding child spans, right-click a span.
To try out this example, go to the Axiom Playground.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/otel.traces.otel-demo-traces)
### Traces in Query tab
To access the waterfall view from the Query tab, follow these steps:
1. Ensure the dataset you work with has trace data.
2. Click the Query tab.
3. Run a query that returns the `_time` and `trace_id` fields. For example, the following query returns the number of spans in each trace:
```kusto
['otel-demo-traces']
| summarize count() by trace_id
```
4. In the list of trace IDs, click the trace you want to explore. To reveal additional options such as copying the trace ID, right-click a trace.
5. Explore how spans within the trace are related to each other in the waterfall view. To reveal additional options such as collapsing and expanding child spans, right-click a span. Event names are displayed on the timeline for each span.
To try out this example, go to the Axiom Playground.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20count\(\)%20by%20trace_id%22%7D)
### Customize waterfall view
To toggle the display of the span details on the right, click **Span details**.
To resize the width of the waterfall view and the span details panel, drag the border.
### Span duration histogram
In the waterfall view of traces, Axiom warns you about slow and fast spans. These spans are outliers because they’re at least a standard deviation over or under the average duration of spans that have the same span name and service name. Hold the pointer over the **SLOW** or **FAST** label to see additional information about the span type such as average and maximum duration. In addition, Axiom displays a histogram about the durations of spans that have the same span name and service name as the span you selected. By default, the histogram shows a one-hour window around the selected span.
The span duration histogram can be useful in the following cases, among others:
* You look at a span and you’re not familiar with the typical behavior of the service that created it. You want to know if you look at something normal in terms of duration or an outlier. The histogram helps you determine if you look at an outlier and might drill down further.
* You've found an outlier. You want to investigate and look at other outliers. The histogram shows you what the baseline is and what’s not normal in terms of duration. You want to filter for the outliers and see what they have in common.
* You want to see if there was a recent change in the typical duration for the selected span type.
To narrow the time range of the histogram, click and select an area in the histogram.
## Example queries
Below are a collection of queries that can help get you started with traces inside Axiom. Queries are all executable on the [Axiom Play sandbox](https://axiom.co/play).
Number of requests, average response time, and duration percentiles
```kusto
['otel-demo-traces']
| where isnull(parent_span_id)
| summarize count(),
avg(duration),
percentiles_array(duration, 95, 99, 99.9)
by bin_auto(_time)
```
Top five slowest services by operation
```kusto
['otel-demo-traces']
| summarize count(), avg(duration) by name
| sort by avg_duration desc
| limit 5
```
Top five errors per service and operation
```kusto
['otel-demo-traces']
| summarize topk(['status.message'], 5) by ['service.name'], name
| limit 5
```
## Semantic conventions
[OpenTelemetry semantic conventions](https://opentelemetry.io/docs/specs/semconv/) specify standard attribute names and values for different kinds of operations and data. For more information on Axiom’s support for OTel semantic conventions and what it means for your data, see [Semantic conventions](/reference/semantic-conventions).
## Span links
Span links allow you to associate one span with one or more other spans, establishing a relationship between them that indicates the operation of one span depends on the other. Span links can connect spans within the same trace or across different traces.
Span links are useful for representing asynchronous operations or batch-processing scenarios. For example, an initial operation triggers a subsequent operation, but the subsequent operation may start at some unknown later time or even in a different trace. By linking the spans, you can capture and preserve the relationship between these operations, even if they’re not directly connected in the same trace.
### How it works
Span links in Axiom are based on the [OpenTelemetry specification](https://opentelemetry.io/docs/concepts/signals/traces/#span-links). When instrumenting your code, you create span links using the OpenTelemetry API by passing the `SpanContext` (containing `trace_id` and `span_id`) of the span to which to link. Links are specified when starting a new span by providing them in the span configuration. The OpenTelemetry SDK includes the link information when exporting spans to Axiom. Links are recorded at span creation time so that sampling decisions can consider them.
### View span links
1. Run an APL query that finds spans with links, for example:
```kusto
['dataset']
| where isnotempty(links)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%20%22%5B%27otel-demo-traces%27%5D%5Cn%7C%20where%20isnotempty%28links%29%22%7D)
2. Click a trace in the results and select its `trace_id`.
3. In the trace details view, find the links section. This displays the `trace_id` and `span_id` associated with each linked span, as well as other attributes of the link.
4. Click **View span** to navigate to a linked span, either in the same trace or a different trace.
# Views
Source: https://axiom.co/docs/query-data/views
This page explains how to use views in Axiom.
Views allow you to apply commonly used filters and transformations to your dataset. The result is a view that you can use in much the same way as a dataset:
* You can reference views in queries, dashboards, and monitors in the same way you reference datasets. For example, you can create queries that look for data in a view and you can use this query to [visualize data](/query-data/explore) in the Query tab, create a [dashboard element](/dashboard-elements/overview), or set up a [monitor](/monitor-data/monitors).
* You can share and control access to views in the same way you do with datasets using role-based access control (RBAC). For example, views allow you to grant other users scoped access to your datasets. This means that instead of sharing the whole dataset with other users, you have the option to share only a filtered and transformed representation of the data. You define a view using a query that applies filters and transformations to your dataset, and then you only share the results of this query with another user.
Contrary to datasets:
* You can’t ingest directly to a view.
* You can’t trim or delete data from a view.
* You can’t create a view from another view.
The concept of a view in Axiom is similar to the concept of a virtual table in a database.
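For illustration, the query that defines a view might look like the following minimal sketch. The dataset name `['sample-http-logs']` and the fields `status` and `uri` are assumptions, not fields Axiom creates for you.
```kusto
['sample-http-logs']
| where status == '500'
| project _time, status, uri
```
Sharing the resulting view gives other users access only to these filtered and projected events, not to the underlying dataset.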
### Create view
1. Select a dataset.
2. In the top right of the **Views** section, click **+ Create view**.
3. Create the query using the [visual query builder](/query-data/explore#create-a-query-using-the-visual-query-builder) or the [Axiom Processing Language (APL)](/query-data/explore#create-a-query-using-apl).
4. Below the query, check the preview of the events and the fields that your query returns.
5. Name the view and optionally add a description.
6. Click **Create view**.
### Display view
1. Select a dataset.
2. In the **Views** section on the right, click the view you want to display.
### Grant access to view
All views are available to users with organization-level read access to views. You can allow users without these permissions to access an individual view in the same way you grant access to individual datasets:
1. Click **Settings > Roles**.
2. Click the role with which you want to share the view or create a new role.
3. In the **Individual datasets** section, click **Add dataset**.
4. Start typing the name of the view. In the list, find the view that you want to share, and then click **Add**.
It’s enough to grant access to the view. You don’t need to grant access to the underlying dataset.
5. Click **Save**.
# Virtual fields
Source: https://axiom.co/docs/query-data/virtual-fields
Virtual fields allow you to derive new values from your data in real time, eliminating the need for up-front data structuring, enhancing flexibility and efficiency.
Virtual fields allow you to derive new values from your data in real time.
One of the most powerful features of Axiom is virtual fields. With virtual fields, there’s no need to plan up front how to structure or transform your data. Instead, send your data as-is and then use virtual fields to manipulate it in real time during queries.
The feature is also known as derived fields, but Axiom’s virtual fields have some unique properties that make them much more powerful.
In this guide, you’ll be introduced to virtual fields, their features, how to manage them, and how to get the best out of them.
## Creating a virtual field
To create a virtual field, follow these steps:
1. Go to the Datasets tab.
2. Select the dataset where you want to create the virtual field.
3. Click the **Virtual fields** icon in the top right. You see a list of all the virtual fields for the dataset.
4. Click **Add virtual field**.
5. Fill in the following fields:
* **Name** and **Description** help your team understand what the virtual field is about.
* **Expression** is the formula applied to every event to calculate the virtual field. The expression produces a result such as a `boolean`, `string`, `number`, or `object`.
The **Preview** section displays the result of applying the expression to some of your data. Use this section to verify the expression and the resulting values of the virtual field.
The power of virtual fields is in letting you manipulate data on read instead of on write, allowing you to adjust and update virtual fields over time as well as easily add new ones without worrying that the data has already been indexed.
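For example, the expression for a virtual field might look like the following sketch. The field name `req_duration_ms` is hypothetical; use a field that exists in your dataset.
```kusto
// Hypothetical expression for a boolean virtual field named is_slow:
// assumes a numeric req_duration_ms field exists in your events.
req_duration_ms > 1000
```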
## Usage
### Visualizations
Virtual fields are available as parameters to visualizations. Because a virtual field can produce any of the supported types, make sure that the virtual field you use produces the argument type the visualization expects.
### Filters
Virtual fields are available in the filter menu, and all filter options are presented. Ensure that you use a filter operation supported for the type of result your virtual field produces.
### Group by
Virtual fields can be used for segmentation in the same way as any standard field.
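For example, assuming a hypothetical boolean virtual field named `is_slow` and a placeholder dataset name, you can group an aggregation by the virtual field like this:
```kusto
['sample-http-logs']
| summarize count() by is_slow, bin_auto(_time)
```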
## Reference
Virtual fields are APL expressions and share all the same functions and syntax as APL expressions. For more information, see [Introduction to APL](/apl/introduction).
The following APL scalar functions are available:
* [String functions](/apl/scalar-functions/string-functions)
* [Math functions](/apl/scalar-functions/mathematical-functions)
* [Array functions](/apl/scalar-functions/array-functions)
* [Conversion functions](/apl/scalar-functions/conversion-functions)
* [Hash functions](/apl/scalar-functions/hash-functions)
* [DateTime/Timespan functions](/apl/scalar-functions/datetime-functions)
* [Rounding functions](/apl/scalar-functions/rounding-functions)
* [Conditional functions](/apl/scalar-functions/conditional-function)
* [IP functions](/apl/scalar-functions/ip-functions)
Virtual fields may reference other virtual fields. The order of the fields is important. Ensure that the referenced field is specified before the field that references it.
{/*
### Literals
| Functions | Description |
| ------------- | --------------------------------------- |
| `strings` | single and double quotes are supported. |
| `numbers` | `101`, `101.1` |
| `booleans` | `true` and `false` |
| `arrays` | `["one", "two", "three"]` |
| `maps` | `{ region: "us-east-1" }` |
| `nil`          | `nil`                                   |
### Arithmetic operators
| Operator | Description |
| ------------ | --------------- |
| `+` | addition |
| `-` | subtraction |
| `*` | multiplication |
| `/` | division |
| `%` | modulus |
| `**` | pow |
### Comparison operators
| Operator | Description |
| ------------ | ------------------------ |
| `==` | equal |
| `!=` | not equal |
| `<` | less than |
| `>` | greater than |
| `<=` | less than or equal to |
| `>=` | greater than or equal to |
### Logical operators
| Operator |
| -------------------------------------- |
| `and` or `&&` |
| `or` or `\|\|`                          |
| `not` or `!` |
| `success ? 'yes' : 'no'` - ternary |
### String operators
| Operator | Description |
| ------------ | --------------- |
| `+` | concatenation |
| `matches` | regular expression match |
| `contains` | string contains |
| `startsWith` | has prefix |
| `endsWith` | has suffix |
To test the negative case of not matching, wrap the operator in a `not()` operator:
`not ("us-east-1" contains "us")`
Use parentheses because the operator `not` has precedence over the operator `contains`.
### Numeric operators
In addition to the [arithmetic operators](#arithmetic-operators):
- `..` - numeric range
`age in 18..45`. The range is inclusive: `1..3 == [1, 2, 3]`.
### Membership operators
| Operator | Description |
| ------------ | -------------------- |
| `in` | contains |
| `not in` | doesn’t contain |
Examples:
`{Arrays: metadata.region in ["us-east-1", "us-east-2"]}`
`{Maps: 'region' in { region: 'us-east-1' } // true}`
### Built-ins
| Operator | Description |
| ------------ | ---------------------------------------------------------------- |
| `len` | length of an array, map, or string |
| `all`        | returns true if all elements satisfy the predicate               |
| `none`       | returns true if no element satisfies the predicate               |
| `any`        | returns true if any element satisfies the predicate              |
| `one`        | returns true if exactly one element satisfies the predicate      |
| `filter`     | filters an array by the predicate                                |
| `map`        | maps all items with the closure                                  |
| `count`      | returns the number of elements that satisfy the predicate        |
{'all(comments, {.Size < 280})'}
{'one(repos, {.private})'}
### Closures
- `{...}` - closure
Closures are allowed only with built-in functions. To access the current item, use the `#` symbol.
{'`map(0..9, {# / 2})`'}
If the array item is a struct, you can access its fields with the `#` symbol omitted (`#.Value` becomes `.Value`).
{'filter(comments, {len(.body) > 280})'}
### Slices
- `myArray[:]` - slice
Slices can work with arrays or strings
The variable `myArray` is `[1, 2, 3, 4, 5]`
`myArray[1:5] == [2, 3, 4]`
`myArray[3:] == [4, 5]`
`myArray[:4] == [1, 2, 3]`
`myArray[:] == myArray`
*/}
# Visualize data
Source: https://axiom.co/docs/query-data/visualizations
Learn how to run powerful aggregations across your data to produce insights that are easy to understand and monitor.
Visualizations are powerful aggregations of your data to produce insights that are easy to understand and monitor.
With visualizations, you can compute statistics over your data, group results by fields, and observe the behavior of running deployments.
This page introduces you to the visualizations supported by Axiom and some tips on how best to use them.
## `count`
The `count` visualization counts all matching events and produces a time series chart.
#### Arguments
This visualization doesn’t take an argument.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
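As a sketch in APL, a `count` visualization grouped by a field corresponds to a query along these lines. The dataset name and the `status` field are assumptions for illustration.
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time), status
```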
## `distinct`
The `distinct` visualization counts the distinct values of the specified field inside the dataset and produces a time series chart.
#### Arguments
`field: any` is the field to aggregate.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `avg`
The `avg` visualization averages the values of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field to average.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `max`
The `max` visualization finds the maximum value of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom finds the maximum value.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `min`
The `min` visualization finds the minimum value of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom finds the minimum value.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `sum`
The `sum` visualization adds all the values of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom calculates the sum.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `percentiles`
The `percentiles` visualization calculates the requested percentiles of the field in the dataset and produces a time series chart.
#### Arguments
* `field: number` is the number field where Axiom calculates the percentiles.
* `percentiles: number [, ...]` is a list of percentiles, each a float between 0 and 100. For example, `percentiles(request_size, 95, 99, 99.9)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a horizontal bar chart, allowing for visual comparison across the groups.
## `histogram`
The `histogram` visualization buckets the field into a distribution of N buckets, returning a time series heatmap chart.
#### Arguments
* `field: number` is the number field where Axiom calculates the distribution.
* `nBuckets` is the number of buckets to return. For example, `histogram(request_size, 15)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series histogram. Hovering over a group in the totals table shows only the results for that group in the histogram.
## `topk`
The `topk` visualization calculates the top values for a field in a dataset.
#### Arguments
* `field: any` is the field where Axiom calculates the top values.
* `nResults` is the number of top values to return. For example, `topk(method, 10)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
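For example, a `topk` visualization corresponds to an APL query along these lines. The dataset name and the `method` field are assumptions for illustration.
```kusto
['sample-http-logs']
| summarize topk(method, 10) by bin_auto(_time)
```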
## `variance`
The `variance` visualization calculates the variance of the field in the dataset and produces a time series chart.
The `variance` aggregation returns the sample variance of the field in the dataset.
#### Arguments
`field: number` is the number field where Axiom calculates the variance.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `stddev`
The `stddev` visualization calculates the standard deviation of the field in the dataset and produces a time series chart.
The `stddev` aggregation returns the sample standard deviation of the field in the dataset.
#### Arguments
`field: number` is the number field where Axiom calculates the standard deviation.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
# Track activity in Axiom
Source: https://axiom.co/docs/reference/audit-log
This page explains how to track activity in your Axiom organization with the audit log.
The audit log allows you to track who did what and when within your Axiom organization.
Tracking activity in your Axiom organization with the audit log is useful for legal compliance reasons. For example, you can investigate the following:
* Track who has accessed the Axiom platform.
* Track organization access over time.
* Track data access over time.
The audit log also makes it easier to manage your Axiom organization. It allows you to do the following, among others:
* Track changes made by your team to your observability posture.
* Track monitoring performance.
The audit log is available to all users. By default, you can query the audit log for the previous three days. You can request the ability to query the audit log for the full time range as an add-on if you’re on the Axiom Cloud plan, and it’s included by default on the Bring Your Own Cloud plan. For more information on upgrading, see the [Plan page](https://app.axiom.co/settings/plan) in your Axiom settings.
## Explore audit log
1. Go to the Query tab, and then click **APL**.
2. Query the `axiom-audit` dataset. For example, run the query `['axiom-audit']` to display the raw audit log data in a table.
3. Optional: Customize your query to filter or summarize the audit log. For more information, see [Explore data](/query-data/explore).
4. Click **Run**.
The `action` field specifies the type of activity that happened in your Axiom organization.
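For example, the following query counts audit log entries for each type of action:
```kusto
['axiom-audit']
| summarize count() by action
```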
## Export audit log
1. Run the query to [display the audit log](#explore-audit-log).
2. Click **More > Download as JSON**.
## Restrict access to audit log
To restrict access to the audit log, use Axiom’s role-based access control to define who can access the `axiom-audit` dataset. For more information, see [Access](/reference/settings).
## List of trackable actions
The `action` field specifies the type of activity that happened in your Axiom organization. It can take the following values:
* aplDelete
* createAnnotation
* createAPIToken
* createDashboard
* createDataset
* createEndpoint
* createFlowConfiguration
* createFlowDestination
* createFlowReplay
* createFlowStream
* createGroup
* createMapField
* createMonitor
* createNotifier
* createOrg
* createOrgStorage
* createPersonalToken
* createRole
* createUser
* createView
* createVirtualField
* deleteAnnotation
* deleteAPIToken
* deleteDashboard
* deleteDataset
* deleteEndpoint
* deleteFlowConfiguration
* deleteFlowDestination
* deleteGroup
* deleteMapField
* deleteMonitor
* deleteNotifier
* deleteOrg
* deletePersonalToken
* deleteRepo
* deleteRole
* deleteSession
* deleteShareLink
* deleteView
* downgradeOrg
* downgradePlan
* fieldLimitApproached
* fieldLimitExceeded
* getDashboard
* getDatasetFields
* getField
* getSharedRepos
* logout
* logoutEverywhere
* messageSent
* notifierFailed
* notifierTriggered
* notifyCustomerIOIssues
* postRepos
* regenerateAPIToken
* regeneratePersonalToken
* removeRBAC
* removeUserFromOrg
* resolveMonitor
* resolveMonitorAll
* resumeFlowReplay
* resumeFlowStream
* rotateSharedAccessKeys
* runAPLQuery
* sendOrgDeletedEmails
* sendOrgMonthlyIngestedExceededEmail
* sendOrgMonthlyIngestedNearLimitEmail
* sendUserDeletedEmail
* sendWelcomeEmail
* setEnableAI
* shareRepo
* stopFlowReplay
* stopFlowStream
* streamDataset
* triggerNotifier
* triggerNotifierWithID
* trimDataset
* unShareRepo
* updateDashboard
* updateDataset
* updateDatasetSettings
* updateEndpoint
* updateField
* updateFlowConfiguration
* updateFlowDestination
* updateGroup
* updateMapFields
* updateMonitor
* updateNotifier
* updateOrg
* updatePersonalToken
* updateRepo
* updateRole
* updateUser
* updateUserSettings
* updateView
* updateVirtualField
* upgradeOrg
* upgradePlan
* usageCalculated
* useShareLink
* vacuumDataset
# Axiom CLI
Source: https://axiom.co/docs/reference/cli
Learn how to use the Axiom CLI to ingest data, manage authentication state, and configure multiple deployments.
Axiom’s command-line interface (CLI) lets you test, manage, and build your Axiom organizations by typing commands on the command line.
You can use the command line to ingest data, manage authentication state, and configure multiple organizations.
## Installation
### Install using go install
To install Axiom CLI, make sure you have [Go](https://golang.org/dl/) installed, then run this command from any directory in your terminal.
```bash
go install github.com/axiomhq/cli/cmd/axiom@latest
```
### Install using Homebrew
You can also install the CLI using [Homebrew](https://brew.sh/).
```bash
brew tap axiomhq/tap
brew install axiom
```
This installs the `axiom` command globally so you can run `axiom` commands from any directory.
To update:
```bash
brew upgrade axiom
```
### Install from source
```bash
git clone https://github.com/axiomhq/cli.git
cd cli
make install # Build and install binary into $GOPATH
```
### Run the Docker image
Docker images are available on [Docker Hub](https://hub.docker.com/r/axiomhq/cli).
```bash
docker pull axiomhq/cli
docker run axiomhq/cli
```
To check the version and see basic commands for Axiom CLI, run the following command:
```bash
axiom
```
## Authentication
The easiest way to start using Axiom CLI is to log in through the command line. Run `axiom auth login`, or simply `axiom` if no prior configuration exists, and the CLI guides you through a straightforward login process.
## Managing multiple organizations
While most users only need to manage a single Axiom deployment, Axiom CLI lets you switch between multiple organizations if you require it. For example, `axiom auth switch-org` changes your active organization, or you can set the `AXIOM_ORG_ID` environment variable for the same purpose.
Every setting in Axiom CLI can be overridden with environment variables instead of the `~/.axiom.toml` configuration file. Specifically, `AXIOM_URL`, `AXIOM_TOKEN`, and `AXIOM_ORG_ID` are important for configuring your environment. Set `AXIOM_URL` to your Axiom domain. For more information, see [Regions](/reference/regions). You can switch between environments using the `axiom auth select` command.
To view an up-to-date list of supported environment variables, run `axiom help environment`:
```
AXIOM_DEPLOYMENT: The deployment to use. Overwrites the choice loaded
from the configuration file.
AXIOM_ORG_ID: The organization ID of the organization the access
token is valid for.
AXIOM_PAGER, PAGER (in order of precedence): A terminal paging
program to send standard output to, for example, "less".
AXIOM_TOKEN: The access token to use. Overwrites the choice
loaded from the configuration file.
AXIOM_URL: The deployment url to use. Overwrites the choice loaded
from the configuration file.
VISUAL, EDITOR (in order of precedence): The editor to use for
authoring text.
NO_COLOR: Set to any value to avoid printing ANSI escape sequences
for color output.
CLICOLOR: Set to "0" to disable printing ANSI colors in output.
CLICOLOR_FORCE: Set to a value other than "0" to keep ANSI colors in
output even when the output is piped.
```
## One-Click Login
One-Click Login is an easier way to authenticate Axiom CLI and log in to your Axiom deployments and account resources directly from your terminal.
## Tokens
You can generate ingest and personal tokens manually in your Axiom user settings.
See [Tokens](/reference/tokens) to learn more about managing access and authorization.
## Configuration and Deployment
Axiom CLI lets you ingest, authenticate, and stream data.
For more information about configuration, managing authentication status, ingesting, streaming, and more, visit the [Axiom CLI](https://github.com/axiomhq/cli) repository on GitHub.
Axiom CLI supports the ingestion of data in different formats: **JSON, NDJSON, and CSV**.
## Querying
Get deeper insights into your data using the [Axiom Processing Language](/apl/introduction).
## Ingestion
Import, transfer, load, and process data for later use or storage using the Axiom CLI. With [Axiom CLI](https://github.com/axiomhq/cli), you can ingest the contents of a **JSON, NDJSON, or CSV** log file into a dataset.
**To view a list of all available commands, run `axiom` in your terminal:**
```bash
➜ ~ axiom
The power of Axiom on the command-line.
USAGE
axiom [flags]
CORE COMMANDS
ingest: Ingest structured data
query: Query data using APL
stream: Livestream data
MANAGEMENT COMMANDS
auth: Manage authentication state
config: Manage configuration
dataset: Manage datasets
ADDITIONAL COMMANDS
completion: Generate shell completion scripts
help: Help about any command
version: Print version
web: Open Axiom in the browser
FLAGS
-O, --auth-org-id string Organization ID to use
-T, --auth-token string Token to use
-C, --config string Path to configuration file to use
-D, --deployment string Deployment to use
-h, --help Show help for command
--no-spinner Disable the activity indicator
-v, --version Show axiom version
EXAMPLES
$ axiom auth login
$ axiom version
$ cat http-logs.json | axiom ingest http-logs
AUTHENTICATION
See 'axiom help credentials' for help and guidance on authentication.
ENVIRONMENT VARIABLES
See 'axiom help environment' for the list of supported environment variables.
LEARN MORE
Use 'axiom --help' for more information about a command.
Read the manual at https://axiom.co/reference/cli
```
## Command Reference
Below are the most commonly used Axiom CLI commands.
**Core Commands**
| Commands | Description |
| ---------------- | -------------------- |
| **axiom ingest** | Ingest data |
| **axiom query** | Query data using APL |
| **axiom stream** | Live stream data |
**Management Commands**
| Commands | Description |
| --------------------------- | ----------------------------------------- |
| **axiom auth login**        | Log in to Axiom                           |
| **axiom auth logout**       | Log out of Axiom                          |
| **axiom auth select** | Select an Axiom environment configuration |
| **axiom auth status** | View authentication status |
| **axiom auth switch-org** | Switch the organization |
| **axiom auth update-token** | Update the token used to authenticate |
| **axiom config edit** | Edit the configuration file |
| **axiom config get** | Get a configuration value |
| **axiom config set** | Set a configuration value |
| **axiom config export** | Export the configuration values |
| **axiom dataset create** | Create a dataset |
| **axiom dataset delete** | Delete a dataset |
| **axiom dataset list** | List all datasets |
| **axiom dataset trim** | Trim a dataset to a given size |
| **axiom dataset update** | Update a dataset |
**Additional Commands**
| Commands | Description |
| ------------------------------- | ----------------------------------------------- |
| **axiom completion bash** | Generate shell completion script for bash |
| **axiom completion fish** | Generate shell completion script for fish |
| **axiom completion powershell** | Generate shell completion script for powershell |
| **axiom completion zsh** | Generate shell completion script for zsh |
| **axiom help** | Help about any command |
| **axiom version** | Print version |
| **axiom web** | Open Axiom in the browser |
## Get help
To get usage tips and learn more about available commands from within Axiom CLI, run the following:
```bash
axiom help
```
For more information about a specific command, run `help` with the name of the command.
```bash
axiom help auth
```
This also works for sub-commands.
```bash
axiom help auth status
```
**If you have questions or opinions, [start an issue](https://github.com/axiomhq/cli/issues) on Axiom CLI’s open source repository.**
**You can also visit our [Discord community](https://axiom.co/discord) to start or join a discussion. We’d love to hear from you!**
# Manage datasets
Source: https://axiom.co/docs/reference/datasets
Learn how to manage datasets in Axiom.
This reference article explains how to manage datasets in Axiom, including creating new datasets, importing data, and deleting datasets.
## What datasets are
Axiom’s datastore is tuned for the efficient collection, storage, and analysis of timestamped event data. An individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data.
## Best practices for organizing datasets
Use datasets to organize your data for querying based on the event schema. Common ways to separate data include by environment, signal type, and service.
### Separate by environment
If you work with data sourced from different environments, separate them into different datasets. For example, use one dataset for events from production and another dataset for events from your development environment.
You might be tempted to use a single `environment` attribute instead, but this risks causing confusion when results show up side-by-side in query results. Although some organizations choose to collect events from all environments in one dataset, they’ll often rely on applying an `environment` filter to all queries, which becomes a chore and is error-prone for newcomers.
### Separate by signal type
If you work with distributed applications, consider splitting your data into different datasets. For example:
* A dataset with traces for all services
* A dataset with application logs for all services
* A dataset with frontend web vitals
* A dataset with infrastructure logs
* A dataset with security logs
* A dataset with CI logs
If you look for a specific event in a distributed system, you are likely to know its signal type but not the related service. By splitting data into different datasets using the approach above, you can find data easily.
### Separate by service
Another common practice is to separate datasets by service. This approach allows for easier access control management.
For example, you might separate engineering services with datasets like `kubernetes`, `billing`, or `vpn`, or include events from your wider company collectors like `product-analytics`, `security-logs`, or `marketing-attribution`.
This separation enables teams to focus on their relevant data and simplifies querying within a specific domain. It also works well with Axiom’s role-based access control feature as you can restrict access to sensitive datasets to those who need it.
While separating by service is beneficial, avoid over-segmentation. Creating a dataset for every microservice or function can lead to unnecessary complexity and management overhead. Instead, group related services or functions into logical datasets that align with your organizational structure or major system components.
When you work with OpenTelemetry trace data, keep all spans of a given trace in the same dataset. To investigate spans for different services, don’t send them to different datasets. Instead, keep the spans in the same dataset and filter on the `service.name` field. For more information, see [Send OpenTelemetry data to Axiom](/send-data/opentelemetry).
### Avoid the “kitchen sink”
While it might seem convenient to send all events to a single dataset, this “kitchen sink” approach is generally not advisable for several reasons:
* Field count explosion: As you add more event types to a single dataset, the number of fields grows rapidly. This can make it harder to understand the structure of your data and impacts query performance.
* Query inefficiency: With a large, mixed dataset, queries often require multiple filters to isolate the relevant data. This is tedious, but without those filters, queries take longer to execute since they scan through more irrelevant data.
* Schema conflicts: Different event types may have conflicting field names or data types, leading to unnecessary type coercion at query time.
* Access management: With all data in one dataset, it becomes challenging to provide granular access controls. You might end up giving users access to more data than they need.
Don’t create multiple Axiom organizations to separate your data. For example, don’t use a different Axiom organization for each deployment. If you’re on the Axiom Cloud (Personal) plan, this might go against [Axiom’s fair use policy](https://axiom.co/terms). Instead, separate data by creating a different dataset for each deployment within the same Axiom organization.
### Access to datasets
The datasets that individual users have access to determine the following:
* The data they see in dashboards. If a user has access to a dashboard but only to some of the datasets referenced in the dashboard’s elements, the user only sees data from the datasets they have access to.
* The monitors they see. A user only sees the monitors that reference the datasets that the user has access to. If a user has access to the monitors of an organization but only to some of the datasets referenced in the monitors, the user only sees the monitors that reference the datasets they have access to. If a monitor joins several datasets, a user can only see the monitor if the user has access to all of the datasets.
## Special fields
Axiom creates the following two fields automatically for a new dataset:
* `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
* `_sysTime` is the time when you ingested the data.
In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.
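For example, the following sketch counts events that were ingested more than five minutes after their event time, which can surface clock skew or delayed ingestion. The dataset name is a placeholder.
```kusto
['my-dataset']
| where _sysTime - _time > 5m
| summarize count() by bin_auto(_time)
```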
## Create dataset
To create a dataset using the Axiom app, follow these steps:
1. Click **Settings > Datasets**.
2. Click **New dataset**.
3. Name the dataset, and then click **Add**.
To create a dataset using the Axiom API, send a POST request to the [datasets endpoint](https://axiom.co/docs/restapi/endpoints/createDataset).
Dataset names are 1 to 128 characters in length. They only contain ASCII alphanumeric characters and the hyphen (`-`) character.
## Import data
You can import data to your dataset in one of the following formats:
* Newline delimited JSON (NDJSON)
* Arrays of JSON objects
* CSV
To import data to a dataset, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, find the dataset where you want to import data, and then click **Import** on the right.
3. Optional: Specify the timestamp field. This is only necessary if your data contains a timestamp field and it’s different from `_time`.
4. Upload the file, and then click **Import**.
## Trim dataset
Trimming a dataset deletes all data in the dataset before a date you specify. This can be useful if your dataset contains too many fields or takes up too much storage space, and you want to reduce its size to ensure you stay within the [allowed limits](/reference/field-restrictions#pricing-based-limits).
Trimming a dataset deletes all data before the specified date.
To trim a dataset, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, find the dataset that you want to trim, and then click **Trim dataset** on the right.
3. Specify the date before which you want to delete data.
4. Enter the name of the dataset, and then click **Trim**.
## Vacuum fields
The data schema of your dataset is defined on read. Axiom continuously creates and updates the data structures during the data ingestion process. At the same time, Axiom only retains data for the [retention period you specify](#specify-data-retention-period). This means that the data schema can contain fields that you ingested into the dataset in the past, but these fields are no longer present in the data currently associated with the dataset. This can be an issue if the number of fields in the dataset exceeds the [allowed limits](/reference/field-restrictions#pricing-based-limits).
In this case, vacuuming fields in a dataset can help you reduce the number of fields associated with a dataset and stay within the allowed limits. Vacuuming fields resets the number of fields associated with a dataset to the fields that occur in events within your retention period. Technically, it wipes the data schema and rebuilds it from the data you currently have in the dataset, which is partly defined by the retention period. For example, you have ingested 500 fields over the last year and 50 fields in the last 95 days, which is your retention period. In this case, before vacuuming, your data schema contains 500 fields. After vacuuming, the dataset only contains 50 fields.
Vacuuming fields doesn’t delete any events from your dataset. To delete events, [trim the dataset](#trim-dataset). You can use trimming and vacuuming in combination. For example, if you accidentally ingested events with fields you didn’t want to send to Axiom, and these events are within your retention period, vacuuming alone doesn’t solve your problem. In this case, first trim the dataset to delete the events with the unintended fields, and then vacuum the fields to rebuild the data schema.
You can only vacuum fields once per day for each dataset.
To vacuum fields, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, find the dataset where you want to vacuum fields, and then click **Vacuum fields** on the right.
3. Select the checkbox, and then click **Vacuum**.
## Share datasets
You can share your datasets with other Axiom organizations. The receiving organization:
* can query the shared dataset.
* can create other Axiom resources that rely on query access such as dashboards and monitors.
* can’t ingest data into the shared dataset.
* can’t modify the shared dataset.
No ingest usage associated with the shared dataset accrues to the receiving organization. Query usage associated with the shared dataset accrues to the organization running the query.
To share a dataset with another Axiom organization:
1. Ensure you have the necessary privileges to share datasets. By default, only users with the Owner role can share datasets.
2. Click **Settings > Datasets**.
3. In the list, find the dataset that you want to share, and then click **Share dataset** on the right.
4. In the Sharing links section, click **+** to create a new sharing link.
5. Copy the URL and share it with the receiving user in the organization with which you want to share the dataset. For example, `https://app.axiom.co/s/dataset/{sharing-token}`.
6. Ask the receiving user to open the sharing link. When opening the link, the receiving user sees the name of the dataset and the email address of the Axiom user that created the sharing link. They click **Add dataset** to confirm that they want to receive the shared dataset.
### Delete sharing link
Organizations can gain access to the dataset with an active sharing link. To deactivate the sharing link, delete the sharing link. Deleting a sharing link means that organizations that don’t have access to the dataset can’t use the sharing link to join the dataset in the future. Deleting a sharing link doesn’t affect the access of organizations that already have access to the shared dataset.
To delete a sharing link:
1. Click **Settings > Datasets**.
2. In the list, find the dataset, and then click **Share dataset** on the right.
3. To the right of the sharing link, click **Delete**.
4. Click **Delete sharing link**.
### Remove access to shared dataset
If your organization has previously shared a dataset with a receiving organization, and you want to remove the receiving organization’s access to the dataset, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, find the dataset, and then click **Share dataset** on the right.
3. In the list, find the organization whose access you want to remove, and then click **Remove**.
4. Click **Remove access**.
### Remove shared dataset
If your organization has previously received access to a dataset from a sending organization, and you want to remove the shared dataset from your organization, follow these steps:
1. Ensure you have Delete permissions for the shared dataset.
2. Click **Settings > Datasets**.
3. In the list, click the shared dataset that you want to remove, and then click **Remove dataset**.
4. Enter the name of the dataset, and then click **Remove**.
This procedure only removes the shared dataset from your organization. The underlying dataset in the sending organization isn’t affected.
## Specify data retention period
The data retention period determines how long Axiom stores your data. By default, the data retention period is the same for all datasets. You can configure custom retention periods for individual datasets. As a result, Axiom automatically trims data after the specified time period instead of the default period. For example, this can be useful if your dataset contains sensitive event data that you don’t want to retain for a long time.
If you’re on the Personal plan, the default data retention period is 30 days and you can only specify a shorter period. For more information, see [Pricing-based limits](/reference/field-restrictions#pricing-based-limits).
When you specify a data retention period for a dataset that is shorter than the previous setting, all data older than the new retention period is automatically deleted. This process cannot be undone.
To change the data retention period for a dataset, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, find the dataset for which you want to change the retention period, and then click **Edit dataset retention** on the right.
3. Enter a data retention period. The custom retention period must be greater than 0 days.
4. Click **Submit**.
## Delete dataset
Deleting a dataset deletes all data contained in the dataset.
To delete a dataset, follow these steps:
1. Click **Settings > Datasets**.
2. In the list, click the dataset that you want to delete, and then click **Delete dataset**.
3. Enter the name of the dataset, and then click **Delete**.
# Limits
Source: https://axiom.co/docs/reference/field-restrictions
This reference article explains the pricing-based and system-wide limits and requirements imposed by Axiom.
{/* TODO: Rename this file it does not reflect the content. */}
Axiom applies certain limits and requirements to guarantee good service across the platform. Some of these limits depend on your pricing plan, and some of them are applied system-wide. This reference article explains all limits and requirements applied by Axiom.
Limits are necessary to prevent potential issues that could arise from the ingestion of excessively large events or data structures that are too complex. Limits help maintain system performance, allow for effective data processing, and manage resources effectively.
## Pricing-based limits
The table below summarizes the limits applied to each pricing plan. For more details on pricing and contact information, see the [Axiom pricing page](https://axiom.co/pricing).
| | Personal | Axiom Cloud | Bring Your Own Cloud |
| :--------------------------- | :------------------ | :------------------- | :------------------- |
| Always Free storage | 25 GB | 100 GB | \* |
| Always Free data loading | 500 GB / month | 1,000 GB / month | \* |
| Always Free query compute | 10 GB-hours / month | 100 GB-hours / month | \* |
| Maximum data loading | 500 GB / month | – | – |
| Maximum data retention | 30 days | Custom | Custom |
| Datasets | 2 | 100 † | 2,500 † |
| Fields per dataset | 256 | 1,024 † | 4,096 † |
| Users | 1 | 1,000 † | 50,000 † |
| Monitors | 3 | 500 † | 20,000 † |
| Notifiers | Email, Discord | All supported | All supported |
| Supported deployment regions | US | US, EU | Not applicable |
\* For the Bring Your Own Cloud (BYOC) plan, Axiom doesn’t charge anything for data loading, query compute, and storage. These costs are billed by your cloud provider.
† Soft limit that can be increased upon request.
If you’re on the Axiom Cloud plan and you exceed the Always Free allowances outlined above, additional charges apply based on your usage above the allowance. For more information, see the [Axiom pricing page](https://axiom.co/pricing).
All plans include unlimited bandwidth, API access, and data sources subject to the [Fair Use Policy](https://axiom.co/terms).
To see how much of your allowance each dataset uses, go to **Settings > Usage**.
### Optimize storage costs
The amount of storage you use depends on the following:
* The amount of data you ingest to Axiom.
* The data retention period you specify.
The data retention period defines how long Axiom stores your data. After this period, Axiom trims the data and it doesn’t count towards your storage costs. You can define a custom data retention period for each dataset. For more information, see [Specify data retention period](/reference/datasets#specify-data-retention-period).
### Restrictions on datasets and fields
Axiom restricts the number of datasets and the number of fields in your datasets. The number of datasets and fields you can use is based on your pricing plan and explained in the table above.
If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events is within the allowed limits. To reduce the number of fields in a dataset, [trim the dataset](/reference/datasets#trim-dataset) and [vacuum its fields](/reference/datasets#vacuum-fields).
## System-wide limits
The following limits are applied to all accounts, irrespective of the pricing plan.
### Limits on ingested data
The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.
| | Limit |
| ------------------------- | --------- |
| Maximum event size | 1 MB |
| Maximum events in a batch | 10,000 |
| Maximum field name length | 200 bytes |
### Requirements for timestamp field
The most important field requirement is about the timestamp.
All events stored in Axiom must have a `_time` timestamp field. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events. To specify the timestamp yourself, include a `_time` field in the ingested data.
If you include the `_time` field in the ingested data, follow these requirements:
* Timestamps are specified in the `_time` field.
* The `_time` field contains timestamps in a valid time format. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix Epoch, RFC3339, or ISO 8601.
* The `_time` field is a field with UTF-8 encoding.
* The `_time` field is not used for any other purpose.
### Requirements for log level fields
The Stream and Query tabs allow you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors. As a prerequisite, specify the log level in the data you send to Axiom.
For Open Telemetry logs, specify the log level in the following fields:
* `severity`
* `severityNumber`
* `severityText`
For AWS Lambda logs, specify the log level in the following fields:
* `record.error`
* `record.level`
* `record.severity`
* `type`
For logs from other sources, specify the log level in the following fields:
* `level`
* `@level`
* `severity`
* `@severity`
* `status.code`
## Temporary account-specific limits
If you send a large amount of data in a short amount of time and with a high frequency of API requests, we may temporarily restrict or disable your ability to send data to Axiom. This is to prevent abuse of our platform and to guarantee consistent and high-quality service to all customers. In this case, we kindly ask you to reconsider your approach to data collection. For example, to reduce the total number of API requests, try sending your data in larger batches. This adjustment both streamlines our operations and improves the efficiency of your data ingest. If you often experience these temporary restrictions and have a good reason for changing these limits, please [contact Support](https://axiom.co/contact).
# Manage data sources
Source: https://axiom.co/docs/reference/manage-data-sources
This section explains how to manage data sources in Axiom settings.
## Apps
Apps allow you to enrich your Axiom organization with dedicated apps. For more information, see [Introduction to apps](/apps/introduction).
### Edit app
1. Click **Settings > Apps**.
2. Find the app in the list, and then click **More > Edit**.
### Disconnect app
1. Click **Settings > Apps**.
2. Find the app in the list, and then click **More > Disconnect**.
## Datasets
Datasets are collections of related events. They contain incoming event data.
For information on managing datasets for your organization, see [Datasets](/reference/datasets).
## Endpoints
Endpoints allow you to easily integrate Axiom into your existing data flow using tools and libraries that you already know. With endpoints, you can build and configure your existing tooling to send data to Axiom so you can start monitoring your logs immediately.
### Create endpoint
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click the type of endpoint you want to create.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
### Delete endpoint
1. Click **Settings > Endpoints**.
2. Find the endpoint in the list, and then click **Delete endpoint** on the right.
# Configure Axiom organization
Source: https://axiom.co/docs/reference/organization-settings
This section explains how to configure your Axiom organization.
## View organization ID
1. Click **Settings > General**.
2. Find organization ID in the **ID** section.
## View organization region
1. Click **Settings > General**.
2. Find organization region in the **Region** section.
For more information, see [Regions](/reference/regions).
## Turn Axiom AI on or off
Features powered by Axiom AI allow you to get insights from your data faster. These features are powered by leading foundation models through trusted enterprise providers including Amazon Bedrock and Google Gemini. Your inputs and outputs are never used to train generative models.
AI features are turned on by default for most customers. You can turn them on or off anytime for the whole organization, for example, for regulatory and compliance reasons.
To turn Axiom AI on or off:
1. Click **Settings > General**.
2. Click **Turn on Axiom AI** or **Turn off Axiom AI**.
## Delete organization
This is a destructive action. After you delete your organization, you lose access to all data within that org.
To delete your organization:
1. Back up your data. You won’t be able to access the data after deleting the org.
2. Click **Settings > General**.
3. Click **Delete organization**.
## Pricing plan and usage
To view details about your current plan and upgrade to a higher plan, click **Settings > Plan**.
To view details about your organization’s total usage of Axiom, click **Settings > Usage**.
# Optimize performance
Source: https://axiom.co/docs/reference/performance
Axiom is blazing fast. This page explains how you can further improve performance in Axiom.
Axiom is optimized for storing and querying timestamped event data. However, certain ingest and query practices can degrade performance and increase cost. This page explains pitfalls and provides guidance on how you can avoid them to keep your Axiom queries fast and efficient.
## Summary of pitfalls
| Practice | Severity | Impact |
| :-------------------------------------------------------------------------------------------------------------------------- | :------- | :------------------------------------------------------ |
| [Mixing unrelated data in datasets](#mixing-unrelated-data-in-datasets) | Critical | Combining unrelated data inflates schema, slows queries |
| [Excessive backfilling, big difference between \_time and \_sysTime](#excessive-backfilling-and-large-time-vs-systime-gaps) | Critical | Creates overlapping blocks, breaks time-based indexing |
| [Large number of fields in a dataset](#large-number-of-fields-in-a-dataset) | High | Very high dimensionality slows down query performance |
| [Failing to use \_time](#failing-to-use-the-_time-field-for-event-timestamps) | High | No efficient time-based filtering |
| [Overly wide queries (project \*)](#overly-wide-queries-returning-more-fields-than-needed) | High | Returns massive unneeded data |
| [Mixed data types in the same field](#mixing-unrelated-data-in-datasets) | Moderate | Reduces compression, complicates queries |
| [Using regex when simpler filters suffice](#regular-expressions-when-simple-filters-suffice) | Moderate | More CPU-heavy scanning |
| [Overusing runtime JSON parsing (parse\_json)](#overusing-runtime-json-parsing-parse_json) | Moderate | CPU overhead, no indexing on nested fields |
| [Virtual fields for simple transformations](#virtual-fields-for-simple-transformations) | Low | Extra overhead for trivial conversions |
| [Poor filter order in queries](#poor-filter-order-in-queries) | Low | Suboptimal scanning of data |
## Mixing unrelated data in datasets
### Problem
A “kitchen-sink” dataset is one in which events from multiple, unrelated applications or services get lumped together, often resulting in:
* **Excessive width (too many columns)**: Adding more and more unique fields bloats the schema, reducing query throughput.
* **Mixed data types in the same field**: For example, some events store `user_id` as a string, while others store it as a number in the same `user_id` field.
* **Unrelated schemas in a single dataset**: Columns that make sense for one app might be `null` or typed differently for another.
These issues reduce compression efficiency and force Axiom to scan more data than necessary.
### Why it matters
* **Slower queries**: Each query must scan wider blocks of data and handle inconsistent column types.
* **Higher resource usage**: Wide schemas reduce row packing in blocks, harming throughput and potentially raising costs.
* **Harder data exploration**: When fields differ drastically between events, discovering the correct columns or shaping queries becomes more difficult.
### How to fix it
* **Keep datasets narrowly focused:** Group data from the same application or service in its own dataset. For example, keep `k8s_logs` separate from `web_traffic`.
* **Avoid mixing data types for the same field:** Enforce consistent types during ingest. If a field is numeric, always send numeric values.
* **Consider using map fields:** If you have sparse or high-cardinality nested data, consider storing it in a single map (object) field instead of flattening every key. This reduces the total number of top-level fields. Axiom’s [map fields](/apl/data-types/map-fields#map-fields) are optimized for large objects.
## Excessive backfilling and large `_time` vs. `_sysTime` gaps
### Problem
Axiom’s `_time` index is critical for query performance. Ideally, incoming events for a block lie in a closely bounded time range. However, backfilling large amounts of historical data after the fact (especially out of chronological order) creates wide time overlaps in blocks. If `_time` is far from `_sysTime` (the time the event was ingested), Axiom’s time index effectiveness is weakened.
### Why it matters
* **Poor performance on time-based queries**: Blocks must be scanned despite time filters, because many blocks overlap the query time window.
* **Inefficient block filtering**: Queries that filter on time must scan blocks that contain data from a wide time range.
* **Large data merges**: Compaction processes that rely on time ordering become less efficient.
### How to fix it
* **Minimize backfill:** Whenever possible, ingest events close to their actual creation time.
* **Backfill in dedicated batches:** If you must backfill older data, do it in dedicated batches that do not mix with live data.
* **Use discrete backfill intervals:** When backfilling data, ingest one segment at a time (for example, day-by-day).
* **Avoid wide time ranges in a single batch:** If you are sending data for a 24-hour period, avoid mixing in data that is weeks or months older.
* **Be aware of ingestion concurrency:** Avoid mixing brand-new events with extremely old events in the same ingest request.
Future improvements: Axiom’s roadmap includes an initiative which aims to mitigate the impact of poorly clustered time data by performing incremental time-based compaction. Until then, avoid mixing large historical ranges with live ingest whenever possible.
## Large number of fields in a dataset
### Problem
Query performance degrades in datasets with very high dimensionality (more than several thousand fields).
### Why it matters
Axiom stores event data in a tuned format. As a result:
* The number of distinct values (cardinality) in your data impacts performance because low-cardinality fields compress better than high-cardinality fields.
* The number of fields in a dataset (dimensionality) impacts performance.
* The volume of data collected impacts performance.
### How to fix it
Keeping the number of fields in a dataset below a few thousand helps you achieve the best performance in Axiom.
## Failing to use the `_time` field for event timestamps
### Problem
Axiom’s core optimizations rely on `_time` for indexing and time-based queries. If you store event timestamps in a different field (for example, `timestamp` or `created_at`) and use that field in time filters, Axiom’s time-based optimizations will not be leveraged.
### Why it matters
* **No time-based indexing**: Every block must be scanned because your custom timestamp field is invisible to the time index.
### How to fix it
* **Always use `_time`:** Configure your ingest pipelines so that Axiom sets `_time` to the actual event timestamp.
* If you have a custom field like `created_at`, rename it to `_time` at ingest.
* Verify that your ingestion library or agent is correctly populating `_time`.
* **Use Axiom’s native time filters:** Rely on `where _time >= ... and _time <= ...` or the built-in time range selectors in the query UI.
## Handling mixed types in the same field
### Problem
A single field sometimes stores different data types across events (for instance, strings in some events and integers in others). This is typically a side effect of using “kitchen-sink” ingestion or inconsistent parsing logic in your code.
### Why it matters
* **Reduced compression**: Storing multiple types in the same column (variant column) is less efficient than storing a single type.
* **Complex queries**: You might need frequent casting or conditional logic in queries (`tostring()` calls, etc.).
### How to fix it
* **Standardize your types at ingest:** If a field is semantically an integer, always send it as an integer.
* **Use consistent schemas across services:** If multiple applications write to the same dataset, agree on a schema and data types.
* **Perform corrections at the source:** If you discover your data has been mixed historically, stop ingesting mismatched types. Over time, new blocks will reflect the corrected types even though historical blocks remain mixed.
## Overly wide queries returning more fields than needed
### Problem
By default, Axiom’s query engine projects all fields (`project *`) for each matching event. This can return large amounts of unneeded data, especially in wide datasets with many fields.
### Why it matters
* **High I/O and memory usage**: Unnecessary data is scanned, read, and returned.
* **Slower queries**: Time is wasted processing fields you never use.
### How to fix it
* **Use `project` or `project-keep`**
Specify exactly which fields you need. For example:
```kusto
dataset
| where status == 500
| project timestamp, error_code, user_id
```
* **Use `project-away` if you only need to exclude a few fields:** If you need 90% of the fields but want to exclude the largest ones, for instance:
```kusto
dataset
| project-away debug_payload, large_object_field
```
* **Limit your results**
If you only need a sample of events for debugging, use a lower `limit` value (such as 10) instead of the default 1000.
## Regular expressions when simple filters suffice
### Problem
Regular expressions (for example, the `matches regex` operator) can be powerful, but they are also expensive to evaluate, especially on large datasets.
### Why it matters
* **High CPU usage**: Regex filters require complex per-row matching.
* **Slower queries**: Large swaths of data are scanned with less efficient matching.
### How to fix it
* **Use direct string filters**
Instead of:
```kusto
dataset
| where message matches regex "[Ff]ailed"
```
Use:
```kusto
dataset
| where message contains "failed"
```
* **Use `search` for substring search:**
To find `foobar` in all fields, use:
```kusto
dataset
| search "foobar"
```
`search` matches text in all fields. To find text in a specific field, a more efficient solution is to use the following:
```kusto
dataset
| where FIELD contains_cs "foobar"
```
In this example, `cs` stands for case-sensitive.
## Overusing runtime JSON parsing (`parse_json`)
### Problem
Some ingestion pipelines place large JSON payloads into a string field, deferring parsing until query time with `parse_json()`. This is both CPU-intensive and slower than columnar operations.
### Why it matters
* **Repeated parsing overhead**: You pay a performance penalty on each query.
* **Limited indexing**: Axiom cannot index nested fields if they are only known at query time.
### How to fix it
* **Ingest as map fields:** Axiom’s new [map column type](/apl/data-types/map-fields#map-fields) can store object fields column by column, preserving structure and optimizing for nested queries. This allows indexing of specific nested keys.
* **Extract top-level fields where possible:** If a certain nested field is frequently used for filtering or grouping, consider promoting it to its own top-level column (for faster scanning and filtering).
* **Avoid `parse_json()` in query:** If your JSON cannot be flattened entirely, ingest it into a map field. Then query subfields directly:
```kusto
dataset
| where data_map.someKey == "someValue"
```
## Virtual fields for simple transformations
### Problem
You can create virtual fields (for example, `extend converted = toint(some_field)`) to transform data at query time. While sometimes necessary, every additional virtual field imposes overhead.
### Why it matters
* **Increased CPU**: Each virtual field requires interpretation by Axiom’s expression engine.
* **Slower queries**: Overuse of `extend` for trivial or frequently repeated operations can add up.
### How to fix it
* **Avoid unnecessary casting:** If a field must be an integer, handle it at ingest time.
**Example:** Instead of
```kusto
dataset
| extend str_user_id = tostring(mixed_user_id)
| where str_user_id contains "123"
```
Use:
```kusto
dataset
| where mixed_user_id contains "123"
```
The filter automatically matches string values in mixed columns.
* **Reserve virtual fields for truly dynamic or derived logic**
If you frequently need a computed value, store it at ingest or keep the transformations minimal.
## Poor filter order in queries
### Problem
Axiom’s query engine does not currently reorder your `where` clauses optimally. This means the sequence of filters in your query can matter.
### Why it matters
* **Unnecessary scans**: If you use selective filters last, the engine may process many rows before discarding them.
* **Longer execution times**: CPU usage and scan times increase.
### How to fix it
* **Put the most selective filters first:**
Example:
```kusto
dataset
| where user_id == 1234
| where log_level == "ERROR"
```
If `user_id == 1234` discards most rows, apply it before `log_level == "ERROR"`.
* **Profile your filters:** Experiment with which filters discard the most rows to find the most selective conditions.
# Configure user profile
Source: https://axiom.co/docs/reference/profile
This section explains how to configure your user profile in Axiom settings.
## Change name
1. Click **Settings > Profile**.
2. Enter your name in the **Name** section.
## View contact details and base role
1. Click **Settings > Profile**.
2. Find your contact details and base role in the **Email** and **Role** sections.
## Change timezone
1. Click **Settings > Profile**.
2. Select your timezone in the **Timezone** section.
## Change editor mode
The editor mode determines the style of the APL query editor. To change this:
1. Click **Settings > Profile**.
2. Select your preferred editor in the **Editor mode** section.
## Select default method for null values
When you visualize your data, you can select how Axiom treats missing or undefined values in the chart. For more information, see [Configure dashboard elements](/dashboard-elements/configure#values). When you select a default method to deal with null values, Axiom uses this method in every new chart you create.
To select the default method for null values:
1. Click **Settings > Profile**.
2. Select your preferred method in the **Default method for null values** section.
## Manage personal access tokens
Create and delete personal access tokens (PATs). For more information, see [Personal access tokens](/reference/tokens#personal-access-tokens-pat).
## View and manage active sessions
1. Click **Settings > Profile**.
2. View active sessions in the **Sessions** section.
3. Optional: To log out of a session, find the session in the list, and then click **Delete session** on the right.
## Delete user account
This is a destructive action. After you delete your user account, you cannot recover it.
To delete your user account:
1. Click **Settings > Profile**.
2. Click **Delete account**.
# Query costs
Source: https://axiom.co/docs/reference/query-hours
This page explains how to calculate and manage query compute resources in GB-hours to optimize usage within Axiom.
Axiom measures the resources used to execute queries in terms of GB-hours.
## What GB-hours are
When you run queries, your usage of the Axiom platform is measured in query-hours. The unit of this measurement is GB-hours, which reflects the duration (measured in milliseconds) for which serverless functions run to execute your query, multiplied by the amount of memory (in GB) allocated to that execution. This metric is important for monitoring and managing your usage against the monthly allowance included in your plan.
## How Axiom measures query-hours
Axiom uses serverless computing to execute queries efficiently. The consumption of serverless compute resources is measured along two dimensions:
* Time: The duration (in milliseconds) for which the serverless function is running to execute your query.
* Memory allocation: The amount of memory (in GB) allocated to the serverless function during execution.
## What counts as a query
In calculating query costs, Axiom considers any request that queries your data as a query. For example, the following all count as queries:
* You initiate a query in the Axiom user interface.
* You query your data with an API token or a personal access token.
* Your match monitor runs a query to determine if any new events match your criteria.
Each query is charged at the same rate, irrespective of its origin.
Each monitor run counts towards your query costs. For this reason, the frequency (how often the monitor runs) can have a slight effect on query costs.
## Run queries and understand costs
When you run queries on Axiom, the cost in GB-hours is determined by the shape and size of the events in your dataset and the volume of events scanned to return a query result. After executing a query, you can find the associated query cost in the response header labeled as `X-Axiom-Query-Cost-Gbms`.
## Determine query cost
Send a `POST` request to the [Run query](/restapi/endpoints/queryApl) endpoint with the following configuration:
* `Content-Type` header with the value `application/json`.
* `Authorization` header with the value `Bearer API_TOKEN`. Replace `API_TOKEN` with your Axiom API token.
* In the body of your request, enter your query in JSON format. For example:
```json
{
"apl": "telegraf | count",
"startTime": "2024-01-11T19:25:00Z",
"endTime": "2024-02-13T19:25:00Z"
}
```
`apl` specifies the Axiom Processing Language (APL) query to run. In this case, `"telegraf | count"` indicates that you query the `telegraf` dataset and use the `count` operator to aggregate the data.
`startTime` and `endTime` define the time range of your query. In this case, `"2024-01-11T19:25:00Z"` is the start time, and `"2024-02-13T19:25:00Z"` is the end time, both in ISO 8601 format. This time range limits the query to events recorded within these specific dates and times.
In the response to your request, the information about the query cost in GB-milliseconds is in the `X-Axiom-Query-Cost-Gbms` header.
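For example, the following is a minimal sketch of such a request using cURL. It prints only the response headers so you can read the query cost; `API_TOKEN` is a placeholder for your own token. To convert the returned GB-milliseconds to GB-hours, divide the value by 3,600,000.
```bash
# Run the APL query and print only the response headers.
# Placeholder: API_TOKEN. The X-Axiom-Query-Cost-Gbms header reports GB-milliseconds;
# divide by 3,600,000 to convert to GB-hours.
curl -s -o /dev/null -D - \
  -X POST 'https://api.axiom.co/v1/datasets/_apl?format=tabular' \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "apl": "telegraf | count",
    "startTime": "2024-01-11T19:25:00Z",
    "endTime": "2024-02-13T19:25:00Z"
  }' \
  | grep -i 'x-axiom-query-cost-gbms'
```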
## Example of GB-hour calculation
As an example, a typical query analyzing 1 million events might consume approximately 1 GB-second. There are 3,600 seconds in an hour, which means that an organization can run 3,600 of these queries before reaching 1 GB-hour of query usage. This is only an example; actual usage depends on the complexity of the query and the input data.
## Plan and GB-hours allowance
Your GB-hours allowance depends on your pricing plan. To learn more about the plan offerings and find the one that best suits your needs, see [Axiom Pricing](https://axiom.co/pricing).
## Optimize queries to lower costs
This section explains ways you can optimize your queries to save on query costs. For more information on optimizing your queries for performance, see [Optimize performance](/reference/performance).
### Optimize the order of field-specific filters
Field-specific filters narrow your query results to events where a field has a given value. For example, the APL query `where ["my-field"] == "axiom"` filters for events where the `my-field` field takes the value `axiom`.
Include field-specific filters near the beginning of your query for modest savings in query costs. For more information, see [Poor filter order in queries](/reference/performance#poor-filter-order-in-queries).
### Optimize `search` operator and non-field-specific filters
Non-field-specific filters narrow your query results by searching across multiple datasets and fields for a given value. Examples of non-field-specific filters are the `search` operator and equivalent expressions such as `where * contains` or `where * has`.
Using non-field-specific filters can have a significant impact on query costs. For more information, see [Use the `search` operator efficiently](/apl/tabular-operators/search-operator#use-the-search-operator-efficiently) and [Regular expressions when simple filters suffice](/reference/performance#regular-expressions-when-simple-filters-suffice).
## Optimize dashboard refresh rates
Each time your dashboard refreshes, it runs a query on your data which results in query costs. Selecting a short refresh rate (such as 15s) for a long time range (such as 90 days) means that your dashboard frequently runs large queries in the background.
To optimize query costs, choose a refresh rate that is appropriate for the time range of your dashboard. For more information, see [Select refresh rate](/dashboards/configure#select-refresh-rate).
# Regions
Source: https://axiom.co/docs/reference/regions
This page explains how to work with Axiom based on your organization’s region.
Axiom will soon support a unified multi-region model to manage data across the US, EU (and beyond) from a single organization; contact the Axiom team to learn more.
In Axiom, your organization can use one of the following regions:
* US (most common)
* EU
The examples in this documentation use the US domain. If your organization uses the EU region, the base domains of the Axiom app and the Axiom API are different from the US region, and you need to adjust the examples you find in this documentation.
## Check your region
Determine the region your organization uses in one of the following ways:
* Go to the [Axiom app](https://app.axiom.co/) and check the URL. Match the base domain in the Axiom web app with the table below:
| Region | Base domain in web app |
| ------ | ------------------------- |
| US | `https://app.axiom.co` |
| EU | `https://app.eu.axiom.co` |
* Click **Settings > General**, and then find the **Region** section.
## Axiom API reference
All examples in the documentation use the default US base domain `https://api.axiom.co`.
If your organization uses the EU region, change the base domain in the examples to `https://api.eu.axiom.co`.
| Region | Base domain of API endpoints | Example |
| ------ | ---------------------------- | --------------------------------------------------------- |
| US | `https://api.axiom.co` | `https://api.axiom.co/v1/datasets/DATASET_NAME/ingest` |
| EU | `https://api.eu.axiom.co` | `https://api.eu.axiom.co/v1/datasets/DATASET_NAME/ingest` |
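For example, here is a minimal sketch of an ingest request against the EU base domain. `API_TOKEN` and `DATASET_NAME` are placeholders, and the event body is illustrative only.
```bash
# Ingest a single illustrative event using the EU base domain.
# Placeholders: API_TOKEN, DATASET_NAME.
curl -X POST 'https://api.eu.axiom.co/v1/datasets/DATASET_NAME/ingest' \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '[{ "message": "hello from the EU region" }]'
```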
# Semantic conventions
Source: https://axiom.co/docs/reference/semantic-conventions
This page explains Axiom’s support for OTel semantic conventions and what it means for your data.
[OpenTelemetry semantic conventions](https://opentelemetry.io/docs/specs/semconv/) specify standard attribute names and values for different kinds of operations and data.
## Trace attributes in Axiom
The OTel trace attributes you send to Axiom are available under the following fields:
* Attributes that follow semantic conventions are nested fields under the `attributes` field. For example, `attributes.http.method` or `attributes.http.url`.
* Resource attributes that follow semantic conventions are nested fields under the `resource` field. For example, `resource.host.name` or `resource.host.id`.
* Custom attributes that don’t match any semantic conventions are nested fields under the `attributes.custom` map field.
For more information on map fields and querying nested fields, see [Map fields](/apl/data-types/map-fields).
## Supported versions
For the versions of OTel semantic conventions that Axiom supports, see [System requirements](/reference/system-requirements#opentelemetry).
(Recommended) When you send OTel data to Axiom, include the version of the OTel semantic conventions that your data follows. This ensures that your data is properly shaped in Axiom.
If you don’t define the version and Axiom can’t detect it, Axiom defaults to the version specified in [System requirements](/reference/system-requirements#opentelemetry). In this case, Axiom nests attributes that don’t match the semantic conventions of the default version under the `attributes.custom` map field.
## Semantic conventions upgrades
To guarantee the best logging experience with OTel, Axiom regularly updates the list of supported versions of semantic conventions and sometimes the default version. These updates can change the shape of your data. Axiom announces these changes in the [Changelog](https://axiom.co/changelog) and you might need to take action:
* If you send data to Axiom using an unsupported version of OTel semantic conventions, be aware that the shape of your data can change when Axiom adds support for new versions.
* When you send OTel data to Axiom, include the version of the OTel semantic conventions that your data follows. This ensures that your data is properly shaped in Axiom and it won’t be affected when Axiom changes the default version.
In addition, the shape of your data can change when you choose to migrate to a newer version of OTel semantic conventions.
See the sections below for more details on how your data can change and the actions you need to take when this happens.
### Changes to list of supported versions
After Axiom adds support for new versions of OTel semantic conventions, the shape of your data can change when the following are all true:
* Before the update, you sent data to Axiom using an unsupported version of OTel semantic conventions.
* After the update, the version of OTel semantic conventions that you used becomes supported.
In this case, the shape of your data can change:
* Before the update, attributes that Axiom couldn’t match to the previously supported semantic conventions were nested under the `attributes.custom` map field.
* After the update, Axiom matches these attributes to the newly supported semantic conventions. The newly recognized attributes are nested under the `attributes` or `resource` fields, similarly to all other attributes that follow semantic conventions.
When the shape of your data changes, you need to [take action](#take-action-when-shape-of-data-changes).
### Changes to default version
If you don’t specify the version of the OTel semantic conventions that your data follows when you send OTel data to Axiom, Axiom interprets the data using the default version.
After Axiom changes the default version of OTel semantic conventions, the shape of your data can change when you don’t specify the version of the OTel semantic conventions in the data you send to Axiom. For this reason, to prevent changes to your data, include the version of the OTel semantic conventions that your data follows when you send OTel data to Axiom. This ensures that your data is properly shaped in Axiom and it won’t be affected when Axiom changes the default version.
When Axiom updates the default version of OTel semantic conventions, the shape of your data can change:
* Before the update, attributes that become supported between the old and the new default versions are nested under the `attributes.custom` field. After the update, these attributes are nested under the `attributes` or `resource` fields.
* Before the update, attributes that became deprecated between the old and the new default versions are nested under the `attributes` or `resource` fields. After the update, these attributes are nested under the `attributes.custom` field.
When the shape of your data changes, you need to [take action](#take-action-when-shape-of-data-changes).
### Migrate to new version
When you choose to migrate to a newer version of OTel semantic conventions, the shape of your data can change. Some attributes can become supported, deprecated, renamed, or relocated.
When the shape of your data changes, you need to [take action](#take-action-when-shape-of-data-changes).
### Determine changes between versions
To determine the changes between different versions of OTel semantic conventions, compare the [schema files](https://github.com/open-telemetry/semantic-conventions/tree/main/schemas) or the [changelog](https://github.com/open-telemetry/semantic-conventions/releases) in the OTel documentation. This informs you about how the shape of your data can change as a result of semantic conventions upgrades.
## Take action when shape of data changes
When some attributes are relocated or renamed, the shape of your data changes and you need to take action.
For example, assume that Axiom supports OTel semantic conventions up to version 1.25. You send data to Axiom that follows version 1.32 and you don’t specify the version. On June 12th 2025, Axiom adds support for versions up to 1.32 and makes version 1.32 the default. The following happens with the attribute `db.system.name`:
* You send the attribute `db.system.name` to Axiom because your data follows version 1.32. The shape of the data you send to Axiom doesn’t change during the update.
* Before the update, Axiom interpreted your data using the old default version 1.25. It didn’t recognize `db.system.name` and nested it under `attributes.custom`.
* After the update, Axiom interprets your data using the new default version 1.32. It recognizes `db.system.name` properly and nests it under `attributes`.
As a result, the attribute is relocated from `['attributes.custom']['db.system.name']` to `['attributes.db.system.name']`. You need to update all saved queries, dashboards, or monitors that reference the attribute. You have the following options:
* [Update queries to reference the new location](#reference-new-location)
* [Update queries to reference both locations](#reference-both-locations)
### Reference new location
When you update affected queries to reference the new location, the query results only include data you send after the semantic conventions upgrade.
For example, a saved query references the old location before the update:
```kusto
['otel-demo-traces']
| where ['service.name'] == "frontend"
| project ['attributes.custom']['db.system.name']
```
After the update, change the query to the following:
```kusto
['otel-demo-traces']
| where ['service.name'] == "frontend"
| project ['attributes.db.system.name']
```
### Reference both locations
To ensure that affected queries include data from the attribute that you send before and after the semantic conventions upgrade, use [coalesce](/apl/scalar-functions/string-functions#coalesce) in your query. This function evaluates a list of expressions and returns the first non-null value. In this case, pass the old and the new locations of the attribute to the `coalesce` function.
For example, a saved query references the old location before the update:
```kusto
['otel-demo-traces']
| where ['service.name'] == "frontend"
| project ['attributes.custom']['db.system.name']
```
After the update, change the query to the following:
```kusto
['otel-demo-traces']
| where ['service.name'] == "frontend"
| project coalesce(['attributes.custom']['db.system.name'], ['attributes.db.system.name'])
```
# Role-Based Access Control
Source: https://axiom.co/docs/reference/settings
This section explains how to configure Role-Based Access Control (RBAC).
Role-Based Access Control (RBAC) allows you to manage and restrict access to your data and resources efficiently. You can control access to your data with the following:
* [Groups](#groups)
* [Roles](#roles)
* [Users](#users)
* [Directory Sync](#directory-sync)
* [Single Sign-On (SAML SSO)](#single-sign-on-saml-sso)
Role-based access control (RBAC), Directory Sync, and Single Sign-On (SAML SSO) are available as add-ons if you’re on the Axiom Cloud plan, and they are included by default on the Bring Your Own Cloud plan. For more information on upgrading, see the [Plan page](https://app.axiom.co/settings/plan) in your Axiom settings.
## Groups
Groups connect users with roles, making it easier to manage access control at scale. For example, you can create groups for areas of your business like Security, Infrastructure, or Business Analytics, with specific roles assigned to serve the unique needs of these domains.
A user’s complete set of capabilities is derived from the additive union of their base role, plus any roles assigned through group membership.
### Create new group
1. Click **Settings > Groups**.
2. Click **New group**.
3. Enter the name and description of the group.
4. Click **Add users** to add users to the group.
5. Click **Add roles** to add roles to the group.
## Roles
Roles are sets of capabilities that define which actions a user can perform at both the organization and dataset levels.
### Default roles
The default roles are the following:
* **Owner:** Assigns all capabilities across the entire Axiom platform.
* **Admin:** Assigns administrative capabilities except for Billing capabilities, which are reserved for Owners.
* **User:** Assigns standard access for regular users.
* **Read-only:** Assigns read capabilities for datasets, plus read access on various resources like dashboards, monitors, notifiers, users, queries, saved queries, and virtual fields.
* **None:** Assigns zero capabilities, useful for adopting the principle of least privilege when inviting new users. You can build up specific capabilities for these users by assigning their role to a group.
### Create custom role
1. Ensure you have create permission for the access control capability. By default, this capability is assigned to the Owner and Admin roles.
2. Click **Settings > Roles**.
3. Click **New role**.
4. Enter the name and description of the role.
5. Assign permissions (create, read, update, and delete) across capabilities (access control, API tokens, dashboards, datasets, etc.).
### Assign capabilities to roles
You can assign organization-level and dataset-level capabilities to roles. You can assign create, read, update, or delete (CRUD) permissions to most capabilities.
Organization-level capabilities define access for various parts of your Axiom organization:
* **Access control:** Full CRUD.
* **Annotations:** Full CRUD.
* **API tokens:** Full CRUD.
* **Apps:** Full CRUD.
* **Audit log:** Read only.
* **Billing:** Read and update only.
* **Dashboards:** Full CRUD.
* **Datasets:** Full CRUD.
* **Endpoints:** Full CRUD.
* **Flows:** Full CRUD.
* **Monitors:** Full CRUD.
* **Notifiers:** Full CRUD.
* **Shared access keys:** Read and update only.
* **Users:** Full CRUD.
* **Views:** Full CRUD.
The table below describes these organization-level capabilities:
| Capability | Create | Read | Update | Delete |
| ------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| Access control | User can create custom roles and groups. | User can view the list of existing roles and groups. | User can update the name and description of roles and groups, and modify permissions. | User can delete custom roles or groups. |
| Annotations | User can create annotations. | User can view the list of existing annotations in an organization. | User can modify annotations. | User can delete annotations. |
| API tokens | User can create an API token with access to the datasets their user has access to. | User can access the list of tokens that have been created in their organization. | User can regenerate a token from the list of tokens in an organization. | User can delete API tokens created in their organization. |
| Apps | User can create a new app. | Users can access the list of installed apps in their organization. | Users can modify the existing apps in their organization. | User can disconnect apps installed in their organization. |
| Audit log | — | Users can access the audit log in an organization. | — | — |
| Billing | — | User can access billing settings. | User can change the organization plan. | — |
| Dashboards | User can create new dashboards. | User can access their own dashboards and those created by other users in their organization. | User can modify dashboard titles and descriptions. User can add, resize, and delete charts from dashboards. | User can delete a dashboard from their organization. |
| Datasets | User can create a new dataset. | Users can access the list of datasets in an organization, and their associated fields. | User can trim a dataset, and modify dataset fields. | User can delete a dataset from their organization. |
| Endpoints | User can create a new endpoint. | User can access the list of existing endpoints in an organization. | Users can rename an endpoint and modify which dataset data is ingested into. | User can delete an endpoint from their organization. |
| Flows | User can create a new flow. | User can access the list of existing flows in an organization. | Users can modify flows. | User can delete a flow from their organization. |
| Monitors | User can create a monitor. | User can access the list of monitors in their organization. User can also review the monitor status. | Users can modify a monitor configuration in their organization. | Users can delete monitors that have been created in their organization. |
| Notifiers | User can create a new notifier in their organization. | User can access the list of notifiers in their organization. | User can update existing notifiers in their organization. User can snooze a notifier. | User can delete notifiers that have been created in their organization. |
| Shared access keys | — | User can access shared access keys in their organization. | User can update shared access keys in their organization. | — |
| Users | Users can invite new users to an organization. | User can access the list of users that are part of their organization. | User can update user roles and information within the organization. | Users can remove other users from their organization and delete their own account. |
| Views | User can create new views. | User can access the list of views in their organization. | User can modify views. | User can delete views from their organization. |
Dataset-level capabilities provide fine-grained control over access to datasets. You can assign the following capabilities for all datasets or individual datasets:
* **Data:** Delete only.
* **Ingest:** Create only.
* **Query:** Read only.
* **Share:** Create, read, and update only.
* **Saved queries:** Full CRUD.
* **Trim:** Update only.
* **Vacuum:** Update only.
* **Virtual fields:** Full CRUD.
The table below describes these dataset-level capabilities:
| Capability | Create | Read | Update | Delete |
| -------------- | ------------------------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------------------- | --------------------------------------------- |
| Data | — | — | — | User can delete data from datasets. |
| Ingest | User can ingest events to datasets. | — | — | — |
| Query | — | User can query events from datasets. | — | — |
| Share | User can share datasets. | User can access the list of shared datasets in their organization. | User can modify an existing shared dataset in their organization. | — |
| Saved queries | User can create a saved query for datasets. | User can access the list of saved queries in their organization. | User can modify an existing saved query in their organization. | User can delete a saved query from a dataset. |
| Trim | — | — | User can trim datasets. | — |
| Vacuum | — | — | User can vacuum datasets. | — |
| Virtual fields | User can create a new virtual field. | User can see the list of virtual fields. | User can modify the definition of a virtual field. | User can delete a virtual field. |
### Access to datasets
The datasets that individual users have access to determine the following:
* The data they see in dashboards. If a user has access to a dashboard but only to some of the datasets referenced in the dashboard’s elements, the user only sees data from the datasets they have access to.
* The monitors they see. A user only sees the monitors that reference the datasets that the user has access to. If a user has access to the monitors of an organization but only to some of the datasets referenced in the monitors, the user only sees the monitors that reference the datasets they have access to. If a monitor joins several datasets, a user can only see the monitor if the user has access to all of the datasets.
## Users
Users in Axiom are the individual accounts that have access to an Axiom organization. You assign a base role to users when you invite them to join your organization. For organizations with the role-based access control (RBAC) add-on, additional roles can be added through group membership.
### Assign roles to users
1. Click **Settings > Users**.
2. Find the user in the list, and then assign a role to them on the right.
Access for a user is the additive union of capabilities assigned through their base role, plus any capabilities included in roles assigned through group membership.
### Delete users
This is a destructive action. After you delete a user, you cannot recover their account.
1. Click **Settings > Users**.
2. Find the user in the list, and then click **Delete user** on the right.
## Directory Sync
Directory Sync automatically mirrors user account data between a central directory, such as Active Directory, and connected applications. When the status of an employee changes, all systems are automatically updated.
For this feature, Axiom relies on WorkOS. For more information, see [Directory Sync](https://workos.com/directory-sync) and [Enterprise Single Sign-On](https://workos.com/single-sign-on) in the WorkOS documentation.
## Single Sign-On (SAML SSO)
To simplify access management and enhance security, Security Assertion Markup Language-based Single Sign-On (SAML SSO) allows you to keep access grants up-to-date with support for the industry standard SCIM protocol.
Axiom supports secure, centralized user authentication through both types of flow for SAML-based SSO:
* IdP-initiated flow (identity-provider-initiated flow)
* SP-initiated flow (service-provider-initiated flow)
# System requirements
Source: https://axiom.co/docs/reference/system-requirements
This page explains what versions of third-party applications Axiom supports.
## Browsers and platforms
The Axiom web app supports the latest versions of the following browsers and platforms:
| | Android | iOS | macOS | Linux | Windows |
| :------ | :------ | :-- | :---- | :---- | :------ |
| Chrome | ✓ | ✓ | ✓ | ✓ | ✓ |
| Edge | ✓ | ✓ | ✓ | ✓ | ✓ |
| Firefox | ✓ | ✓ | ✓ | ✓ | ✓ |
| Safari | - | ✓ | ✓ | - | - |
Some actions in the Dashboards tab, such as moving dashboard elements, aren’t supported in mobile view.
## OpenTelemetry
Axiom supports the following versions of OTel semantic conventions:
| Version | Date when support was added | Schema in OTel docs |
| :--------------- | :------------------------ | :---------------------------------------------------------------------------------------- |
| 1.34.0 | 01-07-2025 | [1.34.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.34.0) |
| 1.33.0 | 01-07-2025 | [1.33.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.33.0) |
| 1.32.0 (default) | 12-06-2025 | [1.32.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.32.0) |
| 1.31.0 | 12-06-2025 | [1.31.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.31.0) |
| 1.30.0 | 12-06-2025 | [1.30.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.30.0) |
| 1.28.0 | 12-06-2025 | [1.28.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.28.0) |
| 1.27.0 | 12-06-2025 | [1.27.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.27.0) |
| 1.26.0 | 03-07-2024 | [1.26.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.26.0) |
| 1.25.0 | 26-04-2024 | [1.25.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.25.0) |
| 1.24.0 | 19-01-2024 | [1.24.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.24.0) |
| 1.23.1 | 26-03-2024 | [1.23.1](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.23.1) |
| 1.23.0 | 26-03-2024 | [1.23.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.23.0) |
| 1.22.0 | 26-03-2024 | [1.22.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.22.0) |
| 1.21.0 | 26-03-2024 | [1.21.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.21.0) |
Version 1.29.0 of OTel semantic conventions isn’t supported.
For more information, see [Semantic conventions](/reference/semantic-conventions).
# Authenticate API requests with tokens
Source: https://axiom.co/docs/reference/tokens
Learn how you can authenticate your requests to the Axiom API with tokens.
This reference article explains how you can authenticate your requests to the Axiom API with tokens.
## Why authenticate with tokens
You can use the Axiom API and CLI to programmatically ingest and query data, and manage settings and resources. For example, you can create new API tokens and change existing datasets with API requests. To prove that these requests come from you, you must include forms of authentication called tokens in your API requests. Axiom offers two types of tokens:
* [API tokens](#api-tokens) let you control the actions that can be performed with the token. For example, you can specify that requests authenticated with a certain API token can only query data from a particular dataset.
* [Personal access tokens (PATs)](#personal-access-tokens-pat) provide full control over your Axiom account. Requests authenticated with a PAT can perform every action you can perform in Axiom. When possible, use API tokens instead of PATs.
Keep tokens confidential. Anyone with these forms of authentication can perform actions on your behalf such as sending data to your Axiom dataset.
When working with tokens, use the principle of least privilege:
* Assign API tokens only the privileges necessary to perform the actions you need.
* When possible, use API tokens instead of PATs because PATs have full control over your Axiom account.
For more information on how to use tokens in API requests, see [Get started with Axiom API](/restapi/introduction).
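As a quick illustration, the sketch below authenticates a request by passing a token in the `Authorization` header. `API_TOKEN` is a placeholder; the example lists the datasets available to that token.
```bash
# List datasets, authenticating with a token in the Authorization header.
# Placeholder: API_TOKEN.
curl -H 'Authorization: Bearer API_TOKEN' \
  'https://api.axiom.co/v2/datasets'
```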
## API tokens
You can use two types of API tokens in Axiom:
* Basic API tokens let you ingest data to Axiom. When you create a basic API token, you select the datasets that you allow the basic API token to access.
* Advanced API tokens let you perform a wide range of actions in Axiom beyond ingesting data. When you create an advanced API token, you select which actions you allow the advanced API token to perform. For example, you can create an advanced API token that can only query data from a particular dataset and another that has wider privileges such as creating datasets and changing existing monitors.
After creating an API token, you cannot change the privileges assigned to that API token.
### Create basic API token
1. Click **Settings > API tokens**, and then click **New API token**.
2. Name your API token.
3. Optional: Give a description to the API token and set an expiration date.
4. In **Token permissions**, click **Basic**.
5. In **Dataset access**, select the datasets where this token can ingest data.
6. Click **Create**.
7. Copy the API token that appears and store it securely. It won’t be displayed again.
### Create advanced API token
1. Click **Settings > API tokens**, and then click **New API token**.
2. Name your API token.
3. Optional: Give a description to the API token and set an expiration date.
4. In **Token permissions**, click **Advanced**.
5. Select the datasets that this token can access and the actions it can perform.
6. In **Org level permissions**, select the actions the token can perform that affect your whole Axiom organization. For example, creating users and changing existing notifiers.
7. Click **Create**.
8. Copy the API token that appears and store it securely. It won’t be displayed again.
### Regenerate API token
As with passwords, it’s recommended to change API tokens regularly and to set an expiration date after which the token becomes invalid. When a token expires, you can regenerate it.
To regenerate an advanced API token, follow these steps:
1. Click **Settings > API tokens**.
2. In the list, select the API token you want to regenerate.
3. Click **Regenerate token**.
4. Copy the regenerated API token that appears and store it securely. It won’t be displayed again.
5. Update all the API requests where you use the API token with the regenerated token.
### Delete API token
1. Click **Settings > API tokens**.
2. In the list, hold the pointer over the API token you want to delete.
3. To the right, click **Delete**.
## Personal access tokens (PAT)
Personal access tokens (PATs) provide full control over your Axiom account. Requests authenticated with a PAT can perform every action you can perform in Axiom. When possible, use API tokens instead of PATs.
### Create PAT
1. Click **Settings > Profile**.
2. In the **Personal tokens** section, click **New token**.
3. Name the PAT.
4. Optional: Give a description to the PAT.
5. Copy the PAT that appears and store it securely. It won’t be displayed again.
### Delete PAT
1. Click **Settings > Profile**.
2. In the list, find the PAT that you want to delete.
3. To the right of the PAT, click **Delete**.
## Determine organization ID
If you authenticate requests with a PAT, you must include the organization ID in the requests. For more information on including the organization ID in the request, see [Axiom API](/restapi/introduction) and [Axiom CLI](/reference/cli).
Determine the organization ID in one of the following ways:
* Click **Settings**, and then copy the organization ID in the top right corner.
* Click **Settings > General**, and then find the **ID** section.
* Go to the [Axiom app](https://app.axiom.co/) and check the URL. For example, in the URL `https://app.axiom.co/axiom-abcd/datasets`, the organization ID is `axiom-abcd`.
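For illustration, the sketch below includes the organization ID in a PAT-authenticated request. The `X-Axiom-Org-Id` header name is an assumption here; see the linked API introduction for the exact header to use. `PERSONAL_ACCESS_TOKEN` and `ORG_ID` are placeholders.
```bash
# List datasets using a personal access token (PAT) and the organization ID.
# Placeholders: PERSONAL_ACCESS_TOKEN, ORG_ID. The X-Axiom-Org-Id header name
# is an assumption; check the API introduction for the exact header.
curl -H 'Authorization: Bearer PERSONAL_ACCESS_TOKEN' \
  -H 'X-Axiom-Org-Id: ORG_ID' \
  'https://api.axiom.co/v2/datasets'
```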
# API limits
Source: https://axiom.co/docs/restapi/api-limits
Learn how Axiom limits the number of calls a user can make over a certain period of time.
Axiom limits the number of calls a user (and their organization) can make over a certain period
of time to ensure fair usage and to maintain the quality of our service for everyone.
Our systems closely monitor API usage and if a user exceeds any thresholds, we will
temporarily halt further processing of requests from that user (and/or organization).
This is to prevent any single user or app from overloading the system,
which could potentially impact other users' experience.
## Rate Limits
Rate limits vary and are specified by the following header in all responses:
| Header | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| `X-RateLimit-Scope` | Indicates whether the limit counts against the organization or personal rate limit. |
| `X-RateLimit-Limit` | The maximum number of requests a user is permitted to make per minute. |
| `X-RateLimit-Remaining` | The number of requests remaining in the current rate limit window. |
| `X-RateLimit-Reset` | The time at which the current rate limit window resets in UTC [epoch seconds](https://en.wikipedia.org/wiki/Unix_time). |
**Possible values for `X-RateLimit-Scope`:**
* `user`
* `organization`
**When the rate limit is exceeded, an error is returned with the status "429 Too Many Requests"**:
```json
{
  "message": "rate limit exceeded"
}
```
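To see these headers in practice, you can dump the response headers of any API request, as in the sketch below (`API_TOKEN` is a placeholder).
```bash
# Print only the rate limit headers of an API response.
# Placeholder: API_TOKEN.
curl -s -o /dev/null -D - \
  -H 'Authorization: Bearer API_TOKEN' \
  'https://api.axiom.co/v2/datasets' \
  | grep -i 'x-ratelimit'
```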
## Query Limits
| Header | Description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| `X-QueryLimit-Limit` | The query cost limit of your plan in Gigabyte Milliseconds (GB\*ms). |
| `X-QueryLimit-Remaining` | The remaining query Gigabyte Milliseconds. |
| `X-QueryLimit-Reset` | The time at which the current rate limit window resets in UTC [epoch seconds](https://en.wikipedia.org/wiki/Unix_time). |
## Ingest Limits
| Header | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| `X-IngestLimit-Limit`     | The maximum number of bytes a user is permitted to ingest per month.                                                     |
| `X-IngestLimit-Remaining` | The bytes remaining in the current rate limit window.                                                                     |
| `X-IngestLimit-Reset` | The time at which the current rate limit window resets in UTC [epoch seconds](https://en.wikipedia.org/wiki/Unix_time). |
Alongside data volume limits, we also monitor the rate of ingest requests.
If an organization consistently sends an excessive number of requests per second,
far exceeding normal usage patterns, we reserve the right to suspend their ingest
to maintain system stability and ensure fair resource allocation for all users.
To prevent exceeding these rate limits, it is highly recommended to use batching clients,
which can efficiently manage the number of requests by aggregating data before sending.
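For example, the following sketch sends several events in one ingest request instead of one request per event. `API_TOKEN` and `DATASET_NAME` are placeholders, and the event fields are illustrative only.
```bash
# Batch multiple events into a single ingest request (up to 10,000 events per batch).
# Placeholders: API_TOKEN, DATASET_NAME. Event fields are illustrative only.
curl -X POST 'https://api.axiom.co/v1/datasets/DATASET_NAME/ingest' \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '[
    { "service": "api", "message": "request served" },
    { "service": "api", "message": "request failed" }
  ]'
```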
## Limits on ingested data
The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.
| | Limit |
| ------------------------- | --------- |
| Maximum event size | 1 MB |
| Maximum events in a batch | 10,000 |
| Maximum field name length | 200 bytes |
# Create annotation
Source: https://axiom.co/docs/restapi/endpoints/createAnnotation
v2 post /annotations
Create annotation
# Create dataset
Source: https://axiom.co/docs/restapi/endpoints/createDataset
v2 post /datasets
Create dataset
# Create group
Source: https://axiom.co/docs/restapi/endpoints/createGroup
v2 post /rbac/groups
Creates a new group in the organization.
# Create map field
Source: https://axiom.co/docs/restapi/endpoints/createMapField
v2 post /datasets/{dataset_id}/mapfields
# Create monitor
Source: https://axiom.co/docs/restapi/endpoints/createMonitor
v2 post /monitors
Create monitor
# Create notifier
Source: https://axiom.co/docs/restapi/endpoints/createNotifier
v2 post /notifiers
Creates a new notifier configuration for sending alerts through various channels (Slack, Email, etc)
# Create org
Source: https://axiom.co/docs/restapi/endpoints/createOrg
v2 post /orgs
# Create role
Source: https://axiom.co/docs/restapi/endpoints/createRole
v2 post /rbac/roles
Creates a new role in the organization with the specified permissions and member assignments.
# Create starred query
Source: https://axiom.co/docs/restapi/endpoints/createStarred
v2 post /apl-starred-queries
# Create API token
Source: https://axiom.co/docs/restapi/endpoints/createToken
v2 post /tokens
Create API token
# Create user
Source: https://axiom.co/docs/restapi/endpoints/createUser
v2 post /users
Create user
# Create view
Source: https://axiom.co/docs/restapi/endpoints/createView
v2 post /views
# Create virtual field
Source: https://axiom.co/docs/restapi/endpoints/createVirtualField
v2 post /vfields
# Delete annotation
Source: https://axiom.co/docs/restapi/endpoints/deleteAnnotation
v2 delete /annotations/{id}
Delete annotation
# Delete dataset
Source: https://axiom.co/docs/restapi/endpoints/deleteDataset
v2 delete /datasets/{dataset_id}
Delete dataset
# Delete group
Source: https://axiom.co/docs/restapi/endpoints/deleteGroup
v2 delete /rbac/groups/{id}
Permanently removes a group from the organization.
# Delete map fields
Source: https://axiom.co/docs/restapi/endpoints/deleteMapField
v2 delete /datasets/{dataset_id}/mapfields/{map_field_name}
# Delete monitor
Source: https://axiom.co/docs/restapi/endpoints/deleteMonitor
v2 delete /monitors/{id}
Delete monitor
# Delete notifier
Source: https://axiom.co/docs/restapi/endpoints/deleteNotifier
v2 delete /notifiers/{id}
Delete notifier
# Delete role
Source: https://axiom.co/docs/restapi/endpoints/deleteRole
v2 delete /rbac/roles/{id}
Permanently removes a role from the organization.
# Delete starred query
Source: https://axiom.co/docs/restapi/endpoints/deleteStarred
v2 delete /apl-starred-queries/{id}
# Delete API token
Source: https://axiom.co/docs/restapi/endpoints/deleteToken
v2 delete /tokens/{id}
Delete API token
# Delete view
Source: https://axiom.co/docs/restapi/endpoints/deleteView
v2 delete /views/{id}
# Delete virtual field
Source: https://axiom.co/docs/restapi/endpoints/deleteVirtualField
v2 delete /vfields/{id}
# Retrieve annotation
Source: https://axiom.co/docs/restapi/endpoints/getAnnotation
v2 get /annotations/{id}
Get annotation by ID
# List all annotations
Source: https://axiom.co/docs/restapi/endpoints/getAnnotations
v2 get /annotations
Get annotations
# Retrieve current user
Source: https://axiom.co/docs/restapi/endpoints/getCurrentUser
v2 get /user
Get current user
# Retrieve dataset
Source: https://axiom.co/docs/restapi/endpoints/getDataset
v2 get /datasets/{dataset_id}
Get dataset by ID
# List all datasets
Source: https://axiom.co/docs/restapi/endpoints/getDatasets
v2 get /datasets
Get list of datasets
# Retrieve field in dataset
Source: https://axiom.co/docs/restapi/endpoints/getFieldForDataset
v2 get /datasets/{dataset_id}/fields/{field_id}
# List all fields in dataset
Source: https://axiom.co/docs/restapi/endpoints/getFieldsForDataset
v2 get /datasets/{dataset_id}/fields
# Retrieve group
Source: https://axiom.co/docs/restapi/endpoints/getGroupById
v2 get /rbac/groups/{id}
Retrieves detailed information about a specific group by its unique identifier.
# List all map fields
Source: https://axiom.co/docs/restapi/endpoints/getMapFields
v2 get /datasets/{dataset_id}/mapfields
# Retrieve monitor
Source: https://axiom.co/docs/restapi/endpoints/getMonitor
v2 get /monitors/{id}
Retrieves detailed configuration for a specific monitor by its unique identifier
# Retrieve monitor history
Source: https://axiom.co/docs/restapi/endpoints/getMonitorHistory
v2 get /monitors/{id}/history
Get monitor history
# List all monitors
Source: https://axiom.co/docs/restapi/endpoints/getMonitors
v2 get /monitors
Lists all configured monitors. Returns an array of monitor configurations including their IDs and current status.
# Retrieve notifier
Source: https://axiom.co/docs/restapi/endpoints/getNotifier
v2 get /notifiers/{id}
Retrieves detailed configuration for a specific notifier by its unique identifier
# List all notifiers
Source: https://axiom.co/docs/restapi/endpoints/getNotifiers
v2 get /notifiers
Lists all configured notifiers. Returns an array of notification configurations including their IDs and current status.
# Retrieve org
Source: https://axiom.co/docs/restapi/endpoints/getOrg
v2 get /orgs/{id}
# List all orgs
Source: https://axiom.co/docs/restapi/endpoints/getOrgs
v2 get /orgs
# Retrieve role
Source: https://axiom.co/docs/restapi/endpoints/getRoleById
v2 get /rbac/roles/{id}
Retrieves detailed information about a specific role by its unique identifier.
# Retrieve starred query
Source: https://axiom.co/docs/restapi/endpoints/getStarred
v2 get /apl-starred-queries/{id}
# List all starred queries
Source: https://axiom.co/docs/restapi/endpoints/getStarredQueries
v2 get /apl-starred-queries
# Retrieve API token
Source: https://axiom.co/docs/restapi/endpoints/getToken
v2 get /tokens/{id}
Get API token by ID
# List all API tokens
Source: https://axiom.co/docs/restapi/endpoints/getTokens
v2 get /tokens
Get API tokens
# Retrieve user
Source: https://axiom.co/docs/restapi/endpoints/getUser
v2 get /users/{id}
Get user by ID
# List all users
Source: https://axiom.co/docs/restapi/endpoints/getUsers
v2 get /users
Get users
# Retrieve view
Source: https://axiom.co/docs/restapi/endpoints/getView
v2 get /views/{id}
# List all views
Source: https://axiom.co/docs/restapi/endpoints/getViews
v2 get /views
# Retrieve virtual field
Source: https://axiom.co/docs/restapi/endpoints/getVirtualField
v2 get /vfields/{id}
# List all virtual fields
Source: https://axiom.co/docs/restapi/endpoints/getVirtualFields
v2 get /vfields
# Ingest data
Source: https://axiom.co/docs/restapi/endpoints/ingestIntoDataset
v1 post /datasets/{dataset_name}/ingest
Ingest
# List all groups
Source: https://axiom.co/docs/restapi/endpoints/listGroups
v2 get /rbac/groups
Retrieves all groups in the organization.
# List all roles
Source: https://axiom.co/docs/restapi/endpoints/listRoles
v2 get /rbac/roles
Retrieves all roles in the organization with their associated permissions and members.
# Run query
Source: https://axiom.co/docs/restapi/endpoints/queryApl
v1 post /datasets/_apl?format=tabular
Query
# Run query (legacy)
Source: https://axiom.co/docs/restapi/endpoints/queryDataset
v1 post /datasets/{dataset_name}/query
Query (Legacy)
# Regenerate API token
Source: https://axiom.co/docs/restapi/endpoints/regenerateToken
v2 post /tokens/{id}/regenerate
Regenerate API token
# Delete user from org
Source: https://axiom.co/docs/restapi/endpoints/removeUserFromOrg
v2 delete /users/{id}
Remove user from org
# Trim dataset
Source: https://axiom.co/docs/restapi/endpoints/trimDataset
v1 post /datasets/{dataset_name}/trim
Trim dataset
When the endpoint returns the status code 200, it means that the deletion request is queued. The deletion itself might take several hours to complete.
# Update annotation
Source: https://axiom.co/docs/restapi/endpoints/updateAnnotation
v2 put /annotations/{id}
Update annotation
# Update current user
Source: https://axiom.co/docs/restapi/endpoints/updateCurrentUser
v2 put /user
Update current user
# Update dataset
Source: https://axiom.co/docs/restapi/endpoints/updateDataset
v2 put /datasets/{dataset_id}
Update dataset
# Update field
Source: https://axiom.co/docs/restapi/endpoints/updateFieldForDataset
v2 put /datasets/{dataset_id}/fields/{field_id}
# Update group
Source: https://axiom.co/docs/restapi/endpoints/updateGroup
v2 put /rbac/groups/{id}
Updates an existing group's configuration.
# Update list of map fields
Source: https://axiom.co/docs/restapi/endpoints/updateMapFields
v2 put /datasets/{dataset_id}/mapfields
In the body of the API request that you send to this endpoint, specify a list of field names:
* Fields you haven’t previously defined as map fields but include in the list become map fields.
* Fields you have previously defined as map fields and include in the list remain map fields.
* Fields you have previously defined as map fields but exclude from the list are removed.
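As an illustration only, the sketch below sends such a list with cURL. `API_TOKEN` and `DATASET_ID` are placeholders, the field names are hypothetical, and the exact body schema is an assumption; verify the request schema on this page before using it.
```bash
# A sketch of replacing the list of map fields for a dataset.
# Placeholders: API_TOKEN, DATASET_ID. The field names are hypothetical and the
# JSON array body is an assumption; verify the exact schema in the API reference.
curl -X PUT 'https://api.axiom.co/v2/datasets/DATASET_ID/mapfields' \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '["request_headers", "metadata"]'
```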
# Update monitor
Source: https://axiom.co/docs/restapi/endpoints/updateMonitor
v2 put /monitors/{id}
Update monitor
# Update notifier
Source: https://axiom.co/docs/restapi/endpoints/updateNotifier
v2 put /notifiers/{id}
Update notifier
# Update org
Source: https://axiom.co/docs/restapi/endpoints/updateOrg
v2 put /orgs/{id}
# Update role
Source: https://axiom.co/docs/restapi/endpoints/updateRole
v2 put /rbac/roles/{id}
Updates an existing role's configuration including its permissions and member assignments.
# Update starred query
Source: https://axiom.co/docs/restapi/endpoints/updateStarred
v2 put /apl-starred-queries/{id}
# Update user role
Source: https://axiom.co/docs/restapi/endpoints/updateUserRole
v2 put /users/{id}/role
Update user role
# Update view
Source: https://axiom.co/docs/restapi/endpoints/updateView
v2 put /views/{id}
# Update virtual field
Source: https://axiom.co/docs/restapi/endpoints/updateVirtualField
v2 put /vfields/{id}
# Vacuum dataset
Source: https://axiom.co/docs/restapi/endpoints/vacuumDataset
v2 post /datasets/{dataset_id}/vacuum
# Send data to Axiom via API
Source: https://axiom.co/docs/restapi/ingest
This page explains how to send data to Axiom using the API.
The Axiom REST API accepts the following data formats:
* [JSON](#send-data-in-json-format)
* [NDJSON](#send-data-in-ndjson-format)
* [CSV](#send-data-in-csv-format)
This page explains how to send data to Axiom via cURL commands in each of these formats, and how to send data with the [Axiom Node.js library](#send-data-with-axiom-node-js).
For more information on other ingest options, see [Send data](/send-data/methods).
For an introduction to the basics of the Axiom API and to the authentication options, see [Introduction to Axiom API](/restapi/introduction).
The API requests on this page use the ingest data endpoint. For more information, see the [API reference](/restapi/endpoints/ingestIntoDataset).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Send data in JSON format
To send data to Axiom in JSON format:
1. Encode the events as JSON objects.
2. Enter the array of JSON objects into the body of the API request.
3. Optional: In the body of the request, set optional parameters such as `timestamp-field` and `timestamp-format`. For more information, see the [ingest data API reference](/restapi/endpoints/ingestIntoDataset).
4. Set the `Content-Type` header to `application/json`.
5. Set the `Authorization` header to `Bearer API_TOKEN`.
6. Send the POST request to `https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest`.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
### Example with grouped events
The following example request contains grouped events. The JSON payload is an array of the form `[ { "labels": { "key1": "value1", "key2": "value2" } } ]`, where each element is a JSON object describing an event.
**Example request**
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[
{
"time":"2025-01-12T00:00:00.000Z",
"data":{"key1":"value1","key2":"value2"}
},
{
"data":{"key3":"value3"},
"labels":{"key4":"value4"}
}
]'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json
{
"ingested": 2,
"failed": 0,
"failures": [],
"processedBytes": 219,
"blocksCreated": 0,
"walLength": 2
}
```
### Example with nested arrays
**Example request**
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[
{
"axiom": [{
"logging":[{
"observability":[{
"location":[{
"credentials":[{
"datasets":[{
"first_name":"axiom",
"last_name":"logging",
"location":"global"
}],
"work":[{
"details":"https://app.axiom.co/",
"tutorials":"https://www.axiom.co/blog",
"changelog":"https://www.axiom.co/changelog",
"documentation": "https://www.axiom.co/docs"
}]
}],
"social_media":[{
"details":[{
"twitter":"https://twitter.com/AxiomFM",
"linkedin":"https://linkedin.com/company/axiomhq",
"github":"https://github.com/axiomhq"
}],
"features":[{
"datasets":"view logs",
"stream":"live_tail",
"explorer":"queries"
}]
}]
}]
}],
"logs":[{
"apl": "functions"
}]
}],
"storage":[{}]
}]}
]'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json
{
"ingested":1,
"failed":0,
"failures":[],
"processedBytes":1587,
"blocksCreated":0,
"walLength":3
}
```
### Example with objects, strings, and arrays
**Example request**
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[{ "axiom": {
"logging": {
"observability": [
{ "apl": 23, "function": "tostring" },
{ "apl": 24, "operator": "summarize" }
],
"axiom": [
{ "stream": "livetail", "datasets": [4, 0, 16], "logging": "observability", "metrics": 8, "dashboard": 10, "alerting": "kubernetes" }
]
},
"apl": {
"reference":
[[80, 12], [30, 40]]
}
}
}]'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json
{
"ingested":1,
"failed":0,
"failures":[],
"processedBytes":432,
"blocksCreated":0,
"walLength":4
}
```
## Send data in NDJSON format
To send data to Axiom in NDJSON format:
1. Encode the events as JSON objects.
2. Enter each JSON object in a separate line into the body of the API request.
3. Optional: In the body of the request, set optional parameters such as `timestamp-field` and `timestamp-format`. For more information, see the [ingest data API reference](/restapi/endpoints/ingestIntoDataset).
4. Set the `Content-Type` header to either `application/json` or `application/x-ndjson`.
5. Set the `Authorization` header to `Bearer API_TOKEN`. Replace `API_TOKEN` with the Axiom API token you have generated.
6. Send the POST request to `https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest`. Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example request**
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/x-ndjson' \
-d '{"id":1,"name":"machala"}
{"id":2,"name":"axiom"}
{"id":3,"name":"apl"}
{"index": {"_index": "products"}}
{"timestamp": "2016-06-06T12:00:00+02:00", "attributes": {"key1": "value1","key2": "value2"}}
{"queryString": "count()"}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json
{
"ingested": 6,
"failed": 0,
"failures": [],
"processedBytes": 266,
"blocksCreated": 0,
"walLength": 6
}
```
## Send data in CSV format
To send data to Axiom in CSV format:
1. Encode the events in CSV format. The first line specifies the field names separated by commas. Subsequent new lines specify the values separated by commas.
2. Enter the CSV representation in the body of the API request.
3. Optional: In the body of the request, set optional parameters such as `timestamp-field` and `timestamp-format`. For more information, see the [ingest data API reference](/restapi/endpoints/ingestIntoDataset).
4. Set the `Content-Type` header to `text/csv`.
5. Set the `Authorization` header to `Bearer API_TOKEN`. Replace `API_TOKEN` with the Axiom API token you have generated.
6. Send the POST request to `https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest`. Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
**Example request**
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: text/csv' \
-d 'user, name
foo, bar'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json
{
"ingested": 1,
"failed": 0,
"failures": [],
"processedBytes": 28,
"blocksCreated": 0,
"walLength": 2
}
```
## Send data with Axiom Node.js
1. [Install and configure](/guides/javascript#use-axiomhq-js) the Axiom Node.js library.
2. Encode the events as JSON objects.
3. Pass the dataset name and the array of JSON objects to the `axiom.ingest` function.
```ts
axiom.ingest('DATASET_NAME', [{ foo: 'bar' }]);
await axiom.flush();
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
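The `axiom` client above comes from the library setup. As a minimal sketch, the setup might look like the following, assuming the `@axiomhq/js` package and an `AXIOM_TOKEN` environment variable:
```ts
// Minimal client setup sketch, assuming the @axiomhq/js package and an
// AXIOM_TOKEN environment variable. See the linked guide for full setup details.
import { Axiom } from '@axiomhq/js';

const axiom = new Axiom({ token: process.env.AXIOM_TOKEN! });
```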
For more information on other libraries you can use to send data, see [Send data](/send-data/methods).
## What’s next
After ingesting data to Axiom, you can [query it via API](/restapi/query) or the [Axiom app UI](/query-data/explore).
# Get started with Axiom API
Source: https://axiom.co/docs/restapi/introduction
This section explains how to send data to Axiom, query data, and manage resources using the Axiom API.
You can use the Axiom API (Application Programming Interface) to send data to Axiom, query data, and manage resources programmatically. This page covers the basics for interacting with the Axiom API.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
## API basics
The Axiom API follows the REST architectural style and uses JSON for serialization. You can send API requests to Axiom with curl or API tools such as [Postman](https://www.postman.com/).
For example, the following curl command ingests data to an Axiom dataset:
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[
{
"axiom": "logs"
}
]'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more information, see [Send data to Axiom via API](/restapi/ingest) and [Ingest data endpoint](/restapi/endpoints/ingestIntoDataset).
## Regions
All examples in the Axiom API reference use the base domain `https://api.axiom.co`, which is the default for the US region. If your organization uses the EU region, change the base domain in the examples to `https://api.eu.axiom.co`.
For more information on regions, see [Regions](/reference/regions).
## Content type
Encode the body of API requests as JSON objects and set the `Content-Type` header to `application/json`. Unless otherwise specified, Axiom encodes all responses (including errors) as JSON objects.
## Authentication
To prove that API requests come from you, you must include forms of authentication called tokens in your API requests. Axiom offers two types of tokens:
* [API tokens](/reference/tokens#api-tokens) let you control the actions that can be performed with the token. For example, you can specify that requests authenticated with a certain API token can only query data from a particular dataset.
* [Personal access tokens (PATs)](/reference/tokens#personal-access-tokens-pat) provide full control over your Axiom account. Requests authenticated with a PAT can perform every action you can perform in Axiom. When possible, use API tokens instead of PATs.
If you use an API token for authentication, include the API token in the `Authorization` header.
```bash
Authorization: Bearer API_TOKEN
```
If you use a PAT for authentication, include the PAT in the `Authorization` header and the org ID in the `x-axiom-org-id` header. For more information, see [Determine org ID](/reference/tokens#determine-org-id).
```bash
Authorization: Bearer API_TOKEN
x-axiom-org-id: ORG_ID
```
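For example, here is a minimal TypeScript sketch of a request authenticated with a PAT, assuming Node.js 18 or later (global `fetch`) and placeholder `PAT`, `ORG_ID`, and `AXIOM_DOMAIN` environment variables:
```ts
// Hypothetical sketch: a request authenticated with a personal access token (PAT).
// Assumes Node.js 18+ (global fetch); PAT, ORG_ID, and AXIOM_DOMAIN are placeholder
// environment variables. The request lists datasets, but the headers are the same
// for any endpoint.
const res = await fetch(`https://${process.env.AXIOM_DOMAIN}/v2/datasets`, {
  headers: {
    Authorization: `Bearer ${process.env.PAT}`,
    'x-axiom-org-id': process.env.ORG_ID ?? '',
  },
});
console.log(res.status, await res.json());
```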
If authentication is unsuccessful for a request, Axiom returns the error status code `403`.
## Data types
Below is a list of the types of data used within the Axiom API:
| Name | Definition | Example |
| ----------- | ----------------------------------------------------------------- | ----------------------- |
| **ID** | A unique value used to identify resources. | "io12h34io1h24i" |
| **String** | A sequence of characters used to represent text. | "string value" |
| **Boolean** | A type of two possible values representing true or false. | true |
| **Integer** | A number without decimals. | 4567 |
| **Float** | A number with decimals. | 15.67 |
| **Map** | A data structure with a list of values assigned to a unique key. | \{ "key": "value" } |
| **List** | A data structure with only a list of values separated by a comma. | \["value", 4567, 45.67] |
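For illustration only, here is how these types might appear together in a single hypothetical event object, using the example values from the table above; the field names are placeholders.
```ts
// Illustrative only: the data types above combined in one hypothetical event object,
// using the example values from the table. Field names are placeholders.
const event = {
  id: 'io12h34io1h24i',           // ID
  message: 'string value',        // String
  is_ok: true,                    // Boolean
  count: 4567,                    // Integer
  duration_ms: 15.67,             // Float
  labels: { key: 'value' },       // Map
  values: ['value', 4567, 45.67], // List
};
```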
## What's next
* [Ingest data via API](/restapi/ingest)
* [Query data via API](/restapi/query)
# Pagination in Axiom API
Source: https://axiom.co/docs/restapi/pagination
Learn how to use pagination with the Axiom API.
Pagination allows you to retrieve responses in manageable chunks.
You can use pagination for the following endpoints:
* [Run Query](/restapi/endpoints/queryApl)
* [Run Query (Legacy)](/restapi/endpoints/queryDataset)
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Pagination mechanisms
You can use one of the following pagination mechanisms:
* [Pagination based on timestamp](#timestamp-based-pagination) (stable)
* [Pagination based on cursor](#cursor-based-pagination) (public preview)
Axiom recommends timestamp-based pagination. Cursor-based pagination is in public preview and may return unexpected query results.
## Timestamp-based pagination
The parameters and mechanisms differ between the current and legacy endpoints.
### Run Query
To use timestamp-based pagination with the Run Query endpoint:
* Include the [limit operator](/apl/tabular-operators/limit-operator) in the APL query of your API request. The argument of this operator determines the number of events to display per page.
* Use `sort by _time asc` or `sort by _time desc` in the APL query. This returns the results in ascending or descending chronological order. For more information, see [sort operator](/apl/tabular-operators/sort-operator).
* Specify `startTime` and `endTime` in the body of your API request.
### Run Query (Legacy)
To use timestamp-based pagination with the legacy Run Query endpoint:
* Add the `limit` parameter to the body of your API request. The value of this parameter determines the number of events to display per page.
* Add the `order` parameter to the body of your API request. In the value of this parameter, order the results by time in either ascending or descending chronological order. For example, `[{ "field": "_time", "desc": true }]`. For more information, see [order operator](/apl/tabular-operators/order-operator).
* Specify `startTime` and `endTime` in the body of your API request.
## Page through the result set
Use the timestamps as boundaries to page through the result set.
### Queries with descending order
To go to the next page of the result set for queries with descending order (`_time desc`):
1. Determine the timestamp of the last item on the current page. This is the least recent event.
2. Optional: Subtract 1 nanosecond from the timestamp.
3. In your next request, change the value of the `endTime` parameter in the body of your API request to the timestamp of the last item (optionally, minus 1 nanosecond).
Repeat this process until the result set is empty.
### Queries with ascending order
To go to the next page of the result set for queries with ascending order (`_time asc`):
1. Determine the timestamp of the last item on the current page. This is the most recent event.
2. Optional: Add 1 nanosecond to the timestamp.
3. In your next request, change the value of the `startTime` parameter in the body of your API request to the timestamp of the last item (optionally, plus 1 nanosecond).
Repeat this process until the result set is empty.
### Deduplication mechanism
In the procedures above, the steps about adjusting the timestamp by 1 nanosecond are optional. If you adjust the timestamp, you risk skipping events that share the boundary timestamp. If you don’t adjust the timestamp, the pages overlap and you risk receiving the boundary events again. Duplicated data is also possible for other reasons, such as backfill or natural duplication from external data sources. For these reasons, regardless of the method you choose (whether or not you adjust the timestamp, and whether you sort in descending or ascending order), Axiom recommends you implement some form of deduplication mechanism in your pagination script.
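The following TypeScript sketch ties these steps together for the Run Query endpoint with descending order. It’s a minimal illustration, not a complete implementation: it assumes Node.js 18 or later (global `fetch`) and the `AXIOM_DOMAIN`, `API_TOKEN`, and `DATASET_NAME` environment variables, and it skips the optional 1-nanosecond adjustment, so deduplicate the returned rows downstream.
```ts
// Minimal sketch of timestamp-based pagination (descending order) against the
// Run Query endpoint. Assumes Node.js 18+ and the AXIOM_DOMAIN, API_TOKEN, and
// DATASET_NAME environment variables; deduplicate the returned rows downstream.
const domain = process.env.AXIOM_DOMAIN;
const token = process.env.API_TOKEN;
const dataset = process.env.DATASET_NAME;

async function runQuery(startTime: string, endTime: string): Promise<any> {
  const res = await fetch(`https://${domain}/v1/datasets/_apl?format=tabular`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      apl: `${dataset} | sort by _time desc | limit 100`,
      startTime,
      endTime,
    }),
  });
  return res.json();
}

async function pageThrough(): Promise<void> {
  const startTime = '2024-11-30T00:00:00.000Z';
  let endTime = '2024-11-30T23:59:59.999Z';

  while (true) {
    const page = await runQuery(startTime, endTime);
    const table = page.tables?.[0];
    // In the tabular format, columns are parallel to fields; find the _time column.
    const timeIndex = table?.fields.findIndex((f: { name: string }) => f.name === '_time');
    const times: string[] = table?.columns?.[timeIndex] ?? [];
    if (times.length === 0) break; // empty result set: no more pages

    // ...process the rows of this page here, deduplicating on a unique field...

    // The last item is the least recent event; its timestamp becomes the next endTime.
    // Optionally subtract 1 nanosecond here (see the steps above).
    endTime = times[times.length - 1];
  }
}

pageThrough();
```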
### Limits
Both the Run Query and the Run Query (Legacy) endpoints allow request-based limit configuration. This means that the limit they use is the lowest of the following: the query limit, the request limit, and Axiom’s server-side internal limit. Without a query or request limit, Axiom currently defaults to a limit of 1,000 events per page. To paginate result sets larger than 1,000 events, Axiom recommends specifying the same limit in both the request and the APL query to avoid the default value and contradictory limits.
### Examples
#### Example request Run Query
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"apl": "DATASET_NAME | sort by _time desc | limit 100",
"startTime": "2024-11-30T00:00:00.000Z",
"endTime": "2024-11-30T23:59:59.999Z"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
#### Example request Run Query (Legacy)
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/query' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"startTime": "2024-11-30T00:00:00.000Z",
"endTime": "2024-11-30T23:59:59.999Z",
"limit": 100,
"order": [{ "field": "_time", "desc": true }]
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
#### Example request to page through the result set
Example request to go to next page for Run Query:
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"apl": "DATASET_NAME | sort by _time desc | limit 100",
"startTime": "2024-11-30T00:00:00.000Z",
"endTime": "2024-11-30T22:59:59.999Z"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Example request to go to next page for Run Query (Legacy):
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/query' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"startTime": "2024-11-30T00:00:00.000Z",
"endTime": "2024-11-30T22:59:59.999Z",
"limit": 100,
"order": [{ "field": "_time", "desc": true }]
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Cursor-based pagination
Cursor-based pagination is in public preview and may return unexpected query results. Axiom recommends timestamp-based pagination.
The parameters and mechanisms differ between the current and legacy endpoints.
### Run Query
To use cursor-based pagination with the Run Query endpoint:
* Include the [`limit` operator](/apl/tabular-operators/limit-operator) in the APL query of your API request. The argument of this operator determines the number of events to display per page.
* Use `sort by _time asc` or `sort by _time desc` in the APL query. This returns the results in ascending or descending chronological order. For more information, see [sort operator](/apl/tabular-operators/sort-operator).
* Specify `startTime` and `endTime` in the body of your API request.
### Run Query (Legacy)
To use cursor-based pagination with the legacy Run Query endpoint:
* Add the `limit` parameter to the body of your API request. The value of this parameter determines the number of events to display per page.
* Add the `order` parameter to the body of your API request. In the value of this parameter, order the results by time in either ascending or descending chronological order. For example, `[{ "field": "_time", "desc": true }]`. For more information, see [order operator](/apl/tabular-operators/order-operator).
* Specify `startTime` and `endTime` in the body of your API request.
### Response format
The response contains the following fields relevant to pagination:
* `status`: Contains metadata about the response, including pagination information.
* `status.minCursor`: Cursor for the first item in the current page.
* `status.maxCursor`: Cursor for the last item in the current page.
* `status.rowsMatched`: Total number of rows matching the query.
* `matches`: Contains the list of returned objects.
## Page through the result set
To page through the result set, add the `cursor` parameter to the body of your API request. This parameter is optional. Use the cursor string returned in a previous response to fetch the next or previous page of results.
The `minCursor` and `maxCursor` fields in the response are boundaries that help you page through the result set.
For queries with descending order (`_time desc`), use `minCursor` from the response as the `cursor` in your next request to go to the next page. You reach the end when your provided `cursor` matches the `minCursor` in the response.
For queries with ascending order (`_time asc`), use `maxCursor` from the response as the `cursor` in your next request to go to the next page. You reach the end when your provided `cursor` matches the `maxCursor` in the response.
If the query returns fewer results than the specified limit, paging can stop.
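As a sketch, the following TypeScript loop pages through a descending-order result set using `minCursor`, assuming Node.js 18 or later (global `fetch`) and the `AXIOM_DOMAIN`, `API_TOKEN`, and `DATASET_NAME` environment variables:
```ts
// Minimal sketch of cursor-based pagination (descending order) against the
// Run Query endpoint. Assumes Node.js 18+ and the AXIOM_DOMAIN, API_TOKEN, and
// DATASET_NAME environment variables.
const domain = process.env.AXIOM_DOMAIN;
const token = process.env.API_TOKEN;
const dataset = process.env.DATASET_NAME;

async function runQuery(cursor?: string): Promise<any> {
  const res = await fetch(`https://${domain}/v1/datasets/_apl?format=tabular`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      apl: `${dataset} | sort by _time desc | limit 100`,
      startTime: '2024-01-01T00:00:00.000Z',
      endTime: '2024-01-31T23:59:59.999Z',
      ...(cursor ? { cursor } : {}),
    }),
  });
  return res.json();
}

async function pageThrough(): Promise<void> {
  let cursor: string | undefined;

  while (true) {
    const page = await runQuery(cursor);

    // ...process the rows of this page here...

    // For descending order, minCursor points at the next page. Stop when the
    // cursor you sent comes back unchanged as minCursor.
    const minCursor: string | undefined = page.status?.minCursor;
    if (!minCursor || minCursor === cursor) break;
    cursor = minCursor;
  }
}

pageThrough();
```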
### Examples
#### Example request Run Query
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"apl": "DATASET_NAME | sort by _time desc | limit 100",
"startTime": "2024-01-01T00:00:00.000Z",
"endTime": "2024-01-31T23:59:59.999Z"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
#### Example request Run Query (Legacy)
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/query' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"startTime": "2024-01-01T00:00:00.000Z",
"endTime": "2024-01-31T23:59:59.999Z",
"limit": 100,
"order": [{ "field": "_time", "desc": true }]
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
#### Example response
```json
{
"status": {
"rowsMatched": 2500,
"minCursor": "0d3wo7v7e1oii-075a8c41710018b9-0000ecc5",
"maxCursor": "0d3wo7v7e1oii-075a8c41710018b9-0000faa3"
},
"matches": [
// ... events ...
]
}
```
#### Example request to page through the result set
To page through the result set, use the appropriate cursor value in your next request. For more information, see [Page through the result set](#page-through-the-result-set).
Example request to go to next page for Run Query:
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"apl": "DATASET_NAME | sort by _time desc | limit 100",
"startTime": "2024-01-01T00:00:00.000Z",
"endTime": "2024-01-31T23:59:59.999Z",
"cursor": "0d3wo7v7e1oii-075a8c41710018b9-0000ecc5"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Example request to go to next page for Run Query (Legacy):
```bash
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/query' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '{
"startTime": "2024-01-01T00:00:00.000Z",
"endTime": "2024-01-31T23:59:59.999Z",
"limit": 100,
"order": [{ "field": "_time", "desc": true }],
"cursor": "0d3wo7v7e1oii-075a8c41710018b9-0000ecc5"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
# Query data via Axiom API
Source: https://axiom.co/docs/restapi/query
Learn how to use the Axiom API to query data.
This page explains how to query data via the Axiom API using the following:
* [cURL](#query-data-with-curl)
* [Axiom Node.js library](#query-data-with-axiom-nodejs)
For an introduction to the basics of the Axiom API and to the authentication options, see [Introduction to Axiom API](/restapi/introduction).
The API requests on this page use the query data endpoint. For more information, see the [API reference](/restapi/endpoints/queryApl).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Query data with cURL
To query data with cURL:
1. Build the APL query. For more information, see [Introduction to APL](/apl/introduction).
2. Encode the APL query as a JSON object and enter it into the body of the API request.
3. Optional: In the body of the request, set optional parameters such as `startTime` and `endTime`. For more information, see the [query data API reference](/restapi/endpoints/queryApl).
4. Set the `Content-Type` header to `application/json`.
5. Set the `Authorization` header to `Bearer API_TOKEN`.
6. Send the POST request to one of the following:
* For tabular output, use `https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular`.
* For legacy output, use `https://AXIOM_DOMAIN/v1/datasets/_apl?format=legacy`.
### Example
```bash
curl --request POST \
--url 'https://AXIOM_DOMAIN/v1/datasets/_apl?format=tabular' \
--header 'Authorization: Bearer API_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"apl": "DATASET_NAME | limit 10",
"startTime": "string",
"endTime": "string"
}'
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
**Example response**
```json [expandable]
{
"format": "tabular",
"status": {
"elapsedTime": 260650,
"minCursor": "0d8q6stroluyo-07c3957e7400015c-0000c875",
"maxCursor": "0d8q6stroluyo-07c3957e7400015c-0000c877",
"blocksExamined": 4,
"blocksCached": 0,
"blocksMatched": 0,
"rowsExamined": 197604,
"rowsMatched": 197604,
"numGroups": 0,
"isPartial": false,
"cacheStatus": 1,
"minBlockTime": "2025-03-26T12:03:14Z",
"maxBlockTime": "2025-03-26T12:12:42Z"
},
"tables": [
{
"name": "0",
"sources": [
{
"name": "DATASET_NAME"
}
],
"fields": [
{
"name": "_sysTime",
"type": "datetime"
},
{
"name": "_time",
"type": "datetime"
},
{
"name": "content_type",
"type": "string"
},
{
"name": "geo.city",
"type": "string"
},
{
"name": "geo.country",
"type": "string"
},
{
"name": "id",
"type": "string"
},
{
"name": "is_tls",
"type": "boolean"
},
{
"name": "message",
"type": "string"
},
{
"name": "method",
"type": "string"
},
{
"name": "req_duration_ms",
"type": "float"
},
{
"name": "resp_body_size_bytes",
"type": "integer"
},
{
"name": "resp_header_size_bytes",
"type": "integer"
},
{
"name": "server_datacenter",
"type": "string"
},
{
"name": "status",
"type": "string"
},
{
"name": "uri",
"type": "string"
},
{
"name": "user_agent",
"type": "string"
},
{
"name": "is_ok_2 ",
"type": "boolean"
},
{
"name": "city_str_len",
"type": "integer"
}
],
"order": [
{
"field": "_time",
"desc": true
}
],
"groups": [],
"range": {
"field": "_time",
"start": "1970-01-01T00:00:00Z",
"end": "2025-03-26T12:12:43Z"
},
"columns": [
[
"2025-03-26T12:12:42.68112905Z",
"2025-03-26T12:12:42.68112905Z",
"2025-03-26T12:12:42.68112905Z"
],
[
"2025-03-26T12:12:42Z",
"2025-03-26T12:12:42Z",
"2025-03-26T12:12:42Z"
],
[
"text/html",
"text/plain-charset=utf-8",
"image/jpeg"
],
[
"Ojinaga",
"Humboldt",
"Nevers"
],
[
"Mexico",
"United States",
"France"
],
[
"8af366cf-6f25-42e6-bbb4-d860ab535a60",
"032e7f68-b0ab-47c0-a24a-35af566359e5",
"4d2c7baa-ff28-4b1f-9db9-8e6c0ed5a9c9"
],
[
false,
false,
true
],
[
"QCD permutations were not solvable in linear time, expected compressed time",
"QCD permutations were not solvable in linear time, expected compressed time",
"Expected a new layer of particle physics but got a Higgs Boson"
],
[
"GET",
"GET",
"GET"
],
[
1.396373193863436,
0.16252390534308514,
0.4093416175186162
],
[
3448,
2533,
1906
],
[
84,
31,
29
],
[
"DCA",
"GRU",
"FRA"
],
[
"201",
"200",
"200"
],
[
"/api/v1/buy/commit/id/go",
"/api/v1/textdata/cnfigs",
"/api/v1/bank/warn"
],
[
"Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"
],
[
true,
true,
true
],
[
7,
8,
6
]
]
}
],
"datasetNames": [
"DATASET_NAME"
],
"fieldsMetaMap": {
"DATASET_NAME": [
{
"name": "status",
"type": "",
"unit": "",
"hidden": false,
"description": "HTTP status code"
},
{
"name": "resp_header_size_bytes",
"type": "integer",
"unit": "none",
"hidden": false,
"description": ""
},
{
"name": "geo.city",
"type": "string",
"unit": "",
"hidden": false,
"description": "the city"
},
{
"name": "resp_body_size_bytes",
"type": "integer",
"unit": "decbytes",
"hidden": false,
"description": ""
},
{
"name": "content_type",
"type": "string",
"unit": "",
"hidden": false,
"description": ""
},
{
"name": "geo.country",
"type": "string",
"unit": "",
"hidden": false,
"description": ""
},
{
"name": "req_duration_ms",
"type": "float",
"unit": "ms",
"hidden": false,
"description": "Request duration"
}
]
}
}
```
## Query data with Axiom Node.js
1. [Install and configure](/guides/javascript#use-axiomhq-js) the Axiom Node.js library.
2. Build the APL query. For more information, see [Introduction to APL](/apl/introduction).
3. Pass the APL query as a string to the `axiom.query` function.
```ts
const res = await axiom.query(`['DATASET_NAME'] | where foo == 'bar' | limit 100`);
console.log(res);
```
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples).
For more information on other libraries you can use to query data, see [Send data](/send-data/methods).
# Send data from Amazon Data Firehose to Axiom
Source: https://axiom.co/docs/send-data/aws-firehose
This page explains how to send data from Amazon Data Firehose to Axiom.
Amazon Data Firehose is a service for delivering real-time streaming data to different destinations. Send event data from Amazon Data Firehose to Axiom to analyze and monitor your data efficiently.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- [Create an account on AWS Cloud](https://signin.aws.amazon.com/signup?request_type=register).
## Setup
1. In Axiom, determine the ID of the dataset you’ve created.
2. In Amazon Data Firehose, create an HTTP endpoint destination. For more information, see the [Amazon Data Firehose documentation](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html#create-destination-http).
3. Set the HTTP endpoint URL to `https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest/firehose`.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
4. Set the access key to the Axiom API token.
You have configured Amazon Data Firehose to send data to Axiom. Go to the Axiom UI and ensure your dataset receives events properly.
# Send data from AWS FireLens to Axiom
Source: https://axiom.co/docs/send-data/aws-firelens
Leverage AWS FireLens to forward logs from Amazon ECS tasks to Axiom for efficient, real-time analysis and insights.
AWS FireLens is a log routing feature for Amazon ECS. It lets you use popular open-source logging projects [Fluent Bit](https://fluentbit.io/) or [Fluentd](https://www.fluentd.org/) with Amazon ECS to route your logs to various AWS and partner monitoring solutions like Axiom without installing third-party agents on your tasks.
FireLens integrates seamlessly with your Amazon ECS tasks and services, letting you send logs from your containers directly to Axiom.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Use AWS FireLens with Fluent Bit and Axiom
Here’s a basic configuration for using FireLens with Fluent Bit to forward logs to Axiom:
## Fluent Bit configuration for Axiom
You'll typically define this in a file called `fluent-bit.conf`:
```ini
[SERVICE]
    Log_Level info

[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[OUTPUT]
    Name             http
    Match            *
    Host             AXIOM_DOMAIN
    Port             443
    URI              /v1/datasets/DATASET_NAME/ingest
    Format           json_lines
    Json_date_key    _time
    Json_date_format iso8601
    TLS              On
    Header           Authorization Bearer API_TOKEN
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more information, see [Fluent Bit configuration](/send-data/fluent-bit).
## ECS task definition with FireLens
You'll want to include this within your ECS task definition, and reference the FireLens configuration type and options:
```json
{
"family": "myTaskDefinition",
"containerDefinitions": [
{
"name": "log_router",
"image": "amazon/aws-for-fluent-bit:latest",
"essential": true,
"firelensConfiguration": {
"type": "fluentbit",
"options": {
"config-file-type": "file",
"config-file-value": "/fluent-bit/etc/fluent-bit.conf"
}
}
},
{
"name": "myApp",
"image": "my-app-image",
"logConfiguration": {
"logDriver": "awsfirelens"
}
}
]
}
```
## Use AWS FireLens with Fluentd and Axiom
Create the `fluentd.conf` file and add your configuration:
```xml
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type http
  endpoint https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest
  headers {"Authorization": "Bearer API_TOKEN"}
  data_type json
  sourcetype ecs
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more information, see [Fluentd configuration](/send-data/fluentd).
## ECS task definition for Fluentd
The task definition would be similar to the Fluent Bit example, but using Fluentd and its configuration:
```json
{
"family": "fluentdTaskDefinition",
"containerDefinitions": [
{
"name": "log_router",
"image": "YOUR_ECR_REPO_URI:latest",
"essential": true,
"memory": 512,
"cpu": 256,
"firelensConfiguration": {
"type": "fluentd",
"options": {
"config-file-type": "file",
"config-file-value": "/path/to/your/fluentd.conf"
}
}
},
{
"name": "myApp",
"image": "my-app-image",
"essential": true,
"memory": 512,
"cpu": 256,
"logConfiguration": {
"logDriver": "awsfirelens",
"options": {
"Name": "forward",
"Host": "log_router",
"Port": "24224"
}
}
}
]
}
```
By efficiently routing logs with FireLens and analyzing them with Axiom, businesses and development teams can save on operational overheads and reduce time spent on troubleshooting.
# Send data from AWS IoT to Axiom
Source: https://axiom.co/docs/send-data/aws-iot-rules
This page explains how to route device log data from AWS IoT Core to Axiom using AWS IoT rules and Lambda functions.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- Create an AWS account with permissions to create and manage IoT rules, Lambda functions, and IAM roles.
## Create AWS Lambda function
Create a Lambda function with Python runtime and the following content. For more information, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html#getting-started-create-function). The Lambda function acts as an intermediary to process data from AWS IoT and send it to Axiom.
```python
import os        # Import the os module to access environment variables
import json      # Import the json module to handle JSON data
import requests  # Import the requests module to make HTTP requests

def lambda_handler(event, context):
    # Retrieve the dataset name and the Axiom domain from the environment variables
    dataset_name = os.environ['DATASET_NAME']
    axiom_domain = os.environ['AXIOM_DOMAIN']

    # Construct the Axiom API URL using the dataset name
    axiom_api_url = f"https://{axiom_domain}/v1/datasets/{dataset_name}/ingest"

    # Retrieve the Axiom API token from the environment variable
    api_token = os.environ['API_TOKEN']

    # Define the headers for the HTTP request to Axiom
    headers = {
        "Authorization": f"Bearer {api_token}",  # Set the Authorization header with the token
        "Content-Type": "application/json",      # Specify the content type as JSON
        "X-Axiom-Dataset": dataset_name          # Include the dataset name in the headers
    }

    # Create the payload for the HTTP request
    payload = {
        "tags": {"source": "aws-iot"},  # Add a tag to indicate the source of the data
        "events": [{"timestamp": event['timestamp'], "attributes": event}]  # Include the event data
    }

    # Send a POST request to the Axiom API with the headers and payload
    response = requests.post(axiom_api_url, headers=headers, data=json.dumps(payload))

    # Return the status code and a confirmation message
    return {
        'statusCode': response.status_code,       # Return the HTTP status code from the Axiom API response
        'body': json.dumps('Log sent to Axiom!')  # Return a confirmation message as JSON
    }
```
In the environment variables section of the Lambda function configuration, add the following environment variables:
* `DATASET_NAME` is the name of the Axiom dataset where you want to send data.
* `AXIOM_DOMAIN` is the Axiom domain that your organization uses. For more information, see [Regions](/reference/regions).
* `API_TOKEN` is the Axiom API token you have generated. For added security, store the API token in an environment variable.
This example uses Python for the Lambda function. To use another language, change the code above accordingly.
## Create AWS IoT rule
Create an IoT rule with an SQL statement similar to the example below that matches the MQTT messages. For more information, see the [AWS documentation](https://docs.aws.amazon.com/iot/latest/developerguide/iot-create-rule.html).
```sql
SELECT * FROM 'iot/topic'
```
In **Rule actions**, select the action to send a message to a Lambda function, and then choose the Lambda function you created earlier.
## Check logs in Axiom
Use the AWS IoT Console, AWS CLI, or an MQTT client to publish messages to the topic that matches your rule. For example, `iot/topic`.
In Axiom, go to the Stream tab and select the dataset you specified in the Lambda function. You now see your logs from your IoT devices in Axiom.
# Send data from AWS Lambda to Axiom
Source: https://axiom.co/docs/send-data/aws-lambda
This page explains how to send Lambda function logs and platform events to Axiom.
Use the Axiom Lambda Extension to send logs and platform events of your Lambda function to Axiom.
Alternatively, you can use the AWS Distro for OpenTelemetry to send Lambda function logs and platform events to Axiom. For more information, see [AWS Lambda Using OTel](/send-data/aws-lambda-dot).
Axiom detects the extension and provides you with quick filters and a dashboard. For more information on how this enriches your Axiom organization, see [AWS Lambda app](/apps/lambda).
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
The Axiom Lambda Extension is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-lambda-extension).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- [Create an account on AWS Cloud](https://signin.aws.amazon.com/signup?request_type=register).
## Setup
1. [Install the Axiom Lambda extension](#installation).
2. Ensure everything works properly in Axiom.
3. [Turn off the permissions for Amazon CloudWatch](#turn-off-cloudwatch-logging).
The last step is important because after you install the Axiom Lambda extension, the Lambda service still sends logs to Amazon CloudWatch Logs. You need to manually turn off Amazon CloudWatch logging.
## Installation
To install the Axiom Lambda Extension, choose one of the following methods:
* [AWS CLI](#install-with-aws-cli)
* [Terraform](#install-with-terraform)
* [AWS Lambda function UI](#install-with-aws-lambda-function-ui)
### Install with AWS CLI
Add the extension as a layer with the AWS CLI:
```bash
aws lambda update-function-configuration --function-name my-function \
--layers arn:aws:lambda:AWS_REGION:694952825951:layer:axiom-extension-ARCH:VERSION
```
* Replace `AWS_REGION` with the AWS Region to send the request to. For example, `us-west-1`.
* Replace `ARCH` with the system architecture type. For example, `arm64`.
* Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-lambda-extension/releases) page. For example, `11`.
Add the Axiom dataset name and API token to the list of environment variables. For more information on setting environment variables, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html).
```bash
AXIOM_TOKEN: API_TOKEN
AXIOM_DATASET: DATASET_NAME
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
You have installed the Axiom Lambda Extension. Go to the Axiom UI and ensure your dataset receives events properly.
### Install with Terraform
Choose one of the following to install the Axiom Lambda Extension with Terraform:
* Use plain Terraform code
```tf
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.iam_for_lambda.arn
handler = "index.test"
runtime = "nodejs14.x"
ephemeral_storage {
size = 10240 # Min 512 MB and the Max 10240 MB
}
environment {
variables = {
AXIOM_TOKEN = "API_TOKEN"
AXIOM_DATASET = "DATASET_NAME"
}
}
layers = [
"arn:aws:lambda:AWS_REGION:694952825951:layer:axiom-extension-ARCH:VERSION"
]
}
```
Replace `AWS_REGION` with the AWS Region to send the request to. For example, `us-west-1`.
Replace `ARCH` with the system architecture type. For example, `arm64`.
Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-lambda-extension/releases) page. For example, `11`.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
* Use the [AWS Lambda Terraform module](https://registry.terraform.io/modules/terraform-aws-modules/lambda/aws/latest)
```tf
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
layers = [
"arn:aws:lambda:AWS_REGION:694952825951:layer:axiom-extension-ARCH:VERSION"
]
environment_variables = {
AXIOM_TOKEN = "API_TOKEN"
AXIOM_DATASET = "DATASET_NAME"
}
}
```
Replace `AWS_REGION` with the AWS Region to send the request to. For example, `us-west-1`.
Replace `ARCH` with the system architecture type. For example, `arm64`.
Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-lambda-extension/releases) page. For example, `11`.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
You have installed the Axiom Lambda Extension. Go to the Axiom UI and ensure your dataset receives events properly.
### Install with AWS Lambda function UI
Add a new layer to your Lambda function with the following ARN (Amazon Resource Name). For more information on adding layers to your function, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/adding-layers.html).
```bash
arn:aws:lambda:AWS_REGION:694952825951:layer:axiom-extension-ARCH:VERSION
```
Replace `AWS_REGION` with the AWS Region to send the request to. For example, `us-west-1`.
Replace `ARCH` with the system architecture type. For example, `arm64`.
Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-lambda-extension/releases) page. For example, `11`.
Add the Axiom dataset name and API token to the list of environment variables. For more information on setting environment variables, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html).
```bash
AXIOM_TOKEN: API_TOKEN
AXIOM_DATASET: DATASET_NAME
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
You have installed the Axiom Lambda Extension. Go to the Axiom UI and ensure your dataset receives events properly.
## Turn off Amazon CloudWatch logging
After you install the Axiom Lambda extension, the Lambda service still sends logs to CloudWatch Logs. You need to manually turn off Amazon CloudWatch logging.
To turn off Amazon CloudWatch logging, deny the Lambda function access to Amazon CloudWatch by editing the permissions:
1. In the AWS Lambda function UI, go to **Configuration > Permissions**.
2. In the **Execution role** section, click the role related to Amazon CloudWatch Logs.
3. In the **Permissions** tab, select the role, and then click **Remove**.
### Requirements for log level fields
The Stream and Query tabs allow you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors. As a prerequisite, specify the log level in the data you send to Axiom. For OpenTelemetry logs, specify the log level in the following fields:
* `record.error`
* `record.level`
* `record.severity`
* `type`
## Troubleshooting
* Ensure the Axiom API token has permission to ingest data into the dataset.
* Check the function logs on the AWS console. The Axiom Lambda Extension logs any errors with setup or ingest.
For testing purposes, set the `PANIC_ON_API_ERR` environment variable to `true`. This means that the Axiom Lambda Extension crashes if it can’t connect to Axiom.
# Send data from AWS to Axiom using AWS Distro for OpenTelemetry
Source: https://axiom.co/docs/send-data/aws-lambda-dot
This page explains how to auto-instrument AWS Lambda functions and send telemetry data to Axiom using AWS Distro for OpenTelemetry.
This page explains how to auto-instrument and monitor applications running on AWS Lambda using the AWS Distro for OpenTelemetry (ADOT). ADOT is an OpenTelemetry collector layer managed by and optimized for AWS.
Alternatively, you can use the Axiom Lambda Extension to send Lambda function logs and platform events to Axiom. For more information, see [AWS Lambda](/send-data/aws-lambda).
Axiom detects the extension and provides you with quick filters and a dashboard. For more information on how this enriches your Axiom organization, see [AWS Lambda app](/apps/lambda).
## ADOT Lambda collector layer
[AWS Distro for OpenTelemetry Lambda](https://aws-otel.github.io/docs/getting-started/lambda) provides a plug-and-play user experience by automatically instrumenting a Lambda function. It packages OpenTelemetry together with an out-of-the-box configuration for AWS Lambda and OTLP in an easy-to-setup layer. You can turn on and off OpenTelemetry for your Lambda function without changing your code.
With the ADOT collector layer, you can send telemetry data to Axiom with a simple configuration.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Set up ADOT Lambda layer
This example creates a new Lambda function and applies the ADOT Lambda layer to it with the proper configuration.
You can deploy your Lambda function with the runtime of your choice. This example uses the Python 3.10 runtime.
Create a new Lambda function with the following content. For more information on creating Lambda functions, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html).
```python
import json

print('Loading function')

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    #raise Exception('Something went wrong')
```
Add a new ADOT Lambda layer to your function with the following ARN (Amazon Resource Name). For more information on adding layers to your function, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/adding-layers.html).
```bash
arn:aws:lambda:AWS_REGION:901920570463:layer:aws-otel-python-ARCH-VERSION
```
* Replace `AWS_REGION` with the AWS Region to send the request to. For example, `us-west-1`.
* Replace `ARCH` with the system architecture type. For example, `arm64`.
* Replace `VERSION` with the latest version number specified in the [AWS documentation](https://aws-otel.github.io/docs/getting-started/lambda/lambda-python). For example, `ver-1-25-0:1`.
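If you manage the function from the command line, a hedged sketch for attaching the layer with the AWS CLI follows. The function name is a placeholder, the ARN combines the example region, architecture, and version above, and `--layers` replaces any layers already attached to the function.
```bash
# MY_FUNCTION is a placeholder; the ARN uses the example values from above.
# --layers overwrites the function's existing layers, so list all layers it needs.
aws lambda update-function-configuration \
  --function-name MY_FUNCTION \
  --layers arn:aws:lambda:us-west-1:901920570463:layer:aws-otel-python-arm64-ver-1-25-0:1
```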
The configuration file is a YAML file that contains the configuration for the OpenTelemetry collector. Create the configuration file `/var/task/collector.yaml` with the following content. This tells the collector to receive telemetry data from the OTLP receiver and export it to Axiom.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    compression: gzip
    endpoint: https://AXIOM_DOMAIN
    headers:
      authorization: Bearer API_TOKEN
      x-axiom-dataset: DATASET_NAME

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Set the following environment variables. For more information on setting environment variables, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html).
```bash
AWS_LAMBDA_EXEC_WRAPPER: /opt/otel-instrument
OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/collector.yaml
```
* `AWS_LAMBDA_EXEC_WRAPPER` wraps the function handler with the OpenTelemetry Lambda wrapper. This layer enables the auto-instrumentation for your Lambda function by initializing the OpenTelemetry agent and handling the lifecycle of spans.
* `OPENTELEMETRY_COLLECTOR_CONFIG_FILE` specifies the location of the collector configuration file.
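If you prefer the AWS CLI over the console, a sketch like the following sets both variables. The function name is a placeholder, and `--environment` replaces the function’s existing environment variables.
```bash
# MY_FUNCTION is a placeholder. --environment overwrites existing variables,
# so include any other variables your function already uses.
aws lambda update-function-configuration \
  --function-name MY_FUNCTION \
  --environment "Variables={AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument,OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/collector.yaml}"
```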
As the app runs, it sends traces to Axiom. To view the traces:
1. In Axiom, click the **Stream** tab.
2. Click your dataset.
# Send data from AWS to Axiom
Source: https://axiom.co/docs/send-data/aws-overview
This page explains how to send data from different AWS services to Axiom.
For most AWS services, the fastest and easiest way to send logs to Axiom is the [Axiom CloudWatch Forwarder](/send-data/cloudwatch). It’s subscribed to one or more of your CloudWatch Log Groups and runs as a Lambda function. To determine which AWS service sends logs to Amazon CloudWatch and/or Amazon S3, see the [AWS Documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html).
## Choose the best method to send data
To choose the best method to send data from AWS services to Axiom, consider that Amazon CloudWatch Logs captures three main types of logs:
* **Service logs**: More than 30 AWS services, including Amazon API Gateway, AWS Lambda, and AWS CloudTrail, can send service logs to CloudWatch.
* **Vended logs**: Automatically published by certain AWS services like Amazon VPC and Amazon Route 53.
* **Custom logs**: Logs from your own applications, on-premise resources, and other clouds.
You can only send vended logs to Axiom through Amazon CloudWatch. Use the [Axiom CloudWatch Forwarder](/send-data/cloudwatch) to send vended logs from Amazon CloudWatch to Axiom for richer insights. After sending vended logs to Axiom, shorten the retention period for these logs in Amazon CloudWatch to cut costs even more.
For service logs and custom logs, you can skip Amazon CloudWatch altogether and send them to Axiom using open-source collectors like [Fluent Bit](/send-data/fluent-bit), [Fluentd](/send-data/fluentd) and [Vector](/send-data/vector). Completely bypassing Amazon CloudWatch results in significant cost savings.
## Amazon services exclusively supported by Axiom CloudWatch Forwarder
To send data from the following Amazon services to Axiom, use the [Axiom CloudWatch Forwarder](/send-data/cloudwatch).
* Amazon API Gateway
* Amazon Aurora MySQL
* Amazon Chime
* Amazon CloudWatch
* Amazon CodeWhisperer
* Amazon Cognito
* Amazon Connect
* AWS AppSync
* AWS Elastic Beanstalk
* AWS CloudHSM
* AWS CloudTrail
* AWS CodeBuild
* AWS DataSync
* AWS Elemental MediaTailor
* AWS Fargate
* AWS Glue
To send evaluation event logs from Amazon CloudWatch to Axiom, you can also use [Amazon Data Firehose](/send-data/aws-firehose).
## Amazon services supported by other methods
The table below summarizes the methods you can use to send data from the other supported Amazon services to Axiom.
| Supported Amazon service | Supported methods to send data to Axiom |
| ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| Amazon Bedrock | [Axiom CloudWatch Forwarder](/send-data/cloudwatch) [AWS S3 Forwarder](/send-data/aws-s3) [Amazon Data Firehose](/send-data/aws-firehose) |
| Amazon CloudFront | [AWS S3 Forwarder](/send-data/aws-s3) |
| Amazon Data Firehose | [Amazon Data Firehose](/send-data/aws-firehose) |
| Amazon Elastic Container Service | [Fluent Bit](/send-data/fluent-bit) |
| Amazon Elastic Load Balancing (ELB) | [Fluent Bit](/send-data/fluent-bit) |
| Amazon ElastiCache (Redis OSS) | [Axiom CloudWatch Forwarder](/send-data/cloudwatch) [Amazon Data Firehose](/send-data/aws-firehose) |
| Amazon EventBridge Pipes | [Axiom CloudWatch Forwarder](/send-data/cloudwatch) [AWS S3 Forwarder](/send-data/aws-s3) [Amazon Data Firehose](/send-data/aws-firehose) |
| Amazon FinSpace | [Axiom CloudWatch Forwarder](/send-data/cloudwatch) [AWS S3 Forwarder](/send-data/aws-s3) [Amazon Data Firehose](/send-data/aws-firehose) |
| Amazon S3 | [AWS S3 Forwarder](/send-data/aws-s3) [Vector](/send-data/vector) |
| Amazon Virtual Private Cloud (VPC) | [AWS S3 Forwarder](/send-data/aws-s3) |
| AWS Fault Injection Service | [AWS S3 Forwarder](/send-data/aws-s3) |
| AWS FireLens | [AWS FireLens](/send-data/aws-firelens) |
| AWS Global Accelerator | [AWS S3 Forwarder](/send-data/aws-s3) |
| AWS IoT Core | [AWS IoT](/send-data/aws-iot-rules) |
| AWS Lambda | [AWS Lambda](/send-data/aws-lambda) |
To request support for AWS services not listed above, please [reach out to Axiom](https://axiom.co/contact).
# Send data from AWS S3 to Axiom
Source: https://axiom.co/docs/send-data/aws-s3
Efficiently send log data from AWS S3 to Axiom via Lambda function
This page explains how to set up an AWS Lambda function to send logs from an S3 bucket to Axiom. The Lambda function triggers when a new log file is uploaded to an S3 bucket, processes the log data, and sends it to Axiom.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- Create an AWS account with permissions to create and manage S3 buckets, Lambda functions, and IAM roles. For more information, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html).
## Package the requests module
Before creating the Lambda function, package the requests module so it can be used in the function:
1. Create a new directory.
2. Install the requests module into the current directory using pip.
3. Zip the contents of the directory.
4. Add your Lambda function file to the zip file.
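For example, the steps above might look like the following on the command line. The directory and file names are placeholders; the example function later on this page also imports `ndjson`, so install it alongside `requests`.
```bash
# 1. Create a new directory (name is a placeholder)
mkdir package && cd package

# 2. Install the required modules into the current directory
pip install requests ndjson --target .

# 3. Zip the contents of the directory
zip -r ../function.zip .

# 4. Add your Lambda function file to the zip file (file name is a placeholder)
cd .. && zip function.zip lambda_function.py
```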
## Create AWS Lambda function
Create a Lambda function with Python runtime and upload the packaged zip file containing the requests module and your function code below:
```py
import os
import json
import boto3
import requests
import csv
import io
import ndjson
import re

def lambda_handler(event, context):
    # Extract the bucket name and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    try:
        # Fetch the log file from S3
        s3 = boto3.client('s3')
        obj = s3.get_object(Bucket=bucket, Key=key)
    except Exception as e:
        print(f"Error fetching from S3: {str(e)}")
        raise e

    # Read the log data from the S3 object
    log_data = obj['Body'].read().decode('utf-8')

    # Determine the file format and parse accordingly
    file_extension = os.path.splitext(key)[1].lower()

    if file_extension == '.csv':
        csv_data = csv.DictReader(io.StringIO(log_data))
        json_logs = list(csv_data)
    elif file_extension == '.txt':
        log_lines = log_data.strip().split("\n")
        json_logs = [{'message': line} for line in log_lines]
    elif file_extension == '.log':
        # IMPORTANT: Log files can be in various formats (JSON, XML, syslog, etc.)
        try:
            # First, try to parse as JSON (either one JSON object per line or a JSON array)
            if log_data.strip().startswith('[') and log_data.strip().endswith(']'):
                # Appears to be a JSON array
                json_logs = json.loads(log_data)
            else:
                # Try parsing as NDJSON (one JSON object per line)
                try:
                    json_logs = ndjson.loads(log_data)
                except:
                    # If not valid NDJSON, check if each line might be JSON
                    log_lines = log_data.strip().split("\n")
                    json_logs = []
                    for line in log_lines:
                        try:
                            # Try to parse each line as JSON
                            parsed_line = json.loads(line)
                            json_logs.append(parsed_line)
                        except:
                            # Create a dictionary and let json module handle the escaping
                            message_dict = {'message': line}
                            json_logs.append(message_dict)
        except:
            # If JSON parsing fails, default to treating as plain text
            log_lines = log_data.strip().split("\n")
            json_logs = [{'message': line} for line in log_lines]
            print("Warning: Log file format could not be determined. Treating as plain text.")
    elif file_extension == '.ndjson' or file_extension == '.jsonl':
        json_logs = ndjson.loads(log_data)
    else:
        print(f"Unsupported file format: {file_extension}")
        return

    # Prepare Axiom API request
    dataset_name = os.environ['DATASET_NAME']
    axiom_domain = os.environ['AXIOM_DOMAIN']
    axiom_api_url = f"https://{axiom_domain}/v1/datasets/{dataset_name}/ingest"
    api_token = os.environ['API_TOKEN']

    axiom_headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json"
    }

    try:
        response = requests.post(axiom_api_url, headers=axiom_headers, json=json_logs)
        if response.status_code != 200:
            print(f"Failed to send logs to Axiom: {response.text}")
        else:
            print(f"Successfully sent logs to Axiom. Response: {response.text}")
    except Exception as e:
        print(f"Error sending to Axiom: {str(e)}")

    print(f"Processed {len(json_logs)} log entries")
```
In the environment variables section of the Lambda function configuration, add the following environment variables:
* `DATASET_NAME` is the name of the Axiom dataset where you want to send data.
* `AXIOM_DOMAIN` is the Axiom domain that your organization uses. For more information, see [Regions](/reference/regions).
* `API_TOKEN` is the Axiom API token you have generated. For added security, store the API token in an environment variable.
This example uses Python for the Lambda function. To use another language, change the code above accordingly.
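If you manage the function with the AWS CLI, a hedged sketch for setting these variables follows. The function name and values are placeholders, and `--environment` replaces the function’s existing environment variables.
```bash
# MY_FUNCTION and the variable values are placeholders.
# --environment overwrites existing variables, so include all of them.
aws lambda update-function-configuration \
  --function-name MY_FUNCTION \
  --environment "Variables={DATASET_NAME=DATASET_NAME,AXIOM_DOMAIN=api.axiom.co,API_TOKEN=API_TOKEN}"
```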
### Understanding log format
The `.log` extension doesn't guarantee any specific format. Log files might contain:
* JSON (single object or array)
* NDJSON/JSONL (one JSON object per line)
* Syslog format
* XML
* Application-specific formats (Apache, Nginx, ELB, etc.)
* Custom formats with quoted strings and special characters
The example code includes format detection for common formats, but you'll need to customize this based on your specific log structure.
#### Example: Custom parser for structured logs
For logs with a specific structure (like AWS ELB logs), you have to implement a custom parser. Here's a simplified example:
```py
import shlex
import re

class Parser:
    def parse_line(self, line):
        try:
            line = re.sub(r"[\[\]]", "", line)
            data = shlex.split(line)
            result = {
                "protocol": data[0],
                "timestamp": data[1],
                "client_ip_port": data[2],
                # ...more fields...
            }
            return result
        except Exception as e:
            raise e
```
## Configure S3 to trigger Lambda
In the Amazon S3 console, select the bucket where your log files are stored. Go to the **Properties** tab, find the **Event notifications** section, and create an event notification. Select **All object create events** as the event type and choose the Lambda function you created earlier as the destination. For more information, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html).
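If you prefer to script this step, the following AWS CLI sketch grants S3 permission to invoke the function and then creates the notification. The bucket name, account ID, and function ARN are placeholders.
```bash
# All names, IDs, and ARNs below are placeholders.
# Allow the S3 bucket to invoke the Lambda function.
aws lambda add-permission \
  --function-name MY_FUNCTION \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::MY_BUCKET

# Trigger the function on all object-create events.
aws s3api put-bucket-notification-configuration \
  --bucket MY_BUCKET \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [
      {
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:MY_FUNCTION",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'
```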
## Upload a test log file
Ensure the log file you upload to the S3 bucket is in a supported format, such as JSON, newline-delimited JSON (NDJSON), or CSV. Here’s an example:
```json
[
{
"_time":"2021-02-04T03:11:23.222Z",
"data":{"key1":"value1","key2":"value2"}
},
{
"data":{"key3":"value3"},
"attributes":{"key4":"value4"}
},
{
"tags": {
"server": "aws",
"source": "wordpress"
}
}
]
```
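To upload a test file from the command line, you can save the example above to a file and copy it to your bucket with the AWS CLI. The bucket and file names are placeholders; the `.log` extension matches one of the formats the example Lambda function detects.
```bash
# MY_BUCKET and sample.log are placeholders.
aws s3 cp sample.log s3://MY_BUCKET/sample.log
```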
After uploading a test log file to your S3 bucket, the Lambda function automatically processes the log data and sends it to Axiom. In Axiom, go to the **Stream** tab and select the dataset you specified in the Lambda function. You now see the logs from your S3 bucket in Axiom.
# Send data from CloudFront to Axiom
Source: https://axiom.co/docs/send-data/cloudfront
Send data from CloudFront to Axiom using AWS S3 bucket and Lambda to monitor your static and dynamic content.
Use the Axiom CloudFront Lambda to send CloudFront logs to Axiom using an AWS S3 bucket and Lambda. After you set this up, you can observe your static and dynamic content and run deep queries on your CloudFront distribution logs efficiently.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
The Axiom CloudFront Lambda is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-cloudfront-lambda).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- [Create an account on AWS Cloud](https://signin.aws.amazon.com/signup?request_type=register).
## Setup
1. Select one of the following:
* If you already have an S3 bucket for your CloudFront data, [launch the base stack on AWS](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/create/template?stackName=CloudFront-Axiom\&templateURL=https://axiom-cloudformation-stacks.s3.amazonaws.com/axiom-cloudfront-lambda-base-cloudformation-stack.yaml).
* If you don’t have an S3 bucket for your CloudFront data, [launch the stack on AWS](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/create/template?stackName=CloudFront-Axiom\&templateURL=https://axiom-cloudformation-stacks.s3.amazonaws.com/axiom-cloudfront-lambda-cloudformation-stack.yaml) that creates an S3 bucket for you.
2. Add the name of the Axiom dataset where you want to send data.
3. Enter the Axiom API token you have previously created.
## Configuration
To configure your CloudFront distribution:
1. In AWS, select your origin domain.
2. In **Origin access**, select **Legacy access identities**, and then select your origin access identity in the list.
3. In **Bucket policy**, select **Yes, update the bucket policy**.
4. In **Standard logging**, select **On**. This means that your data is delivered to your S3 bucket.
5. Click **Create Distribution**, and then click **Run your Distribution**.
Go back to Axiom to see the CloudFront distribution logs.
# Send data from Amazon CloudWatch to Axiom
Source: https://axiom.co/docs/send-data/cloudwatch
This page explains how to send data from Amazon CloudWatch to Axiom.
Axiom CloudWatch Forwarder is a set of easy-to-use AWS CloudFormation stacks designed to forward logs from Amazon CloudWatch to Axiom. It includes a Lambda function to handle the forwarding and stacks to create Amazon CloudWatch log group subscription filters for both existing and future log groups.
Axiom CloudWatch Forwarder includes templates for the following CloudFormation stacks:
* **Forwarder** creates a Lambda function that forwards logs from Amazon CloudWatch to Axiom.
* **Subscriber** runs once to create subscription filters on Forwarder for Amazon CloudWatch log groups specified by a combination of names, prefix, and regular expression filters.
* **Listener** creates a Lambda function that listens for new log groups and creates subscription filters for them on Forwarder. This way, you don’t have to create subscription filters manually for new log groups.
* **Unsubscriber** runs once to remove subscription filters on Forwarder for Amazon CloudWatch log groups specified by a combination of names, prefix, and regular expression filters.
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
The Axiom CloudWatch Forwarder is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-cloudwatch-forwarder).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- [Create an account on AWS Cloud](https://signin.aws.amazon.com/signup?request_type=register).
## Installation
To install the Axiom CloudWatch Forwarder, choose one of the following:
* [CloudFormation stacks](#install-with-cloudformation-stacks)
* [Terraform module](#install-with-terraform-module)
### Install with CloudFormation stacks
1. [Launch the Forwarder stack template on AWS](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=axiom-cloudwatch-forwarder\&templateURL=https://axiom-cloudformation.s3.amazonaws.com/stacks/axiom-cloudwatch-forwarder-v1.1.1-cloudformation-stack.yaml). Copy the Forwarder Lambda ARN because it’s referenced in the Subscriber stack.
2. [Launch the Subscriber stack template on AWS](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=axiom-cloudwatch-subscriber\&templateURL=https://axiom-cloudformation.s3.amazonaws.com/stacks/axiom-cloudwatch-subscriber-v1.1.1-cloudformation-stack.yaml).
3. [Launch the Listener stack template on AWS](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=axiom-cloudwatch-listener\&templateURL=https://axiom-cloudformation.s3.amazonaws.com/stacks/axiom-cloudwatch-listener-v1.1.1-cloudformation-stack.yaml).
### Install with Terraform module
Create a new Forwarder module in your Terraform file in the following way:
```hcl
module "forwarder" {
  source        = "axiomhq/axiom-cloudwatch-forwarder/aws//modules/forwarder"
  axiom_dataset = "DATASET_NAME"
  axiom_token   = "API_TOKEN"
  prefix        = "axiom-cloudwatch-forwarder"
}
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Alternatively, create a dataset with the [Axiom Terraform provider](/apps/terraform#create-dataset).
Create a new Subscriber module in your Terraform file in the following way:
```hcl
module "subscriber" {
  source               = "axiomhq/axiom-cloudwatch-forwarder/aws//modules/subscriber"
  prefix               = "axiom-cloudwatch-forwarder"
  forwarder_lambda_arn = module.forwarder.lambda_arn
  log_groups_prefix    = "/aws/lambda/"
}
```
Create a new Listener module in your Terraform file in the following way:
```hcl
module "listener" {
  source               = "axiomhq/axiom-cloudwatch-forwarder/aws//modules/listener"
  prefix               = "axiom-cloudwatch-forwarder"
  forwarder_lambda_arn = module.forwarder.lambda_arn
  log_groups_prefix    = "/aws/lambda/"
}
```
In your terminal, go to the folder of your main Terraform file, and then run `terraform init`.
Run `terraform plan` to check the changes, and then run `terraform apply`.
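In other words, the full workflow from the Terraform project folder is:
```bash
terraform init    # download the Axiom CloudWatch Forwarder modules
terraform plan    # review the resources that will be created
terraform apply   # create the Forwarder, Subscriber, and Listener resources
```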
## Filter Amazon CloudWatch log groups
The Subscriber and Unsubscriber stacks allow you to filter the log groups by a combination of names, prefix, and regular expression filters. If no filters are specified, the stacks subscribe to or unsubscribe from all log groups. You can also whitelist a specific set of log groups using filters in the CloudFormation stack parameters. The log group names, prefix, and regular expression filters included are additive, meaning the union of all provided inputs is matched.
### Example
For example, you have the following list of log groups:
```
/aws/lambda/function-foo
/aws/lambda/function-bar
/aws/eks/cluster/cluster-1
/aws/rds/instance-baz
```
* To subscribe to the Lambda log groups exclusively, use a prefix filter with the value of `/aws/lambda`.
* To subscribe to EKS and RDS log groups, use a list of names with the value of `/aws/eks/cluster/cluster-1,/aws/rds/instance-baz`.
* To subscribe to the EKS log group and all Lambda log groups, use a combination of prefix and names list.
* To use the regular expression filter, write a regular expression to match the log group names. For example, `\/aws\/lambda\/.*` matches all Lambda log groups.
* To subscribe to all log groups, leave the filters empty.
## Listener architecture
The optional Listener stack does the following:
* Creates an Amazon S3 bucket for AWS CloudTrail.
* Creates a trail to capture the creation of new log groups.
* Creates an event rule to pass those creation events to an Amazon EventBridge event bus.
* Sends an event via EventBridge to a Lambda function when a new log group is created.
* Creates a subscription filter for each new log group.
## Remove subscription filters
To remove subscription filters for one or more log groups, [launch the Unsubscriber stack template on AWS](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=axiom-cloudwatch-subscriber\&templateURL=https://axiom-cloudformation.s3.amazonaws.com/stacks/axiom-cloudwatch-unsubscriber-v1.1.1-cloudformation-stack.yaml).
The log group filtering works the same way as the Subscriber stack. You can filter the log groups by a combination of names, prefix, and regular expression filters.
Alternatively, to turn off log forwarding to Axiom, create a new Unsubscriber module in your Terraform file in the following way:
```hcl
module "unsubscriber" {
  source               = "axiomhq/axiom-cloudwatch-forwarder/aws//modules/unsubscriber"
  prefix               = "axiom-cloudwatch-forwarder"
  forwarder_lambda_arn = module.forwarder.lambda_arn
  log_groups_prefix    = "/aws/lambda/"
}
```
# Send data from Convex to Axiom
Source: https://axiom.co/docs/send-data/convex
This guide explains how to send data from Convex to Axiom.
Convex lets you manage the backend of your app (database, server, and more) from a centralized cloud interface. Set up a log stream in Convex to send your app’s logs to Axiom and make it your single source of truth about events.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
- [Create a Convex account](https://www.convex.dev/login).
- Set up your app with Convex. For example, follow one of the quickstart guides in the [Convex documentation](https://docs.convex.dev/quickstarts).
## Configure Convex log streams
To send data from Convex to Axiom, set up a Convex log stream using the [Convex documentation](https://docs.convex.dev/production/integrations/log-streams#axiom). During this process, you need the following:
* The name of the Axiom dataset where you want to send data.
* The Axiom API token you have generated.
* Optional: A list of key-value pairs to include in all events your app sends to Axiom.
# Send data from Cribl to Axiom
Source: https://axiom.co/docs/send-data/cribl
Learn how to configure Cribl LogStream to forward logs to Axiom using both HTTP and Syslog destinations.
export const endpointName_0 = "Syslog"
Cribl is a data processing framework often used with machine data. It allows you to parse, reduce, transform, and route data to and from various systems in your infrastructure.
You can send logs from Cribl LogStream to Axiom using HTTP or Syslog destination.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Set up log forwarding from Cribl to Axiom using the HTTP destination
Below are the steps to set up and send logs from Cribl to Axiom using the HTTP destination:
1. Create a new HTTP destination in Cribl LogStream:
Open Cribl’s UI and navigate to **Destinations > HTTP**. Click **+ Add New** to create a new destination.
2. Configure the destination:
* **Name:** Choose a name for the destination.
* **Endpoint URL:** The URL of your Axiom log ingest endpoint `https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest`.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
* **Method:** Choose `POST`.
* **Event Breaker:** Set this to One Event Per Request or CRLF (Carriage Return Line Feed), depending on how you want to separate events.
3. Headers:
You may need to add some headers. Here is a common example:
* **Content-Type:** Set this to `application/json`.
* **Authorization:** Set this to `Bearer API_TOKEN`.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
4. Body:
In the Body Template, input `{{_raw}}`. This forwards the raw log event to Axiom.
5. Save and enable the destination:
After you've finished configuring the destination, save your changes and make sure the destination is enabled.
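Before enabling the destination, you can optionally verify the endpoint URL and API token by sending a test event directly to the same ingest endpoint, for example with curl. The event payload below is illustrative.
```bash
# Replace the placeholders as described above. The event payload is illustrative.
curl -X POST "https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest" \
  -H "Authorization: Bearer API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"message": "test event from Cribl setup"}]'
```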
## Set up log forwarding from Cribl to Axiom using the Syslog destination
### Create Syslog endpoint
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
### Configure destination in Cribl
1. Create a new Syslog destination in Cribl LogStream:
Open Cribl’s UI and navigate to **Destinations > Syslog**. Click **+ Add New** to create a new destination.
2. Configure the destination:
* **Name:** Choose a name and output ID for the destination.
* **Protocol:** Choose the protocol for the Syslog messages. Select the TCP protocol.
* **Destination Address:** Input the address of the Axiom endpoint to which you want to send logs. This address is generated from your Syslog endpoint in Axiom and follows this format: `tcp+tls://qsfgsfhjsfkbx9.syslog.axiom.co:6514`.
* **Destination Port:** Enter the port number on which the Axiom endpoint listens for Syslog messages, which is `6514`.
* **Format:** Choose the Syslog message format. `RFC3164` is a common format and is generally recommended.
* **Facility:** Choose the facility code to use in the Syslog messages. The facility code represents the type of process that’s generating the Syslog messages.
* **Severity:** Choose the severity level to use in the Syslog messages. The severity level represents the importance of the Syslog messages.
3. Configure the Message:
* **Timestamp Format:** Choose the timestamp format to use in the Syslog messages.
* **Application Name Field:** Enter the name of the field to use as the app name in the Syslog messages.
* **Message Field:** Enter the name of the field to use as the message in the Syslog messages. Typically, this would be `_raw`.
* **Throttling:** Enter the throttling value. Throttling is a mechanism to control the data flow rate from the source (Cribl) to the destination (in this case, an Axiom Syslog Endpoint).
4. Save and enable the destination
After you've finished configuring the destination, save your changes and make sure the destination is enabled.
# Send data from Elastic Beats to Axiom
Source: https://axiom.co/docs/send-data/elastic-beats
Collect metrics and logs from Elastic Beats, and monitor them with Axiom.
[Elastic Beats](https://www.elastic.co/beats/) serves as a lightweight platform for data shippers that transfer information from the source to Axiom and other tools based on the configuration. Before shipping data, it collects metrics and logs from different sources, which later are deployed to your Axiom deployments.
There are different [Elastic Beats](https://www.elastic.co/beats/) you can use to ship logs. Axiom’s documentation provides detailed step-by-step procedures for each Beat.
To ensure compatibility with Axiom, use the following versions:
* For Elastic Beats log shippers such as Filebeat, Metricbeat, Heartbeat, Auditbeat, and Packetbeat, use their open-source software (OSS) version 8.12.1 or lower.
* For Winlogbeat, use the OSS version 7.17.22 or lower.
* For Journalbeat, use the OSS version 7.15.2 or lower.
If you get a 400 error when you use the field name `_time` or when you override the [`timestamp` field](/reference/field-restrictions), use the query parameter `?timestamp-field` to set a field as the time field.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Filebeat
[Filebeat](https://www.elastic.co/beats/filebeat) is a lightweight shipper for logs. It helps you centralize logs and files, and can read files from your system.
Filebeat is useful for workload, system, and app log files, and other log data you want to ingest into Axiom.
It centralizes logs and files in a structured pattern by reading from your various apps, services, workloads, and VMs, then ships them to your Axiom deployments.
### Installation
Visit the [Filebeat OSS download page](https://www.elastic.co/downloads/beats/filebeat-oss) to install Filebeat. For more information, check out Filebeat’s [official documentation](https://www.elastic.co/guide/en/beats/filebeat/current/index.html).
When downloading Filebeat, install the OSS version because the non-OSS version doesn’t work with Axiom.
### Configuration
Axiom lets you ingest data with the ElasticSearch bulk ingest API.
For Filebeat to work, disable index lifecycle management (ILM). To do so, add `setup.ilm.enabled: false` to the `filebeat.yml` configuration file.
```yaml
setup.ilm.enabled: false

filebeat.inputs:
  - type: log
    # Specify the path of the system log files to be sent to Axiom deployment.
    paths:
      - $PATH_TO_LOG_FILE

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Metricbeat
[Metricbeat](https://www.elastic.co/beats/metricbeat) is a lightweight shipper for metrics.
Metricbeat is installed on your systems and services and used for monitoring their performance, as well as different remote packages/utilities running on them.
### Installation
Visit the [Metricbeat OSS download page](https://www.elastic.co/downloads/beats/metricbeat-oss) to install Metricbeat. For more information, check out Metricbeat’s [official documentation](https://www.elastic.co/guide/en/beats/metricbeat/current/index.html).
### Configuration
```yaml
setup.ilm.enabled: false

metricbeat.config.modules:
  path:
    - $PATH_TO_LOG_FILE

metricbeat.modules:
  - module: system
    metricsets:
      - filesystem
      - cpu
      - load
      - fsstat
      - memory
      - network

output.elasticsearch:
  hosts: ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
  # Specify Axiom API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
### Send AWS RDS metric set to Axiom
The RDS metric set enables you to monitor your AWS RDS service. [RDS metric set](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-metricset-aws-rds.html) fetches a set of metrics from Amazon RDS and Amazon Aurora DB. With Amazon RDS, users can monitor network throughput, I/O for read, write, and/or metadata operations, client connections, and burst credit balances for their DB instances and send the data to Axiom.
```yaml
setup.ilm.enabled: false

metricbeat.config.modules:
  path:
    - $PATH_TO_LOG_FILE

metricbeat.modules:
  - module: aws
    period: 60s
    metricsets:
      - rds
    access_key_id: ''
    secret_access_key: ''
    session_token: ''
    # Add other AWS configurations if needed

output.elasticsearch:
  hosts: ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Winlogbeat
[Winlogbeat](https://www.elastic.co/guide/en/beats/winlogbeat/current/index.html) is an open-source Windows specific event-log shipper that’s installed as a Windows service. It can be used to collect and send event logs to Axiom.
Winlogbeat reads from one or more event logs using Windows APIs, filters the events based on user-configured criteria, then sends the event data to the configured outputs.
You can capture:
* app events
* hardware events
* security events
* system events
### Installation
Visit the [Winlogbeat download page](https://www.elastic.co/downloads/beats/winlogbeat) to install Winlogbeat. For more information, check out Winlogbeat’s [official documentation](https://www.elastic.co/guide/en/beats/winlogbeat/current/winlogbeat-installation-configuration.html).
* Extract the contents of the zip file into `C:\Program Files`.
* Rename the `winlogbeat-$version` directory to `Winlogbeat`.
* Open a PowerShell prompt as an Administrator and run:
```bash
PS C:\Users\Administrator> cd C:\Program Files\Winlogbeat
PS C:\Program Files\Winlogbeat> .\install-service-winlogbeat.ps1
```
### Configuration
Configuration for the Winlogbeat service is found in the `winlogbeat.yml` file in `C:\Program Files\Winlogbeat`.
Edit the `winlogbeat.yml` configuration file found in `C:\Program Files\Winlogbeat` to send data to Axiom.
The `winlogbeat.yml` file contains the configuration that determines which Windows event logs and services Winlogbeat monitors, along with the related settings.
```yaml
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security

logging.to_files: true
logging.files:
  path: C:\ProgramData\Winlogbeat\Logs
logging.level: info

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
#### Validate configuration
```bash
# Check if your configuration is correct
PS C:\Program Files\Winlogbeat> .\winlogbeat.exe test config -c .\winlogbeat.yml -e
```
#### Start Winlogbeat
```bash
PS C:\Program Files\Winlogbeat> Start-Service winlogbeat
```
You can view the status of your service and control it from the Services management console in Windows.
To launch the management console, run this command:
```bash
PS C:\Program Files\Winlogbeat> services.msc
```
#### Stop Winlogbeat
```bash
PS C:\Program Files\Winlogbeat> Stop-Service winlogbeat
```
### Ignore older Winlogbeat configuration
The `ignore_older` option in the Winlogbeat configuration is used to ignore older events.
Winlogbeat reads from the Windows event log system. When it starts up, it starts reading from a specific point in the event log. By default, Winlogbeat starts reading new events created after Winlogbeat started.
However, you might want Winlogbeat to read some older events as well. For instance, if you restart Winlogbeat, you might want it to continue where it left off rather than skip all the events created while it wasn’t running. In this case, use the `ignore_older` option to specify the maximum age of the events Winlogbeat reads. The option takes a duration as a value: any events older than this duration are ignored. The duration is a number followed by a unit, where the unit is one of `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), or `d` (days).
```yaml
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  protocol: "https"
  ssl.verification_mode: "full"
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
You can start Winlogbeat from the command line by running `.\winlogbeat.exe -c winlogbeat.yml` in the Winlogbeat installation directory.
### Add verification modes and processors
Verification mode refers to the SSL/TLS verification performed when Winlogbeat connects to your output destination, for instance, a Logstash instance, an Elasticsearch instance, or an Axiom instance. You can add verification modes, additional processors, and multiple Windows event logs to your configuration and send the logs to Axiom. The event logs are specified in the `winlogbeat.event_logs` configuration option.
```yaml
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
  ssl.verification_mode: "certificate"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logging.level: info
logging.to_files: true
logging.files:
  path: C:/ProgramData/winlogbeat/Logs
  name: winlogbeat
  keepfiles: 7
  permissions: 0600
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
You can start Winlogbeat from the command line by running `.\winlogbeat.exe -c winlogbeat.yml` in the Winlogbeat installation directory.
For more information on Winlogbeat event logs, visit the Winlogbeat [documentation](https://www.elastic.co/guide/en/beats/winlogbeat/current/index.html).
## Heartbeat
[Heartbeat](https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-overview.html) is a lightweight shipper for uptime monitoring.
It monitors your services and sends response time to Axiom. It lets you periodically check the status of your services and determine whether they’re available.
Heartbeat is useful when you need to verify that you’re meeting your service level agreements for service uptime.
Heartbeat currently supports monitors for checking hosts via:
* ICMP (v4 and v6) echo requests: Use the `icmp` monitor when you simply want to check whether a service is available. This monitor requires root access.
* TCP: Use the `tcp` monitor to connect via TCP. You can optionally configure this monitor to verify the endpoint by sending and/or receiving a custom payload.
* HTTP: Use the `http` monitor to connect via HTTP. You can optionally configure this monitor to verify that the service returns the expected response, such as a specific status code, response header, or content.
### Installation
Visit the [Heartbeat download page](https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-installation-configuration.html#installation) to install Heartbeat on your system.
### Configuration
Heartbeat provides monitors to check the status of hosts at set intervals. Heartbeat currently provides monitors for ICMP, TCP, and HTTP.
You configure each monitor individually. In `heartbeat.yml`, specify the list of monitors that you want to enable. Each item in the list begins with a dash (-).
The example below configures Heartbeat to use three monitors: an ICMP monitor, a TCP monitor, and an HTTP monitor deployed instantly to Axiom.
```yaml
# Disable index lifecycle management (ILM)
setup.ilm.enabled: false

heartbeat.monitors:
  - type: icmp
    schedule: '*/5 * * * * * *'
    hosts: ['myhost']
    id: my-icmp-service
    name: My ICMP Service
  - type: tcp
    schedule: '@every 5s'
    hosts: ['myhost:12345']
    mode: any
    id: my-tcp-service
  - type: http
    schedule: '@every 5s'
    urls: ['http://example.net']
    service.name: apm-service-name
    id: my-http-service
    name: My HTTP Service

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Auditbeat
Auditbeat is a lightweight shipper that ships events in real time to Axiom for further analysis. It collects your Linux audit framework data and monitors the integrity of your files. It’s also used to evaluate the activities of users and processes on your system.
You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations.
### Installation
Visit the [Auditbeat download page](https://www.elastic.co/downloads/beats/auditbeat) to install Auditbeat on your system.
### Configuration
Auditbeat uses modules to collect audit information:
* Auditd
* File integrity
* System
By default, Auditbeat uses a configuration that’s tailored to the operating system where Auditbeat is running.
To use a different configuration, change the module settings in `auditbeat.yml`.
The example below configures Auditbeat to use the `file_integrity` module, which generates events whenever a file in one of the specified paths changes on disk. The events contain the file metadata and hashes, and they’re deployed instantly to Axiom.
```yaml
# Disable index lifecycle management (ILM)
setup.ilm.enabled: false

auditbeat.modules:
  - module: file_integrity
    paths:
      - /usr/bin
      - /sbin
      - /usr/sbin
      - /etc
      - /bin
      - /usr/local/sbin

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Packetbeat
Packetbeat is a real-time network packet analyzer that you can integrate with Axiom to provide an app monitoring and performance analytics system between the servers of your network.
With Axiom you can use Packetbeat to capture the network traffic between your app servers, decode the app layer protocols (HTTP, MySQL, Redis, PGSQL, Thrift, MongoDB, and so on), and correlate the requests with the responses.
Packetbeat sniffs the traffic between your servers, and parses the app-level protocols on the fly directly into Axiom.
Currently, Packetbeat supports the following protocols:
* ICMP (v4 and v6)
* DHCP (v4)
* DNS
* HTTP
* AMQP 0.9.1
* Cassandra
* MySQL
* PostgreSQL
* Redis
* Thrift-RPC
* MongoDB
* MemCache
* NFS
* TLS
* SIP/SDP (beta)
### Installation
Visit the [Packetbeat download page](https://www.elastic.co/downloads/beats/packetbeat) to install Packetbeat on your system.
### Configuration
In `packetbeat.yml`, configure the network devices and protocols to capture traffic from.
To see a list of available devices for the `packetbeat.yml` configuration, run:
| OS type | Command |
| ------- | -------------------------------------------------------------- |
| DEB | Run `packetbeat devices` |
| RPM | Run `packetbeat devices` |
| MacOS | Run `./packetbeat devices` |
| Brew | Run `packetbeat devices` |
| Linux | Run `./packetbeat devices` |
| Windows | Run `PS C:\Program Files\Packetbeat> .\packetbeat.exe devices` |
Packetbeat supports these sniffer types:
* `pcap`
* `af_packet`
In the protocols section, configure the ports where Packetbeat can find each protocol. If you use any non-standard ports, add them here. Otherwise, use the default values:
```yaml
# Disable index lifecycle management (ILM)
setup.ilm.enabled: false

packetbeat.interfaces.auto_promisc_mode: true

packetbeat.flows:
  timeout: 30s
  period: 10s

protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  http:
    ports: [80, 8080, 8081, 5000, 8002]
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  pgsql:
    ports: [5432]
  redis:
    ports: [6379]
  thrift:
    ports: [9090]
  mongodb:
    ports: [27017]

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # api_key should be your API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more information on configuring Packetbeat, visit the [documentation](https://www.elastic.co/guide/en/beats/packetbeat/current/configuring-howto-packetbeat.html).
## Journalbeat
Journalbeat is a lightweight shipper for forwarding and centralizing log data from [systemd journals](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html) to a log management tool like Axiom.
Journalbeat monitors the journal locations that you specify, collects log events, and eventually forwards the logs to Axiom.
### Installation
Visit the [Journalbeat download page](https://www.elastic.co/guide/en/beats/journalbeat/current/journalbeat-installation-configuration.html) to install Journalbeat on your system.
### Configuration
Before running Journalbeat, specify the location of the systemd journal files and configure how you want the files to be read.
The example below configures Journalbeat to use the `path` of your systemd journal files. Each path can be a directory path (to collect events from all journals in a directory), or a path configured to deploy logs instantly to Axiom.
```yaml
# Disable index lifecycle management (ILM)
setup.ilm.enabled: false

journalbeat.inputs:
  # Collect from specific journal files or directories
  - paths:
      - "/dev/log"
      - "/var/log/messages/my-journal-file.journal"
    seek: head
  # Collect from the default journal, filtered to Redis-related entries
  - paths: []
    include_matches:
      - "CONTAINER_TAG=redis"
      - "_COMM=redis"
      - "container.image.tag=redis"
      - "process.name=redis"

output.elasticsearch:
  hosts: ['https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic']
  # token should be an API token
  api_key: 'axiom:API_TOKEN'
  allow_older_versions: true
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
For more information on configuring Journalbeat, visit the [documentation](https://www.elastic.co/guide/en/beats/journalbeat/current/configuration-journalbeat-options.html).
# Send data from Elastic Bulk API to Axiom
Source: https://axiom.co/docs/send-data/elasticsearch-bulk-api
This step-by-step guide will help you get started with migrating from Elasticsearch to Axiom using the Elastic Bulk API
Axiom is a log management platform that offers an Elasticsearch Bulk API emulation to facilitate migration from Elasticsearch or integration with tools that support the Elasticsearch Bulk API.
Using the Elastic Bulk API and Axiom in your app provides a robust way to store and manage logs.
The Elasticsearch Bulk API expects the timestamp to be formatted as `@timestamp`, not `_time`. For example:
```json
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2024-01-07T12:00:00Z", "message": "axiom elastic bulk", "severity": "INFO"}
```
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Send logs to Axiom using the Elasticsearch Bulk API and Go
To send logs to Axiom using the Elasticsearch Bulk API and Go, use the `net/http` package to create and send the HTTP request.
### Prepare your data
The data needs to be formatted as per the Bulk API’s requirements. Here’s a simple example of how to prepare your data:
```json
data :=
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2023-06-06T12:00:00Z", "message": "axiom elastic bulk", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2023-06-06T12:00:01Z", "message": "axiom elastic bulk api", "severity": "ERROR"}
```
### Send data to Axiom
Get an Axiom [API token](/reference/tokens) for the Authorization header, and create a [dataset](/reference/datasets).
```go
package main

import (
    "bytes"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    data := []byte(`{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2023-06-06T12:00:00Z", "message": "axiom elastic bulk", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2023-06-06T12:00:01Z", "message": "axiom elastic bulk api", "severity": "ERROR"}
`)

    // Create a new request using http
    req, err := http.NewRequest("POST", "https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic/_bulk", bytes.NewBuffer(data))
    if err != nil {
        log.Fatalf("Error creating request: %v", err)
    }

    // Add authorization header to the request
    req.Header.Add("Authorization", "Bearer API_TOKEN")
    req.Header.Add("Content-Type", "application/x-ndjson")

    // Send request using http.Client
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatalf("Error on response: %v", err)
    }
    defer resp.Body.Close()

    // Read and print the response body
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalf("Error reading response body: %v", err)
    }
    fmt.Printf("Response status: %s\nResponse body: %s\n", resp.Status, string(body))
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Send logs to Axiom using the Elasticsearch Bulk API and Python
To send logs to Axiom using the Elasticsearch Bulk API and Python, use the built-in `requests` library.
### Prepare your data
The data sent needs to be formatted as per the Bulk API’s requirements. Here’s a simple example of how to prepare the data:
```json
data = """
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2023-06-06T12:00:00Z", "message": "Log message 1", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2023-06-06T12:00:01Z", "message": "Log message 2", "severity": "ERROR"}
"""
```
### Send data to Axiom
Obtain an Axiom [API token](/reference/tokens) for the Authorization header, and create a [dataset](/reference/datasets).
```py
import requests
import json

data = """
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2024-01-07T12:00:00Z", "message": "axiom elastic bulk", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2024-01-07T12:00:01Z", "message": "Log message 2", "severity": "ERROR"}
"""

# Replace these with your actual dataset name and API token
dataset = "DATASET_NAME"
api_token = "API_TOKEN"

# The URL for the bulk API
url = f'https://AXIOM_DOMAIN:443/v1/datasets/{dataset}/elastic/_bulk'

try:
    response = requests.post(
        url,
        data=data,
        headers={
            'Content-Type': 'application/x-ndjson',
            'Authorization': f'Bearer {api_token}'
        }
    )
    response.raise_for_status()
except requests.HTTPError as http_err:
    print(f'HTTP error occurred: {http_err}')
    print('Response:', response.text)
except Exception as err:
    print(f'Other error occurred: {err}')
else:
    print('Success!')
    try:
        print(response.json())
    except json.JSONDecodeError:
        print(response.text)
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Send logs to Axiom using the Elasticsearch Bulk API and JavaScript
Use the axios library in JavaScript to send logs to Axiom using the Elasticsearch Bulk API.
### Prepare your data
The data sent needs to be formatted as per the Bulk API’s requirements. Here’s a simple example of how to prepare the data:
```js
let data = `
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2023-06-06T12:00:00Z", "message": "Log message 1", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2023-06-06T12:00:01Z", "message": "Log message 2", "severity": "ERROR"}
`;
```
### Send data to Axiom
Obtain an Axiom [API token](/reference/tokens) for the Authorization header, and create a [dataset](/reference/datasets) to send your data to.
```js
const axios = require('axios');
// Axiom elastic API URL
const AxiomApiUrl = 'https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic/_bulk';
// Your Axiom API token
const AxiomToken = 'API_TOKEN';
// The logs data retrieved from Elasticsearch
const logs = [
{"index": {"_index": "myindex", "_id": "1"}},
{"@timestamp": "2023-06-06T12:00:00Z", "message": "axiom logging", "severity": "INFO"},
{"index": {"_index": "myindex", "_id": "2"}},
{"@timestamp": "2023-06-06T12:00:01Z", "message": "axiom log data", "severity": "ERROR"}
];
// Convert the logs to a single string with newline separators
const data = logs.map(log => JSON.stringify(log)).join('\n') + '\n';
axios.post(AxiomApiUrl, data, {
headers: {
'Content-Type': 'application/x-ndjson',
'Authorization': `Bearer ${AxiomToken}`
}
})
.then((response) => {
console.log('Response Status:', response.status);
console.log('Response Data:', response.data);
})
.catch((error) => {
console.error('Error:', error.response ? error.response.data : error.message);
});
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Send logs to Axiom using the Elasticsearch Bulk API and PHP
To send logs from PHP to Axiom using the Elasticsearch Bulk API, make sure you have installed the necessary PHP libraries: [Guzzle](https://docs.guzzlephp.org/en/stable/overview.html) for making HTTP requests and [JsonMachine](https://packagist.org/packages/halaxa/json-machine) for handling newline-delimited JSON data.
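If you haven't added these dependencies yet, a typical Composer install looks like this, using the package names published on Packagist:
```bash
composer require guzzlehttp/guzzle halaxa/json-machine
```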
### Prepare your data
The data sent needs to be formatted as per the Bulk API’s requirements. Here’s a simple example of how to prepare the data:
```php
$data = <<<'NDJSON'
{"index": {"_index": "myindex", "_id": "1"}}
{"@timestamp": "2023-06-06T12:00:00Z", "message": "Log message 1", "severity": "INFO"}
{"index": {"_index": "myindex", "_id": "2"}}
{"@timestamp": "2023-06-06T12:00:01Z", "message": "Log message 2", "severity": "ERROR"}
NDJSON;
```
### Send data to Axiom
Obtain an Axiom [API token](/reference/tokens) for the Authorization header, and create a [dataset](/reference/datasets) to send your data to.
```php
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
// Create a Guzzle HTTP client that targets the Axiom Elasticsearch Bulk endpoint
$client = new Client([
'base_uri' => 'https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic/_bulk', // Update with your Axiom host
'timeout' => 2.0,
]);
// Your Axiom API token
$AxiomToken = 'API_TOKEN';
// The logs data retrieved from Elasticsearch
// Note: Replace this with your actual code to retrieve logs from Elasticsearch
$logs = [
["@timestamp" => "2023-06-06T12:00:00Z", "message" => "axiom logger", "severity" => "INFO"],
["@timestamp" => "2023-06-06T12:00:01Z", "message" => "axiom logging elasticsearch", "severity" => "ERROR"]
];
$events = array_map(function ($log) {
return [
'@timestamp' => $log['@timestamp'],
'attributes' => $log
];
}, $logs);
// Create the payload for Axiom
$payload = [
'tags' => [
'source' => 'myapplication',
'host' => 'myhost'
],
'events' => $events
];
try {
$response = $client->post('', [
'headers' => [
'Authorization' => 'Bearer ' . $AxiomToken,
'Content-Type' => 'application/x-ndjson',
],
'json' => $payload,
]);
// handle response here
$statusCode = $response->getStatusCode();
$content = $response->getBody();
echo "Status code: $statusCode \nContent: $content";
} catch (\Exception $e) {
// handle exception here
echo "Error: " . $e->getMessage();
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
# Send data from Fluent Bit to Axiom
Source: https://axiom.co/docs/send-data/fluent-bit
This page explains how to send data from Fluent Bit to Axiom.
Fluent Bit is an open-source log processor and forwarder that allows you to collect any data like metrics and logs from different sources, enrich them with filters, and send them to multiple destinations like Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Install Fluent Bit](https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit).
## Configure Fluent Bit
1. Set up the Fluent Bit configuration file based on the [Fluent Bit documentation](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file).
2. In the Fluent Bit configuration file, use the HTTP output plugin with the following configuration. For more information on the plugin, see the [Fluent Bit documentation](https://docs.fluentbit.io/manual/pipeline/outputs/http).
```ini
[OUTPUT]
Name http
Match *
Host AXIOM_DOMAIN
Port 443
URI /v1/datasets/DATASET_NAME/ingest
Header Authorization Bearer API_TOKEN
Compress gzip
Format json
JSON_Date_Key _time
JSON_Date_Format iso8601
TLS On
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
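For example, here's a reduced sketch of the output block that reads the token from an `AXIOM_API_TOKEN` environment variable instead of hard-coding it. Fluent Bit expands `${...}` variables in classic-mode configuration files; the variable name is an assumption.
```ini
# Export the token before starting Fluent Bit, for example:
#   export AXIOM_API_TOKEN="xaat-..."
[OUTPUT]
    Name   http
    Match  *
    Host   AXIOM_DOMAIN
    Port   443
    URI    /v1/datasets/DATASET_NAME/ingest
    Header Authorization Bearer ${AXIOM_API_TOKEN}
    Format json
    TLS    On
```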
# Send data from Fluentd to Axiom
Source: https://axiom.co/docs/send-data/fluentd
This step-by-step guide will help you collect, aggregate, analyze, and route log files from multiple Fluentd sources into Axiom.
Fluentd is an open-source log collector that allows you to collect, aggregate, process, analyze, and route log files.
With Fluentd, you can collect logs from multiple sources and ship them instantly to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Installation
Visit the [Fluentd download page](https://www.fluentd.org/download) to install Fluentd on your system.
## Configuration
The Fluentd lifecycle consists of five components:
* Setup: Configure your `fluent.conf` file.
* Inputs: Define your input listeners.
* Filters: Create rules to allow or disallow an event.
* Matches: Send output to Axiom when input data matches the rules you define in your configuration.
* Labels: Group filters and simplify tag handling.
When setting up Fluentd, the configuration file (typically `fluent.conf`) connects these components, as in the minimal sketch below.
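The following is a minimal, illustrative sketch of how these components fit together; the plugin types and values are placeholders rather than a complete configuration.
```xml
# Inputs: where events come from
<source>
  @type forward
  port 24224
</source>

# Filters: allow, disallow, or enrich events with a matching tag
<filter **>
  @type grep
</filter>

# Matches: where matching events are sent, for example Axiom
<match **>
  @type http
  endpoint https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest
</match>
```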
## Configuring Fluentd using the HTTP output plugin
The example below shows a Fluentd configuration that sends data to Axiom using the [HTTP output plugin](https://docs.fluentd.org/output/http):
```xml
<source>
  @type forward
  port 24224
</source>

<match **>
  @type http
  endpoint https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest
  # Authorization Bearer should be an ingest token
  headers {"Authorization": "Bearer API_TOKEN"}
  json_array false
  open_timeout 3
  <format>
    @type json
  </format>
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Configuring Fluentd using the OpenSearch output plugin
The example below shows a Fluentd configuration that sends data to Axiom using the [OpenSearch plugin](https://docs.fluentd.org/output/opensearch):
```xml
<source>
  @type tail
  @id input_tail
  <parse>
    @type apache2
  </parse>
  path /var/log/*.log
  tag td.logs
</source>

<match **>
  @type opensearch
  @id out_os
  @log_level info
  include_tag_key true
  include_timestamp true
  host "#{ENV['FLUENT_OPENSEARCH_HOST'] || 'AXIOM_DOMAIN'}"
  port "#{ENV['FLUENT_OPENSEARCH_PORT'] || '443'}"
  path "#{ENV['FLUENT_OPENSEARCH_PATH'] || '/v1/datasets/DATASET_NAME/elastic'}"
  scheme "#{ENV['FLUENT_OPENSEARCH_SCHEME'] || 'https'}"
  ssl_verify "#{ENV['FLUENT_OPENSEARCH_SSL_VERIFY'] || 'true'}"
  ssl_version "#{ENV['FLUENT_OPENSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
  user "#{ENV['FLUENT_OPENSEARCH_USER'] || 'axiom'}"
  password "#{ENV['FLUENT_OPENSEARCH_PASSWORD'] || 'xaat-xxxxxxxxxx-xxxxxxxxx-xxxxxxx'}"
  index_name "#{ENV['FLUENT_OPENSEARCH_INDEX_NAME'] || 'fluentd'}"
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Configure buffer interval with filter patterns
The example below shows a Fluentd configuration to hold logs in memory with specific flush intervals, size limits, and how to exclude specific logs based on patterns.
```xml
# Collect common system logs
<source>
  @type tail
  @id system_logs
  <parse>
    @type none
  </parse>
  path /var/log/*.log
  pos_file /var/log/fluentd/system.log.pos
  read_from_head true
  tag system.logs
</source>

# Collect Apache2 logs (if they're located in /var/log/apache2/)
<source>
  @type tail
  @id apache_logs
  <parse>
    @type apache2
  </parse>
  path /var/log/apache2/*.log
  pos_file /var/log/fluentd/apache2.log.pos
  read_from_head true
  tag apache.logs
</source>

# Filter to exclude certain patterns (optional)
<filter **>
  @type grep
  <exclude>
    key message
    pattern /exclude_this_pattern/
  </exclude>
</filter>

# Send logs to Axiom
<match **>
  @type opensearch
  @id out_os
  @log_level info
  include_tag_key true
  include_timestamp true
  host "#{ENV['FLUENT_OPENSEARCH_HOST'] || 'AXIOM_DOMAIN'}"
  port "#{ENV['FLUENT_OPENSEARCH_PORT'] || '443'}"
  path "#{ENV['FLUENT_OPENSEARCH_PATH'] || '/v1/datasets/DATASET_NAME/elastic'}"
  scheme "#{ENV['FLUENT_OPENSEARCH_SCHEME'] || 'https'}"
  ssl_verify "#{ENV['FLUENT_OPENSEARCH_SSL_VERIFY'] || 'true'}"
  ssl_version "#{ENV['FLUENT_OPENSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
  user "#{ENV['FLUENT_OPENSEARCH_USER'] || 'axiom'}"
  password "#{ENV['FLUENT_OPENSEARCH_PASSWORD'] || 'xaat-xxxxxxxxxx-xxxxxxxxx-xxxxxxx'}"
  index_name "#{ENV['FLUENT_OPENSEARCH_INDEX_NAME'] || 'fluentd'}"
  <buffer>
    @type memory
    flush_mode interval
    flush_interval 10s
    chunk_limit_size 5M
    retry_max_interval 30
    retry_forever true
  </buffer>
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Collect and send PHP logs to Axiom
The example below shows a Fluentd configuration that sends PHP data to Axiom.
```xml
# Collect PHP logs
<source>
  @type tail
  @id php_logs
  <parse>
    @type multiline
    format_firstline /^\[\d+-\w+-\d+ \d+:\d+:\d+\]/
    format1 /^\[(?
  </parse>
  path /var/log/php*.log
  pos_file /var/log/fluentd/php.log.pos
  read_from_head true
  tag php.logs
</source>

# Send PHP logs to Axiom
<match **>
  @type opensearch
  @id out_os
  @log_level info
  include_tag_key true
  include_timestamp true
  host "#{ENV['FLUENT_OPENSEARCH_HOST'] || 'AXIOM_DOMAIN'}"
  port "#{ENV['FLUENT_OPENSEARCH_PORT'] || '443'}"
  path "#{ENV['FLUENT_OPENSEARCH_PATH'] || '/v1/datasets/DATASET_NAME/elastic'}"
  scheme "#{ENV['FLUENT_OPENSEARCH_SCHEME'] || 'https'}"
  ssl_verify "#{ENV['FLUENT_OPENSEARCH_SSL_VERIFY'] || 'true'}"
  ssl_version "#{ENV['FLUENT_OPENSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
  user "#{ENV['FLUENT_OPENSEARCH_USER'] || 'axiom'}"
  password "#{ENV['FLUENT_OPENSEARCH_PASSWORD'] || 'xaat-xxxxxxxxxx-xxxxxxxxx-xxxxxxx'}"
  index_name "#{ENV['FLUENT_OPENSEARCH_INDEX_NAME'] || 'php-logs'}"
  <buffer>
    @type memory
    flush_mode interval
    flush_interval 10s
    chunk_limit_size 5M
    retry_max_interval 30
    retry_forever true
  </buffer>
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Collect and send Scala logs to Axiom
The example below shows a Fluentd configuration that sends Scala application logs to Axiom.
```xml
# Collect Scala logs
<source>
  @type tail
  @id scala_logs
  <parse>
    @type multiline
    format_firstline /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}/
    format1 /^(?
  </parse>
  path /var/log/scala-app.log
  pos_file /var/log/fluentd/scala.log.pos
  read_from_head true
  tag scala.logs
</source>

# Send Scala logs using HTTP plugin to Axiom
<match **>
  @type http
  endpoint "#{ENV['FLUENT_HTTP_ENDPOINT'] || 'https://AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest'}"
  headers {"Authorization": "Bearer #{ENV['FLUENT_HTTP_TOKEN'] || ''}"}
  <format>
    @type json
  </format>
  <buffer>
    @type memory
    flush_mode interval
    flush_interval 10s
    chunk_limit_size 5M
    retry_max_interval 30
    retry_forever true
  </buffer>
</match>
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Send virtual machine logs to Axiom using the HTTP output plugin
The example below shows a Fluentd configuration that sends data from your virtual machine to Axiom using the `apache` source type.
```xml
<source>
  @type tail
  @id input_tail
  <parse>
    @type apache2
  </parse>
  path /var/log/**/*.log
  pos_file /var/log/fluentd/fluentd.log.pos
  tag vm.logs
  read_from_head true
</source>

<filter vm.logs>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    service "vm_service"
  </record>
</filter>

<match vm.logs>
  @type http
  @id out_http_axiom
  @log_level info
  endpoint "#{ENV['AXIOM_URL'] || 'https://api.axiom.co'}"
  path "/v1/datasets/DATASET_NAME/ingest"
  ssl_verify "#{ENV['AXIOM_SSL_VERIFY'] || 'true'}"
  <headers>
    Authorization "Bearer API_TOKEN"
    Content-Type "application/json"
  </headers>
  <format>
    @type json
  </format>
  <buffer>
    @type memory
    flush_mode interval
    flush_interval 5s
    chunk_limit_size 5MB
    retry_forever true
  </buffer>
</match>
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
The example below shows a Fluentd configuration that sends data from your virtual machine to Axiom using the `nginx` source type.
```xml
<source>
  @type tail
  @id input_tail
  <parse>
    @type nginx
  </parse>
  path /var/log/nginx/access.log, /var/log/nginx/error.log
  pos_file /var/log/fluentd/nginx.log.pos
  tag nginx.logs
  read_from_head true
</source>

<filter nginx.logs>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    service "nginx"
  </record>
</filter>

<match nginx.logs>
  @type http
  @id out_http_axiom
  @log_level info
  endpoint "#{ENV['AXIOM_URL'] || 'https://api.axiom.co'}"
  path "/v1/datasets/DATASET_NAME/ingest"
  ssl_verify "#{ENV['AXIOM_SSL_VERIFY'] || 'true'}"
  <headers>
    Authorization "Bearer API_TOKEN"
    Content-Type "application/json"
  </headers>
  <format>
    @type json
  </format>
  <buffer>
    @type memory
    flush_mode interval
    flush_interval 5s
    chunk_limit_size 5MB
    retry_forever true
  </buffer>
</match>
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
# Send data from Heroku Log Drains to Axiom
Source: https://axiom.co/docs/send-data/heroku-log-drains
This step-by-step guide will help you forward logs from your apps and deployments to Axiom by sending them via HTTPS.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, and TCP.
## Heroku Log Drains
With Heroku Log Drains, you can forward logs from your apps and deployments to Axiom by sending them via HTTPS.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Installation
Sign up and log in to your account on [Heroku](https://heroku.com/), and download and install the Heroku [CLI](https://devcenter.heroku.com/articles/heroku-cli#download-and-install).
## Configuration
Heroku Log Drains configuration consists of three main components: the Axiom API token, the Axiom dataset name, and the Heroku app name. Add a log drain with the following command:
```bash
heroku drains:add https://axiom:API_TOKEN@AXIOM_DOMAIN/v1/datasets/DATASET_NAME/ingest -a HEROKU_APPLICATION_NAME
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `HEROKU_APPLICATION_NAME` with the name of the app that you created on the Heroku dashboard or the Heroku CLI.
For more information on creating a Heroku app, see the [Heroku documentation](https://devcenter.heroku.com/articles/creating-apps).
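To confirm that the drain was added, list the drains for your app:
```bash
heroku drains -a HEROKU_APPLICATION_NAME
```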
Back in your Axiom dataset, you can see your Heroku logs.
# Send data from Kubernetes Cluster to Axiom
Source: https://axiom.co/docs/send-data/kubernetes
This step-by-step guide helps you ingest logs from your Kubernetes cluster into Axiom using the DaemonSet configuration.
Axiom makes it easy to collect, analyze, and monitor logs from your Kubernetes clusters. Integrate popular tools like Filebeat, Vector, or Fluent Bit with Axiom to send your cluster logs.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Send Kubernetes Cluster logs to Axiom using Filebeat
Ingest logs from your Kubernetes cluster into Axiom using Filebeat.
The following is an example of a DaemonSet configuration to ingest your data logs into Axiom.
### Configuration
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [''] # "" indicates the core API group
resources:
- namespaces
- pods
- nodes
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
data:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
allow_older_versions: true
processors:
- add_cloud_metadata:
output.elasticsearch:
hosts: ['${AXIOM_HOST}/v1/datasets/${AXIOM_DATASET_NAME}/elastic']
api_key: 'axiom:${AXIOM_API_TOKEN}'
setup.ilm.enabled: false
kind: ConfigMap
metadata:
annotations: {}
labels:
k8s-app: filebeat
name: filebeat-config
namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
k8s-app: filebeat
name: filebeat
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
annotations: {}
labels:
k8s-app: filebeat
spec:
containers:
- args:
- -c
- /etc/filebeat.yml
- -e
env:
- name: AXIOM_HOST
value: AXIOM_DOMAIN
- name: AXIOM_DATASET_NAME
value: DATASET_NAME
- name: AXIOM_API_TOKEN
value: API_TOKEN
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: docker.elastic.co/beats/filebeat-oss:8.11.1
imagePullPolicy: IfNotPresent
name: filebeat
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/filebeat.yml
name: config
readOnly: true
subPath: filebeat.yml
- mountPath: /usr/share/filebeat/data
name: data
- mountPath: /var/lib/docker/containers
name: varlibdockercontainers
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: filebeat
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 416
name: filebeat-config
name: config
- hostPath:
path: /var/lib/docker/containers
type: ''
name: varlibdockercontainers
- hostPath:
path: /var/log
type: ''
name: varlog
- hostPath:
path: /var/lib/filebeat-data
type: ''
name: data
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
After editing your values, apply the changes to your cluster using `kubectl apply -f daemonset.yaml`.
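To verify that the DaemonSet is running and shipping logs, check the Filebeat pods and their logs using the `k8s-app: filebeat` label from the manifest above:
```bash
kubectl get pods -n kube-system -l k8s-app=filebeat
kubectl logs -n kube-system -l k8s-app=filebeat --tail=50
```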
## Send Kubernetes Cluster logs to Axiom using Vector
Collect logs from your Kubernetes cluster and send them directly to Axiom using the Vector daemonset.
### Configuration
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: vector
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: vector
rules:
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: vector
subjects:
- kind: ServiceAccount
name: vector
namespace: kube-system
roleRef:
kind: ClusterRole
name: vector
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: vector-config
namespace: kube-system
data:
vector.yml: |-
sources:
kubernetes_logs:
type: kubernetes_logs
self_node_name: ${VECTOR_SELF_NODE_NAME}
sinks:
axiom:
type: axiom
inputs:
- kubernetes_logs
compression: gzip
dataset: ${AXIOM_DATASET_NAME}
token: ${AXIOM_API_TOKEN}
healthcheck:
enabled: true
log_level: debug
logging:
level: debug
log_level: debug
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: vector
namespace: kube-system
spec:
selector:
matchLabels:
name: vector
template:
metadata:
labels:
name: vector
spec:
serviceAccountName: vector
containers:
- name: vector
image: timberio/vector:0.37.0-debian
args:
- --config-dir
- /etc/vector/
env:
- name: AXIOM_HOST
value: AXIOM_DOMAIN
- name: AXIOM_DATASET_NAME
value: DATASET_NAME
- name: AXIOM_API_TOKEN
value: API_TOKEN
- name: VECTOR_SELF_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: config
mountPath: /etc/vector/vector.yml
subPath: vector-config.yml
- name: data-dir
mountPath: /var/lib/vector
- name: var-log
mountPath: /var/log
readOnly: true
- name: var-lib
mountPath: /var/lib
readOnly: true
resources:
limits:
memory: 500Mi
requests:
cpu: 200m
memory: 100Mi
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: config
configMap:
name: vector-config
items:
- key: vector.yml
path: vector-config.yml
- name: data-dir
hostPath:
path: /var/lib/vector
type: DirectoryOrCreate
- name: var-log
hostPath:
path: /var/log
- name: var-lib
hostPath:
path: /var/lib
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
After editing your values, apply the changes to your cluster using `kubectl apply -f daemonset.yaml`.
## Send Kubernetes Cluster logs to Axiom using Fluent Bit
Collect logs from your Kubernetes cluster and send them directly to Axiom using Fluent Bit.
### Configuration
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluent-bit
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluent-bit
rules:
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fluent-bit
subjects:
- kind: ServiceAccount
name: fluent-bit
namespace: kube-system
roleRef:
kind: ClusterRole
name: fluent-bit
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: kube-system
data:
fluent-bit.conf: |-
[SERVICE]
Flush 1
Log_Level debug
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 7MB
Skip_Long_Lines On
Refresh_Interval 10
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[OUTPUT]
Name http
Match *
Host ${AXIOM_HOST}
Port 443
URI /v1/datasets/${AXIOM_DATASET_NAME}/ingest
Header Authorization Bearer ${AXIOM_API_TOKEN}
Format json
Json_date_key time
Json_date_format iso8601
Retry_Limit False
Compress gzip
tls On
tls.verify Off
parsers.conf: |-
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: kube-system
spec:
selector:
matchLabels:
name: fluent-bit
template:
metadata:
labels:
name: fluent-bit
spec:
serviceAccountName: fluent-bit
containers:
- name: fluent-bit
image: fluent/fluent-bit:1.9.9
env:
- name: AXIOM_HOST
value: AXIOM_DOMAIN
- name: AXIOM_DATASET_NAME
value: DATASET_NAME
- name: AXIOM_API_TOKEN
value: API_TOKEN
volumeMounts:
- name: config
mountPath: /fluent-bit/etc/fluent-bit.conf
subPath: fluent-bit.conf
- name: config
mountPath: /fluent-bit/etc/parsers.conf
subPath: parsers.conf
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
name: fluent-bit-config
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
terminationGracePeriodSeconds: 10
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
After editing your values, apply the changes to your cluster using `kubectl apply -f daemonset.yaml`.
# Send data from Logstash to Axiom
Source: https://axiom.co/docs/send-data/logstash
This step-by-step guide helps you collect and parse logs from your Logstash processing pipeline into Axiom.
Logstash is an open-source log aggregation, transformation tool, and server-side data processing pipeline that simultaneously ingests data from many sources. With Logstash, you can collect, parse, send, and store logs for future use on Axiom.
Logstash works as a data pipeline with Axiom: at one end, data flows in from your servers and systems, and at the other end, Axiom takes the data and converts it into useful information.
It can read data from various `input` sources, filter data for the specified configuration, and eventually store it.
Logstash sits between your data and where you want to keep it.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* Visit the [Logstash download page](https://www.elastic.co/downloads/logstash) to install Logstash on your system.
## Configuration
To configure the `logstash.conf` file, define the source, set the rules to format your data, and set Axiom as the destination where the data is sent.
The Logstash configuration works with OpenSearch, so you can use the OpenSearch syntax to define the source and destination.
The Logstash Pipeline has three stages:
* [Input stage](https://www.elastic.co/guide/en/logstash/8.0/pipeline.html#_inputs) generates the events and ingests data of all volumes, sizes, forms, and sources.
* [Filter stage](https://www.elastic.co/guide/en/logstash/8.0/pipeline.html#_filters) modifies the events as you specify in the filter component.
* [Output stage](https://www.elastic.co/guide/en/logstash/8.0/pipeline.html#_outputs) shifts and sends the events to Axiom.
## OpenSearch output
For installation instructions for the plugin, see the [OpenSearch documentation](https://opensearch.org/docs/latest/tools/logstash/index/#install-logstash).
In `logstash.conf`, configure your Logstash pipeline to collect and send logs to Axiom.
The example below shows a Logstash configuration that sends data to Axiom:
```js
input{
exec{
command => "date"
interval => "1"
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Combining filters with conditionals on Logstash events
Logstash provides an extensive array of filters that allow you to enhance, manipulate, and transform your data. These filters can be used to perform tasks such as extracting, removing, and adding new fields and changing the content of fields.
Some valuable filters include the following.
## Grok filter plugin
The Grok filter plugin allows you to parse the unstructured log data into something structured and queryable, and eventually send the structured logs to Axiom. It matches the unstructured data to patterns and maps the data to specified fields.
Here’s an example of how to use the Grok plugin:
```js
input{
exec{
command => "axiom"
interval => "1"
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
mutate {
add_field => { "foo" => "Hello Axiom, from Logstash" }
remove_field => [ "axiom", "logging" ]
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
This configuration parses Apache log data by matching the pattern of `COMBINEDAPACHELOG`.
## Mutate filter plugin
The Mutate filter plugin allows you to perform general transformations on fields. For example, rename, convert, strip, and modify fields in event data.
Here’s an example of using the Mutate plugin:
```js
input{
exec{
command => "axiom"
interval => "1"
}
}
filter {
mutate {
rename => { "hostname" => "host" }
convert => { "response" => "integer" }
uppercase => [ "method" ]
remove_field => [ "request", "httpversion" ]
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
This configuration renames the field `hostname` to `host`, converts the `response` field value to an integer, changes the `method` field to uppercase, and removes the `request` and `httpversion` fields.
## Drop filter plugin
The Drop filter plugin allows you to drop certain events based on specified conditions. This helps you to filter out unnecessary data.
Here’s an example of using the Drop plugin:
```js
input {
syslog {
port => 5140
type => syslog
}
}
filter {
if [type] == "syslog" and [severity] == "debug" {
drop { }
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
This configuration drops all events of type `syslog` with severity `debug`.
## Clone filter plugin
The Clone filter plugin creates a copy of an event and stores it in a new event. The event continues along the pipeline until it ends or is dropped.
Here’s an example of using the Clone plugin:
```js
input {
syslog {
port => 5140
type => syslog
}
}
filter {
clone {
clones => ["cloned_event"]
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
This configuration creates a new event named `cloned_event` that is a clone of the original event.
## GeoIP filter plugin
The GeoIP filter plugin adds information about the geographical location of IP addresses. This data includes the latitude, longitude, continent, country, and so on.
Here’s an example of using the GeoIP plugin:
```js
input{
exec{
command => "axiom"
interval => "6"
}
}
filter {
geoip {
source => "ip"
}
}
output{
opensearch{
hosts => ["https://AXIOM_DOMAIN:443/v1/datasets/DATASET_NAME/elastic"]
user => "axiom"
password => "API_TOKEN"
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
This configuration adds geographical location data for the IP address in the `ip` field. Note that you may need to specify the path to the GeoIP database file in the plugin configuration, depending on your setup.
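For example, here's a sketch of the same filter pointing at a local GeoLite2 City database using the plugin's `database` option; the file path is illustrative.
```js
filter {
  geoip {
    source => "ip"
    # Illustrative path; point this at your GeoIP database file
    database => "/etc/logstash/GeoLite2-City.mmdb"
  }
}
```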
# Send data from Loki Multiplexer to Axiom
Source: https://axiom.co/docs/send-data/loki-multiplexer
This step-by-step guide provides a gateway for you to connect a direct link interface to Axiom via Loki endpoint.
Loki is a multi-tenant log aggregation system inspired by Prometheus. It's highly scalable and capable of indexing metadata about your logs.
Loki exposes an HTTP API for pushing, querying, and tailing log data.
Axiom Loki Proxy provides a gateway for you to connect a direct link interface to Axiom via Loki endpoint.
Using the Axiom Loki Proxy, you can ship logs to Axiom via the [Loki HTTP API](https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Installation
### Install and update using Homebrew
```bash
brew tap axiomhq/tap
brew install axiom-loki-proxy
brew update
brew upgrade axiom-loki-proxy
```
### Install using `go get`
```bash
go get -u github.com/axiomhq/axiom-loki-proxy/cmd/axiom-loki-proxy
```
### Install from source
```bash
git clone https://github.com/axiomhq/axiom-loki-proxy.git
cd axiom-loki-proxy
make build
```
### Install using Docker
```bash
docker pull axiomhq/axiom-loki-proxy:latest
```
## Configuration
Specify the environment variables for your Axiom deployment:
```bash
export AXIOM_URL="https://AXIOM_DOMAIN"
export AXIOM_TOKEN="API_TOKEN"
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
## Run and test
```bash
./axiom-loki-proxy
```
### Using Docker
```bash
docker run -p 8080:8080/tcp \
  -e AXIOM_TOKEN=API_TOKEN \
  axiomhq/axiom-loki-proxy
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
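To test the setup, you can push a sample log line to the proxy's Loki-compatible push endpoint with cURL. This sketch assumes the proxy is listening on `localhost:8080`, as in the examples above, and uses the standard Loki push API payload; the first value in each entry is a Unix timestamp in nanoseconds.
```bash
curl -X POST 'http://localhost:8080/loki/api/v1/push' \
  -H 'Content-Type: application/json' \
  -d '{"streams":[{"stream":{"app":"test"},"values":[["1700000000000000000","Hello from axiom-loki-proxy"]]}]}'
```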
For more information on Axiom Loki Proxy and how you can propose bug fixes, report issues, and submit PRs, see the [GitHub repository](https://github.com/axiomhq/axiom-loki-proxy).
# Methods for sending data
Source: https://axiom.co/docs/send-data/methods
Explore the various methods for sending data to Axiom, from direct API calls and OpenTelemetry to platform integrations and logging libraries.
The easiest way to send your first event data to Axiom is with a direct HTTP request using a tool like `cURL`.
```sh
curl -X 'POST' \
'https://api.axiom.co/v1/datasets/DATASET_NAME/ingest' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/x-ndjson' \
-d '{ "http": { "request": { "method": "GET", "duration_ms": 231 }, "response": { "body": { "size": 3012 } } }, "url": { "path": "/download" } }'
```
When you’re ready to send events continuously, Axiom supports a wide range of standard tools, libraries, and platform integrations.
## Popular methods
| Method | Description |
| :---------------------------------------- | :----------------------------------------------- |
| [Rest API](/restapi/introduction) | Direct HTTP API for sending logs and events |
| [OpenTelemetry](/send-data/opentelemetry) | Industry-standard observability framework |
| [Vector](/send-data/vector) | High-performance observability data pipeline |
| [Cribl](/send-data/cribl) | Route and transform data with Cribl Stream |
| [Fluent Bit](/send-data/fluent-bit) | Fast and lightweight log processor |
| [Fluentd](/send-data/fluentd) | Open source data collector with plugin ecosystem |
| [JavaScript](/guides/javascript) | Browser and Node.js logging |
## Other methods
| Method | Description |
| :------------------------------------------------------------- | :-------------------------------------------- |
| [.NET](/guides/send-logs-from-dotnet) | Send logs from .NET applications |
| [Apache Log4j](/guides/send-logs-from-apache-log4j) | Java logging with Log4j integration |
| [Apex](/guides/apex) | Structured logging for Go |
| [Cloudflare Workers](/guides/opentelemetry-cloudflare-workers) | Edge computing with Workers and OpenTelemetry |
| [Convex](/send-data/convex) | Stream data from Convex applications |
| [Elastic Beats](/send-data/elastic-beats) | Lightweight data shippers from Elastic |
| [Elasticsearch Bulk API](/send-data/elasticsearch-bulk-api) | Compatible endpoint for Elasticsearch clients |
| [Go](/guides/go) | Native Go logging integration |
| [Heroku Log Drains](/send-data/heroku-log-drains) | Stream logs directly from Heroku apps |
| [Honeycomb](/endpoints/honeycomb) | Compatible endpoint for Honeycomb clients |
| [Kubernetes](/send-data/kubernetes) | Collect logs and metrics from K8s clusters |
| [Laravel](/guides/send-logs-from-laravel) | PHP Laravel framework integration |
| [Logrus](/guides/logrus) | Structured logging for Go with Logrus |
| [Logstash](/send-data/logstash) | Server-side data processing pipeline |
| [Loki](/endpoints/loki) | Compatible endpoint for Grafana Loki clients |
| [Loki Multiplexer](/send-data/loki-multiplexer) | Forward Loki data to multiple destinations |
| [Next.js](/send-data/nextjs) | Full-stack React framework logging |
| [Pino](/guides/pino) | Fast Node.js logger integration |
| [Python](/guides/python) | Python logging with standard library |
| [React](/send-data/react) | Client-side React app logging |
| [Render](/send-data/render) | Stream logs from Render.com services |
| [Ruby on Rails](/guides/send-logs-from-ruby-on-rails) | Rails app logging |
| [Rust](/guides/rust) | High-performance Rust logging |
| [Secure Syslog](/send-data/secure-syslog) | TLS-encrypted syslog forwarding |
| [Serverless](/send-data/serverless) | Best practices for serverless environments |
| [Splunk](/endpoints/splunk) | Compatible endpoint for Splunk forwarders |
| [Syslog Proxy](/send-data/syslog-proxy) | Forward syslog data with transformation |
| [Tremor](/send-data/tremor) | Event processing system for complex workflows |
| [Winston](/guides/winston) | Popular Node.js logging library |
| [Zap](/guides/zap) | Uber's fast, structured Go logger |
## Amazon Web Services (AWS)
Axiom offers deep integration with the AWS ecosystem.
| Method | Description |
| :------------------------------------------------------ | :------------------------------------ |
| [Amazon CloudFront](/send-data/cloudfront) | CDN access logs and real-time logs |
| [Amazon CloudWatch](/send-data/cloudwatch) | Stream logs from CloudWatch Logs |
| [Amazon Kinesis Data Firehose](/send-data/aws-firehose) | Real-time streaming with Firehose |
| [Amazon S3](/send-data/aws-s3) | Process logs stored in S3 buckets |
| [AWS FireLens](/send-data/aws-firelens) | Container log routing for ECS/Fargate |
| [AWS IoT Rules](/send-data/aws-iot-rules) | Route IoT device data and telemetry |
| [AWS Lambda](/send-data/aws-lambda) | Serverless function logs and traces |
| [AWS Lambda .NET](/send-data/aws-lambda-dot) | .NET-specific Lambda integration |
## Example configurations
The following examples show how to send data using OpenTelemetry from various languages and frameworks.
| Application | Description |
| :---------------------------------------------------- | :------------------------------- |
| [OpenTelemetry .NET](/guides/opentelemetry-dotnet) | Complete .NET app example |
| [OpenTelemetry Django](/guides/opentelemetry-django) | Python Django with OpenTelemetry |
| [OpenTelemetry Go](/guides/opentelemetry-go) | Go app with full observability |
| [OpenTelemetry Java](/guides/opentelemetry-java) | Java Spring Boot example |
| [OpenTelemetry Next.js](/guides/opentelemetry-nextjs) | Full-stack Next.js with tracing |
| [OpenTelemetry Node.js](/guides/opentelemetry-nodejs) | Node.js Express example |
| [OpenTelemetry Python](/guides/opentelemetry-python) | Python Flask/FastAPI example |
| [OpenTelemetry Ruby](/guides/opentelemetry-ruby) | Ruby on Rails with OpenTelemetry |
If you don't see a method you're looking for, please [contact](https://www.axiom.co/contact) the Axiom team for support.
# Send data from Next.js app to Axiom
Source: https://axiom.co/docs/send-data/nextjs
This page explains how to send data from your Next.js app to Axiom.
Next.js is a popular open-source JavaScript framework built on top of React, developed by Vercel. It’s used by a wide range of companies and organizations, from startups to large enterprises, due to its performance benefits and developer-friendly features.
To send data from your Next.js app to Axiom, choose one of the following options:
* [Axiom Vercel app](/apps/vercel)
* [next-axiom library](#use-next-axiom-library)
* [@axiomhq/nextjs library](#use-axiomhq-nextjs-library)
The choice between these options depends on your individual requirements:
* These options can collect different event types.
| Event type | Axiom Vercel app | next-axiom library | @axiomhq/nextjs library |
| ---------------- | ---------------- | ------------------ | ----------------------- |
| Application logs | Yes | Yes | Yes |
| Web Vitals | No | Yes | Yes |
| HTTP logs | Yes | Soon | Yes |
| Build logs | Yes | No | No |
| Tracing | Yes | No | Yes |
* If you already use Vercel for deployments, the Axiom Vercel app can be easier to integrate into your existing experience.
* The cost of these options can differ widely depending on the volume of data you transfer. The Axiom Vercel app depends on Vercel Log Drains, a feature that’s only available on paid plans. For more information, see [the blog post on the changes to Vercel Log Drains](https://axiom.co/blog/changes-to-vercel-log-drains).
For information on the Axiom Vercel app and migrating from the Vercel app to the next-axiom library, see [Axiom Vercel app](/apps/vercel).
The rest of this page explains how to send data from your Next.js app to Axiom using the next-axiom or the @axiomhq/nextjs library.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [A new or existing Next.js app](https://nextjs.org/).
## Use next-axiom library
The next-axiom library is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/next-axiom).
### Install next-axiom
1. In your terminal, go to the root folder of your Next.js app and run the following command:
```sh
npm install --save next-axiom
```
2. Add the following environment variables to your Next.js app (for an example env file, see the sketch after step 3):
* `NEXT_PUBLIC_AXIOM_DATASET` is the name of the Axiom dataset where you want to send data.
* `NEXT_PUBLIC_AXIOM_TOKEN` is the Axiom API token you have generated.
3. In the `next.config.ts` file, wrap your Next.js configuration in `withAxiom`:
```js
const { withAxiom } = require("next-axiom");
module.exports = withAxiom({
// Your existing configuration.
});
```
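For reference, the environment variables from step 2 typically live in a local env file. A minimal sketch using the placeholder values from these docs:
```sh
# .env.local
NEXT_PUBLIC_AXIOM_DATASET=DATASET_NAME
NEXT_PUBLIC_AXIOM_TOKEN=API_TOKEN
```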
### Capture traffic requests
To capture traffic requests, create a `middleware.ts` file in the root folder of your Next.js app:
```ts [expandable]
import { Logger } from 'next-axiom'
import { NextResponse } from 'next/server'
import type { NextFetchEvent, NextRequest } from 'next/server'
export async function middleware(request: NextRequest, event: NextFetchEvent) {
  const logger = new Logger({ source: 'middleware' }); // traffic, request
  logger.middleware(request)
  event.waitUntil(logger.flush())
  return NextResponse.next()
}
// For more information, see Matching Paths below
export const config = {
}
```
### Web Vitals
To send Web Vitals to Axiom, add the `AxiomWebVitals` component from next-axiom to the `app/layout.tsx` file:
```ts [expandable]
import { AxiomWebVitals } from "next-axiom";
export default function RootLayout() {
  return (
    <html>
      ...
      <AxiomWebVitals />
      ...
    </html>
  );
}
```
Web Vitals are only sent from production deployments.
### Logs
Send logs to Axiom from different parts of your app. Each log function call takes a message and an optional `fields` object.
```ts [expandable]
log.debug("Login attempt", { user: "j_doe", status: "success" }); // Results in {"message": "Login attempt", "fields": {"user": "j_doe", "status": "success"}}
log.info("Payment completed", { userID: "123", amount: "25USD" });
log.warn("API rate limit exceeded", {
endpoint: "/users/1",
rateLimitRemaining: 0,
});
log.error("System Error", { code: "500", message: "Internal server error" });
```
#### Route handlers
Wrap your route handlers in `withAxiom` to add a logger to your request and log exceptions automatically:
```ts [expandable]
import { NextResponse } from "next/server";
import { withAxiom, AxiomRequest } from "next-axiom";
export const GET = withAxiom((req: AxiomRequest) => {
req.log.info("Login function called");
// You can create intermediate loggers
const log = req.log.with({ scope: "user" });
log.info("User logged in", { userId: 42 });
return NextResponse.json({ hello: "world" });
});
```
#### Client components
To send logs from client components, add `useLogger` from next-axiom to your component:
```ts [expandable]
"use client";
import { useLogger } from "next-axiom";
export default function ClientComponent() {
  const log = useLogger();
  log.debug("User logged in", { userId: 42 });
  return <div>Logged in</div>;
}
```
#### Server components
To send logs from server components, add `Logger` from next-axiom to your component, and call flush before returning:
```ts [expandable]
import { Logger } from "next-axiom";
export default async function ServerComponent() {
  const log = new Logger();
  log.info("User logged in", { userId: 42 });
  // ...
  await log.flush();
  return <div>Logged in</div>;
}
```
#### Log levels
The log level defines the lowest level of logs sent to Axiom. Choose one of the following levels (from lowest to highest):
* `debug` is the default setting. It means that you send all logs to Axiom.
* `info`
* `warn`
* `error` means that you only send the highest-level logs to Axiom.
* `off` means that you don’t send any logs to Axiom.
For example, to send all logs except for debug logs to Axiom:
```sh
export NEXT_PUBLIC_AXIOM_LOG_LEVEL=info
```
### Capture errors
To capture routing errors, use the [error handling mechanism of Next.js](https://nextjs.org/docs/app/building-your-application/routing/error-handling):
1. Go to the `app` folder.
2. Create an `error.tsx` file.
3. Inside your component function, add `useLogger` from next-axiom to send the error to Axiom. For example:
```ts [expandable]
"use client";
import NavTable from "@/components/NavTable";
import { LogLevel } from "@/next-axiom/logger";
import { useLogger } from "next-axiom";
import { usePathname } from "next/navigation";
export default function ErrorPage({
error,
}: {
error: Error & { digest?: string };
}) {
const pathname = usePathname();
const log = useLogger({ source: "error.tsx" });
let status = error.message == "Invalid URL" ? 404 : 500;
log.logHttpRequest(
LogLevel.error,
error.message,
{
host: window.location.href,
path: pathname,
statusCode: status,
},
{
error: error.name,
cause: error.cause,
stack: error.stack,
digest: error.digest,
}
);
  return (
    <div>
      Oops! An Error has occurred:{" "}
      <p>`{error.message}`</p>
      <NavTable />
    </div>
  );
}
```
### Extend logger
To extend the logger, use `log.with` to create an intermediate logger. For example:
```ts [expandable]
const logger = useLogger().with({ userId: 42 });
logger.info("Hi"); // will ingest { ..., "message": "Hi", "fields" { "userId": 42 }}
```
## Use @axiomhq/nextjs library
The @axiomhq/nextjs library is part of the Axiom JavaScript SDK, an open-source project that welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-js).
### Install @axiomhq/nextjs
1. In your terminal, go to the root folder of your Next.js app and run the following command:
```sh
npm install --save @axiomhq/js @axiomhq/logging @axiomhq/nextjs @axiomhq/react
```
2. Create the folder `lib/axiom` to store configurations for Axiom.
3. Create an `axiom.ts` file in the `lib/axiom` folder with the following content:
```ts lib/axiom/axiom.ts [expandable]
import { Axiom } from '@axiomhq/js';
const axiomClient = new Axiom({
token: process.env.NEXT_PUBLIC_AXIOM_TOKEN!,
});
export default axiomClient;
```
4. In the `lib/axiom` folder, create a `server.ts` file with the following content:
```ts lib/axiom/server.ts [expandable]
import axiomClient from '@/lib/axiom/axiom';
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import { createAxiomRouteHandler, nextJsFormatters } from '@axiomhq/nextjs';
export const logger = new Logger({
transports: [
new AxiomJSTransport({ axiom: axiomClient, dataset: process.env.NEXT_PUBLIC_AXIOM_DATASET! }),
],
formatters: nextJsFormatters,
});
export const withAxiom = createAxiomRouteHandler(logger);
```
The `createAxiomRouteHandler` is a builder function that returns a wrapper for your route handlers. The wrapper handles successful responses and errors thrown within the route handler. For more information on the logger, see [the @axiomhq/logging library](/guides/javascript#use-axiomhqlogging).
5. In the `lib/axiom` folder, create a `client.ts` file with the following content:
Ensure the API token you use on the client side has the appropriate permissions. Axiom recommends you create a client-side token with the only permission to ingest data into a specific dataset.
If you don’t want to expose the token to the client, use the [proxy transport](#proxy-for-client-side-usage) to send logs to Axiom.
```ts lib/axiom/client.ts [expandable]
'use client';
import axiomClient from '@/lib/axiom/axiom';
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import { createUseLogger, createWebVitalsComponent } from '@axiomhq/react';
import { nextJsFormatters } from '@axiomhq/nextjs/client';
export const logger = new Logger({
transports: [
new AxiomJSTransport({ axiom: axiomClient, dataset: process.env.NEXT_PUBLIC_AXIOM_DATASET! }),
],
formatters: nextJsFormatters,
});
const useLogger = createUseLogger(logger);
const WebVitals = createWebVitalsComponent(logger);
export { useLogger, WebVitals };
```
For more information on React client side helpers, see [React](/send-data/react).
### Capture traffic requests
To capture traffic requests, create a `middleware.ts` file in the root folder of your Next.js app with the following content:
```ts middleware.ts [expandable]
import { logger } from "@/lib/axiom/server";
import { transformMiddlewareRequest } from "@axiomhq/nextjs";
import { NextResponse } from "next/server";
import type { NextFetchEvent, NextRequest } from "next/server";
export async function middleware(request: NextRequest, event: NextFetchEvent) {
logger.info(...transformMiddlewareRequest(request));
event.waitUntil(logger.flush());
return NextResponse.next();
}
export const config = {
matcher: [
/*
* Match all request paths except for the ones starting with:
* - api (API routes)
* - _next/static (static files)
* - _next/image (image optimization files)
* - favicon.ico, sitemap.xml, robots.txt (metadata files)
*/
"/((?!api|_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)",
],
};
```
### Web Vitals
To capture Web Vitals, add the `WebVitals` component to the `app/layout.tsx` file:
```tsx /app/layout.tsx [expandable]
import { WebVitals } from "@/lib/axiom/client";
export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html>
      <body>
        <WebVitals />
        {children}
      </body>
    </html>
  );
}
```
### Logs
Send logs to Axiom from different parts of your app. Each log function call takes a message and an optional `fields` object.
```ts [expandable]
import { logger } from "@/lib/axiom/server";

logger.debug("Login attempt", { user: "j_doe", status: "success" }); // Results in {"message": "Login attempt", "fields": {"user": "j_doe", "status": "success"}}
logger.info("Payment completed", { userID: "123", amount: "25USD" });
logger.warn("API rate limit exceeded", {
  endpoint: "/users/1",
  rateLimitRemaining: 0,
});
logger.error("System Error", { code: "500", message: "Internal server error" });
```
#### Route handlers
You can use the `withAxiom` function exported from the setup file in `lib/axiom/server.ts` to wrap your route handlers.
```ts
import { logger } from "@/lib/axiom/server";
import { withAxiom } from "@/lib/axiom/server";
export const GET = withAxiom(async () => {
return new Response("Hello World!");
});
```
For more information on customizing the data sent to Axiom, see [Advanced route handlers](#advanced-route-handlers).
#### Client components
To send logs from client components, add `useLogger` to your component:
```tsx [expandable]
"use client";
import { useLogger } from "@/lib/axiom/client";
export default function ClientComponent() {
const log = useLogger();
log.debug("User logged in", { userId: 42 });
const handleClick = () => log.info("User logged out");
  return (
    <button onClick={handleClick}>Logged in</button>
  );
}
```
#### Server components
To send logs from server components, use the following:
```tsx [expandable]
import { logger } from "@/lib/axiom/server";
import { after } from "next/server";
export default async function ServerComponent() {
log.info("User logged in", { userId: 42 });
after(() => {
logger.flush();
});
  return <p>Logged in</p>;
}
```
### Capture errors
#### Capture errors on Next 15 or later
To capture errors on Next 15 or later, use the `onRequestError` option. Create an `instrumentation.ts` file in the `src` or root folder of your Next.js app (depending on your configuration) with the following content:
```ts instrumentation.ts [expandable]
import { logger } from "@/lib/axiom/server";
import { createOnRequestError } from "@axiomhq/nextjs";
export const onRequestError = createOnRequestError(logger);
```
Alternatively, customize the error logging by creating a custom `onRequestError` function:
```ts [expandable]
import { logger } from "@/lib/axiom/server";
import { transformOnRequestError } from "@axiomhq/nextjs";
import { Instrumentation } from "next";
export const onRequestError: Instrumentation.onRequestError = async (
error,
request,
ctx
) => {
logger.error(...transformOnRequestError(error, request, ctx));
await logger.flush();
};
```
#### Capture errors on Next 14 or earlier
To capture routing errors on Next 14 or earlier, use the [error handling mechanism of Next.js](https://nextjs.org/docs/app/building-your-application/routing/error-handling):
1. Create an `error.tsx` file in the `app` folder.
2. Inside your component function, add `useLogger` to send the error to Axiom. For example:
```tsx [expandable]
"use client";
import NavTable from "@/components/NavTable";
import { LogLevel } from "@axiomhq/logging";
import { useLogger } from "@/lib/axiom/client";
import { usePathname } from "next/navigation";
export default function ErrorPage({
error,
}: {
error: Error & { digest?: string };
}) {
const pathname = usePathname();
const log = useLogger({ source: "error.tsx" });
  const status = error.message === "Invalid URL" ? 404 : 500;
log.log(LogLevel.error, error.message, {
error: error.name,
cause: error.cause,
stack: error.stack,
digest: error.digest,
request: {
host: window.location.href,
path: pathname,
statusCode: status,
},
});
  return (
    <div>
      <p>
        Oops! An error has occurred:{" "}
        <code>{error.message}</code>
      </p>
      <NavTable />
    </div>
  );
}
```
### Advanced customizations
This section describes some advanced customizations.
#### Proxy for client-side usage
Instead of sending logs directly to Axiom, you can send them to a proxy endpoint in your Next.js app. This is useful if you don’t want to expose the Axiom API token to the client or if you want to send the logs from the client to transports on your server.
1. Create a `client.ts` file in the `lib/axiom` folder with the following content:
```ts lib/axiom/client.ts [expandable]
'use client';
import { Logger, ProxyTransport } from '@axiomhq/logging';
import { createUseLogger, createWebVitalsComponent } from '@axiomhq/react';
export const logger = new Logger({
transports: [
new ProxyTransport({ url: '/api/axiom', autoFlush: true }),
],
});
const useLogger = createUseLogger(logger);
const WebVitals = createWebVitalsComponent(logger);
export { useLogger, WebVitals };
```
2. In the `/app/api/axiom` folder, create a `route.ts` file with the following content. This example uses `/api/axiom` as the Axiom proxy path.
```ts /app/api/axiom/route.ts
import { logger } from "@/lib/axiom/server";
import { createProxyRouteHandler } from "@axiomhq/nextjs";
export const POST = createProxyRouteHandler(logger);
```
For more information on React client side helpers, see [React](/send-data/react).
#### Customize data reports sent to Axiom
To customize the reports sent to Axiom, use the `onError` and `onSuccess` functions that the `createAxiomRouteHandler` function accepts in the configuration object.
In the `lib/axiom/server.ts` file, use the `transformRouteHandlerErrorResult` and `transformRouteHandlerSuccessResult` functions to customize the data sent to Axiom by adding fields to the report object:
```ts [expandable]
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import {
createAxiomRouteHandler,
getLogLevelFromStatusCode,
nextJsFormatters,
transformRouteHandlerErrorResult,
transformRouteHandlerSuccessResult
} from '@axiomhq/nextjs';
/* ... your logger setup ... */
export const withAxiom = createAxiomRouteHandler(logger, {
onError: (error) => {
if (error.error instanceof Error) {
logger.error(error.error.message, error.error);
}
const [message, report] = transformRouteHandlerErrorResult(error);
report.customField = "customValue";
report.request.searchParams = error.req.nextUrl.searchParams;
logger.log(getLogLevelFromStatusCode(report.statusCode), message, report);
logger.flush();
},
onSuccess: (data) => {
const [message, report] = transformRouteHandlerSuccessResult(data);
report.customField = "customValue";
report.request.searchParams = data.req.nextUrl.searchParams;
logger.info(message, report);
logger.flush();
},
});
```
Changing the `transformSuccessResult()` or `transformErrorResult()` functions can change the shape of your data. This can affect dashboards (especially auto-generated dashboards) and other integrations.
Axiom recommends you add fields on top of the ones returned by the default `transformSuccessResult()` or `transformErrorResult()` functions, without replacing the default fields.
Alternatively, create your own `transformSuccessResult()` or `transformErrorResult()` functions:
```ts [expandable]
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import {
  createAxiomRouteHandler,
  getLogLevelFromStatusCode,
  nextJsFormatters,
} from '@axiomhq/nextjs';
/* ... your logger setup ... */
export const transformSuccessResult = (
  data: SuccessData
): [message: string, report: Record<string, any>] => {
const report = {
request: {
type: "request",
method: data.req.method,
url: data.req.url,
statusCode: data.res.status,
durationMs: data.end - data.start,
path: new URL(data.req.url).pathname,
endTime: data.end,
startTime: data.start,
},
};
return [
`${data.req.method} ${report.request.path} ${
report.request.statusCode
} in ${report.request.endTime - report.request.startTime}ms`,
report,
];
};
export const transformErrorResult = (data: ErrorData): [message: string, report: Record<string, any>] => {
  // getNextErrorStatusCode is your own helper. See the section below on changing the log level for an example implementation.
  const statusCode = data.error instanceof Error ? getNextErrorStatusCode(data.error) : 500;
const report = {
request: {
startTime: new Date().getTime(),
endTime: new Date().getTime(),
path: data.req.nextUrl.pathname ?? new URL(data.req.url).pathname,
method: data.req.method,
host: data.req.headers.get('host'),
userAgent: data.req.headers.get('user-agent'),
scheme: data.req.url.split('://')[0],
ip: data.req.headers.get('x-forwarded-for'),
      region: getRegion(data.req), // getRegion is your own helper for resolving the deployment region.
statusCode: statusCode,
},
};
return [
`${data.req.method} ${report.request.path} ${report.request.statusCode} in ${report.request.endTime - report.request.startTime}ms`,
report,
];
};
export const withAxiom = createAxiomRouteHandler(logger, {
onError: (error) => {
if (error.error instanceof Error) {
logger.error(error.error.message, error.error);
}
    const [message, report] = transformErrorResult(error);
report.customField = "customValue";
report.request.searchParams = error.req.nextUrl.searchParams;
    logger.log(getLogLevelFromStatusCode(report.request.statusCode), message, report);
logger.flush();
},
onSuccess: (data) => {
    const [message, report] = transformSuccessResult(data);
report.customField = "customValue";
report.request.searchParams = data.req.nextUrl.searchParams;
logger.info(message, report);
logger.flush();
},
});
```
#### Change the log level for Next.js built-in function errors
By default, Axiom uses the following log levels:
* Errors thrown by the `redirect()` function are logged as `info`.
* Errors thrown by the `forbidden()`, `notFound()`, and `unauthorized()` functions are logged as `warn`.
To customize this behavior, provide your own `getLogLevelFromStatusCode()` function and use it when logging errors from your route handler:
```ts [expandable]
import { Logger, AxiomJSTransport, LogLevel } from '@axiomhq/logging';
import {
createAxiomRouteHandler,
nextJsFormatters,
transformRouteHandlerErrorResult,
} from '@axiomhq/nextjs';
/* ... your logger setup ... */
const getLogLevelFromStatusCode = (statusCode: number) => {
if (statusCode >= 300 && statusCode < 400) {
return LogLevel.info;
} else if (statusCode >= 400 && statusCode < 500) {
return LogLevel.warn;
}
return LogLevel.error;
};
export const withAxiom = createAxiomRouteHandler(logger, {
onError: (error) => {
if (error.error instanceof Error) {
logger.error(error.error.message, error.error);
}
const [message, report] = transformRouteHandlerErrorResult(error);
report.customField = 'customValue';
report.request.searchParams = error.req.nextUrl.searchParams;
logger.log(getLogLevelFromStatusCode(report.statusCode), message, report);
logger.flush();
}
});
```
Internally, the status code gets captured in the `transformErrorResult()` function using a `getNextErrorStatusCode()` function. To compose these functions yourself, create your own `getNextErrorStatusCode()` function and inject the result into the `transformErrorResult()` report.
```ts [expandable]
import { Logger, AxiomJSTransport, LogLevel } from '@axiomhq/logging';
import {
createAxiomRouteHandler,
nextJsFormatters,
transformRouteHandlerErrorResult,
} from '@axiomhq/nextjs';
import { isRedirectError } from 'next/dist/client/components/redirect-error';
import { isHTTPAccessFallbackError } from 'next/dist/client/components/http-access-fallback/http-access-fallback';
import axiomClient from '@/lib/axiom/axiom';
export const logger = new Logger({
transports: [
new AxiomJSTransport({ axiom: axiomClient, dataset: process.env.NEXT_PUBLIC_AXIOM_DATASET! }),
],
formatters: nextJsFormatters,
});
export const getNextErrorStatusCode = (error: Error & { digest?: string }) => {
if (!error.digest) {
return 500;
}
if (isRedirectError(error)) {
return parseInt(error.digest.split(';')[3]);
} else if (isHTTPAccessFallbackError(error)) {
return parseInt(error.digest.split(';')[1]);
  }
  // Fall back to a generic server error for digests that don't match a known Next.js error.
  return 500;
};
const getLogLevelFromStatusCode = (statusCode: number) => {
if (statusCode >= 300 && statusCode < 400) {
return LogLevel.info;
} else if (statusCode >= 400 && statusCode < 500) {
return LogLevel.warn;
}
return LogLevel.error;
};
export const withAxiom = createAxiomRouteHandler(logger, {
onError: (error) => {
if (error.error instanceof Error) {
logger.error(error.error.message, error.error);
}
const [message, report] = transformRouteHandlerErrorResult(error);
const statusCode = error.error instanceof Error ? getNextErrorStatusCode(error.error) : 500;
report.request.statusCode = statusCode;
report.customField = 'customValue';
report.request.searchParams = error.req.nextUrl.searchParams;
    logger.log(getLogLevelFromStatusCode(statusCode), message, report);
logger.flush();
},
});
```
### Server execution context
The `serverContextFieldsFormatter` function included in the `nextJsFormatters` adds the server execution context to the logs. This is useful for knowing the scope in which the logs were generated.
By default, the `createAxiomRouteHandler` function adds a `request_id` field to the logs using this server context and the server context fields formatter.
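For example, in a route handler wrapped with `withAxiom` from the setup above, every log written during the same request carries the same `request_id`. The following is a minimal sketch; the handler body is illustrative.
```ts
import { logger, withAxiom } from '@/lib/axiom/server';

export const GET = withAxiom(async () => {
  // Both log entries below share the same `request_id` in their fields,
  // because they run within the same route handler execution context.
  logger.info('Fetching profile');
  logger.info('Profile fetched');
  return new Response('OK');
});
```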
#### Route handlers server context
The `createAxiomRouteHandler` accepts a `store` field in the configuration object. The store can be a map, an object, or a function that accepts the request and context and returns a map or an object.
The fields in the store are added to the `fields` object of the log report. For example, you can use this to add a `trace_id` field to every log report within the same function execution in the route handler.
```ts [expandable]
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import { createAxiomRouteHandler, nextJsFormatters } from '@axiomhq/nextjs';
import { NextRequest } from 'next/server';
import axiomClient from '@/lib/axiom/axiom';
export const logger = new Logger({
transports: [
new AxiomJSTransport({ axiom: axiomClient, dataset: process.env.NEXT_PUBLIC_AXIOM_DATASET! }),
],
formatters: nextJsFormatters,
});
export const withAxiom = createAxiomRouteHandler(logger, {
store: (req: NextRequest) => {
return {
request_id: crypto.randomUUID(),
trace_id: req.headers.get('x-trace-id'),
};
},
});
```
#### Server context on arbitrary functions
You can also add the server context to any function that runs in the server. For example, server actions, middleware, and server components.
```ts [expandable]
"use server";
import { runWithServerContext } from "@axiomhq/nextjs";
export const serverAction = () =>
runWithServerContext({ request_id: crypto.randomUUID() }, () => {
return "Hello World";
});
```
```ts middleware.ts [expandable]
import { logger } from '@/lib/axiom/server';
import { runWithServerContext, transformMiddlewareRequest } from '@axiomhq/nextjs';
import { NextResponse } from 'next/server';
import type { NextFetchEvent, NextRequest } from 'next/server';
export const middleware = (req: NextRequest, event: NextFetchEvent) =>
  runWithServerContext({ trace_id: req.headers.get('x-trace-id') }, () => {
    // trace_id will be added to the log fields
    logger.info(...transformMiddlewareRequest(req));
    // trace_id will also be added to the log fields
    logger.info('Hello from middleware');
    event.waitUntil(logger.flush());
    return NextResponse.next();
  });
```
# Send OpenTelemetry data to Axiom
Source: https://axiom.co/docs/send-data/opentelemetry
Learn how OpenTelemetry-compatible events flow into Axiom and explore Axiom's comprehensive observability through browsing, querying, dashboards, and alerting of OpenTelemetry data.
OpenTelemetry (OTel) is a set of APIs, libraries, and agents to capture distributed traces and metrics from your app. It’s a Cloud Native Computing Foundation (CNCF) project that was started to create a unified solution for service and app performance monitoring.
The OpenTelemetry project has published strong specifications for the three main pillars of observability: logs, traces, and metrics. These schemas are supported by all tools and services that support interacting with OpenTelemetry. Axiom supports OpenTelemetry natively on an API level, allowing you to connect any existing OpenTelemetry shipper, library, or tool to Axiom for sending data.
OpenTelemetry-compatible events flow into Axiom, where they’re organized into datasets for easy segmentation. Users can create a dataset to receive OpenTelemetry data and obtain an API token for ingestion. Axiom provides comprehensive observability through browsing, querying, dashboards, and alerting of OpenTelemetry data.
Support for OTel traces and OTel logs is already live. Axiom will soon support OpenTelemetry Metrics (OTel Metrics).
| OpenTelemetry component | Currently supported |
| ------------------------------------------------------------------ | ------------------- |
| [Traces](https://opentelemetry.io/docs/concepts/signals/traces/) | Yes |
| [Logs](https://opentelemetry.io/docs/concepts/signals/logs/) | Yes |
| [Metrics](https://opentelemetry.io/docs/concepts/signals/metrics/) | No (coming soon) |
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## OpenTelemetry Collector
Configuring the OpenTelemetry collector is as simple as creating an HTTP exporter that sends data to the Axiom API together with headers to set the dataset and API token:
```yaml
exporters:
otlphttp:
compression: gzip
endpoint: https://AXIOM_DOMAIN
headers:
authorization: Bearer API_TOKEN
x-axiom-dataset: DATASET_NAME
service:
pipelines:
traces:
receivers:
- otlp
processors:
- memory_limiter
- batch
exporters:
- otlphttp
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
When using the OTLP/HTTP endpoint for traces and logs, use the following endpoint URLs in your OTel SDK exporter configuration:
* Traces: `https://AXIOM_DOMAIN/v1/traces`
* Logs: `https://AXIOM_DOMAIN/v1/logs`
## OpenTelemetry for Go
The example below configures a Go app using the [OpenTelemetry SDK for Go](https://github.com/open-telemetry/opentelemetry-go) to send OpenTelemetry data to Axiom.
```go
package main
import (
"context" // For managing request-scoped values, cancellation signals, and deadlines.
"crypto/tls" // For configuring TLS options, like certificates.
// OpenTelemetry imports for setting up tracing and exporting telemetry data.
"go.opentelemetry.io/otel" // Core OpenTelemetry APIs for managing tracers.
"go.opentelemetry.io/otel/attribute" // For creating and managing trace attributes.
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" // HTTP trace exporter for OpenTelemetry Protocol (OTLP).
"go.opentelemetry.io/otel/propagation" // For managing context propagation formats.
"go.opentelemetry.io/otel/sdk/resource" // For defining resources that describe an entity producing telemetry.
"go.opentelemetry.io/otel/sdk/trace" // For configuring tracing, like sampling and processors.
semconv "go.opentelemetry.io/otel/semconv/v1.24.0" // Semantic conventions for resource attributes.
)
const (
serviceName = "axiom-go-otel" // Name of the service for tracing.
serviceVersion = "0.1.0" // Version of the service.
otlpEndpoint = "AXIOM_DOMAIN" // OTLP collector endpoint.
bearerToken = "Bearer API_TOKEN" // Authorization token.
deploymentEnvironment = "production" // Deployment environment.
)
func SetupTracer() (func(context.Context) error, error) {
ctx := context.Background()
return InstallExportPipeline(ctx) // Setup and return the export pipeline for telemetry data.
}
func Resource() *resource.Resource {
// Defines resource with service name, version, and environment.
return resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
semconv.ServiceVersionKey.String(serviceVersion),
attribute.String("environment", deploymentEnvironment),
)
}
func InstallExportPipeline(ctx context.Context) (func(context.Context) error, error) {
// Sets up OTLP HTTP exporter with endpoint, headers, and TLS config.
exporter, err := otlptracehttp.New(ctx,
otlptracehttp.WithEndpoint(otlpEndpoint),
otlptracehttp.WithHeaders(map[string]string{
"Authorization": bearerToken,
"X-AXIOM-DATASET": "DATASET_NAME",
}),
otlptracehttp.WithTLSClientConfig(&tls.Config{}),
)
if err != nil {
return nil, err
}
// Configures the tracer provider with the exporter and resource.
tracerProvider := trace.NewTracerProvider(
trace.WithBatcher(exporter),
trace.WithResource(Resource()),
)
otel.SetTracerProvider(tracerProvider)
// Sets global propagator to W3C Trace Context and Baggage.
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
))
return tracerProvider.Shutdown, nil // Returns a function to shut down the tracer provider.
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for Ruby
To send traces to an OpenTelemetry Collector using the [OTLP over HTTP in Ruby](https://github.com/open-telemetry/opentelemetry-ruby), use the `opentelemetry-exporter-otlp-http` gem provided by the OpenTelemetry project.
```ruby
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'
OpenTelemetry::SDK.configure do |c|
c.service_name = 'ruby-traces' # Set your service name
c.use_all # or specify individual instrumentation you need
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
OpenTelemetry::Exporter::OTLP::Exporter.new(
endpoint: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization' => 'Bearer API_TOKEN',
'X-AXIOM-DATASET' => 'DATASET_NAME'
}
)
)
)
end
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for Java
Here is a basic configuration for a Java app that sends traces to an OpenTelemetry Collector using OTLP over HTTP using the [OpenTelemetry Java SDK](https://github.com/open-telemetry/opentelemetry-java):
```java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import java.util.concurrent.TimeUnit;
public class OtelConfiguration {
// OpenTelemetry configuration
private static final String SERVICE_NAME = "SERVICE_NAME";
private static final String SERVICE_VERSION = "SERVICE_VERSION";
private static final String OTLP_ENDPOINT = "https://AXIOM_DOMAIN/v1/traces";
private static final String BEARER_TOKEN = "Bearer API_TOKEN";
private static final String AXIOM_DATASET = "DATASET_NAME";
public static OpenTelemetry initializeOpenTelemetry() {
// Create a Resource with service name and version
Resource resource = Resource.getDefault()
.merge(Resource.create(Attributes.of(
AttributeKey.stringKey("service.name"), SERVICE_NAME,
AttributeKey.stringKey("service.version"), SERVICE_VERSION
)));
// Create an OTLP/HTTP span exporter
OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
.setEndpoint(OTLP_ENDPOINT)
.addHeader("Authorization", BEARER_TOKEN)
.addHeader("X-Axiom-Dataset", AXIOM_DATASET)
.build();
// Create a BatchSpanProcessor with the OTLP/HTTP exporter
SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(spanExporter)
.setScheduleDelay(100, TimeUnit.MILLISECONDS)
.build())
.setResource(resource)
.build();
// Build and register the OpenTelemetry SDK
OpenTelemetrySdk openTelemetry = OpenTelemetrySdk.builder()
.setTracerProvider(sdkTracerProvider)
.buildAndRegisterGlobal();
// Add a shutdown hook to properly close the SDK
Runtime.getRuntime().addShutdownHook(new Thread(sdkTracerProvider::close));
return openTelemetry;
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for .NET
You can send traces to Axiom using the [OpenTelemetry .NET SDK](https://github.com/open-telemetry/opentelemetry-dotnet) by configuring an OTLP HTTP exporter in your .NET app. Here is a simple example:
```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Reflection;
// Class to configure OpenTelemetry tracing
public static class TracingConfiguration
{
// Declares an ActivitySource for creating tracing activities
private static readonly ActivitySource ActivitySource = new("MyCustomActivitySource");
// Configures OpenTelemetry with custom settings and instrumentation
public static void ConfigureOpenTelemetry()
{
// Retrieve the service name and version from the executing assembly metadata
var serviceName = Assembly.GetExecutingAssembly().GetName().Name ?? "UnknownService";
var serviceVersion = Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "UnknownVersion";
// Setting up the tracer provider with various configurations
Sdk.CreateTracerProviderBuilder()
.SetResourceBuilder(
// Set resource attributes including service name and version
ResourceBuilder.CreateDefault().AddService(serviceName, serviceVersion: serviceVersion)
            .AddAttributes(new[] { new KeyValuePair<string, object>("environment", "development") }) // Additional attributes
.AddTelemetrySdk() // Add telemetry SDK information to the traces
.AddEnvironmentVariableDetector()) // Detect resource attributes from environment variables
.AddSource(ActivitySource.Name) // Add the ActivitySource defined above
.AddAspNetCoreInstrumentation() // Add automatic instrumentation for ASP.NET Core
.AddHttpClientInstrumentation() // Add automatic instrumentation for HttpClient requests
.AddOtlpExporter(options => // Configure the OTLP exporter
{
options.Endpoint = new Uri("https://AXIOM_DOMAIN/v1/traces"); // Set the endpoint for the exporter
options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf; // Set the protocol
options.Headers = "Authorization=Bearer API_TOKEN, X-Axiom-Dataset=DATASET_NAME"; // Update API token and dataset
})
.Build(); // Build the tracer provider
}
// Method to start a new tracing activity with an optional activity kind
public static Activity? StartActivity(string activityName, ActivityKind kind = ActivityKind.Internal)
{
// Starts and returns a new activity if sampling allows it, otherwise returns null
return ActivitySource.StartActivity(activityName, kind);
}
}
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for Python
You can send traces to Axiom using the [OpenTelemetry Python SDK](https://github.com/open-telemetry/opentelemetry-python) by configuring an OTLP HTTP exporter in your Python app. Here is a simple example:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Define the service name resource for the tracer.
resource = Resource(attributes={
SERVICE_NAME: "NAME_OF_SERVICE" # Replace `NAME_OF_SERVICE` with the name of the service you want to trace.
})
# Create a TracerProvider with the defined resource for creating tracers.
provider = TracerProvider(resource=resource)
# Configure the OTLP/HTTP Span Exporter with Axiom headers and endpoint. Replace `API_TOKEN` with your Axiom API key, and replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
otlp_exporter = OTLPSpanExporter(
endpoint="https://AXIOM_DOMAIN/v1/traces",
headers={
"Authorization": "Bearer API_TOKEN",
"X-Axiom-Dataset": "DATASET_NAME"
}
)
# Create a BatchSpanProcessor with the OTLP exporter to batch and send trace spans.
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
# Set the TracerProvider as the global tracer provider.
trace.set_tracer_provider(provider)
# Define a tracer for external use in different parts of the app.
service1_tracer = trace.get_tracer("service1")
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for Node
You can send traces to Axiom using the [OpenTelemetry Node SDK](https://github.com/open-telemetry/opentelemetry-js) by configuring an OTLP HTTP exporter in your Node app. Here is a simple example:
```js
const opentelemetry = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-proto');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
// Initialize OTLP trace exporter with the URL and headers for the Axiom API
const traceExporter = new OTLPTraceExporter({
url: 'https://AXIOM_DOMAIN/v1/traces', // Axiom API endpoint for trace data
headers: {
'Authorization': 'Bearer API_TOKEN', // Replace API_TOKEN with your actual API token
'X-Axiom-Dataset': 'DATASET_NAME' // Replace DATASET_NAME with your dataset
},
});
// Define the resource attributes, in this case, setting the service name for the traces
const resource = new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'node traces', // Name for the tracing service
});
// Create a NodeSDK instance with the configured span processor, resource, and auto-instrumentations
const sdk = new opentelemetry.NodeSDK({
spanProcessor: new BatchSpanProcessor(traceExporter), // Use BatchSpanProcessor for batching and sending traces
resource: resource, // Attach the defined resource to provide additional context
instrumentations: [getNodeAutoInstrumentations()], // Automatically instrument common Node.js modules
});
// Start the OpenTelemetry SDK
sdk.start();
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## OpenTelemetry for Cloudflare Workers
Configure OpenTelemetry in Cloudflare Workers to send telemetry data to Axiom using the [OTel CF Worker package](https://github.com/evanderkoogh/otel-cf-workers). Here is an example exporter configuration:
```ts
// index.ts
import { trace } from '@opentelemetry/api';
import { instrument, ResolveConfigFn } from '@microlabs/otel-cf-workers';
export interface Env {
AXIOM_API_TOKEN: string,
AXIOM_DATASET: string,
AXIOM_DOMAIN: string
}
const handler = {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
await fetch('https://cloudflare.com');
const greeting = "Welcome to Axiom Cloudflare instrumentation";
trace.getActiveSpan()?.setAttribute('greeting', greeting);
ctx.waitUntil(fetch('https://workers.dev'));
return new Response(`${greeting}!`);
},
};
const config: ResolveConfigFn = (env: Env, _trigger) => {
return {
exporter: {
url: `https://${env.AXIOM_DOMAIN}/v1/traces`,
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
};
};
export default instrument(handler, config);
```
### Requirements for log level fields
The Stream and Query tabs allow you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors. As a prerequisite, specify the log level in the data you send to Axiom. For OpenTelemetry logs, specify the log level in the following fields:
* `severity`
* `severityNumber`
* `severityText`
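For example, with the OpenTelemetry JavaScript logs SDK you can set these fields on each log record. The following is a minimal sketch assuming the `@opentelemetry/api-logs`, `@opentelemetry/sdk-logs`, and `@opentelemetry/exporter-logs-otlp-proto` packages; the exact provider setup can differ between SDK versions. Replace `AXIOM_DOMAIN`, `API_TOKEN`, and `DATASET_NAME` as described in the sections above.
```ts
import { logs, SeverityNumber } from '@opentelemetry/api-logs';
import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';

// Export log records to the Axiom OTLP logs endpoint.
const exporter = new OTLPLogExporter({
  url: 'https://AXIOM_DOMAIN/v1/logs',
  headers: {
    'Authorization': 'Bearer API_TOKEN',
    'X-Axiom-Dataset': 'DATASET_NAME',
  },
});

// Depending on your SDK version, the processor may instead be passed to the LoggerProvider constructor.
const provider = new LoggerProvider();
provider.addLogRecordProcessor(new BatchLogRecordProcessor(exporter));
logs.setGlobalLoggerProvider(provider);

const logger = logs.getLogger('example-service');

// severityNumber and severityText let Axiom highlight this entry as a warning.
logger.emit({
  severityNumber: SeverityNumber.WARN,
  severityText: 'WARN',
  body: 'API rate limit approaching',
  attributes: { endpoint: '/users/1' },
});
```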
## Additional resources
For further guidance on integrating OpenTelemetry with Axiom, explore the following guides:
* [Node.js OpenTelemetry guide](/guides/opentelemetry-nodejs)
* [Python OpenTelemetry guide](/guides/opentelemetry-python)
* [Golang OpenTelemetry guide](/guides/opentelemetry-go)
* [Cloudflare Workers guide](/guides/opentelemetry-cloudflare-workers)
* [Ruby on Rails OpenTelemetry guide](/guides/opentelemetry-ruby)
* [.NET OpenTelemetry guide](/guides/opentelemetry-dotnet)
# Send data from client-side React apps to Axiom
Source: https://axiom.co/docs/send-data/react
This page explains how to send data from your client-side React apps to Axiom using the @axiomhq/react library.
React is a popular open-source JavaScript library developed by Meta for building user interfaces. Known for its component-based architecture and efficient rendering with a virtual DOM, React is widely used by companies of all sizes to create fast, scalable, and dynamic web applications.
This page explains how to use the @axiomhq/react library to send data from your client-side React apps to Axiom.
The @axiomhq/react library is part of the Axiom JavaScript SDK, an open-source project that welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-js).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* A new or existing React app.
## Install @axiomhq/react library
1. In your terminal, go to the root folder of your React app and run the following command:
```sh
npm install --save @axiomhq/logging @axiomhq/react
```
2. Create a `Logger` instance and export the helpers. The example below creates the `useLogger` hook and the `WebVitals` component.
```tsx [expandable]
'use client';
import { Logger, AxiomJSTransport } from '@axiomhq/logging';
import { Axiom } from '@axiomhq/js';
import { createUseLogger, createWebVitalsComponent } from '@axiomhq/react';
const axiomClient = new Axiom({
token: process.env.AXIOM_TOKEN!,
});
export const logger = new Logger({
transports: [
new AxiomJSTransport({
axiom: axiomClient,
dataset: process.env.AXIOM_DATASET!,
}),
],
});
const useLogger = createUseLogger(logger);
const WebVitals = createWebVitalsComponent(logger);
export { useLogger, WebVitals };
```
## Send logs from components
To send logs from components, use the `useLogger` hook that returns your logger instance.
```tsx
import { useLogger } from "@/lib/axiom/client";
import { useEffect } from "react";
export default function ClientComponent() {
const log = useLogger();
const handleClick = () => log.info("User logged out");
useEffect(() => {
log.info("User logged in", { userId: 42 });
}, []);
  return (
    <button onClick={handleClick}>Logged in</button>
  );
}
```
## Send Web Vitals
To send Web Vitals, mount the `WebVitals` component in the root of your React app.
```tsx
import { WebVitals } from "@/lib/axiom/client";
export default function App({ children }: { children: React.ReactNode }) {
  return (
    <>
      <WebVitals />
      {children}
    </>
  );
}
```
# Reference architectures
Source: https://axiom.co/docs/send-data/reference-architectures
Recommended patterns for reliably sending high-volume telemetry to Axiom using industry-standard collectors like OpenTelemetry and Vector.
Successfully adopting a new platform at scale requires a robust and reliable data collection strategy. In complex distributed systems, how you collect and forward telemetry is as important as the platform that analyzes it.
While Axiom is flexible and supports dozens of [methods](/send-data/methods), large-scale environments benefit from a confident, battle-tested approach. This guide outlines Axiom’s recommended architectural patterns using two popular tools: the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) and [Vector](https://vector.dev/).
These patterns are designed to provide:
* **Resilience:** Ensure no data is lost during network partitions or downstream issues.
* **Enrichment:** Add valuable, consistent metadata to all your telemetry at the source.
* **Performance:** Offload processing and batching from your applications.
* **Governance:** Centrally manage data routing, filtering, and sampling to control costs.
While these patterns are recommended for large-scale environments, Axiom has many other simpler [methods](/send-data/methods) available to send data.
***
## The core patterns
There are two primary deployment patterns for a collector: the Agent model and the Aggregator model. Most large organizations use a hybrid of both.
Most organizations find it easy to send data to Axiom. If you are having trouble, please [contact](https://axiom.co/contact) the Axiom team for help. We can provide ad-hoc guidance or design tailored implementation services for larger projects.
### Agent pattern
In this model, a lightweight collector runs as an agent on every host or as a sidecar in every pod. It's responsible for collecting telemetry from applications on that specific node and forwarding it directly to Axiom.
**Best for:** Capturing rich, host-specific metadata (e.g., `k8s.pod.name`, `host.id`) and providing a resilient, decentralized collection point with local buffering.
```mermaid
graph TD
subgraph "Node/VM 1"
App1[Application] --> Agent1[Collector Agent];
end
subgraph "Node/VM 2"
App2[Application] --> Agent2[Collector Agent];
end
subgraph "Node/VM 3"
App3[Application] --> Agent3[Collector Agent];
end
Agent1 --> Axiom[(Axiom)];
Agent2 --> Axiom;
Agent3 --> Axiom;
```
### Aggregator pattern
In this model, a separate, horizontally-scalable pool of collectors acts as a centralized aggregation layer. Applications and agents send their data to this layer, which then handles final processing, enrichment, and forwarding to Axiom.
**Best for:** Centralized control over data routing and filtering, managing data from sources that cannot run an agent (e.g., third-party APIs, cloud provider log streams), and simplifying management by maintaining a small fleet of aggregators instead of thousands of agents.
```mermaid
flowchart LR
subgraph Sources["Data Sources"]
direction TB
App[Application]
Agent[Collector Agent]
CloudLogs[Cloud Provider Logs]
end
subgraph AggLayer["Aggregation Layer"]
Pool[Collector Aggregator Pool]
end
App --> Pool
Agent --> Pool
CloudLogs --> Pool
Pool --> Axiom[(Axiom)]
```
***
## Tool recommendations
Both the OpenTelemetry Collector and Vector are excellent choices that can be deployed in either an Agent or Aggregator pattern. The best choice depends on your team's existing ecosystem and primary use case.
### OpenTelemetry Collector
The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) is the CNCF-backed, vendor-neutral standard for collecting and processing telemetry. It's the ideal choice when your organization is standardizing on the OpenTelemetry framework for all signals (logs, metrics, and traces).
**Use the OTel Collector when:**
* You are instrumenting your applications with OpenTelemetry SDKs.
* You need to process traces, metrics, and logs in a single, unified pipeline.
* You want to align with the fastest-growing standard in the observability community.
Axiom natively supports the OpenTelemetry Protocol (OTLP). Configuring the collector to send data to Axiom is simple.
**Example Collector Configuration (`otel-collector-config.yaml`):**
```yaml
exporters:
otlphttp:
compression: gzip
endpoint: https://AXIOM_DOMAIN
headers:
authorization: Bearer API_TOKEN
x-axiom-dataset: DATASET_NAME
service:
pipelines:
traces:
receivers:
- otlp
processors:
- memory_limiter
- batch
exporters:
- otlphttp
```
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
### Vector
[Vector](https://vector.dev/) is a high-performance, Rust-based observability data pipeline. It excels at log collection and transformation, offering a powerful domain-specific language (VRL) for complex data manipulation.
**Use Vector when:**
* Your primary focus is on logs from a wide variety of sources (files, syslog, APIs).
* You need to perform complex parsing, filtering, enrichment, or redaction on your event data.
* You require an extremely lightweight and memory-efficient agent for edge deployments.
Vector has a [native sink](https://vector.dev/docs/reference/configuration/sinks/axiom/) for Axiom, making configuration straightforward.
**Example Vector Configuration (`vector.toml`):**
```toml
[sources.VECTOR_SOURCE_ID]
type = "file"
include = ["PATH_TO_LOGS"]
[sinks.SINK_ID]
type = "axiom"
inputs = ["VECTOR_SOURCE_ID"]
token = "API_TOKEN"
dataset = "DATASET_NAME"
```
Replace `VECTOR_SOURCE_ID` with the Vector source ID.
Replace `PATH_TO_LOGS` with the path to the log files. For example, `/var/log/**/*.log`.
Replace `SINK_ID` with the sink ID.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
***
## Summary
| Aspect | OpenTelemetry Collector | Vector |
| :------------------- | :--------------------------------------------- | :------------------------------------------------------ |
| **Primary Use Case** | Unified pipeline for logs, metrics, and traces | High-performance log collection and transformation |
| **Ecosystem** | CNCF and OpenTelemetry standard | Standalone, broad source/sink compatibility |
| **Transformation** | Processors (limited for complex logic) | Vector Remap Language (VRL) for advanced logic |
| **Performance** | Excellent | Excellent, often with lower resource footprint for logs |
Both are first-class choices for sending data to Axiom. Your decision should be based on whether you need a unified OTel pipeline or a specialized, high-performance log processing tool.
### What's next?
* Explore the complete range of options for sending data in the [Methods](/send-data/methods) page.
* For direct ingestion, see the [Axiom REST API](/restapi/introduction).
# Send logs from Render to Axiom
Source: https://axiom.co/docs/send-data/render
This page explains how to send logs from Render to Axiom.
export const endpointName_0 = "Secure Syslog"
Render is a unified cloud to build and run all your apps and websites. Axiom provides complete visibility into your Render projects, allowing you to monitor the behavior of your websites and apps.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
* [Create an account on Render](https://dashboard.render.com/login).
## Setup
### Create endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
### Create log stream in Render
In Render, create a log stream. For more information, see the [Render documentation](https://docs.render.com/log-streams). As the log endpoint, use the target URL generated in Axiom in the procedure above.
Back in your Axiom dataset, you see logs coming from Render.
# Send data from syslog to Axiom over a secure connection
Source: https://axiom.co/docs/send-data/secure-syslog
This page explains how to send data securely from a syslog logging system to Axiom.
export const endpointName_0 = "Secure Syslog"
The Secure Syslog endpoint allows you to send syslog data to Axiom over a secure connection. With the Secure Syslog endpoint, the logs you send to Axiom are encrypted using SSL/TLS.
## Syslog limitations and recommended alternatives
Syslog is an outdated protocol. Some of the limitations are the following:
* Lack of error reporting and feedback mechanisms when issues occur.
* Inability to gracefully end the connection. This can result in missing data.
For a more reliable and modern logging experience, consider using tools like [Vector](https://vector.dev/) to receive syslog messages and [forward them to Axiom](/send-data/vector). This approach bypasses many of syslog’s limitations.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure syslog client
1. Ensure the syslog client meets the following requirements:
* **Message size limit:** Axiom currently enforces a 64KB per-message size limit. This is in line with RFC5425 guidelines. Any message exceeding the limit causes the connection to close because Axiom doesn’t support ingesting truncated messages.
* **TLS requirement:** Axiom only supports syslog over TLS, specifically following RFC5425. Ensure you have certificate authority certificates installed in your environment to validate Axiom’s SSL certificate. For example, on Ubuntu/Debian systems, install the `ca-certificates` package. For more information, see the [RFC Series documentation](https://www.rfc-editor.org/rfc/rfc5425).
* **Port requirements:** TCP log messages are sent on TCP port `6514`.
2. Configure your syslog client to connect to Axiom. Use the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`. Consider this URL as secret information because syslog doesn’t support additional authentication such as API tokens.
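To check the connection manually, you can send a single RFC5424-formatted test message over TLS. The following is a minimal sketch using Node.js; the host below is the example endpoint URL from the step above, so replace it with your own target URL. The framing follows the RFC5425 octet-counting scheme.
```ts
import tls from 'node:tls';

// Replace with the host part of the target URL of your Secure Syslog endpoint.
const host = 'opbizplsf8klnw.ingress.axiom.co';

const socket = tls.connect({ host, port: 6514 }, () => {
  // RFC5424 message: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
  const message = `<134>1 ${new Date().toISOString()} my-host my-app - - - Hello from syslog over TLS`;
  // RFC5425 octet-counting framing: prefix the message with its length in octets and a space.
  socket.write(`${Buffer.byteLength(message)} ${message}`);
  socket.end();
});

socket.on('error', (err) => {
  console.error('TLS connection failed:', err.message);
});
```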
## Troubleshooting
Ensure your messages conform to the size limit and TLS requirements. If the connection is frequently re-established and messages are rejected, the cause might be the message size or other formatting issues.
# Send data from Serverless to Axiom
Source: https://axiom.co/docs/send-data/serverless
This page explains how to send data from Serverless to Axiom.
Serverless is an open-source web framework for building apps on AWS Lambda. Sending event data from your Serverless apps to Axiom allows you to gain deep insights into your apps’ performance and behavior without complex setup or configuration.
To send data from Serverless to Axiom:
1. [Create an Axiom account](https://app.axiom.co/register).
2. [Create an API token in Axiom](/reference/tokens) with **Ingest**, **Query**, **Datasets**, **Dashboards**, and **Monitors** permissions.
3. [Create a Serverless account](https://app.serverless.com/).
4. Set up your app with Serverless using the [Serverless documentation](https://www.serverless.com/framework/docs/getting-started).
5. Configure Axiom in your Serverless Framework Service using the [Serverless documentation](https://www.serverless.com/framework/docs/guides/observability/axiom).
# Send data from syslog to Axiom
Source: https://axiom.co/docs/send-data/syslog-proxy
This page explains how to send data from a syslog logging system to Axiom.
The Axiom Syslog Proxy acts as a syslog server to send data to Axiom.
The Axiom Syslog Proxy is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-syslog-proxy).
## Syslog limitations and recommended alternatives
Syslog is an outdated protocol. Some of the limitations are the following:
* Lack of error reporting and feedback mechanisms when issues occur.
* Inability to gracefully end the connection. This can result in missing data.
For a more reliable and modern logging experience, consider using tools like [Vector](https://vector.dev/) to receive syslog messages and [forward them to Axiom](/send-data/vector). This approach bypasses many of syslog’s limitations.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
Other requirements:
* **Message size limit:** Axiom currently enforces a 64KB per-message size limit. This is in line with RFC5425 guidelines. Any message exceeding the limit causes the connection to close because Axiom doesn’t support ingesting truncated messages.
* **TLS requirement:** Axiom only supports syslog over TLS, specifically following RFC5425. Configure your syslog client accordingly.
* **Port requirements:** UDP log messages are sent on UDP port `514` to the Syslog server. TCP log messages are sent on TCP port `601` to the Syslog server.
Ensure your messages conform to the size limit and TLS requirements. If the connection is frequently re-established and messages are rejected, the cause might be the message size or other formatting issues.
## Install Axiom Syslog Proxy
To install the Axiom Syslog Proxy, choose one of the following options:
* [Install using a pre-compiled binary file](#install-using-pre-compiled-binary-file)
* [Install using Homebrew](#install-using-homebrew)
* [Install using Go command](#install-using-go-command)
* [Install from the GitHub source](#install-from-github-source)
* [Install using a Docker image](#install-using-docker-image)
### Install using pre-compiled binary file
To install the Axiom Syslog Proxy using a pre-compiled binary file, download one of the [releases in GitHub](https://github.com/axiomhq/axiom-syslog-proxy/releases/latest).
### Install using Homebrew
Run the following to install the Axiom Syslog Proxy using Homebrew:
```shell
brew tap axiomhq/tap
brew install axiom-syslog-proxy
```
### Install using Go command
Run the following to install the Axiom Syslog Proxy using `go install`:
```shell
go install github.com/axiomhq/axiom-syslog-proxy/cmd/axiom-syslog-proxy@latest
```
### Install from GitHub source
Run the following to install the Axiom Syslog Proxy from the GitHub source:
```shell
git clone https://github.com/axiomhq/axiom-syslog-proxy.git
cd axiom-syslog-proxy
make install
```
### Install using Docker image
To install the Axiom Syslog Proxy using Docker, use a [Docker image from DockerHub](https://hub.docker.com/r/axiomhq/axiom-syslog-proxy/tags).
## Configure Axiom Syslog Proxy
Set the following environment variables to connect to Axiom:
* `AXIOM_TOKEN` is the Axiom API token you have generated.
* `AXIOM_DATASET` is the name of the Axiom dataset where you want to send data.
* Optional: `AXIOM_URL` is the URL of the Axiom API. By default, it uses the US region. Change the default value if your organization uses another region. For more information, see [Regions](/reference/regions).
## Run Axiom Syslog Proxy
To run Axiom Syslog Proxy, run the following in your terminal.
```shell
./axiom-syslog-proxy
```
If you use Docker, run the following:
```shell
docker run -p601:601/tcp -p514:514/udp \
-e=AXIOM_TOKEN=API_TOKEN \
-e=AXIOM_DATASET=DATASET_NAME \
axiomhq/axiom-syslog-proxy
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
## Test configuration
To test the Axiom Syslog Proxy configuration:
1. Run the following in your terminal to send two messages:
```shell
echo -n "tcp message" | nc -w1 localhost 601
echo -n "udp message" | nc -u -w1 localhost 514
```
2. In Axiom, click the **Stream** tab.
3. Click your dataset.
4. Check whether Axiom displays the messages you have sent.
# Send data from Tremor to Axiom
Source: https://axiom.co/docs/send-data/tremor
This step-by-step guide helps you configure Tremor connectors and events components to interact with your databases, APIs, and ingest data from these sources into Axiom.
export const endpointName_0 = "Syslog"
Axiom provides a unique way of ingesting [Tremor](https://www.tremor.rs/) logs. With your connector definitions, you can configure Tremor connectors and event-processing components to interact with external systems, such as databases, message queues, or APIs, and ingest data from these sources into Axiom.
## Installation
Install the latest package from the runtime [releases tag](https://github.com/tremor-rs/tremor-runtime/releases) on your local machine.
## Configuration using HTTP
To send logs to Axiom via Tremor, create a configuration file. For example, create `axiom-http.troy` with the following content (using a file as the example data source):
```troy
define flow client_sink_only
flow
use std::time::nanos;
use tremor::pipelines;
define connector input from file
args
file = "in.json" # Default input file is 'in.json' in current working directory
with
codec = "json", # Data is JSON encoded
preprocessors = ["separate"], # Data is newline separated
config = {
"path": args.file,
"mode": "read"
},
end;
create connector input;
define connector http_client from http_client
args
dataset,
token
with
config = {
"url": "https://AXIOM_DOMAIN/v1/datasets/#{args.dataset}/ingest",
"tls": true,
"method": "POST",
"headers": {
"Authorization": "Bearer #{args.token}"
},
"timeout": nanos::from_seconds(10),
"mime_mapping": {
"*/*": {"name": "json"},
}
}
end;
create connector http_client
with
dataset = "DATASET_NAME",
token = "API_TOKEN"
end;
create pipeline passthrough from pipelines::passthrough;
connect /connector/input to /pipeline/passthrough;
connect /pipeline/passthrough to /connector/http_client;
end;
deploy flow client_sink_only;
```
This assumes you have set `TREMOR_PATH` in your environment to point to `tremor-runtime/tremor-script/lib` if you are using a source clone. You can then run the flow with `tremor server run axiom-http.troy`.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Replace `AXIOM_DOMAIN` with `api.axiom.co` if your organization uses the US region, and with `api.eu.axiom.co` if your organization uses the EU region. For more information, see [Regions](/reference/regions).
## Configuration using Syslog
You can also send logs via Tremor to the Syslog endpoint using a file as an example data source.
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
In the code below, replace `url` with the URL of your Syslog endpoint.
```troy
define flow client_sink_only
flow
  use std::time::nanos;
  use tremor::pipelines;

  define connector input from file
  args
    file = "in.json" # Default input file is 'in.json' in current working directory
  with
    codec = "json", # Data is JSON encoded
    preprocessors = ["separate"], # Data is newline separated
    config = {
      "path": args.file,
      "mode": "read"
    },
  end;
  create connector input;

  define connector syslog_forwarder from tcp_client
  args
    endpoint_hostport,
  with
    tls = true,
    codec = "syslog",
    config = {
      "url": "#{args.endpoint_hostport}",
      "no_delay": false,
      "buf_size": 1024,
    },
    reconnect = {
      "retry": {
        "interval_ms": 100,
        "growth_rate": 2,
        "max_retries": 3,
      }
    }
  end;
  create connector syslog_forwarder
  with
    endpoint_hostport = "tcp+tls://testsyslog.syslog.axiom.co:6514"
  end;

  create pipeline passthrough from pipelines::passthrough;

  connect /connector/input to /pipeline/passthrough;
  connect /pipeline/passthrough to /connector/syslog_forwarder;
end;

deploy flow client_sink_only;
```
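As with the HTTP example, you can run this flow with the Tremor server. The file name below assumes you saved the configuration as `axiom-syslog.troy`:
```bash
# Run the Syslog forwarding flow (file name is an assumption)
tremor server run axiom-syslog.troy
```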
# Send data from Vector to Axiom
Source: https://axiom.co/docs/send-data/vector
This step-by-step guide helps you configure Vector to read and collect logs and metrics from your sources and send them to Axiom using the Axiom sink.
Vector is a lightweight and ultra-fast tool for building observability pipelines. It has built-in support for shipping logs to Axiom through the [`axiom` sink](https://vector.dev/docs/reference/configuration/sinks/axiom/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Installation
Follow the [quickstart guide in the Vector documentation](https://vector.dev/docs/setup/quickstart/) to install Vector, and to configure sources and sinks.
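For reference, a common way to install Vector on Linux and macOS is the installation script from the Vector docs. The exact command can change over time, so prefer the quickstart guide linked above:
```bash
# Install Vector using the official installation script (see the Vector quickstart for current instructions)
curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash
```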
If you use Vector version v0.41.1 (released on September 11, 2024) or earlier, use the `@timestamp` field instead of `_time` to specify the timestamp of the events. For more information, see [Timestamp in legacy Vector versions](#timestamp-in-legacy-vector-versions).
If you upgrade from Vector version v0.41.1 or earlier to a newer version, update your configuration. For more information, see [Upgrade from legacy Vector version](#upgrade-from-legacy-vector-version).
## Configuration
Send data to Axiom with Vector using the [`file` source](https://vector.dev/docs/reference/configuration/sources/file/) and the [`axiom` sink](https://vector.dev/docs/reference/configuration/sinks/axiom/).
The example below configures Vector to read and collect logs from files and send them to Axiom.
1. Create a Vector configuration file `vector.toml` with the following content:
```toml
[sources.VECTOR_SOURCE_ID]
type = "file"
include = ["PATH_TO_LOGS"]
[sinks.SINK_ID]
type = "axiom"
inputs = ["VECTOR_SOURCE_ID"]
token = "API_TOKEN"
dataset = "DATASET_NAME"
```
Replace `VECTOR_SOURCE_ID` with the Vector source ID.
Replace `PATH_TO_LOGS` with the path to the log files. For example, `/var/log/**/*.log`.
Replace `SINK_ID` with the sink ID.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
2. Run Vector to send logs to Axiom, as shown in the example below.
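For example, assuming `vector.toml` is in your current working directory:
```bash
# Start Vector with the configuration file created in the previous step
vector --config vector.toml
```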
### Example with data transformation
The example below deletes a field before sending the data to Axiom:
```toml
[sources.VECTOR_SOURCE_ID]
type = "file"
include = ["PATH_TO_LOGS"]
[transforms.filter_json_fields]
type = "remap"
inputs = ["VECTOR_SOURCE_ID"]
source = '''
. = del(.FIELD_TO_REMOVE)
'''
[sinks.SINK_ID]
type = "axiom"
inputs = ["filter_json_fields"]
token = "API_TOKEN"
dataset = "DATASET_NAME"
```
Replace `FIELD_TO_REMOVE` with the field you want to remove.
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Changes to Vector’s `file` source can make the code example above outdated. If this happens, refer to the [official Vector documentation on the `file` source](https://vector.dev/docs/reference/configuration/sources/file/), and let us know about the issue using the feedback tool at the bottom of this page.
## Send Kubernetes logs to Axiom
Send Kubernetes logs to Axiom using the `kubernetes_logs` source.
```toml
[sources.my_source_id]
type = "kubernetes_logs"
auto_partial_merge = true
ignore_older_secs = 600
read_from = "beginning"
self_node_name = "${VECTOR_SELF_NODE_NAME}"
exclude_paths_glob_patterns = [ "**/exclude/**" ]
extra_field_selector = "metadata.name!=pod-name-to-exclude"
extra_label_selector = "my_custom_label!=my_value"
extra_namespace_label_selector = "my_custom_label!=my_value"
max_read_bytes = 2_048
max_line_bytes = 32_768
fingerprint_lines = 1
glob_minimum_cooldown_ms = 60_000
delay_deletion_ms = 60_000
data_dir = "/var/lib/vector"
timezone = "local"
[sinks.axiom]
type = "axiom"
inputs = ["my_source_id"]
token = "API_TOKEN"
dataset = "DATASET_NAME"
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
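To avoid hard-coding the token, you can rely on Vector’s support for environment variable interpolation in configuration files. The sketch below assumes you have exported an `AXIOM_API_TOKEN` environment variable:
```toml
[sinks.axiom]
type = "axiom"
inputs = ["my_source_id"]
dataset = "DATASET_NAME"
# Read the API token from the environment instead of writing it into the file
token = "${AXIOM_API_TOKEN}"
```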
## Send Docker logs to Axiom
To send Docker logs using the Axiom sink, you need to create a configuration file, for example, `vector.toml`, with the following content:
```toml
# Define the Docker logs source
[sources.docker_logs]
type = "docker_logs"
docker_host = "unix:///var/run/docker.sock"
# Define the Axiom sink
[sinks.axiom]
type = "axiom"
inputs = ["docker_logs"]
dataset = "DATASET_NAME"
token = "API_TOKEN"
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Start Vector with the configuration file you just created:
```bash
vector --config /path/to/vector.toml
```
Vector collects logs from Docker and forwards them to Axiom using the Axiom sink. You can view and analyze your logs in your dataset.
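To verify the pipeline, you can generate a test log line from a short-lived container. The image and message below are placeholders:
```bash
# Emit a single log line that the docker_logs source should pick up and forward to Axiom
docker run --rm alpine echo "hello from docker to axiom"
```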
## Send AWS S3 logs to Axiom
To send AWS S3 logs using the Axiom sink, create a configuration file, for example, `vector.toml`, with the following content:
```toml
[sources.my_s3_source]
type = "aws_s3"
bucket = "my-bucket" # replace with your bucket name
region = "us-west-2" # replace with the AWS region of your bucket
[sinks.axiom]
type = "axiom"
inputs = ["my_s3_source"]
dataset = "DATASET_NAME"
token = "API_TOKEN"
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Finally, run Vector with the configuration file using `vector --config ./vector.toml`. This starts Vector and begins reading logs from the specified S3 bucket and sending them to the specified Axiom dataset.
## Send Kafka logs to Axiom
To send Kafka logs using the Axiom sink, you need to create a configuration file, for example, `vector.toml`, with the following code:
```toml
[sources.my_kafka_source]
type = "kafka" # must be: kafka
bootstrap_servers = "10.14.22.123:9092" # your Kafka bootstrap servers
group_id = "my_group_id" # your Kafka consumer group ID
topics = ["my_topic"] # the Kafka topics to consume from
auto_offset_reset = "earliest" # start reading from the beginning
[sinks.axiom]
type = "axiom"
inputs = ["my_kafka_source"] # connect the Axiom sink to your Kafka source
dataset = "DATASET_NAME" # replace with the name of your Axiom dataset
token = "API_TOKEN" # replace with your Axiom API token
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Finally, you can start Vector with your configuration file: `vector --config /path/to/your/vector.toml`
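If you want to confirm that events flow through, you can produce a test message with the console producer that ships with Kafka. The script name, broker address, and topic below mirror the example configuration and may differ in your setup:
```bash
# Produce a single test message to the topic that Vector consumes
echo '{"message": "hello from kafka"}' | \
  kafka-console-producer.sh --bootstrap-server 10.14.22.123:9092 --topic my_topic
```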
## Send NGINX metrics to Axiom
To send NGINX metrics using Vector to the Axiom sink, first enable NGINX to emit metrics, then use Vector to capture and forward those metrics. Here is a step-by-step guide:
### Step 1: Enable NGINX metrics
Configure NGINX to expose metrics. This typically involves enabling the `ngx_http_stub_status_module` module in your NGINX configuration.
1. Open your NGINX configuration file (often located at `/etc/nginx/nginx.conf`) and in your `server` block, add:
```nginx
location /metrics {
stub_status;
allow 127.0.0.1; # only allow requests from localhost
deny all; # deny all other hosts
}
```
2. Restart or reload NGINX to apply the changes:
```bash
sudo systemctl restart nginx
```
This exposes basic NGINX metrics at the `/metrics` endpoint on your server.
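You can check that the endpoint responds before configuring Vector. The output below is the typical `stub_status` format; the exact numbers vary:
```bash
# Request the stub_status metrics from localhost (only localhost is allowed by the config above)
curl http://127.0.0.1/metrics
# Active connections: 1
# server accepts handled requests
#  10 10 10
# Reading: 0 Writing: 1 Waiting: 0
```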
### Step 2: Configure Vector
Configure Vector to scrape the NGINX metrics and send them to Axiom. Create a new configuration file (`vector.toml`), and add the following:
```toml
[sources.nginx_metrics]
type = "nginx_metrics" # must be: nginx_metrics
endpoints = ["http://localhost/metrics"] # the endpoint where NGINX metrics are exposed
[sinks.axiom]
type = "axiom" # must be: axiom
inputs = ["nginx_metrics"] # use the metrics from the NGINX source
dataset = "DATASET_NAME" # replace with the name of your Axiom dataset
token = "API_TOKEN" # replace with your Axiom API token
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Finally, you can start Vector with your configuration file: `vector --config /path/to/your/vector.toml`
## Send Syslog logs to Axiom
To send Syslog logs using the Axiom sink, you need to create a configuration file, for example, `vector.toml`, with the following code:
```toml
[sources.my_source_id]
type = "syslog"
address = "0.0.0.0:6514"
max_length = 102_400
mode = "tcp"
[sinks.axiom]
type = "axiom"
inputs = ["my_source_id"] # required
dataset = "DATASET_NAME" # replace with the name of your Axiom dataset
token = "API_TOKEN" # replace with your Axiom API token
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
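To test the source, you can send a message over TCP to the address Vector listens on, for example with the `logger` utility available on most Linux systems:
```bash
# Send a test syslog message over TCP to the Vector syslog source on port 6514
logger --tcp --server 127.0.0.1 --port 6514 "test message for axiom"
```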
## Send Prometheus metrics to Axiom
To send Prometheus scrape metrics using the Axiom sink, you need to create a configuration file, for example, `vector.toml`, with the following code:
```toml
# Define the Prometheus source that scrapes metrics
[sources.my_prometheus_source]
type = "prometheus_scrape" # scrape metrics from a Prometheus endpoint
endpoints = ["http://localhost:9090/metrics"] # replace with your Prometheus endpoint
# Define the Axiom sink where metrics are sent
[sinks.axiom]
type = "axiom" # Axiom type
inputs = ["my_prometheus_source"] # connect the Axiom sink to your Prometheus source
dataset = "DATASET_NAME" # replace with the name of your Axiom dataset
token = "API_TOKEN" # replace with your Axiom API token
```
Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
Check out the [advanced configuration options for batching, buffering, and encoding in the Vector documentation](https://vector.dev/docs/reference/configuration/sinks/axiom/).
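As a rough sketch of what such tuning can look like, the options below follow Vector’s generic sink batching and buffering settings; verify the exact option names and defaults against the linked documentation before relying on them:
```toml
[sinks.axiom]
type = "axiom"
inputs = ["my_prometheus_source"]
dataset = "DATASET_NAME"
token = "API_TOKEN"

# Flush a batch after 1,000 events or 1 second, whichever comes first
[sinks.axiom.batch]
max_events = 1_000
timeout_secs = 1

# Hold up to 10,000 events in memory and apply backpressure when the buffer is full
[sinks.axiom.buffer]
type = "memory"
max_events = 10_000
when_full = "block"
```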
## Timestamp in legacy Vector versions
If you use Vector version v0.41.1 (released on September 11, 2024) or earlier, use the `@timestamp` field instead of `_time` to specify the timestamp in the event data you send to Axiom. For example: `{"@timestamp":"2022-04-14T21:30:30.658Z..."}`. For more information, see [Requirements of the timestamp field](/reference/field-restrictions#requirements-of-the-timestamp-field). For Vector version v0.41.1 or earlier, the requirements explained on that page apply to the `@timestamp` field, not to `_time`.
If you use Vector version v0.42.0 (released on October 21, 2024) or newer, use the `_time` field as you would for other collectors.
### Upgrade from legacy Vector version
If you upgrade from Vector version v0.41.1 or earlier to a newer version, change all references from the `timestamp` field to the `_time` field and add a remap transform that migrates the field, as in the example below.
Example `vrl` file:
```vrl example.vrl
# Set time explicitly rather than allowing Axiom to default to the current time
. = set!(value: ., path: ["_time"], data: .timestamp)
# Remove the original value as it is effectively a duplicate
del(.timestamp)
```
Example Vector configuration file:
```toml
# ...
[transforms.migrate]
type = "remap"
inputs = ["k8s"]
file = "example.vrl" # See above
[sinks.debug]
type = "axiom"
inputs = ["migrate"]
dataset = "DATASET_NAME" # No change
token = "API_TOKEN" # No change
[sinks.debug.encoding]
codec = "json"
```
### Set compression algorithm
Vector version v0.42.0 and newer uses the `zstd` compression algorithm by default.
To set another compression algorithm, use the example below:
```toml
# ...
[transforms.migrate]
type = "remap"
inputs = ["k8s"]
file = "example.vrl" # See above
[sinks.debug]
type = "axiom"
compression = "gzip" # Set the compression algorithm
inputs = ["migrate"]
dataset = "DATASET_NAME" # No change
token = "API_TOKEN" # No change
[sinks.debug.encoding]
codec = "json"
```