### Access GenAI dashboard
Axiom automatically creates the GenAI dashboard if the field `attributes.gen_ai.operation.name` is present in your data.
To access the GenAI dashboard:
1. Click the **Dashboards** tab.
2. Click the dashboard **Generative AI Overview (DATASET\_NAME)** where `DATASET_NAME` is the name of your GenAI dataset.
The GenAI dashboard provides you with important insights about your GenAI app such as:
* Vitals about requests, broken down by operation, capability, and step
* Token usage and cost analysis
* Error analysis
* Comparison of performance and reliability of different AI models
## What’s next?
After capturing and analyzing production telemetry, use these insights to improve your capability. Learn more in [Iterate](/ai-engineering/iterate).
# Instrumentation with Axiom AI SDK
Source: https://axiom.co/docs/ai-engineering/observe/axiom-ai-sdk-instrumentation
Learn how to instrument your TypeScript app with Axiom AI SDK for AI telemetry capture.
## Instrument tool calls

For many AI capabilities, the LLM call is only part of the story. If your capability uses tools to interact with external data or services, observing the performance and outcome of those tools is critical. Axiom AI SDK provides the `wrapTool` and `wrapTools` functions to automatically instrument your Vercel AI SDK tool definitions.

The `wrapTool` helper takes your tool’s name and its definition and returns an instrumented version. This wrapper creates a dedicated child span for every tool execution, capturing its arguments, output, and any errors.

```ts /src/app/generate-text/page.tsx theme={null}
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { wrapTool } from 'axiom/ai';
import { gpt4o } from '@/shared/openai';

// In your generateText call, provide wrapped tools
const { text, toolResults } = await generateText({
  model: gpt4o,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'How do I get from Paris to Berlin?' },
  ],
  tools: {
    // Wrap each tool with its name
    findDirections: wrapTool(
      'findDirections', // The name of the tool
      tool({
        description: 'Find directions to a location',
        inputSchema: z.object({
          from: z.string(),
          to: z.string(),
        }),
        execute: async (params) => {
          // Your tool logic here...
          return {
            directions: `To get from ${params.from} to ${params.to}, use a teleporter.`,
          };
        },
      })
    ),
  },
});
```

## Complete example

The following example shows how all three instrumentation functions work together in a single, real-world example:

```ts /src/app/page.tsx expandable theme={null}
import { withSpan, wrapAISDKModel, wrapTool } from 'axiom/ai';
import { generateText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

// 1. Create and wrap the AI model client
const openaiProvider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
const gpt4o = wrapAISDKModel(openaiProvider('gpt-4o'));

// 2. Define and wrap your tool(s)
const findDirectionsTool = wrapTool(
  'findDirections', // The tool name must be passed to the wrapper
  tool({
    description: 'Find directions to a location',
    inputSchema: z.object({ from: z.string(), to: z.string() }),
    execute: async ({ from, to }) => ({
      directions: `To get from ${from} to ${to}, use a teleporter.`,
    }),
  })
);

// 3. In your application logic, use `withSpan` to add context
// and call the AI model with your wrapped tools.
export default async function Page() {
  const userId = 123;
  const { text } = await withSpan(
    { capability: 'get_directions', step: 'generate_ai_response' },
    async (span) => {
      // You have access to the OTel span to add custom attributes
      span.setAttribute('user_id', userId);
      return generateText({
        model: gpt4o, // Use the wrapped model
        messages: [
          { role: 'system', content: 'You are a helpful assistant.' },
          { role: 'user', content: 'How do I get from Paris to Berlin?' },
        ],
        tools: {
          findDirections: findDirectionsTool, // Use the wrapped tool
        },
      });
    }
  );
  return <p>{text}</p>;
}
```

This demonstrates the three key steps to rich observability:

1. **`wrapAISDKModel`**: Automatically captures telemetry for the LLM provider call.
2. **`wrapTool`**: Instruments the tool execution with detailed spans.
3. **`withSpan`**: Creates a parent span that ties everything together under a business capability.

## What’s next?

After sending traces to Axiom:

* View your [traces](/query-data/traces) in Console.
* Set up [monitors and alerts](/monitor-data/monitors) based on your AI telemetry data.
* Learn about [developing AI features](/ai-engineering/create) with confidence using Axiom.

# Generative AI attributes

Source: https://axiom.co/docs/ai-engineering/observe/gen-ai-attributes

Understand the key attributes that your generative AI app sends to Axiom.

After you instrument your app, every LLM call sends a detailed span to your Axiom dataset. The spans are enriched with standardized `gen_ai.*` attributes that make your AI interactions easy to query and analyze. Key attributes include the following:

* `gen_ai.capability.name`: The high-level capability name you defined in `withSpan`.
* `gen_ai.step.name`: The specific step within the capability.
* `gen_ai.request.model`: The model requested for the completion.
* `gen_ai.response.model`: The model that actually fulfilled the request.
* `gen_ai.usage.input_tokens`: The number of tokens in the prompt.
* `gen_ai.usage.output_tokens`: The number of tokens in the generated response.
* `gen_ai.prompt`: The full, rendered prompt or message history sent to the model (as a JSON string).
* `gen_ai.completion`: The full response from the model, including tool calls (as a JSON string).
* `gen_ai.response.finish_reasons`: The reason the model stopped generating tokens. For example: `stop`, `tool-calls`.
* `gen_ai.tool.name`: The name of the executed tool.
* `gen_ai.tool.arguments`: The arguments passed to the tool (as a JSON string).
* `gen_ai.tool.message`: The result returned by the tool (as a JSON string).

## What’s next?

After capturing and analyzing production telemetry:

* [Visualize traces](/query-data/traces) in Console.
* Use the new insights to [iterate](/ai-engineering/iterate) on your capability.

# Manual instrumentation

Source: https://axiom.co/docs/ai-engineering/observe/manual-instrumentation

Learn how to manually instrument your generative AI apps using OpenTelemetry tooling.

Manually instrumenting your generative AI apps using language-agnostic OpenTelemetry tooling gives you full control over your instrumentation while ensuring compatibility with Axiom’s AI engineering features.
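When instrumenting manually, the payoff comes from attaching the same `gen_ai.*` attributes described above to the spans you create yourself. The following is a minimal sketch of such an attribute map — the attribute names follow the list above, while every value is invented for illustration. Note that `gen_ai.prompt` is serialized as a JSON string, so consumers parse it back out:

```python theme={null}
import json

# Illustrative attribute map for a single manually instrumented LLM call.
# Attribute names follow the gen_ai.* conventions; all values are examples.
span_attributes = {
    "gen_ai.capability.name": "get_directions",
    "gen_ai.step.name": "generate_ai_response",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.response.model": "gpt-4o-2024-08-06",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 128,
    # Prompt and completion are stored as JSON strings, not nested objects
    "gen_ai.prompt": json.dumps([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do I get from Paris to Berlin?"},
    ]),
    "gen_ai.response.finish_reasons": ["stop"],
}

# Because gen_ai.prompt is a JSON string, parse it before inspecting messages
messages = json.loads(span_attributes["gen_ai.prompt"])
print(messages[1]["content"])  # How do I get from Paris to Berlin?
```

Whatever OpenTelemetry SDK you use, setting these attributes on your spans keeps your manually produced telemetry queryable alongside data from Axiom AI SDK.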
You can create a token that has access to a single zone, a single account, or a mix of both, depending on your needs. For account access, the token must have these permissions:
* Logs: Edit
* Account settings: Read
For zones, only the Logs: Edit permission is required.
## Steps
* Log in to your Cloudflare dashboard, and then select the Enterprise zone (domain) you want to enable Logpush for.
* Optionally, set filters and fields. You can filter logs by field (like Client IP, User Agent, etc.) and set the type of logs you want (for example, HTTP requests, firewall events).
* In Axiom, click **Settings**, select **Apps**, and install the Cloudflare Logpush app with the token you created from the profile settings in Cloudflare.
* You see your available accounts and zones. Select the Cloudflare datasets you want to subscribe to.
* The installation uses the Cloudflare API to create Logpush jobs for each selected dataset.
* After the installation completes, you can find the installed Logpush jobs in Cloudflare, both for zone-scoped and account-scoped jobs.
* In Axiom, you can see your Cloudflare Logpush dashboard.
Using Axiom with Cloudflare Logpush offers a powerful solution for real-time monitoring, observability, and analytics. Axiom can help you gain deep insights into your app’s performance, errors, and app bottlenecks.
### Benefits of using the Axiom Cloudflare Logpush Dashboard
* Real-time visibility into web performance: One of the most crucial features is the ability to see how your website or app is performing in real-time. The dashboard can show everything from page load times to error rates, giving you immediate insights that can help in timely decision-making.
* Actionable insights for troubleshooting: The dashboard doesn’t just provide raw data; it provides insights. Whether it’s an error that needs immediate fixing or performance metrics that show an error from your app, having this information readily available makes it easier to identify problems and resolve them swiftly.
* DNS metrics: Understanding the DNS requests, DNS queries, and DNS cache hit from your app is vital to track if there’s a request spike or get the total number of queries in your system.
* Centralized logging and error tracing: With logs coming in from various parts of your app stack, centralizing them within Axiom makes it easier to correlate events across different layers of your infrastructure. This is crucial for troubleshooting complex issues that may span multiple services or components.
## Supported Cloudflare Logpush datasets
Axiom supports the following zone-scoped and account-scoped Cloudflare Logpush datasets.
**Zone-scoped**
* DNS logs
* Firewall events
* HTTP requests
* NEL reports
* Spectrum events
**Account-scoped**
* Access requests
* Audit logs
* CASB Findings
* Device posture results
* DNS Firewall Logs
* Gateway DNS
* Gateway HTTP
* Gateway Network
* Magic IDS Detections
* Network Analytics Logs
* Workers Trace Events
* Zero Trust Network Session Logs
# Connect Axiom with Cloudflare Workers
Source: https://axiom.co/docs/apps/cloudflare-workers
This page explains how to enrich your Axiom experience with Cloudflare Workers.
The Axiom Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status. Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with the dynamic Cloudflare Workers dashboard.
The data obtained with the Axiom dashboard gives you better insights into the state of your Cloudflare Workers so you can easily monitor bad requests, popular URLs, cumulative execution time, successful requests, and more. The app is part of Axiom’s unified logging and observability platform, so you can easily track Cloudflare Workers edge requests alongside a comprehensive view of other resources in your Cloudflare Worker environments.
## What’s a Grafana data source plugin?
Grafana is an open-source tool for time-series analytics, visualization, and alerting. It’s frequently used in DevOps and IT Operations roles to provide real-time information on system health and performance.
Data sources in Grafana are the actual databases or services where the data is stored. Grafana has a variety of data source plugins that connect Grafana to different types of databases or services. This enables Grafana to query those sources and display that data on its dashboards. The data sources can be anything from traditional SQL databases to time-series databases, or metrics and logs from Axiom.
A Grafana data source plugin extends the functionality of Grafana by allowing it to interact with a specific type of data source. These plugins enable users to pull data from a variety of sources, not just those supported by default in Grafana.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Install the Axiom Grafana data source plugin on Grafana Cloud
* In Grafana, click **Administration > Plugins** in the side navigation menu to view installed plugins.
* In the filter bar, search for the Axiom plugin.
* Click the plugin logo.
* Click **Install**.
  When the installation is complete, a confirmation message is displayed.
* The Axiom Grafana plugin can also be installed from the [Grafana Plugins page](https://grafana.com/grafana/plugins/axiomhq-axiom-datasource/).
## Install the Axiom Grafana data source plugin on local Grafana
The Axiom data source plugin for Grafana is [open source on GitHub](https://github.com/axiomhq/axiom-grafana). It can be installed via the Grafana CLI, or via Docker.
### Install the Axiom Grafana Plugin using Grafana CLI
```bash theme={null}
grafana-cli plugins install axiomhq-axiom-datasource
```
### Install via Docker
* Add the plugin to your `docker-compose.yml` or `Dockerfile`.
* Set the environment variable `GF_INSTALL_PLUGINS` to include the plugin.
For example:
`GF_INSTALL_PLUGINS="axiomhq-axiom-datasource"`
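For instance, a minimal `docker-compose.yml` sketch (the Grafana image tag and port mapping are illustrative):

```yaml theme={null}
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      # Grafana installs the listed plugins at container startup
      - GF_INSTALL_PLUGINS=axiomhq-axiom-datasource
```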
## Configuration
* Add a new data source in Grafana.
* Select the Axiom data source type.
* Enter the previously generated API token.
* Save and test the data source.
## Build queries with the query editor
The Axiom data source plugin provides a custom query editor to build and visualize your Axiom event data. After configuring the Axiom data source, start building visualizations from metrics and logs stored in Axiom.
* Create a new panel in Grafana by clicking **Add visualization**.
* Select the Axiom data source.
* Use the query editor to choose the desired metrics, dimensions, and filters.
## Benefits of the Axiom Grafana data source plugin
The Axiom Grafana data source plugin allows users to display and interact with their Axiom data directly from within Grafana. By doing so, it provides several advantages:
1. **Unified visualization:** The Axiom Grafana data source plugin allows users to utilize Grafana’s powerful visualization tools with Axiom’s data. This enables users to create, explore, and share dashboards which visually represent their Axiom logs and metrics.
2. **Rich querying capability:** Grafana has a powerful and flexible interface for building data queries. With the Axiom plugin, you can leverage this capability to build complex queries against your Axiom data.
3. **Customizable alerting:** Grafana’s alerting feature lets you set up custom alerts based on specific conditions in your Axiom log data.
4. **Sharing and collaboration:** Grafana’s features for sharing and collaboration can help teams work together more effectively. Share Axiom data visualizations with others, collaborate on dashboards, and discuss insights directly in Grafana.
# Map location data with Axiom and Hex
Source: https://axiom.co/docs/apps/hex
This page explains how to visualize geospatial log data from Axiom using Hex interactive maps.
Hex is a powerful collaborative data platform that allows you to create notebooks with Python/SQL code and interactive visualizations.
This page explains how to integrate Hex with Axiom to visualize geospatial data from your logs. You ingest location data into Axiom, query it using APL, and create interactive map visualizations in Hex.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
* [Create a Hex account](https://app.hex.tech/).
## Send geospatial data to Axiom
Send your sample location data to Axiom using the API endpoint. For example, the following HTTP request sends sample robot location data with latitude, longitude, status, and satellite information.
```bash theme={null}
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/ingest/DATASET_NAME' \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
-d '[
{
"data": {
"robot_id": "robot-001",
"latitude": 37.7749,
"longitude": -122.4194,
"num_satellites": 8,
"status": "active"
}
}
]'
```
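Once events like these are in Axiom, a Hex notebook cell can reshape query results into the flat latitude/longitude table a map visualization expects. The following is a minimal pure-Python sketch under the assumption that query results come back shaped like the ingested payload above; the `to_map_points` helper and its satellite threshold are illustrative, not part of any Axiom or Hex API:

```python theme={null}
# Sample rows as they might come back from an APL query over the dataset
rows = [
    {"robot_id": "robot-001", "latitude": 37.7749, "longitude": -122.4194,
     "num_satellites": 8, "status": "active"},
    {"robot_id": "robot-002", "latitude": 37.8044, "longitude": -122.2712,
     "num_satellites": 3, "status": "idle"},
]

def to_map_points(rows, min_satellites=4):
    """Keep only rows with a reliable GPS fix, reduced to the columns a map layer needs."""
    return [
        {"id": r["robot_id"], "lat": r["latitude"], "lon": r["longitude"]}
        for r in rows
        if r["num_satellites"] >= min_satellites
    ]

points = to_map_points(rows)
print(points)  # robot-002 is dropped: only 3 satellites
```

In Hex, a list of dicts like `points` can be loaded into a dataframe and handed straight to a map cell.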
## Monitor Lambda functions and usage in Axiom
Having real-time visibility into your function logs is important because any delay between a Lambda invocation and its execution adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum execution time, minimum execution time, and all invocations by function.
The Axiom Lambda Extension gives you full visibility into the most important metrics and logs coming from your Lambda function out of the box without any further configuration required.
## Track cold start on your Lambda function
A cold start occurs when there’s no available function instance to respond to an invocation, so the runtime must first be created during the initialization process, delaying your invocation. With the built-in Axiom Serverless AWS Lambda dashboard, you can track the effect of cold starts on each of your Lambda functions. This data lets you know when to take actionable steps, such as using provisioned concurrency or reducing function dependencies.
## Optimize slow-performing Lambda queries
Grouping logs with Lambda invocations and execution time by function provides insights into your events request and response pattern. You can extend your query to view when an invocation request is rejected and configure alerts to be notified on Serverless log patterns and Lambda function payloads. With the invocation request dashboard, you can monitor request function logs and see how your Lambda serverless functions process your events and Lambda queues over time.
## Detect timeout on your Lambda function
Axiom Lambda function monitors let you identify the different points of invocation failure, cold-start delays, and AWS Lambda errors in your Lambda functions. With standard function logs like invocations by function and Lambda cold starts, monitoring your execution time can alert you to a significant spike whenever an error occurs in your Lambda function.
## Smart filters
Axiom Lambda Serverless Smart Filters lets you easily filter down to specific AWS Lambda functions or Serverless projects and use saved queries to get deep insights on how functions are performing with a single click.
# Connect Axiom with Netlify
Source: https://axiom.co/docs/apps/netlify
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This gives you a better understanding of how your Jamstack apps are performing.
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This integration gives you a better understanding of how your Jamstack apps are performing.
You can easily monitor logs and metrics related to your website traffic, serverless functions, and app requests. The integration is easy to set up, and you don’t need to configure anything to get started.
With Axiom’s Zero-Config Observability app, you can see all your metrics in real-time, without sampling. That means you can get a complete view of your app’s performance without any gaps in data.
Axiom’s Netlify app is complete with a pre-built dashboard that gives you control over your Jamstack projects. You can use this dashboard to track key metrics and make informed decisions about your app’s performance.
Overall, the Axiom Netlify app makes it easy to monitor and optimize your Jamstack apps. However, do note that this integration is only available to Netlify customers on enterprise-level plans, where [Log Drains are supported](https://docs.netlify.com/monitor-sites/log-drains/).
## What’s Netlify
Netlify is a platform for building highly performant and dynamic websites, e-commerce stores, and web apps. Netlify automatically builds your site and deploys it across its global edge network.
The Netlify platform provides teams everything they need to take modern web projects from the first preview to full production.
## Sending logs to Axiom
The log events ingested into Axiom give you better insight into the state of your Netlify sites environment so that you can easily monitor traffic volume, website configurations, function logs, resource usage, and more.
1. Log in to your [Axiom account](https://app.axiom.co/), click **Apps** in the **Settings** menu, select the **Netlify app**, and then click **Install now**.
2. Click **Authorize**, and then copy the integration token.
3. Log in to your **Netlify Team Account**, select your site settings, and then select **Log Drains**.
4. In your log drain service, select **Axiom**, paste the integration token from Step 2, and then click **Connect**.
## App overview
### Traffic and function Logs
With Axiom, you can instrument and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use the pre-built Netlify Dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs are produced when users collaborate and interact with your sites and websites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset.
You can also drill down to your site source with Axiom’s advanced query language and fork the dashboard to start building your own site monitors.
Back in your Axiom datasets console, you see all your traffic and function logs in your `netlify` dataset.
### Live stream logs
Stream your sites and app logs live, and filter them to see important information.
### Zero-config dashboard for your Netlify sites
Use the pre-built Netlify Dashboard to get an overview of all the important metrics. You can fork the dashboard and start building your own.
## Start logging Netlify Sites today
The Axiom Netlify integration allows you to monitor and log all of your sites and apps in one place. With the Axiom app, you can quickly detect site errors and get high-level insights into your Netlify projects.
# Connect Axiom with Tailscale
Source: https://axiom.co/docs/apps/tailscale
This page explains how to integrate Axiom with Tailscale.
Tailscale is a secure networking solution that allows you to create and manage a private network (tailnet), securely connecting all your devices.
Integrating Axiom with Tailscale allows you to stream your audit and network flow logs directly to Axiom seamlessly, unlocking powerful insights and analysis. Whether you’re conducting a security audit, optimizing performance, or ensuring compliance, Axiom’s Tailscale dashboard equips you with the tools to maintain a secure and efficient network, respond quickly to potential issues, and make informed decisions about your network configuration and usage.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
* [Create a Tailscale account](https://login.tailscale.com/start).
## Setup
1. In Tailscale, go to the [configuration logs page](https://login.tailscale.com/admin/logs) of the admin console.
2. Add Axiom as a configuration log streaming destination in Tailscale. For more information, see the [Tailscale documentation](https://tailscale.com/kb/1255/log-streaming?q=stream#add-a-configuration-log-streaming-destination).
## Tailscale dashboard
Axiom displays the data it receives in a pre-built Tailscale dashboard that delivers immediate, actionable insights into your tailnet’s activity and health.
This comprehensive overview includes:
* **Log type distribution**: Understand the balance between configuration audit logs and network flow logs over time.
* **Top actions and hosts**: Identify the most common network actions and most active devices.
* **Traffic visualization**: View physical, virtual, and exit traffic patterns for both sources and destinations.
* **User activity tracking**: Monitor actions by user display name, email, and ID for security audits and compliance.
* **Configuration log stream**: Access a detailed audit trail of all configuration changes.
With these insights, you can:
* Quickly identify unusual network activity or traffic patterns.
* Track configuration changes and user actions.
* Monitor overall network health and performance.
* Investigate specific events or users as needed.
* Understand traffic distribution across your tailnet.
# Connect Axiom with Terraform
Source: https://axiom.co/docs/apps/terraform
Provision and manage Axiom resources such as datasets and monitors with Terraform.
Axiom Terraform Provider lets you provision and manage Axiom resources (datasets, notifiers, monitors, and users) with Terraform. This means that you can programmatically create resources, access existing ones, and perform further infrastructure automation tasks.
Install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest). To see the provider in action, check out the [example](https://github.com/axiomhq/terraform-provider-axiom/blob/main/example/main.tf).
This guide explains how to install the provider and perform some common procedures such as creating new resources and accessing existing ones. For the full API reference, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/docs).
## Prerequisites
* [Sign up for a free Axiom account](https://app.axiom.co/register). All you need is an email address.
* [Create an advanced API token in Axiom](/reference/tokens#create-advanced-api-token) with the permissions to perform the actions you want to use Terraform for. For example, to use Terraform to create and update datasets, create the advanced API token with these permissions.
* [Create a Terraform account](https://app.terraform.io/signup/account).
* [Install the Terraform CLI](https://developer.hashicorp.com/terraform/cli).
## Install the provider
To install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest), follow these steps:
1. Add the following code to your Terraform configuration file. Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
```hcl theme={null}
terraform {
required_providers {
axiom = {
source = "axiomhq/axiom"
}
}
}
provider "axiom" {
api_token = "API_TOKEN"
}
```
2. In your terminal, go to the folder of your main Terraform configuration file, and then run the command `terraform init`.
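As a sketch of the environment-variable approach mentioned above (the variable name `axiom_api_token` is illustrative), declare a sensitive Terraform input variable and reference it in the provider block instead of hard-coding the token:

```hcl theme={null}
variable "axiom_api_token" {
  type      = string
  sensitive = true
}

provider "axiom" {
  api_token = var.axiom_api_token
}
```

Terraform then reads the value from your shell, for example via `export TF_VAR_axiom_api_token="API_TOKEN"`, so the token never appears in your configuration files.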
## Create new resources
### Create dataset
To create a dataset in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name` and `description` fields.
```hcl theme={null}
resource "axiom_dataset" "test_dataset" {
name = "test_dataset"
description = "This is a test dataset created by Terraform."
}
```
### Create notifier
To create a Slack notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `SLACK_URL` with the webhook URL from your Slack instance. For more information on obtaining this URL, see the [Slack documentation](https://api.slack.com/messaging/webhooks).
```hcl theme={null}
resource "axiom_notifier" "test_slack_notifier" {
name = "test_slack_notifier"
properties = {
slack = {
slack_url = "SLACK_URL"
}
}
}
```
To create a Discord notifier in Axiom using the provider, add the following code to your Terraform configuration file.
* Replace `DISCORD_CHANNEL` with the webhook URL from your Discord instance. For more information on obtaining this URL, see the [Discord documentation](https://discord.com/developers/resources/webhook).
* Replace `DISCORD_TOKEN` with your Discord API token. For more information on obtaining this token, see the [Discord documentation](https://discord.com/developers/topics/oauth2).
```hcl theme={null}
resource "axiom_notifier" "test_discord_notifier" {
name = "test_discord_notifier"
properties = {
discord = {
discord_channel = "DISCORD_CHANNEL"
discord_token = "DISCORD_TOKEN"
}
}
}
```
To create an email notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `EMAIL1` and `EMAIL2` with the email addresses you want to notify.
```hcl theme={null}
resource "axiom_notifier" "test_email_notifier" {
name = "test_email_notifier"
properties = {
email= {
emails = ["EMAIL1","EMAIL2"]
}
}
}
```
For more information on the types of notifier you can create, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/notifier).
### Create monitor
To create a monitor in Axiom using the provider, add the following code to your Terraform configuration file and customize it:
```hcl theme={null}
resource "axiom_monitor" "test_monitor" {
depends_on = [axiom_dataset.test_dataset, axiom_notifier.test_slack_notifier]
# `type` can be one of the following:
# - "Threshold": For numeric values against thresholds. It requires `operator` and `threshold`.
# - "MatchEvent": For detecting specific events. It doesn’t require `operator` and `threshold`.
# - "AnomalyDetection": For detecting anomalies. It requires `compare_days`, `tolerance`, and `operator`.
type = "Threshold"
name = "test_monitor"
description = "This is a test monitor created by Terraform."
apl_query = "['test_dataset'] | summarize count() by bin_auto(_time)"
interval_minutes = 5
# `operator` is required for threshold and anomaly detection monitors.
# Valid values are "Above", "AboveOrEqual", "Below", "BelowOrEqual".
operator = "Above"
range_minutes = 5
# `threshold` is required for threshold monitors
threshold = 1
# `compare_days` and `tolerance` are required for anomaly detection monitors.
# Uncomment the two lines below for anomaly detection monitors.
# compare_days = 7
# tolerance = 25
notifier_ids = [
axiom_notifier.test_slack_notifier.id
]
alert_on_no_data = false
notify_by_group = false
}
```
This example creates a monitor using the dataset `test_dataset` and the notifier `test_slack_notifier`. These are resources you have created and accessed in the sections above.
* Customize the `name` and the `description` fields.
* In the `apl_query` field, specify the APL query for the monitor.
For more information on these fields, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/monitor).
### Create user
To create a user in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name`, `email`, and `role` fields.
```hcl theme={null}
resource "axiom_user" "test_user" {
name = "test_user"
email = "test@abc.com"
role = "user"
}
```
## Access existing resources
### Access existing dataset
To access an existing dataset, follow these steps:
1. Determine the ID of the Axiom dataset by sending a GET request to the [`datasets` endpoint of the Axiom API](/restapi/endpoints/getDatasets).
2. Add the following code to your Terraform configuration file. Replace `DATASET_ID` with the ID of the Axiom dataset.
```hcl theme={null}
data "axiom_dataset" "test_dataset" {
id = "DATASET_ID"
}
```
### Access existing notifier
To access an existing notifier, follow these steps:
1. Determine the ID of the Axiom notifier by sending a GET request to the `notifiers` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `NOTIFIER_ID` with the ID of the Axiom notifier.
```hcl theme={null}
data "axiom_notifier" "test_slack_notifier" {
id = "NOTIFIER_ID"
}
```
### Access existing monitor
To access an existing monitor, follow these steps:
1. Determine the ID of the Axiom monitor by sending a GET request to the `monitors` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `MONITOR_ID` with the ID of the Axiom monitor.
```hcl theme={null}
data "axiom_monitor" "test_monitor" {
id = "MONITOR_ID"
}
```
### Access existing user
To access an existing user, follow these steps:
1. Determine the ID of the Axiom user by sending a GET request to the `users` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `USER_ID` with the ID of the Axiom user.
```hcl theme={null}
data "axiom_user" "test_user" {
id = "USER_ID"
}
```
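The data sources above can be combined with managed resources in the same configuration by referencing their Terraform addresses. The following sketch builds a monitor on the existing dataset; the `dataset_name` attribute and the query are illustrative assumptions, so check the [monitor documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/monitor) for the exact schema.
```hcl theme={null}
# Sketch: reference the existing dataset by its Terraform address.
# `dataset_name` is an assumed attribute name; verify against the
# provider schema in the Terraform Registry.
resource "axiom_monitor" "example_monitor" {
  name         = "example_monitor"
  description  = "Monitor built on an existing dataset"
  dataset_name = data.axiom_dataset.test_dataset.name
  apl_query    = "['test_dataset'] | summarize count() by bin_auto(_time)"
}
```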
# Connect Axiom with Vercel
Source: https://axiom.co/docs/apps/vercel
Easily monitor data from requests, functions, and web vitals in one place to get the deepest observability experience for your Vercel projects.
Connect Axiom with Vercel to get the deepest observability experience for your Vercel projects.
Easily monitor data from requests, functions, and web vitals in one place. 100% live and 100% of your data, no sampling.
Axiom’s Vercel app ships with a pre-built dashboard and pre-installed monitors so you can be in complete control of your projects with minimal effort.
If you use Axiom Vercel integration, [annotations](/query-data/annotate-charts) are automatically created for deployments.
## What’s Vercel?
Vercel is a platform for frontend frameworks and static sites, built to integrate with your headless content, commerce, or database.
Vercel provides a frictionless developer experience to take care of the hard things: deploying instantly, scaling automatically, and serving personalized content around the globe.
Vercel makes it easy for frontend teams to develop, preview, and ship delightful user experiences, where performance is the default.
## Send logs to Axiom
Install the [Axiom Vercel app](https://vercel.com/integrations/axiom) to start streaming logs and web vitals within minutes.
## App Overview
### Request and function logs
For both requests and serverless functions, Axiom automatically installs a [drain](https://vercel.com/docs/drains/using-drains) in your Vercel account to capture data live.
As users interact with your website, various logs are produced. Axiom captures all these logs and ingests them into the `vercel` dataset. You can stream and analyze these logs live, or use the pre-built Vercel Dashboard to get an overview of all the important metrics. When you’re ready, you can fork the dashboard and start building your own.
For function logs, if you call `console.log`, `console.warn` or `console.error` in your function, the output is also captured and made available as part of the log. You can use APL to easily search these logs.
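For example, assuming your function logs land in the `vercel` dataset with a `message` field (the field name is an assumption and can vary with your log shape), a query like this surfaces recent error output:
```kusto theme={null}
['vercel']
| where ['message'] contains "error"
| limit 50
```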
## Web vitals
Axiom supports capturing and analyzing Web Vital data directly from your user’s browser without any sampling and with more data than is available elsewhere. It’s perfect to pair with Vercel’s in-built analytics when you want to get really deep into a specific problem or debug issues with a specific audience (user-agent, location, region, etc).
* **Boxplots** for numeric fields (integers, floats, timespans) with many distinct values
* Shows the range of values in both comparison and baseline sets.
* Identifies the minimum, P25, P75, and maximum values.
* Useful for understanding differences in response times or other numeric quantities.
For each visualization, Axiom displays the proportion of selected and baseline events (where the field is present).
### Dig deeper
To dig deeper, iteratively refine your Spotlight analysis or jump to a view of matching events.
1. **Filter and re-run**: Right-click specific values in the results and select **Re-run spotlight** to filter your data and run Spotlight again with a more focused scope.
2. **Show events**: Right-click specific values in the results and select **Show events** to filter your data and see matching events.
## Spotlight limitations
* **Custom attributes**: Currently, custom attributes in OTel spans aren’t included in the Spotlight results. Axiom will soon support custom attributes in Spotlight.
* **Complex queries**: Spotlight works well for queries with maximum one aggregation step. Complex queries with multiple aggregations aren’t supported.
## Example workflows
### Investigate slow traces
1. Create a heatmap of trace durations. For example, run the following query:
```kusto theme={null}
['otel-demo-traces']
| summarize histogram(duration, 20) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20histogram\(duration%2C%2020\)%20by%20bin_auto\(_time\)%22%7D)
2. Select the region showing the slowest traces.
3. Run Spotlight to see if slow traces are associated with specific endpoints, regions, or user segments.
### Understand error spikes
1. Build a time series of error-level logs. For example, run the following query:
```kusto theme={null}
['sample-http-logs']
| where status startswith "5"
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20startswith%20'5'%20%7C%20summarize%20count\(\)%20by%20bin_auto\(_time\)%22%7D)
2. Select the time period where errors spiked.
3. Run Spotlight to identify if there’s anything different about the selected errors.
# Configure dashboard elements
Source: https://axiom.co/docs/dashboard-elements/configure
This section explains how to configure dashboard elements.
When you create a chart, click
**Bars:** A bar chart represents data in rectangular bars. The length of each bar is proportional to the value it represents. Bar charts can be used to compare discrete quantities, or when you have categorical data.
**Line:** A line chart connects individual data points into a continuous line, which is useful for showing logs over time. Line charts are often used for time series data.
## Y-Axis
Specify the scale of the vertical axis.
**Linear:** A linear scale maintains a consistent scale where equal distances represent equal changes in value. This is the most common scale type and is useful for most types of data.
**Log:** A logarithmic (or log) scale represents values in terms of their order of magnitude. Each unit of distance on a log scale represents a tenfold increase in value. Log scales make it easy to see backend errors and compare values across a wide range.
## Annotations
Specify the types of annotations to display in the chart:
* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display
# Create dashboard elements
Source: https://axiom.co/docs/dashboard-elements/create
This section explains how to create dashboard elements.
Dashboard elements are the different visual elements that you can include in your dashboard to display your data and other information. For example, you can track key metrics, logs, and traces, and monitor real-time data flow.
You can create the following dashboard elements:
* [Filter bar](/query-data/filters)
* [Heatmap](/dashboard-elements/heatmap)
* [Log stream](/dashboard-elements/log-stream)
* [Monitor list](/dashboard-elements/monitor-list)
* [Note](/dashboard-elements/note)
* [Pie](/dashboard-elements/pie-chart)
* [Scatter](/dashboard-elements/scatter-plot)
* [Statistic](/dashboard-elements/statistic)
* [Table](/dashboard-elements/table)
* [Time series](/dashboard-elements/time-series)
* [Top list](/dashboard-elements/top-list)
* [Spacer](/dashboard-elements/spacer)
## Create dashboard elements
1. [Create a dashboard](/dashboards/create) or open an existing dashboard.
2. Click
This component is a visual query builder that eases the process of building visualizations and segments of your data.
This guide walks you through the individual sections of the query builder.
### Time range
Every query has a start and end time and the time range component allows quick selection of common time ranges as well as the ability to input specific start and end timestamps:
* Use the **Quick Range** items to quickly select popular ranges
* Use the **Custom Start/End Date** inputs to select specific times
* Use the **Resolution** items to choose between various time bucket resolutions
### Against
When a time series visualization is selected, such as `count`, the **Against** menu is enabled and you can select a historical time range to compare your current results against.
For example, to compare the last hour’s average response time to the same time yesterday, select `1 hr` in the time range menu, and then select `-1D` from the **Against** menu:
The results look like this:
The dotted line represents results from the base date, and the totals table includes the comparative totals.
When you add a field to the `group by` clause, the **against** comparison values are attached to each group of events.
### Visualizations
Axiom provides powerful visualizations that display the output of running aggregate functions across your dataset. The Visualization menu allows you to add these visualizations and, where required, input their arguments:
You can select a visualization to add it to the query. If a visualization requires an argument (such as the field and/or other parameters), the menu allows you to select eligible fields and input those arguments. Press Enter to complete the addition.
Click Visualization in the query builder to edit it at any time.
[Learn about supported visualizations](/query-data/visualizations)
### Filters
Use the filter menu to attach filter clauses to your search.
Axiom supports AND/OR operators at the top-level as well as one level deep. This means you can create filters that would read as `status == 200 AND (method == get OR method == head) AND (user-agent contains Mozilla or user-agent contains Webkit)`.
Filters are divided up by the field type they operate on, but some may apply to more than one field type.
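In APL, a nested filter like the example above can be written directly in a `where` clause. The following sketch adapts it to the `sample-http-logs` dataset; the field names (`status`, `method`, `user_agent`) are assumptions based on that sample dataset:
```kusto theme={null}
['sample-http-logs']
| where status == "200"
    and (method == "GET" or method == "HEAD")
    and (['user_agent'] contains "Mozilla" or ['user_agent'] contains "WebKit")
```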
#### List of filters
*String Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `starts-with`
* `not-starts-with`
* `ends-with`
* `not-ends-with`
* `contains`
* `not-contains`
* `regexp`
* `not-regexp`
*Number Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `>`
* `>=`
* `<`
* `<=`
*Boolean Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
*Array Fields*
* `contains`
* `not-contains`
* `exists`
* `not-exists`
### Group by (segmentation)
When visualizing data, it can be useful to segment data into specific groups to more clearly understand how the data behaves.
The Group By component enables you to add one or more fields to group events by:
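In APL terms, grouping corresponds to the `by` clause of `summarize`. For example, to segment request counts by status and method in the sample dataset:
```kusto theme={null}
['sample-http-logs']
| summarize count() by status, method
```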
### Other options
#### Order
By default, Axiom automatically chooses the best ordering for results. However, you can manually set the desired order through this menu.
#### Limit
By default, Axiom chooses a reasonable limit for the query that has been passed in. However, you can control that limit manually through this component.
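In APL, ordering and limits map to the `sort` and `limit` operators. As a sketch, this keeps the five most frequent statuses (`count_` is the column name Axiom generates for `count()`):
```kusto theme={null}
['sample-http-logs']
| summarize count() by status
| sort by count_ desc
| limit 5
```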
## Change element’s position
To change an element’s position on the dashboard, drag the title bar of the chart.
## Change element size
To change the size of the element, drag the bottom-right corner.
## Set custom time range
You can set a custom time range for individual dashboard elements that’s different from the dashboard’s time range. For example, the dashboard displays data about the last 30 minutes but individual dashboard elements display data for different time ranges. This can be useful for visualizing the same chart or statistic for different time periods, among others.
To set a custom time range for a dashboard element:
1. In the top right of the dashboard element, click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize histogram(req_duration_ms, 15) by bin_auto(_time)
```
# Log stream
Source: https://axiom.co/docs/dashboard-elements/log-stream
This section explains how to create log stream dashboard elements and add them to your dashboard.
export const elementName_0 = "log stream"
export const elementButtonLabel_0 = "Log stream"
The log stream dashboard element displays your logs as they come in real-time. Each log appears as a separate line with various details. The benefit of a log stream is that it provides immediate visibility into your system’s operations. When you’re debugging an issue or trying to understand an ongoing event, the log stream lets you see exactly what’s happening as it occurs.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| project method, status, content_type
```
# Monitor list
Source: https://axiom.co/docs/dashboard-elements/monitor-list
This section explains how to create monitor list dashboard elements and add them to your dashboard.
The monitor list dashboard element provides a visual overview of the monitors you specify. It offers a quick glance into important developments about the monitors such as their status and history.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
* [Create a monitor](/monitor-data/monitors).
## Create monitor list
1. Go to the Dashboards tab and open the dashboard to which you want to add the monitor list.
2. Click
# Note
Source: https://axiom.co/docs/dashboard-elements/note
This section explains how to create note dashboard elements and add them to your dashboard.
The note dashboard element adds a textbox to your dashboard that you can customise to your needs. For example, you can provide context in a note about the other dashboard elements.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
## Create note
1. Go to the Dashboards tab and open the dashboard to which you want to add the note.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize count() by status
```
# Scatter plot
Source: https://axiom.co/docs/dashboard-elements/scatter-plot
This section explains how to create scatter plot dashboard elements and add them to your dashboard.
export const elementName_0 = "scatter plot"
export const elementButtonLabel_0 = "Scatter"
Scatter plots are used to visualize the correlation or distribution between two distinct metrics or logs. Each point in the scatter plot could represent a log entry, with the X and Y axes showing different log attributes (like request time and response size). The scatter plot chart can be created using the simple query builder or advanced query builder.
For example, plot response size against response time for an API to see if larger responses are correlated with slower response times.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize avg(req_duration_ms), avg(resp_header_size_bytes) by resp_body_size_bytes
```
# Spacer
Source: https://axiom.co/docs/dashboard-elements/spacer
This section explains how to create spacer dashboard elements and add them to your dashboard.
The spacer dashboard element adds empty space to your dashboard layout. Use spacers to create visual separation between dashboard elements, improve the organization of your dashboard, and control the positioning of other elements.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
## Create spacer
1. Go to the Dashboards tab and open the dashboard to which you want to add the spacer.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize avg(resp_body_size_bytes)
```
# Table
Source: https://axiom.co/docs/dashboard-elements/table
This section explains how to create table dashboard elements and add them to your dashboard.
export const elementName_0 = "table"
export const elementButtonLabel_0 = "Table"
The table dashboard element displays a summary of any attributes from your metrics, logs, or traces in a sortable table format. Each row in the table could represent a different service, host, or other entity, with columns showing various attributes or metrics for that entity.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize avg(resp_body_size_bytes) by bin_auto(_time)
```
# Time series
Source: https://axiom.co/docs/dashboard-elements/time-series
This section explains how to create time series dashboard elements and add them to your dashboard.
export const elementName_0 = "time series"
export const elementButtonLabel_0 = "Timeseries"
Time series charts show the change in your data over time which can help identify infrastructure issues, spikes, or dips in the data. This can be a simple line chart, an area chart, or a bar chart. A time series chart might be used to show the change in the volume of log events, error rates, latency, or other time-sensitive data.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize count() by bin_auto(_time)
```
# Top list
Source: https://axiom.co/docs/dashboard-elements/top-list
This section explains how to create top list dashboard elements and add them to your dashboard.
export const elementName_0 = "top list"
export const elementButtonLabel_0 = "Top list"
The top list dashboard element displays the top results from your query, showing the most significant items based on your aggregation and grouping. It can display results as either a table of totals or as time series charts, depending on the aggregation type used.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/methods) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click
## Example with APL
```kusto theme={null}
['sample-http-logs']
| summarize count() by status
| top 10 by count_ desc
```
# Configure dashboards
Source: https://axiom.co/docs/dashboards/configure
This page explains how to configure your dashboards.
## Select time range
When you select the time range, you specify the time interval for which you want to display data in the dashboard. Changing the time range affects the data displayed in all dashboard elements.
To select the time range:
1. In the top right, click
Axiom enables you to make the most of your event data without compromises: all your data, all the time, for all possible needs. Say goodbye to data sampling, waiting times, and hefty fees.
This page explains how to start using Axiom and leverage the power of event data in your organization.
## 1. Send your data to Axiom
You can send data to Axiom in a variety of ways. Each individual piece of data is an event.
Events can be emitted from internal or third-party services, cloud functions, containers, virtual machines (VMs), or even scripts. Events follow the [JSON specification](https://www.json.org/json-en.html) for which field types are supported. For example, an event could look like this:
```json theme={null}
{
"service": "api-http",
"severity": "error",
"duration": 231,
"customer_id": "ghj34g32poiu4",
"tags": ["aws-east-1", "zone-b"],
"metadata": {
"version": "3.1.2"
}
}
```
An event must belong to a dataset, which is a collection of similar events. You can have multiple datasets that help segment your events to make them easier to query and visualize, and that also aid in access control.
Axiom stores every event you send and makes it available to you for querying either by streaming logs in real-time, or by analyzing events to produce visualizations.
The underlying data store of Axiom is a time series database. This means every event is indexed with a timestamp specified at ingress or set automatically.
Axiom doesn’t sample your data on ingest or querying, unless you’ve expressly instructed it to.
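As a sketch, you can send the example event above with a single HTTP request to Axiom’s ingest endpoint. Replace `API_TOKEN` and `DATASET_NAME` with your own values, and adjust the domain if your organization doesn’t use the default `api.axiom.co`:
```bash theme={null}
curl -X POST 'https://api.axiom.co/v1/datasets/DATASET_NAME/ingest' \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '[{ "service": "api-http", "severity": "error", "duration": 231 }]'
```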
* Click your app’s name to view its details. Within the app’s page, select the triggers tab to review the triggers associated with your app.
* Under the routes section of the triggers tab, you can find the URL route assigned to your Worker. This is where your Cloudflare Worker responds to incoming requests. Visit the [Cloudflare Workers documentation](https://developers.cloudflare.com/workers/get-started/guide/) to learn how to configure routes.
## Observe the telemetry data in Axiom
As you interact with your app, traces will be collected and exported to Axiom, allowing you to monitor, analyze, and gain insights into your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, offering a deeper understanding of your app’s performance and behavior.
**Working with Cloudflare Pages Functions:** Integration with OpenTelemetry is similar to Workers but uses the Cloudflare Dashboard for configuration, bypassing **`wrangler.toml`**. This simplifies setup through the Cloudflare dashboard web interface.
## Manual Instrumentation
Manual instrumentation requires adding code into your Worker’s script to create and manage spans around the code blocks you want to trace.
1. Initialize Tracer:
Use the OpenTelemetry API to create a tracer instance at the beginning of your script using the **`@microlabs/otel-cf-workers`** package.
```js theme={null}
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('your-service-name');
```
2. Create start and end Spans:
Manually start spans before the operations or events you want to trace and ensure you end them afterward to complete the tracing lifecycle.
```js theme={null}
const span = tracer.startSpan('operationName');
try {
// Your operation code here
} finally {
span.end();
}
```
3. Annotate Spans:
Add important metadata to spans to provide additional context. This can include setting attributes or adding events within the span.
```js theme={null}
span.setAttribute('key', 'value');
span.addEvent('eventName', { 'eventAttribute': 'value' });
```
## Automatic Instrumentation
Automatic instrumentation uses the **`@microlabs/otel-cf-workers`** package to automatically trace incoming requests and outbound fetch calls without manual span management.
1. Instrument your Worker:
Wrap your Cloudflare Workers script with the `instrument` function from the **`@microlabs/otel-cf-workers`** package. This automatically instruments incoming requests and outbound fetch calls.
```js theme={null}
import { instrument } from '@microlabs/otel-cf-workers';
export default instrument(yourHandler, yourConfig);
```
2. Configuration: Provide configuration details, including how to export telemetry data and service metadata to Axiom as part of the `instrument` function call.
```js theme={null}
const config = (env) => ({
exporter: {
url: 'https://AXIOM_DOMAIN/v1/traces',
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
});
```
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, providing insights into the performance and behavior of your app.
## Send data from an existing Golang project
### Manual Instrumentation
Manual instrumentation in Go involves managing spans within your code to track operations and events. This method offers precise control over what is instrumented and how spans are configured.
1. Initialize the tracer:
Use the OpenTelemetry API to obtain a tracer instance. This tracer will be used to start and manage spans.
```go theme={null}
tracer := otel.Tracer("serviceName")
```
2. Create and manage spans:
Manually start spans before the operations you want to trace and ensure they are ended after the operations complete.
```go theme={null}
ctx, span := tracer.Start(context.Background(), "operationName")
defer span.End()
// Perform the operation here
```
3. Annotate spans:
Enhance spans with additional information using attributes or events to provide more context about the traced operation.
```go theme={null}
span.SetAttributes(attribute.String("key", "value"))
span.AddEvent("eventName", trace.WithAttributes(attribute.String("key", "value")))
```
### Automatic Instrumentation
Automatic instrumentation in Go uses libraries and integrations that automatically create spans for operations, simplifying the addition of observability to your app.
1. Instrumentation libraries:
Use `OpenTelemetry-contrib` libraries designed for automatic instrumentation of standard Go frameworks and libraries, such as `net/http`.
```go theme={null}
import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
```
2. Wrap handlers and clients:
Automatically instrument HTTP servers and clients by wrapping them with OpenTelemetry’s instrumentation. For HTTP servers, wrap your handlers with `otelhttp.NewHandler`.
```go theme={null}
http.Handle("/path", otelhttp.NewHandler(handler, "operationName"))
```
3. Minimal code changes:
After setting up automatic instrumentation, no further changes are required for tracing standard operations. The instrumentation takes care of starting, managing, and ending spans.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ---------------------------- | --------------------------------------- | ------------------------------------------------------------------- |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| | attributes.custom.user\_agent.original | Original user agent string, providing client software and OS. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| | attributes.net.sock.peer.addr | Socket peer address, indicating the IP version used. |
| | attributes.net.sock.peer.port | Socket peer port number. |
| | attributes.custom.net.protocol.version | Protocol version used in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server or client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| **Resource Attributes** | | |
| | resource.environment | Environment where the trace was captured, for example, production. |
| | attributes.custom.http.wrote\_bytes | Number of bytes written in the HTTP response. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK (if not previously included). |
| | telemetry.sdk.name | Name of the telemetry SDK (if not previously included). |
| | telemetry.sdk.version | Version of the telemetry SDK (if not previously included). |
### List of imported libraries
### OpenTelemetry Go SDK
**`go.opentelemetry.io/otel`**
This is the core SDK for OpenTelemetry in Go. It provides the necessary tools to create and manage telemetry data (traces, metrics, and logs).
### OTLP Trace Exporter
**`go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`**
This package allows your app to export telemetry data over HTTP using the OpenTelemetry Protocol (OTLP). It’s important for sending data to Axiom or any other backend that supports OTLP.
### Resource and Trace Packages
**`go.opentelemetry.io/otel/sdk/resource`** and **`go.opentelemetry.io/otel/sdk/trace`**
These packages help define the properties of your telemetry data, such as service name and version, and manage trace data within your app.
### Semantic Conventions
**`go.opentelemetry.io/otel/semconv/v1.24.0`**
This package provides standardized schema URLs and attributes, ensuring consistency across different OpenTelemetry implementations.
### Tracing API
**`go.opentelemetry.io/otel/trace`**
This package offers the API for tracing. It enables you to create spans, record events, and manage context propagation in your app.
### HTTP Instrumentation
**`go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp`**
Used for instrumenting HTTP clients and servers. It automatically records data about HTTP requests and responses, which is essential for web apps.
### Propagators
**`go.opentelemetry.io/otel/propagation`**
This package provides the ability to propagate context and trace information across service boundaries.
# Send data from Java app using OpenTelemetry
Source: https://axiom.co/docs/guides/opentelemetry-java
This page explains how to configure a Java app using the Java OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry provides a unified approach to collecting telemetry data from your Java applications. This page demonstrates how to configure OpenTelemetry in a Java app to send telemetry data to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
* [Install JDK 11](https://www.oracle.com/java/technologies/java-se-glance.html) or later.
* [Install Maven](https://maven.apache.org/download.cgi).
* Use your own app written in Java or the provided `DiceRollerApp.java` sample.
## Create project
To create a Java project, run the Maven archetype command in the terminal:
```bash theme={null}
mvn archetype:generate -DgroupId=com.example -DartifactId=MyProject -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```
This command creates a new project in a directory named `MyProject` with a standard directory structure.
## Create core app
`DiceRollerApp.java` is the core of the sample app. It simulates rolling a dice and demonstrates the usage of OpenTelemetry for tracing. The app includes two methods: one for a simple dice roll and another that demonstrates the usage of span links to establish relationships between spans across different traces.
Create the `DiceRollerApp.java` in the `src/main/java/com/example` directory with the following content:
```java theme={null}
package com.example;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

import java.util.Random;

public class DiceRollerApp {
    private static final Tracer tracer;

    static {
        OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
        tracer = openTelemetry.getTracer(DiceRollerApp.class.getName());
    }

    public static void main(String[] args) {
        rollDice();
        rollDiceWithLink();
    }

    private static void rollDice() {
        Span span = tracer.spanBuilder("rollDice").startSpan();
        try (Scope scope = span.makeCurrent()) {
            int roll = 1 + new Random().nextInt(6);
            System.out.println("Rolled a dice: " + roll);
        } finally {
            span.end();
        }
    }

    private static void rollDiceWithLink() {
        Span parentSpan = tracer.spanBuilder("rollWithLink").startSpan();
        try (Scope parentScope = parentSpan.makeCurrent()) {
            Span childSpan = tracer.spanBuilder("rolldice")
                    .addLink(parentSpan.getSpanContext())
                    .startSpan();
            try (Scope childScope = childSpan.makeCurrent()) {
                int roll = 1 + new Random().nextInt(6);
                System.out.println("Dice roll result (with link): " + roll);
            } finally {
                childSpan.end();
            }
        } finally {
            parentSpan.end();
        }
    }
}
```
## Configure OpenTelemetry
`OtelConfiguration.java` sets up the OpenTelemetry SDK and configures the exporter to send data to Axiom. It initializes the tracer provider, sets up the Axiom exporter, and configures the resource attributes.
Create the `OtelConfiguration.java` file in the `src/main/java/com/example` directory with the following content:
```java theme={null}
package com.example;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

import java.util.concurrent.TimeUnit;

public class OtelConfiguration {
    private static final String SERVICE_NAME = "YOUR_SERVICE_NAME";
    private static final String SERVICE_VERSION = "YOUR_SERVICE_VERSION";
    private static final String OTLP_ENDPOINT = "https://AXIOM_DOMAIN/v1/traces";
    private static final String BEARER_TOKEN = "Bearer API_TOKEN";
    private static final String AXIOM_DATASET = "DATASET_NAME";

    public static OpenTelemetry initializeOpenTelemetry() {
        Resource resource = Resource.getDefault()
                .merge(Resource.create(Attributes.of(
                        AttributeKey.stringKey("service.name"), SERVICE_NAME,
                        AttributeKey.stringKey("service.version"), SERVICE_VERSION
                )));

        OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
                .setEndpoint(OTLP_ENDPOINT)
                .addHeader("Authorization", BEARER_TOKEN)
                .addHeader("X-Axiom-Dataset", AXIOM_DATASET)
                .build();

        SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(spanExporter)
                        .setScheduleDelay(100, TimeUnit.MILLISECONDS)
                        .build())
                .setResource(resource)
                .build();

        OpenTelemetrySdk openTelemetry = OpenTelemetrySdk.builder()
                .setTracerProvider(sdkTracerProvider)
                .buildAndRegisterGlobal();

        Runtime.getRuntime().addShutdownHook(new Thread(sdkTracerProvider::close));

        return openTelemetry;
    }
}
```
## Dynamic OpenTelemetry traces dashboard
In Axiom, go to the **Dashboards** tab to view and analyze this data in the pre-built traces dashboard. It provides insights into the performance and behavior of your app.
## Send data from an existing Node project
### Manual Instrumentation
Manual instrumentation in Node.js requires adding code to create and manage spans around the code blocks you want to trace.
1. Initialize Tracer:
Import and configure a tracer in your Node.js app. Use the tracer configured in your instrumentation setup (instrumentation.ts).
```js theme={null}
// Assuming OpenTelemetry SDK is already configured
const { trace } = require('@opentelemetry/api');
const tracer = trace.getTracer('example-tracer');
```
2. Create Spans:
Wrap the code blocks that you want to trace with spans. Start and end these spans within your code.
```js theme={null}
const span = tracer.startSpan('operation_name');
try {
  // Your code here
} catch (error) {
  span.recordException(error);
  throw error;
} finally {
  span.end();
}
```
3. Annotate Spans:
Add metadata and logs to your spans for the trace data.
```js theme={null}
span.setAttribute('key', 'value');
span.addEvent('event name', { eventKey: 'eventValue' });
```
### Automatic Instrumentation
Automatic instrumentation in Node.js simplifies adding telemetry data to your app. It uses pre-built libraries to automatically instrument common frameworks and libraries.
1. Install Instrumentation Libraries:
Use OpenTelemetry packages that automatically instrument common Node.js frameworks and libraries.
```bash theme={null}
npm install @opentelemetry/auto-instrumentations-node
```
2. Instrument Application:
Configure your app to use these libraries, which will automatically generate spans for standard operations.
```js theme={null}
// In your instrumentation setup (instrumentation.ts)
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  // ... other configurations ...
  instrumentations: [getNodeAutoInstrumentations()]
});
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------------- | --------------------------------------- | ------------------------------------------------------------ |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server or client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| **Resource Process Attributes** | | |
| | resource.process.command | Command line string used to start the process. |
| | resource.process.command\_args | List of command line arguments used in starting the process. |
| | resource.process.executable.name | Name of the executable running the process. |
| | resource.process.executable.path | Path to the executable running the process. |
| | resource.process.owner | Owner of the process. |
| | resource.process.pid | Process ID. |
| | resource.process.runtime.description | Description of the runtime environment. |
| | resource.process.runtime.name | Name of the runtime environment. |
| | resource.process.runtime.version | Version of the runtime environment. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK. |
| | telemetry.sdk.name | Name of the telemetry SDK. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
### List of imported libraries
The `instrumentation.ts` file imports the following libraries:
### **`@opentelemetry/sdk-node`**
This package is the core SDK for OpenTelemetry in Node.js. It provides the primary interface for configuring and initializing OpenTelemetry in a Node.js app. It includes functionalities for managing traces and context propagation. The SDK is designed to be extensible, allowing for custom configurations and integration with different telemetry backends like Axiom.
### **`@opentelemetry/auto-instrumentations-node`**
This package offers automatic instrumentation for Node.js apps. It simplifies the process of instrumenting various common Node.js libraries and frameworks. By using this package, developers can automatically collect telemetry data (such as traces) from their apps without needing to manually instrument each library or API call. This is important for apps with complex dependencies, as it ensures comprehensive and consistent telemetry collection across the app.
### **`@opentelemetry/exporter-trace-otlp-proto`**
The **`@opentelemetry/exporter-trace-otlp-proto`** package provides an exporter that sends trace data using the OpenTelemetry Protocol (OTLP). OTLP is the standard protocol for transmitting telemetry data in the OpenTelemetry ecosystem. This exporter allows Node.js apps to send their collected traces to any backend that supports OTLP, such as Axiom. The use of OTLP ensures broad compatibility and a standardized way of transmitting telemetry data.
### **`@opentelemetry/sdk-trace-base`**
Contained within this package is the **`BatchSpanProcessor`**, among other foundational elements for tracing in OpenTelemetry. The **`BatchSpanProcessor`** is a component that collects and processes spans (individual units of trace data). As the name suggests, it batches these spans before sending them to the configured exporter (in this case, the `OTLPTraceExporter`). This batching mechanism is efficient as it reduces the number of outbound requests by aggregating multiple spans into fewer batches. It helps in the performance and scalability of trace data export in an OpenTelemetry-instrumented app.
# Send OpenTelemetry data from a Python app to Axiom
Source: https://axiom.co/docs/guides/opentelemetry-python
This guide explains how to send OpenTelemetry data from a Python app to Axiom using the Python OpenTelemetry SDK.
This guide explains how to send OpenTelemetry data from a Python app to Axiom using the [Python OpenTelemetry SDK](https://opentelemetry.io/docs/languages/python/instrumentation/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
- Install Python version 3.7 or higher.
## Install required dependencies
To install the required Python dependencies, run the following code in your terminal:
```bash theme={null}
pip install opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-flask opentelemetry-exporter-otlp Flask
```
### Install dependencies with requirements file
Alternatively, if you use a `requirements.txt` file in your Python project, add these lines:
```txt theme={null}
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-flask
opentelemetry-exporter-otlp
Flask
```
Then run the following code in your terminal to install dependencies:
```bash theme={null}
pip install -r requirements.txt
```
## Create an app.py file
Create an `app.py` file with the following content. This file creates a basic HTTP server using Flask. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```python theme={null}
# app.py
from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry import trace
from random import randint
import exporter

# Creating a Flask app instance
app = Flask(__name__)
# Automatically instruments the Flask app to enable tracing
FlaskInstrumentor().instrument_app(app)
# Retrieving a tracer from the custom exporter
tracer = exporter.service1_tracer

@app.route("/rolldice")
def roll_dice(parent_span=None):
    # Starting a new span for the dice roll. If a parent span is provided, link to its span context.
    with tracer.start_as_current_span("roll_dice_span",
            links=[trace.Link(parent_span.get_span_context())] if parent_span else None) as span:
        # Spans can be created with zero or more links to other related spans.
        # Links allow creating connections between different traces.
        return str(roll())

@app.route("/roll_with_link")
def roll_with_link():
    # Starting a new 'parent_span' which may later link to other spans
    with tracer.start_as_current_span("parent_span") as parent_span:
        # A common scenario is to correlate one or more traces with the current span.
        # This can help in tracing and debugging complex interactions across different parts of the app.
        result = roll_dice(parent_span)
        return f"Dice roll result (with link): {result}"

def roll():
    # Function to generate a random number between 1 and 6
    return randint(1, 6)

if __name__ == "__main__":
    # Starting the Flask server on the specified port and enabling debug mode
    app.run(port=8080, debug=True)
```
## Create an exporter.py file
Create an `exporter.py` file with the following content. This file establishes an OpenTelemetry configuration and sets up an exporter that sends trace data to Axiom.
```python theme={null}
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Define the service name resource for the tracer.
resource = Resource(attributes={
    SERVICE_NAME: "NAME_OF_SERVICE"  # Replace `NAME_OF_SERVICE` with the name of the service you want to trace.
})

# Create a TracerProvider with the defined resource for creating tracers.
provider = TracerProvider(resource=resource)

# Configure the OTLP/HTTP span exporter with Axiom headers and endpoint.
otlp_exporter = OTLPSpanExporter(
    endpoint="https://AXIOM_DOMAIN/v1/traces",
    headers={
        "Authorization": "Bearer API_TOKEN",
        "X-Axiom-Dataset": "DATASET_NAME"
    }
)

# Create a BatchSpanProcessor with the OTLP exporter to batch and send trace spans.
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)

# Set the TracerProvider as the global tracer provider.
trace.set_tracer_provider(provider)

# Define a tracer for external use in different parts of the app.
service1_tracer = trace.get_tracer("service1")
```
## Dynamic OpenTelemetry traces dashboard
In Axiom, go to the **Dashboards** tab and click **OpenTelemetry Traces (python)**. This pre-built traces dashboard provides further insights into the performance and behavior of your app.
## Send data from an existing Python project
### Manual instrumentation
Manual instrumentation in Python with OpenTelemetry involves adding code to create and manage spans around the blocks of code you want to trace. This approach allows for precise control over the trace data.
1. Import and configure a tracer at the start of your main Python file. For example, use the tracer from the `exporter.py` configuration.
```python theme={null}
import exporter
tracer = exporter.service1_tracer
```
2. Enclose the code blocks in your app that you want to trace within spans. Start and end these spans in your code.
```python theme={null}
with tracer.start_as_current_span("operation_name"):
    # Your code here
    pass
```
3. Add relevant metadata and logs to your spans to enrich the trace data, providing more context for your data.
```python theme={null}
with tracer.start_as_current_span("operation_name") as span:
    span.set_attribute("key", "value")
```
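To see why the context-manager pattern above is convenient, here is a toy, dependency-free stand-in for a tracer. It only illustrates how nested `with` blocks produce parent-child relationships and record attributes; real code should use the OpenTelemetry tracer as shown in the steps above.

```python theme={null}
from contextlib import contextmanager

# Toy tracer: records finished "spans" in a list instead of exporting them.
finished_spans = []
_stack = []

class ToySpan:
    def __init__(self, name, parent):
        self.name = name
        self.parent = parent
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

@contextmanager
def start_as_current_span(name):
    # Whatever span is currently on top of the stack becomes the parent.
    span = ToySpan(name, parent=_stack[-1].name if _stack else None)
    _stack.append(span)
    try:
        yield span
    finally:
        _stack.pop()
        finished_spans.append(span)

with start_as_current_span("parent_span") as parent:
    parent.set_attribute("user.id", 42)
    with start_as_current_span("roll_dice_span") as child:
        child.set_attribute("roll.value", 6)

# The child span ends before the parent and records its parent automatically.
print([(s.name, s.parent) for s in finished_spans])
```

The real SDK does the same bookkeeping through context propagation, which is why you rarely need to pass spans around explicitly.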
### Automatic instrumentation
Automatic instrumentation in Python with OpenTelemetry simplifies the process of adding telemetry data to your app. It uses pre-built libraries that automatically instrument the frameworks and libraries.
1. Install the OpenTelemetry packages designed for specific frameworks like Flask or Django.
```bash theme={null}
pip install opentelemetry-instrumentation-flask
```
2. Configure your app to use these libraries that automatically generate spans for standard operations.
```python theme={null}
from opentelemetry.instrumentation.flask import FlaskInstrumentor
# This assumes `app` is your Flask app.
FlaskInstrumentor().instrument_app(app)
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------- | --------------------------------------- | ------------------------------------------------------ |
| Unique Identifiers | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| Timestamps | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| HTTP Attributes | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| Network Attributes | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| Operational Details | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server or client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
### List of imported libraries
The `exporter.py` file imports the following libraries:
### **`from opentelemetry import trace`**
This module creates and manages trace data in your app. It creates the spans and tracers that track the execution flow and performance of your app.
### **`from opentelemetry.sdk.trace import TracerProvider`**
`TracerProvider` acts as a container for the configuration of your app’s tracing behavior. It allows you to define how spans are generated and processed, essentially serving as the central point for managing trace creation and propagation in your app.
### **`from opentelemetry.sdk.trace.export import BatchSpanProcessor`**
`BatchSpanProcessor` is responsible for batching spans before they are exported. This is an important aspect of efficient trace data management, as it aggregates multiple spans into fewer network requests, reducing the overhead on your app’s performance and the tracing backend.
### **`from opentelemetry.sdk.resources import Resource, SERVICE_NAME`**
The `Resource` class describes your app’s service attributes, such as its name, version, and environment. This contextual information is attached to traces and helps in identifying and categorizing trace data, making it easier to filter and analyze in your monitoring setup.
### **`from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter`**
The `OTLPSpanExporter` sends your app’s trace data to a backend that supports OTLP, such as Axiom. It formats the trace data according to the OTLP standard and transmits it over HTTP, ensuring compatibility and a standardized way of sending telemetry data across different systems and services.
# Send OpenTelemetry data from a Ruby on Rails app to Axiom
Source: https://axiom.co/docs/guides/opentelemetry-ruby
This guide explains how to send OpenTelemetry data from a Ruby on Rails App to Axiom using the Ruby OpenTelemetry SDK.
This guide provides detailed steps on how to configure OpenTelemetry in a Ruby app to send telemetry data to Axiom using the [OpenTelemetry Ruby SDK](https://opentelemetry.io/docs/languages/ruby/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
- Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
- Install [Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails app
1. Create a new Rails app using the `rails new myapp` command.
2. Go to the app directory with the `cd myapp` command.
3. Open the `Gemfile` and add the following OpenTelemetry packages:
```ruby theme={null}
gem 'opentelemetry-api'
gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-rails'
gem 'opentelemetry-instrumentation-http'
gem 'opentelemetry-instrumentation-active_record', require: false
gem 'opentelemetry-instrumentation-all'
```
Install the dependencies by running `bundle install`.
## Configure the OpenTelemetry exporter
In the `initializers` folder of your Rails app, create a new file called `opentelemetry.rb`, and then add the following OpenTelemetry exporter configuration:
```ruby theme={null}
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'

OpenTelemetry::SDK.configure do |c|
  c.service_name = 'ruby-traces' # Set your service name
  c.use_all # Or specify the individual instrumentation you need
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
      OpenTelemetry::Exporter::OTLP::Exporter.new(
        endpoint: 'https://AXIOM_DOMAIN/v1/traces',
        headers: {
          'Authorization' => 'Bearer API_TOKEN',
          'X-AXIOM-DATASET' => 'DATASET_NAME'
        }
      )
    )
  )
end
```
## Conclusion
This guide has shown how to configure OpenTelemetry in a Ruby on Rails app to send trace data to Axiom. You’ve added the OpenTelemetry gems, configured the OTLP exporter, and enabled automatic instrumentation. With this setup, you’re ready to monitor and analyze traces from your app in Axiom.
# Send logs from a Ruby on Rails app using Faraday
Source: https://axiom.co/docs/guides/send-logs-from-ruby-on-rails
This guide provides step-by-step instructions on how to send logs from a Ruby on Rails app to Axiom using the Faraday library.
This guide provides step-by-step instructions on how to send logs from a Ruby on Rails app to Axiom using the Faraday library. By following this guide, you configure your Rails app to send logs to Axiom, allowing you to monitor and analyze your app logs effectively.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
- Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
- Install [Ruby on Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails app
1. Create a new Rails app using the `rails new myapp` command.
2. Navigate to the app directory: `cd myapp`
## Setting up the Gemfile
Open the `Gemfile` in your Rails app, and then add the following gems:
```ruby theme={null}
gem 'faraday'
gem 'dotenv-rails', groups: [:development, :test]
```
Install the dependencies by running `bundle install`.
## Create and configure the Axiom logger
1. Create a new file named `axiom_logger.rb` in the `app/services` directory of your Rails app.
2. Add the following code to `axiom_logger.rb`:
```ruby theme={null}
# app/services/axiom_logger.rb
require 'faraday'
require 'json'

class AxiomLogger
  def self.send_log(log_data)
    dataset_name = "DATASET_NAME"
    axiom_ingest_api_url = "https://AXIOM_DOMAIN/v1/ingest/#{dataset_name}"
    ingest_token = "API_TOKEN"

    conn = Faraday.new(url: axiom_ingest_api_url) do |faraday|
      faraday.adapter Faraday.default_adapter
    end

    # The ingest endpoint expects an array of events.
    wrapped_log_data = [log_data]

    response = conn.post do |req|
      req.headers['Content-Type'] = 'application/json'
      req.headers['Authorization'] = "Bearer #{ingest_token}"
      req.body = wrapped_log_data.to_json
    end

    puts "AxiomLogger Response status: #{response.status}, body: #{response.body}"
    if response.status != 200
      Rails.logger.error "Failed to send log to Axiom: #{response.body}"
    end
  end
end
```
### Inspect issue
1. From the chart, you see that the number of errors started to rise after the deployment of a new feature to the sales form. This correlation allows you to form the hypothesis that the errors might be caused by the deployment.
2. You decide to investigate the deployment by clicking on the link associated with the annotation. The link takes you to the GitHub pull request.
3. You inspect the code changes in depth and discover the cause of the errors.
4. You quickly fix the issue in another deployment.
# Analyze data
Source: https://axiom.co/docs/query-data/datasets
This page explains how to use the Datasets tab in Axiom.
The Datasets tab allows you to gain a better understanding of the fields you have in your datasets.
In Axiom, an individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data. The Datasets tab provides you with information about each field within your datasets.
## Datasets overview
When you open the Datasets tab, you see the list of datasets.
Right-click a dataset to access the following options:
* **Open dataset:** Open the dataset in the Datasets tab.
* **Open in Stream:** Open the dataset in the Stream tab.
* **Open in Query:** Open the dataset in the Query tab.
* **Edit dataset:** Change the dataset description or data retention period.
* **Copy dataset name:** Copy the name of the dataset to your clipboard.
* **Copy link to dataset:** Copy a link to the dataset to your clipboard.
* **Open in new tab:** Open the dataset in a new browser tab.
* **Delete dataset:** Delete the dataset.
### Explore datasets
To explore the fields in a dataset, select the dataset from the list.
When you select a dataset, Axiom displays the list of fields within the dataset on the left. The field types are the following:
* String
* Number
* Boolean
* Array
* [Virtual fields](#virtual-fields)
This view flattens field names with dot notation. This means that the event `{"foo": { "bar": "baz" }}` appears as `foo.bar`. Field names containing periods (`.`) are folded.
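The flattening described above can be sketched in a few lines. This is a simplified illustration; Axiom’s actual folding rules for field names that themselves contain periods are more involved.

```python theme={null}
def flatten(event: dict, prefix: str = "") -> dict:
    """Flatten nested objects into dot-notation field names."""
    flat = {}
    for key, value in event.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

print(flatten({"foo": {"bar": "baz"}}))  # {'foo.bar': 'baz'}
```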
On the right, you see the following:
* [Views](#views)
* [Saved queries](#saved-queries)
* [Query history](#query-history)
### Edit field
To edit a field:
1. Go to the Datasets tab.
2. Select the dataset that contains the field.
3. Find the field in the list, and then click it.
4. Edit the following:
* Field description.
* Field unit. This is only available for number field types.
* Hidden. This means that the field is still present in the underlying Axiom database, but it doesn’t appear in the Axiom UI. Use this option if you sent the field to Axiom by mistake or you don’t want to use it anymore in Axiom.
## Quick charts
Quick charts allow fast charting of fields depending on their field type. For example, for number fields, choose from the available quick chart options to easily visualize the data.
## Dashboards and monitors
You can use OTel metrics in dashboards and monitors the same way you use logs and traces.
* Build visualizations using metrics queries.
* Set alerts on derived metrics such as error rate or latency percentiles.
* Combine multiple signals in a single panel.
For more information, see [Dashboards](/dashboards/overview) and [Monitors](/monitor-data/monitors).
## Design choices and constraints
MetricsDB makes intentional architectural trade-offs to optimize for the most common metrics use cases while maintaining exceptional performance at scale.
### Query scope
You can query one dataset per query.
### Supported data types
MetricsDB focuses on the core OpenTelemetry metric types that cover the vast majority of observability scenarios.
Axiom supports the following OpenTelemetry metric types:
* **Gauge**: Point-in-time measurements. For example, CPU usage or temperature.
* **Histogram**: Distribution of values with configurable buckets. For example, request latency.
* **Sum**: Sum of values. For example, request count.
* **Summary**: Summary of values. For example, request latency.
Axiom doesn’t currently support the following data types:
* Exponential histograms
* `bytes`, `kvlist`, and `array` tag value types
* Exemplar, baggage, and context data
* Nanosecond-precision timestamps
### Data model optimizations
MetricsDB applies the following transformations to improve query performance and reduce storage costs:
* **Timestamp precision**: Truncate nanosecond timestamps to second precision. MetricsDB is built for use cases where second-level granularity is sufficient, and this optimization significantly improves compression ratios and query speed.
* **Unified tag namespace**: Flatten resource, scope, and metric tags into a single namespace. This simplification makes queries more straightforward and enables faster dimensional filtering. You don’t need to remember which tags came from which scope.
* **Unit normalization**: Convert the `unit` attribute to `otel.metric.unit` for consistent handling across all metric types.
* **Histogram handling**: Assume equal-width histograms and don’t preserve histogram metadata. This trade-off supports the most common histogram analysis patterns (percentiles, distribution visualization) while reducing storage requirements.
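To make the equal-width histogram trade-off concrete, the following sketch estimates a percentile from equal-width buckets. This illustrates the general technique, not MetricsDB’s actual implementation.

```python theme={null}
def estimate_percentile(bucket_counts, bucket_width, p):
    """Estimate the p-th percentile from equal-width histogram buckets.

    bucket_counts[i] holds the number of observations that fell in
    the range [i * bucket_width, (i + 1) * bucket_width).
    """
    total = sum(bucket_counts)
    target = p / 100 * total
    cumulative = 0
    for i, count in enumerate(bucket_counts):
        if count > 0 and cumulative + count >= target:
            # Linear interpolation within the bucket.
            fraction = (target - cumulative) / count
            return (i + fraction) * bucket_width
        cumulative += count
    return len(bucket_counts) * bucket_width

# Request latencies counted into 10 ms buckets: [0,10), [10,20), [20,30), [30,40)
print(estimate_percentile([70, 20, 8, 2], bucket_width=10, p=95))  # → 26.25
```

Because every bucket has the same width, only the counts need to be stored, which is what makes this representation cheap at scale.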
These design choices reflect real-world metrics usage patterns. If your use case requires capabilities not currently supported, [contact Axiom](https://axiom.co/contact) to discuss your requirements. Your feedback helps shape MetricsDB’s evolution.
# Query metrics
Source: https://axiom.co/docs/query-data/metrics/query-metrics
This page explains how to query OpenTelemetry metrics.
Query and analyze your OpenTelemetry metrics data using the Axiom Console. This page shows you how to extract insights from your metrics through filtering, aggregation, and transformation operations.
For more information on working with metrics at Axiom, see [Metrics overview](/query-data/metrics/overview).
This example queries the `alertmanager_alerts` metric in the `axiom-dev.metrics` dataset over the hour before the current time. It filters results to events where `k8s.namespace.name` is `monitoring` and aggregates events over 30-second time windows into their average value.
## Elements of queries
The following explains each element of a metrics query.
### Source
Specify the dataset and the metric in the **Dataset** field. The dataset and metric names are separated by a colon in the Builder interface.
For example, `axiom-dev.metrics:alertmanager_alerts`.
### Filter
Use the **Where** section to filter series based on tag values.
1. Click **+** in the **Where** section.
2. Select the tag where you want to filter for values.
3. Select the logical operator of the filter. Available operators are:
* Equality: `==`, `!=`
* Comparisons: `<`, `<=`, `>`, `>=`
4. Specify the value for which you want to filter.
5. Click **+** to add another filter. Axiom joins multiple filters with the logical `and` operator.
For example, the following joins three filters: `project == /.*metrics.*/ and code >= 200 and code < 300`.
### Transformations
Use the **Transformations** section to transform individual values or series.
1. Click **+** in the **Transformations** section.
2. Select the transformation you want to apply to the data. Available transformations are:
* **map:** Map the data to a new value using the expression you specify.
* **align:** Aggregate data using the function and the time window you specify.
* **group:** Group the data by a set of tags using the aggregation function you specify.
#### Map
Use `map` to transform individual values.
Available mapping functions:
| Function | Description |
| --------------------- | ------------------------------------------------------- |
| `rate` | Computes the per-second rate of change for a metric. |
| `abs` | Returns the absolute value of each data point. |
| `interpolate::linear` | Linear interpolation of missing values. |
| `fill::prev` | Fills missing values using the previous non-null value. |
For example, to calculate rate per second for the metric, use `map rate`. To fill empty values with the latest value, use `map fill::prev`.
#### Align
Use `align` to aggregate over time windows. You can specify the time window and the aggregation function to apply.
Available aggregation functions:
| Function | Description |
| ------------ | ------------------------------------- |
| `avg` | Averages values in each interval. |
| `count` | Counts non-null values per interval. |
| `max` | Takes the maximum value per interval. |
| `min` | Takes the minimum value per interval. |
| `prom::rate` | Computes a Prometheus-style rate. |
| `sum` | Sums values in each interval. |
For example, to calculate the average over 5-minute time windows, use `align to 5m using avg`. To count the data points in the last hour, use `align to 1h using count`.
#### Group
Use `group` to combine series by tags. You can specify the tags to group by and the aggregation function to apply. If you don’t specify tags, Axiom aggregates all series into one group.
Available aggregation functions:
| Function | Description |
| -------- | ---------------------------------- |
| `avg` | Averages values in each group. |
| `count` | Counts non-null values per group. |
| `max` | Takes the maximum value per group. |
| `min` | Takes the minimum value per group. |
| `sum` | Sums values in each group. |
For example:
* To calculate the number of series, use `group using count`.
* To sum the values of all series, use `group using sum`.
* To group data by the `project` and `namespace` tags using the `sum` aggregation, use `group by project, namespace using sum`.
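Combining these elements, the example at the top of this page could be expressed as the following source, filter, and transformation chain (an illustrative sketch of the Builder inputs; the exact rendering in the Console may differ, and the `project` grouping tag is hypothetical):

```txt theme={null}
Source:          axiom-dev.metrics:alertmanager_alerts
Where:           k8s.namespace.name == monitoring
Transformations: align to 30s using avg
                 group by project using sum
```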
# Stream data with Axiom
Source: https://axiom.co/docs/query-data/stream
The Stream tab enables you to process and analyze high volumes of high-velocity data from a variety of sources in real time.
The Stream tab allows you to inspect individual events and watch as they’re ingested live.
Live-streaming events as they’re ingested can be incredibly useful for understanding what’s going on in the context of the entire system. Like a supercharged terminal, the Stream tab in Axiom allows you to view streams of events, filter them to see only important information, and inspect each individual event.
This section introduces the Stream tab and its components that unlock powerful insights from your data.
## Choose a dataset
The default view shows which datasets are available, along with recent starred queries in case you want to jump directly into a stream:
Select a dataset from the list of datasets to continue.
## Event stream
Upon selecting a dataset, you are immediately taken to the live event stream for that dataset:
You can click an event to be taken to the event details slide-out:
On this slide-out, you can copy individual field values, or copy the entire event as JSON.
You can view and copy the raw data:
## Filter data
The Stream tab provides access to a powerful filter builder right on the toolbar:
For more information, see the [filters documentation](/dashboard-elements/create#filters).
## Time range selection
The stream has two time modes:
* Live stream (default)
* Time range
Live stream continuously checks for new events and presents them in the stream.
Time range only shows events that fall between a specific start and end date. This can be useful when investigating an issue. The time range menu has some options to quickly choose some time ranges, or you can input a specific range for your search.
When you are ready to return to live streaming, click this button:
Click the button again to pause the stream.
## View settings
The Stream tab is customizable via the view settings menu:
Options include:
* Text size used in the stream
* Wrap lines
* Highlight severity (this is automatically extracted from the event)
* Show the raw event details
* Fields to display in their own column
## Starred queries
The starred queries slide-out is activated via the toolbar:
For more information, see [Starred queries](/query-data/datasets#starred-queries).
## Highlight severity
The Stream tab allows you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors.
To highlight the severity of log entries:
1. Specify the log level in the data you send to Axiom. For more information, see [Requirements for log level fields](/reference/limits#requirements-for-log-level-fields).
2. In the Stream tab, click
### Navigate the app
* Use the **Filter Bar** at the top of the app to narrow the charts to a specific service or operation.
* Use the **Search Input** to find a trace ID in the selected time period.
* Use the **Slowest Operations** chart to identify performance issues across services and traces.
* Use the **Top Errors** list to quickly identify the worst-offending causes of errors.
* Use the **Results** table to get an overview and navigate between services, operations, and traces.
### View a trace
Click a trace ID in the results table to show the waterfall view. This view allows you to see that span in the context of the entire trace from start to finish.
### Customize the app
To customize the app, use the fork button to create an editable duplicate for you and your team.
## Query traces
In Axiom, trace events are just like any other events inside datasets. This means they’re directly queryable in the UI. While this can be a powerful experience, consider the following details before querying:
* Directly aggregating upon the `duration` field produces aggregate values across every span in the dataset. This is usually not the desired outcome when you want to inspect a service’s performance or robustness.
* For request, rate, and duration aggregations, it’s best to only include the root span using `isnull(parent_span_id)`.
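For example, a latency query restricted to root spans might look like the following (a sketch assuming the `['otel-demo-traces']` dataset used elsewhere on this page, with a `['service.name']` field as in the OpenTelemetry demo):

```kusto theme={null}
['otel-demo-traces']
| where isnull(parent_span_id)
| summarize avg(duration) by ['service.name'], bin_auto(_time)
```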
## Waterfall view of traces
To see how spans in a trace are related to each other, explore the trace in a waterfall view. In this view, each span in the trace is correlated with its parent and child spans.
### Traces in OpenTelemetry Traces dashboard
To explore spans within a trace using the OpenTelemetry Traces app, follow these steps:
1. Click the `Dashboards` tab.
2. Click `OpenTelemetry Traces`.
3. In the `Slowest Operations` chart, click the service that contains the trace.
4. In the list of trace IDs, click the trace you want to explore.
5. Explore how spans within the trace are related to each other in the waterfall view. To reveal additional options such as collapsing and expanding child spans, right-click a span.
To try out this example, go to the Axiom Playground.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/dashboards/otel.traces.otel-demo-traces)
### Traces in Query tab
To access the waterfall view from the Query tab, follow these steps:
1. Ensure the dataset you work with has trace data.
2. Click the Query tab.
3. Run a query that returns the `_time` and `trace_id` fields. For example, the following query returns the number of spans in each trace:
```kusto theme={null}
['otel-demo-traces']
| summarize count() by trace_id
```
4. In the list of trace IDs, click the trace you want to explore. To reveal additional options such as copying the trace ID, right-click a trace.
5. Explore how spans within the trace are related to each other in the waterfall view. To reveal additional options such as collapsing and expanding child spans, right-click a span. Event names are displayed on the timeline for each span.
To try out this example, go to the Axiom Playground.
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20count\(\)%20by%20trace_id%22%7D)
### Customize waterfall view
To toggle the display of the span details on the right, click
## `distinct`
The `distinct` visualization counts the distinct occurrences of the specified field inside the dataset and produces a time series chart.
#### Arguments
`field: any` is the field to aggregate.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `avg`
The `avg` visualization averages the values of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field to average.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `max`
The `max` visualization finds the maximum value of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom finds the maximum value.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `min`
The `min` visualization finds the minimum value of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom finds the minimum value.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `sum`
The `sum` visualization adds all the values of the field inside the dataset and produces a time series chart.
#### Arguments
`field: number` is the number field where Axiom calculates the sum.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `percentiles`
The `percentiles` visualization calculates the requested percentiles of the field in the dataset and produces a time series chart.
#### Arguments
* `field: number` is the number field where Axiom calculates the percentiles.
* `percentiles: number [, ...]` is a list of percentiles, each a float between 0 and 100. For example, `percentiles(request_size, 95, 99, 99.9)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a horizontal bar chart, allowing for visual comparison across the groups.
## `histogram`
The `histogram` visualization buckets the field into a distribution of N buckets, returning a time series heatmap chart.
#### Arguments
* `field: number` is the number field where Axiom calculates the distribution.
* `nBuckets` is the number of buckets to return. For example, `histogram(request_size, 15)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series histogram. Hovering over a group in the totals table shows only the results for that group in the histogram.
## `topk`
The `topk` visualization calculates the top values for a field in a dataset.
#### Arguments
* `field: number` is the number field where Axiom calculates the top values.
* `nResults` is the number of top values to return. For example, `topk(method, 10)`.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `variance`
The `variance` visualization calculates the variance of the field in the dataset and produces a time series chart.
The `variance` aggregation returns the sample variance of the field’s values.
#### Arguments
`field: number` is the number field where Axiom calculates the variance.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
## `stddev`
The `stddev` visualization calculates the standard deviation of the field in the dataset and produces a time series chart.
The `stddev` aggregation returns the sample standard deviation of the field’s values.
#### Arguments
`field: number` is the number field where Axiom calculates the standard deviation.
#### Group-by behaviour
The visualization produces a separate result for each group plotted on a time series chart.
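Both `variance` and `stddev` use the sample form, dividing by n − 1 rather than n. A minimal Python sketch of what these aggregations compute for a list of values:

```python theme={null}
def sample_variance(values):
    """Sample variance: sum of squared deviations divided by n - 1."""
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** 2 for x in values) / (n - 1)

def sample_stddev(values):
    """Sample standard deviation: square root of the sample variance."""
    return sample_variance(values) ** 0.5

durations = [2, 4, 4, 4, 5, 5, 7, 9]
print(sample_variance(durations))  # 4.571428571428571
print(sample_stddev(durations))    # ≈ 2.1381
```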
# Track activity in Axiom
Source: https://axiom.co/docs/reference/audit-log
This page explains how to track activity in your Axiom organization with the audit log.
The audit log allows you to track who did what and when within your Axiom organization.
Tracking activity in your Axiom organization with the audit log is useful for legal compliance reasons. For example, you can investigate the following:
* Track who has accessed the Axiom platform.
* Track organization access over time.
* Track data access over time.
The audit log also makes it easier to manage your Axiom organization. It allows you to do the following, among others:
* Track changes made by your team to your observability posture.
* Track monitoring performance and identify which monitors generate the most query load.
* Monitor query costs and optimize expensive queries before they impact your budget.
* Trace queries back to their source (monitors or direct queries) for debugging.
The audit log is available to all organizations. By default, you can query the audit log for the previous three days. You can purchase full access to the audit log as an add-on on the Axiom Cloud plan. For more information, see [Manage add-ons](/reference/usage-billing#manage-add-ons).
## Explore audit log
1. Go to the Query tab, and then click **APL**.
2. Query the `axiom-audit` dataset. For example, run the query `['axiom-audit']` to display the raw audit log data in a table.
3. Optional: Customize your query to filter or summarize the audit log. For more information, see [Explore data](/query-data/explore).
4. Click **Run**.
The `action` field specifies the type of activity that happened in your Axiom organization.
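For example, to see which activities occur most often in your organization, summarize the audit log by the `action` field:

```kusto theme={null}
['axiom-audit']
| summarize count() by action
```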
## Export audit log
1. Run the query to [display the audit log](#explore-audit-log).
2. Click
## Tokens
You can generate ingest and personal tokens manually in your Axiom user settings.
See [Tokens](/reference/tokens) to learn more about managing access and authorization.
## Configuration and deployment
Axiom CLI lets you ingest, authenticate, and stream data.
For more information about configuration, managing authentication status, ingesting, streaming, and more,
visit the [Axiom CLI](https://github.com/axiomhq/cli) repository on GitHub.
Axiom CLI supports ingesting data in different formats (JSON, NDJSON, and CSV).
## Querying
Get deeper insights into your data using [Axiom Processing Language](/apl/introduction)
## Ingestion
Import, transfer, load, and process data for later use or storage using the Axiom CLI. With [Axiom CLI](https://github.com/axiomhq/cli) you can ingest the contents of a JSON, NDJSON, or CSV log file into a dataset.
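For example, to ingest a local NDJSON file into a dataset (the dataset name `my-dataset` and the filename are placeholders; run `axiom ingest --help` for the full set of flags):

```bash theme={null}
axiom ingest my-dataset -f logs.ndjson
```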
**To view a list of all available commands, run `axiom` in your terminal:**
```bash theme={null}
➜ ~ axiom
The power of Axiom on the command-line.
USAGE
axiom
```
Use the Axiom Lambda Extension to send logs and platform events of your Lambda function to Axiom.
Alternatively, you can use the AWS Distro for OpenTelemetry to send Lambda function logs and platform events to Axiom. For more information, see [AWS Lambda Using OTel](/send-data/aws-lambda-dot).
Axiom detects the extension and provides you with quick filters and a dashboard. For more information on how this enriches your Axiom organization, see [AWS Lambda app](/apps/lambda).
To determine the best method to send data from different AWS services, see [Send data from AWS to Axiom](/send-data/aws-overview).
2. Configure the destination:
* **Name:** Choose a name for the destination.
* **Endpoint URL:** The URL of your Axiom log ingest endpoint `https://AXIOM_DOMAIN/v1/ingest/DATASET_NAME`.
3. Headers:
You may need to add some headers. Here is a common example:
* **Content-Type:** Set this to `application/json`.
* **Authorization:** Set this to `Bearer API_TOKEN`.
4. Body:
In the Body Template, input `{{_raw}}`. This forwards the raw log event to Axiom.
5. Save and enable the destination:
After you’ve finished configuring the destination, save your changes and make sure the destination is enabled.
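Before wiring up Cribl, you can verify the endpoint and headers with a quick `curl` (replace `AXIOM_DOMAIN`, `DATASET_NAME`, and `API_TOKEN` with your own values):

```bash theme={null}
curl -X POST "https://AXIOM_DOMAIN/v1/ingest/DATASET_NAME" \
  -H "Authorization: Bearer API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"message": "test event"}]'
```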
## Set up log forwarding from Cribl to Axiom using the Syslog destination
### Create Syslog endpoint
1. Click
3. Configure the Message:
* **Timestamp Format:** Choose the timestamp format to use in the Syslog messages.
* **Application Name Field:** Enter the name of the field to use as the app name in the Syslog messages.
* **Message Field:** Enter the name of the field to use as the message in the Syslog messages. Typically, this would be `_raw`.
* **Throttling:** Enter the throttling value. Throttling is a mechanism to control the data flow rate from the source (Cribl) to the destination (in this case, an Axiom Syslog Endpoint).
4. Save and enable the destination
After you’ve finished configuring the destination, save your changes and make sure the destination is enabled.
# Send data from Elastic Beats to Axiom
Source: https://axiom.co/docs/send-data/elastic-beats
Collect metrics and logs from Elastic Beats, and monitor them with Axiom.
[Elastic Beats](https://www.elastic.co/beats/) serves as a lightweight platform for data shippers that transfer information from the source to Axiom and other tools, based on the configuration. Before shipping, Beats collect metrics and logs from different sources, which are then delivered to your Axiom deployment.
There are different [Elastic Beats](https://www.elastic.co/beats/) you can use to ship logs. Axiom’s documentation provides a detailed step-by-step procedure for each Beat.
# Send data from Kubernetes Cluster to Axiom
Source: https://axiom.co/docs/send-data/kubernetes
This step-by-step guide helps you ingest logs from your Kubernetes cluster into Axiom using the DaemonSet configuration.
Axiom makes it easy to collect, analyze, and monitor logs from your Kubernetes clusters. Integrate popular tools like Filebeat, Vector, or Fluent Bit with Axiom to send your cluster logs.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
## Send Kubernetes Cluster logs to Axiom using Filebeat
Ingest logs from your Kubernetes cluster into Axiom using Filebeat.
The following is an example of a DaemonSet configuration to ingest your data logs into Axiom.
### Configuration
```yaml theme={null}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [''] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    allow_older_versions: true
    processors:
      - add_cloud_metadata:
    output.elasticsearch:
      hosts: ['${AXIOM_HOST}/v1/datasets/${AXIOM_DATASET_NAME}/elastic']
      api_key: 'axiom:${AXIOM_API_TOKEN}'
    setup.ilm.enabled: false
kind: ConfigMap
metadata:
  annotations: {}
  labels:
    k8s-app: filebeat
  name: filebeat-config
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      annotations: {}
      labels:
        k8s-app: filebeat
    spec:
      containers:
        - args:
            - -c
            - /etc/filebeat.yml
            - -e
          env:
            - name: AXIOM_HOST
              value: AXIOM_DOMAIN
            - name: AXIOM_DATASET_NAME
              value: DATASET_NAME
            - name: AXIOM_API_TOKEN
              value: API_TOKEN
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          image: docker.elastic.co/beats/filebeat-oss:8.11.1
          imagePullPolicy: IfNotPresent
          name: filebeat
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/filebeat.yml
              name: config
              readOnly: true
              subPath: filebeat.yml
            - mountPath: /usr/share/filebeat/data
              name: data
            - mountPath: /var/lib/docker/containers
              name: varlibdockercontainers
              readOnly: true
            - mountPath: /var/log
              name: varlog
              readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 416
            name: filebeat-config
          name: config
        - hostPath:
            path: /var/lib/docker/containers
            type: ''
          name: varlibdockercontainers
        - hostPath:
            path: /var/log
            type: ''
          name: varlog
        - hostPath:
            path: /var/lib/filebeat-data
            type: ''
          name: data
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
```
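After saving the configuration above (for example, as `filebeat-daemonset.yaml`; the filename is arbitrary), deploy it and confirm the pods are running:

```bash theme={null}
kubectl apply -f filebeat-daemonset.yaml
kubectl get pods -n kube-system -l k8s-app=filebeat
```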
Vector is a lightweight and ultra-fast tool for building observability pipelines. It has a built-in support for shipping logs to Axiom through the [`axiom` sink](https://vector.dev/docs/reference/configuration/sinks/axiom/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets#create-dataset) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to ingest data to the dataset you have created.
## Installation
Follow the [quickstart guide in the Vector documentation](https://vector.dev/docs/setup/quickstart/) to install Vector, and to configure sources and sinks.