# avg
This page explains how to use the avg aggregation function in APL.
The `avg` aggregation in APL calculates the average value of a numeric field across a set of records. You can use this aggregation when you need to determine the mean value of numerical data, such as request durations, response times, or other performance metrics. It is useful in scenarios such as performance analysis, trend identification, and general statistical analysis.
When to use `avg`:
* When you want to analyze the average of numeric values over a specific time range or set of data.
* For comparing trends, like average request duration or latency across HTTP requests.
* To provide insight into system or user performance, such as the average duration of transactions in a service.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `avg` function works similarly, but the syntax differs slightly. Here’s how to write the equivalent query in APL.
```sql Splunk example
| stats avg(req_duration_ms) by status
```
```kusto APL equivalent
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
In ANSI SQL, the `avg` aggregation is used similarly, but APL has a different syntax for structuring the query.
```sql SQL example
SELECT status, AVG(req_duration_ms)
FROM sample_http_logs
GROUP BY status
```
```kusto APL equivalent
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
## Usage
### Syntax
```kusto
summarize avg(ColumnName) [by GroupingColumn]
```
### Parameters
* **ColumnName**: The numeric field you want to calculate the average of.
* **GroupingColumn** (optional): A column to group the results by. If not specified, the average is calculated over all records.
### Returns
* A table with the average value for the specified field, optionally grouped by another column.
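For example, to follow how a metric evolves over time, you can group by a time bin instead of a column. A minimal sketch using the `sample-http-logs` dataset from the examples below:

```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by bin_auto(_time)
```

This returns one average per automatically sized time bucket, which is useful for charting trends.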
## Use case examples
This example calculates the average request duration for HTTP requests, grouped by status.
**Query**
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | avg\_req\_duration\_ms |
| ------ | ---------------------- |
| 200 | 350.4 |
| 404 | 150.2 |
This query calculates the average request duration (in milliseconds) for each HTTP status code.
This example calculates the average span duration for each service to analyze performance across services.
**Query**
```kusto
['otel-demo-traces']
| summarize avg(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20avg\(duration\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | avg\_duration |
| ------------ | ------------- |
| frontend | 500ms |
| cartservice | 250ms |
This query calculates the average duration of spans for each service.
In security logs, you can calculate the average request duration by country to analyze regional performance trends.
**Query**
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | avg\_req\_duration\_ms |
| ----------- | ---------------------- |
| US | 400.5 |
| DE | 250.3 |
This query calculates the average request duration for each country from which the requests originated.
## List of related aggregations
* [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total of a numeric field. This is useful when you want the total of values rather than their average.
* [**count**](/apl/aggregation-function/count): The `count` function returns the total number of records. It’s useful when you want to count occurrences rather than averaging numerical values.
* [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field. Use this when you’re interested in the smallest value in your dataset.
* [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field. This is useful for finding the largest value in the data.
* [**stdev**](/apl/aggregation-function/stdev): This function calculates the standard deviation of a numeric field, providing insight into how spread out the data is around the mean.
# avgif
This page explains how to use the avgif aggregation function in APL.
The `avgif` aggregation function in APL allows you to calculate the average value of a field, but only for records that satisfy a given condition. This function is particularly useful when you need to perform a filtered aggregation, such as finding the average response time for requests that returned a specific status code or filtering by geographic regions. The `avgif` function is highly valuable in scenarios like log analysis, performance monitoring, and anomaly detection, where focusing on subsets of data can provide more accurate insights.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you achieve similar functionality by embedding an `eval` condition inside the `stats` function. In APL, `avgif` provides this filtering inline as part of the aggregation function, which can simplify your queries.
```sql Splunk example
| stats avg(eval(if(status="200", req_duration_ms, null()))) AS avg_req_duration_ms by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by id
```
In ANSI SQL, you can use a `CASE` statement inside an `AVG` function to achieve similar behavior. APL simplifies this with `avgif`, allowing you to specify the condition directly.
```sql SQL example
SELECT id, AVG(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END)
FROM sample_http_logs
GROUP BY id
```
```kusto APL equivalent
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by id
```
## Usage
### Syntax
```kusto
summarize avgif(expr, predicate) by grouping_field
```
### Parameters
* **`expr`**: The field for which you want to calculate the average.
* **`predicate`**: A boolean condition that filters which records are included in the calculation.
* **`grouping_field`**: (Optional) A field by which you want to group the results.
### Returns
The function returns the average of the values from the `expr` field for the records that satisfy the `predicate`. If no records match the condition, the result is `null`.
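If you omit the `by` clause, `avgif` returns a single overall average for the rows that match the predicate. A minimal sketch:

```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, status == '200')
```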
## Use case examples
In this example, you calculate the average request duration for HTTP status 200 in different cities.
**Query**
```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200") by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20status%20%3D%3D%20%22200%22%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | avg\_req\_duration\_ms |
| -------- | ---------------------- |
| New York | 325 |
| London | 400 |
| Tokyo | 275 |
This query calculates the average request duration (`req_duration_ms`) for HTTP requests that returned a status of 200 (`status == "200"`), grouped by the city where the request originated (`geo.city`).
In this example, you calculate the average span duration for traces that ended with HTTP status 500.
**Query**
```kusto
['otel-demo-traces']
| summarize avgif(duration, status == "500") by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avgif%28duration%2C%20status%20%3D%3D%20%22500%22%29%20by%20%5B%27service.name%27%5D%22%7D)
**Output**
| service.name | avg\_duration |
| --------------- | ------------- |
| checkoutservice | 500ms |
| frontend | 600ms |
| cartservice | 475ms |
This query calculates the average span duration (`duration`) for traces where the status code is 500 (`status == "500"`), grouped by the service name (`service.name`).
In this example, you calculate the average request duration for failed HTTP requests (status code 400 or higher) by country.
**Query**
```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, toint(status) >= 400) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20toint%28status%29%20%3E%3D%20400%29%20by%20%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | avg\_req\_duration\_ms |
| ----------- | ---------------------- |
| USA | 450 |
| Canada | 500 |
| Germany | 425 |
This query calculates the average request duration (`req_duration_ms`) for failed HTTP requests (`toint(status) >= 400`), grouped by the country of origin (`geo.country`).
## List of related aggregations
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value of an expression, filtered by a predicate. Use when you want to find the smallest value for a subset of data.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of an expression, filtered by a predicate. Use when you are looking for the largest value within specific conditions.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that match a condition. Use when you want to know how many records meet a specific criterion.
* [**sumif**](/apl/aggregation-function/sumif): Sums the values of a field that match a given condition. Ideal for calculating the total of a subset of data.
# count
This page explains how to use the count aggregation function in APL.
The `count` aggregation in APL returns the total number of records in a dataset or the total number of records that match specific criteria. This function is useful when you need to quantify occurrences, such as counting log entries, user actions, or security events.
When to use `count`:
* To count the total number of events in log analysis, such as the number of HTTP requests or errors.
* To monitor system usage, such as the number of transactions or API calls.
* To identify security incidents by counting failed login attempts or suspicious activities.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `count` function works similarly to APL, but the syntax differs slightly.
```sql Splunk example
| stats count by status
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by status
```
In ANSI SQL, the `count` function works similarly, but APL uses different syntax for querying.
```sql SQL example
SELECT status, COUNT(*)
FROM sample_http_logs
GROUP BY status
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by status
```
## Usage
### Syntax
```kusto
summarize count() [by GroupingColumn]
```
### Parameters
* **GroupingColumn** (optional): A column to group the count results by. If not specified, the total number of records across the dataset is returned.
### Returns
* A table with the count of records for the entire dataset or grouped by the specified column.
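To turn the count into a time series, group by a time bin in addition to (or instead of) a column. A minimal sketch:

```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time), status
```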
## Use case examples
In log analysis, you can count the number of HTTP requests by status to get a sense of how many requests result in different HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20status%22%7D)
**Output**
| status | count |
| ------ | ----- |
| 200 | 1500 |
| 404 | 200 |
This query counts the total number of HTTP requests for each status code in the logs.
For OpenTelemetry traces, you can count the total number of spans for each service, which helps you monitor the distribution of requests across services.
**Query**
```kusto
['otel-demo-traces']
| summarize count() by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | count |
| ------------ | ----- |
| frontend | 1000 |
| cartservice | 500 |
This query counts the number of spans for each service in the OpenTelemetry traces dataset.
In security logs, you can count the number of requests by country to identify where the majority of traffic or suspicious activity originates.
**Query**
```kusto
['sample-http-logs']
| summarize count() by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'geo.country'%5D%22%7D)
**Output**
| geo.country | count |
| ----------- | ----- |
| US | 3000 |
| DE | 500 |
This query counts the number of requests originating from each country.
## List of related aggregations
* [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total sum of a numeric field, as opposed to counting the number of records.
* [**avg**](/apl/aggregation-function/avg): The `avg` function calculates the average of a numeric field. Use it when you want to determine the mean value of data instead of the count.
* [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field, helping to identify the smallest value in a dataset.
* [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field, useful for identifying the largest value.
* [**countif**](/apl/aggregation-function/countif): The `countif` function allows you to count only records that meet specific conditions, giving you more flexibility in your count queries.
# countif
This page explains how to use the countif aggregation function in APL.
The `countif` aggregation function in Axiom Processing Language (APL) counts the number of records that meet a specified condition. You can use this aggregation to filter records based on a specific condition and return a count of matching records. This is particularly useful for log analysis, security audits, and tracing events when you need to isolate and count specific data subsets.
Use `countif` when you want to count occurrences of certain conditions, such as HTTP status codes, errors, or actions in telemetry traces.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, conditional counting is typically done using the `eval` function combined with `stats`. APL provides a more streamlined approach with the `countif` function, which performs conditional counting directly.
```sql Splunk example
| stats count(eval(status="500")) AS error_count
```
```kusto APL equivalent
['sample-http-logs']
| summarize countif(status == '500')
```
In ANSI SQL, conditional counting is achieved by using the `COUNT` function with a `CASE` statement. In APL, `countif` simplifies this process by offering a direct approach to conditional counting.
```sql SQL example
SELECT COUNT(CASE WHEN status = '500' THEN 1 END) AS error_count
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize countif(status == '500')
```
## Usage
### Syntax
```kusto
countif(condition)
```
### Parameters
* **condition**: A boolean expression that filters the records based on a condition. Only records where the condition evaluates to `true` are counted.
### Returns
The function returns the number of records that match the specified condition.
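You can also combine `countif` with `count` in the same `summarize` to relate a filtered count to the total, for example to compute an error rate. A minimal sketch (the percentage calculation is illustrative):

```kusto
['sample-http-logs']
| summarize total = count(), errors = countif(status == '500')
| extend error_rate_percent = errors * 100.0 / total
```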
## Use case examples
In log analysis, you might want to count how many HTTP requests returned a 500 status code to detect server errors.
**Query**
```kusto
['sample-http-logs']
| summarize countif(status == '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(status%20%3D%3D%20'500'\)%22%7D)
**Output**
| count\_errors |
| ------------- |
| 72 |
This query counts the number of HTTP requests with a `500` status, helping you identify how many server errors occurred.
In OpenTelemetry traces, you might want to count how many requests were initiated by the client service kind.
**Query**
```kusto
['otel-demo-traces']
| summarize countif(kind == 'client')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20countif\(kind%20%3D%3D%20'client'\)%22%7D)
**Output**
| count\_client\_kind |
| ------------------- |
| 345 |
This query counts how many requests were initiated by the `client` service kind, providing insight into the volume of client-side traffic.
In security logs, you might want to count how many HTTP requests originated from a specific city, such as New York.
**Query**
```kusto
['sample-http-logs']
| summarize countif(['geo.city'] == 'New York')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(%5B'geo.city'%5D%20%3D%3D%20'New%20York'\)%22%7D)
**Output**
| count\_nyc\_requests |
| -------------------- |
| 87 |
This query counts how many HTTP requests originated from New York, which can help detect traffic from a particular location for security analysis.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts all records in a dataset without applying a condition. Use this when you need the total count of records, regardless of any specific condition.
* [**sumif**](/apl/aggregation-function/sumif): Adds up the values of a field for records that meet a specific condition. Use `sumif` when you want to sum values based on a filter.
* [**dcountif**](/apl/aggregation-function/dcountif): Counts distinct values of a field for records that meet a condition. This is helpful when you need to count unique occurrences.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a field for records that match a condition, useful for performance monitoring.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a field for records that meet a condition. Use this when you want to find the highest value in filtered data.
# dcount
This page explains how to use the dcount aggregation function in APL.
The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files.
Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in application logs, or unique trace IDs in OpenTelemetry traces.
The `dcount` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `dcount` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you can count distinct values using the `dc` function within the `stats` command. In APL, the `dcount` function offers similar functionality.
```sql Splunk example
| stats dc(user_id) AS distinct_users
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcount(id)
```
In ANSI SQL, distinct counting is typically done using `COUNT` with the `DISTINCT` keyword. In APL, `dcount` provides a direct and efficient way to count distinct values.
```sql SQL example
SELECT COUNT(DISTINCT user_id) AS distinct_users
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcount(id)
```
## Usage
### Syntax
```kusto
dcount(column_name)
```
### Parameters
* **column\_name**: The name of the column for which you want to count distinct values.
### Returns
The function returns the count of distinct values found in the specified column.
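For example, to see how the number of unique users changes over time, combine `dcount` with time binning. A minimal sketch:

```kusto
['sample-http-logs']
| summarize dcount(id) by bin_auto(_time)
```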
## Use case examples
In log analysis, you can count how many distinct users accessed the service.
**Query**
```kusto
['sample-http-logs']
| summarize dcount(id)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(id\)%22%7D)
**Output**
| distinct\_users |
| --------------- |
| 45 |
This query counts the distinct values in the `id` field, representing the number of unique users who accessed the system.
In OpenTelemetry traces, you can count how many unique trace IDs are recorded.
**Query**
```kusto
['otel-demo-traces']
| summarize dcount(trace_id)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcount\(trace_id\)%22%7D)
**Output**
| distinct\_traces |
| ---------------- |
| 321 |
This query counts the distinct trace IDs in the dataset, helping you determine how many unique traces are being captured.
In security logs, you can count how many distinct cities requests originated from.
**Query**
```kusto
['sample-http-logs']
| summarize dcount(['geo.city'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(%5B'geo.city'%5D\)%22%7D)
**Output**
| distinct\_cities |
| ---------------- |
| 35 |
This query counts the number of distinct cities recorded in the logs, which helps analyze the geographic distribution of traffic.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts the total number of records in the dataset, including duplicates. Use it when you need to know the overall number of records.
* [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition. Use `countif` when you want to count records based on a filter or condition.
* [**dcountif**](/apl/aggregation-function/dcountif): Counts the distinct values in a column but only for records that meet a condition. It’s useful when you need a filtered distinct count.
* [**sum**](/apl/aggregation-function/sum): Sums the values in a column. Use this when you need to add up values rather than counting distinct occurrences.
* [**avg**](/apl/aggregation-function/avg): Calculates the average value for a column. Use this when you want to find the average of a specific numeric field.
# dcountif
This page explains how to use the dcountif aggregation function in APL.
The `dcountif` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column that meet a specific condition. This is useful when you want to filter records and count only the unique occurrences that satisfy a given criterion.
Use `dcountif` in scenarios where you need a distinct count but only for a subset of the data, such as counting unique users from a specific region, unique error codes for specific HTTP statuses, or distinct traces that match a particular service type.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, counting distinct values conditionally is typically achieved using a combination of `eval` and `dc` in the `stats` function. APL simplifies this with the `dcountif` function, which handles both filtering and distinct counting in a single step.
```sql Splunk example
| stats dc(eval(if(status="200", user_id, null()))) AS distinct_successful_users
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
In ANSI SQL, conditional distinct counting can be done using a combination of `COUNT(DISTINCT)` and `CASE`. APL's `dcountif` function provides a more concise and readable way to handle conditional distinct counting.
```sql SQL example
SELECT COUNT(DISTINCT CASE WHEN status = '200' THEN user_id END) AS distinct_successful_users
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
## Usage
### Syntax
```kusto
dcountif(column_name, condition)
```
### Parameters
* **column\_name**: The name of the column for which you want to count distinct values.
* **condition**: A boolean expression that filters the records. Only records that meet the condition will be included in the distinct count.
### Returns
The function returns the count of distinct values that meet the specified condition.
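You can compute a conditional distinct count next to an unconditional one in the same `summarize` to compare the two. A minimal sketch:

```kusto
['sample-http-logs']
| summarize total_users = dcount(id), successful_users = dcountif(id, status == '200')
```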
## Use case examples
In log analysis, you might want to count how many distinct users accessed the service and received a successful response (HTTP status 200).
**Query**
```kusto
['sample-http-logs']
| summarize dcountif(id, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(id%2C%20status%20%3D%3D%20'200'\)%22%7D)
**Output**
| distinct\_successful\_users |
| --------------------------- |
| 50 |
This query counts the distinct users (`id` field) who received a successful HTTP response (status 200), helping you understand how many unique users had successful requests.
In OpenTelemetry traces, you might want to count how many unique trace IDs are recorded for a specific service, such as `frontend`.
**Query**
```kusto
['otel-demo-traces']
| summarize dcountif(trace_id, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcountif\(trace_id%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%22%7D)
**Output**
| distinct\_frontend\_traces |
| -------------------------- |
| 123 |
This query counts the number of distinct trace IDs that belong to the `frontend` service, providing insight into the volume of unique traces for that service.
In security logs, you might want to count how many unique cities requests came from when those requests resulted in a 403 status (forbidden access).
**Query**
```kusto
['sample-http-logs']
| summarize dcountif(['geo.city'], status == '403')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(%5B'geo.city'%5D%2C%20status%20%3D%3D%20'403'\)%22%7D)
**Output**
| distinct\_cities\_forbidden |
| --------------------------- |
| 20 |
This query counts the number of distinct cities (`geo.city` field) where requests resulted in a `403` status, helping you identify potential unauthorized access attempts from different regions.
## List of related aggregations
* [**dcount**](/apl/aggregation-function/dcount): Counts distinct values without applying any condition. Use this when you need to count unique values across the entire dataset.
* [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition, without focusing on distinct values. Use this when you need to count records based on a filter.
* [**dcountif**](/apl/aggregation-function/dcountif): Use this function to get a distinct count for records that meet a condition. It combines both filtering and distinct counting.
* [**sumif**](/apl/aggregation-function/sumif): Sums values in a column for records that meet a condition. This is useful when you need to sum data points after filtering.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a column for records that match a condition. Use this when you need to find the average based on a filter.
# histogram
This page explains how to use the histogram aggregation function in APL.
The `histogram` aggregation in APL allows you to create a histogram that groups numeric values into intervals or "bins." This is useful for visualizing the distribution of data, such as the frequency of response times, request durations, or other continuous numerical fields. You can use it to analyze patterns and trends in datasets like logs, traces, or metrics. It is especially helpful when you need to summarize a large volume of data into a digestible form, providing insights on the distribution of values.
The `histogram` aggregation is ideal for identifying peaks, valleys, and outliers in your data. For example, you can analyze the distribution of request durations in web server logs or span durations in OpenTelemetry traces to understand performance bottlenecks.
The `histogram` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `histogram` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically build a histogram by bucketing a numeric field with the `bin` command and then counting events per bucket with `stats`. In APL, the `histogram` function performs this in a single aggregation and lets you control the bin size directly.
```sql Splunk example
| bin req_duration_ms span=10
| stats count by req_duration_ms
```
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by histogram(req_duration_ms, 10)
```
In ANSI SQL, you can use the `GROUP BY` clause combined with range calculations to achieve a similar result to APL’s `histogram`. However, APL’s `histogram` function simplifies the process by automatically calculating bin intervals.
```sql SQL example
SELECT COUNT(*), FLOOR(req_duration_ms/10)*10 as duration_bin
FROM sample_http_logs
GROUP BY duration_bin
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by histogram(req_duration_ms, 10)
```
## Usage
### Syntax
```kusto
histogram(numeric_field, bin_size)
```
### Parameters
* `numeric_field`: The numeric field you want to create a histogram for. This can be a field like request duration or span duration.
* `bin_size`: The size of each bin, or interval, into which the numeric values will be grouped.
### Returns
The `histogram` aggregation returns a table where each row represents a bin, along with the number of occurrences (counts) that fall within each bin.
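In its simplest form, you can compute a histogram over the whole dataset without any extra grouping; the examples below add `bin_auto(_time)` to produce one histogram per time bucket. A minimal sketch:

```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 100)
```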
## Use case examples
You can use the `histogram` aggregation to analyze the distribution of request durations in web server logs.
**Query**
```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 100) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%20100\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| req\_duration\_ms\_bin | count |
| ---------------------- | ----- |
| 0 | 50 |
| 100 | 200 |
| 200 | 120 |
This query creates a histogram that groups request durations into bins of 100 milliseconds and shows the count of requests in each bin. It helps you visualize how frequently requests fall within certain duration ranges.
In OpenTelemetry traces, you can use the `histogram` aggregation to analyze the distribution of span durations.
**Query**
```kusto
['otel-demo-traces']
| summarize histogram(duration, 100) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20histogram\(duration%2C%20100\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| duration\_bin | count |
| ------------- | ----- |
| 0.1s | 30 |
| 0.2s | 120 |
| 0.3s | 50 |
This query groups the span durations into 100ms intervals, making it easier to spot latency issues in your traces.
In security logs, the `histogram` aggregation helps you understand the frequency distribution of request durations to detect anomalies or attacks.
**Query**
```kusto
['sample-http-logs']
| where status == '200'
| summarize histogram(req_duration_ms, 50) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20histogram\(req_duration_ms%2C%2050\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| req\_duration\_ms\_bin | count |
| ---------------------- | ----- |
| 0 | 150 |
| 50 | 400 |
| 100 | 100 |
This query analyzes the distribution of request durations for successful (HTTP 200) responses, giving you a baseline of normal behavior against which anomalous durations stand out.
## List of related aggregations
* [**percentile**](/apl/aggregation-function/percentile): Use `percentile` when you need to find the specific value below which a percentage of observations fall, which can provide more precise distribution analysis.
* [**avg**](/apl/aggregation-function/avg): Use `avg` for calculating the average value of a numeric field, useful when you are more interested in the central tendency rather than distribution.
* [**sum**](/apl/aggregation-function/sum): The `sum` function adds up the total values in a numeric field, helpful for determining overall totals.
* [**count**](/apl/aggregation-function/count): Use `count` when you need a simple tally of rows or events, often in conjunction with `histogram` for more basic summarization.
# make_list
This page explains how to use the make_list aggregation function in APL.
The `make_list` aggregation function in Axiom Processing Language (APL) collects all values from a specified column into a dynamic array for each group of rows in a dataset. This aggregation is particularly useful when you want to consolidate multiple values from distinct rows into a single grouped result.
For example, if you have multiple log entries for a particular user, you can use `make_list` to gather all request URIs accessed by that user into a single list. You can also apply `make_list` to various contexts, such as trace aggregation, log analysis, or security monitoring, where collating related events into a compact form is needed.
Key uses of `make_list`:
* Consolidating values from multiple rows into a list per group.
* Summarizing activity (e.g., list all HTTP requests by a user).
* Generating traces or timelines from distributed logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the closest equivalent to `make_list` is the `list` function in `stats`, which gathers all values of a field, including duplicates, into a multivalue field. In APL, `make_list` behaves similarly by collecting values from rows into a dynamic array.
```sql Splunk example
index=logs | stats list(uri) by user
```
```kusto APL equivalent
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
In ANSI SQL, the `make_list` function is similar to `ARRAY_AGG`, which aggregates column values into an array for each group. In APL, `make_list` performs the same role, grouping the column values into a dynamic array.
```sql SQL example
SELECT ARRAY_AGG(uri) AS uris FROM sample_http_logs GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
## Usage
### Syntax
```kusto
make_list(column)
```
### Parameters
* `column`: The name of the column to collect into a list.
### Returns
The `make_list` function returns a dynamic array that contains all values of the specified column for each group of rows.
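The order of elements in the list generally follows the order in which rows are processed, so if order matters you can sort before aggregating. A minimal sketch, assuming you want the values in chronological order:

```kusto
['sample-http-logs']
| sort by _time asc
| summarize uris = make_list(uri) by id
```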
## Use case examples
In log analysis, `make_list` is useful for collecting all URIs a user has accessed in a session. This can help in identifying browsing patterns or tracking user activity.
**Query**
```kusto
['sample-http-logs']
| summarize uris=make_list(uri) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20uris%3Dmake_list%28uri%29%20by%20id%22%7D)
**Output**
| id | uris |
| ------- | --------------------------------- |
| user123 | \[‘/home’, ‘/profile’, ‘/cart’] |
| user456 | \[‘/search’, ‘/checkout’, ‘/pay’] |
This query collects all URIs accessed by each user, providing a compact view of user activity in the logs.
In OpenTelemetry traces, `make_list` can help in gathering the list of services involved in a trace by consolidating all service names related to a trace ID.
**Query**
```kusto
['otel-demo-traces']
| summarize services=make_list(['service.name']) by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20services%3Dmake_list%28%5B%27service.name%27%5D%29%20by%20trace_id%22%7D)
**Output**
| trace\_id | services |
| --------- | ----------------------------------------------- |
| trace\_a | \[‘frontend’, ‘cartservice’, ‘checkoutservice’] |
| trace\_b | \[‘productcatalogservice’, ‘loadgenerator’] |
This query aggregates all service names associated with a particular trace, helping trace spans across different services.
In security logs, `make_list` is useful for collecting all IPs or cities from where a user has initiated requests, aiding in detecting anomalies or patterns.
**Query**
```kusto
['sample-http-logs']
| summarize cities=make_list(['geo.city']) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20cities%3Dmake_list%28%5B%27geo.city%27%5D%29%20by%20id%22%7D)
**Output**
| id | cities |
| ------- | ---------------------------- |
| user123 | \[‘New York’, ‘Los Angeles’] |
| user456 | \[‘Berlin’, ‘London’] |
This query collects the cities from which each user has made HTTP requests, useful for geographical analysis or anomaly detection.
## List of related aggregations
* [**make\_set**](/apl/aggregation-function/make-set): Similar to `make_list`, but only unique values are collected in the set. Use `make_set` when duplicates aren’t relevant.
* [**count**](/apl/aggregation-function/count): Returns the count of rows in each group. Use this instead of `make_list` when you're interested in row totals rather than individual values.
* [**max**](/apl/aggregation-function/max): Aggregates values by returning the maximum value from each group. Useful for numeric comparison across rows.
* [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values for each group. Use this when you need unique value counts instead of listing them.
# make_list_if
This page explains how to use the make_list_if aggregation function in APL.
The `make_list_if` aggregation function in APL creates a list of values from a given field, conditioned on a Boolean expression. This function is useful when you need to gather values from a column that meet specific criteria into a single array. By using `make_list_if`, you can aggregate data based on dynamic conditions, making it easier to perform detailed analysis.
This aggregation is ideal in scenarios where filtering at the aggregation level is required, such as gathering only the successful requests or collecting trace spans of a specific service in OpenTelemetry data. It’s particularly useful when analyzing logs, tracing information, or security events, where conditional aggregation is essential for understanding trends or identifying issues.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you would typically use the `eval` and `stats` commands to create conditional lists. In APL, the `make_list_if` function serves a similar purpose by allowing you to aggregate data into a list based on a condition.
```sql Splunk example
| stats list(eval(if(condition, field, null()))) AS field_list
```
```kusto APL equivalent
summarize make_list_if(field, condition)
```
In ANSI SQL, conditional aggregation often involves the use of `CASE` statements combined with aggregation functions such as `ARRAY_AGG`. In APL, `make_list_if` directly applies a condition to the aggregation.
```sql SQL example
SELECT ARRAY_AGG(CASE WHEN condition THEN field END) FROM table
```
```kusto APL equivalent
summarize make_list_if(field, condition)
```
## Usage
### Syntax
```kusto
summarize make_list_if(expression, condition)
```
### Parameters
* `expression`: The field or expression whose values will be included in the list.
* `condition`: A Boolean condition that determines which values from `expression` are included in the result.
### Returns
The function returns an array containing all values from `expression` that meet the specified `condition`.
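If you omit the `by` clause, you get a single list across the whole dataset. A minimal sketch that collects the URIs of unusually slow requests (the 1000 ms threshold is an arbitrary illustration):

```kusto
['sample-http-logs']
| summarize slow_uris = make_list_if(uri, req_duration_ms > 1000)
```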
## Use case examples
In this example, we will gather a list of request durations for successful HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize make_list_if(req_duration_ms, status == '200') by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28req_duration_ms%2C+status+%3D%3D+%27200%27%29+by+id%22%7D)
**Output**
| id | req\_duration\_ms\_list |
| --- | ----------------------- |
| 123 | \[100, 150, 200] |
| 456 | \[300, 350, 400] |
This query aggregates request durations for HTTP requests that returned a status of ‘200’ for each user ID.
Here, we will aggregate the span durations for `cartservice` where the status code indicates success.
**Query**
```kusto
['otel-demo-traces']
| summarize make_list_if(duration, status_code == '200' and ['service.name'] == 'cartservice') by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_list_if%28duration%2C+status_code+%3D%3D+%27200%27+and+%5B%27service.name%27%5D+%3D%3D+%27cartservice%27%29+by+trace_id%22%7D)
**Output**
| trace\_id | duration\_list |
| --------- | --------------------- |
| abc123 | \[00:01:23, 00:01:45] |
| def456 | \[00:02:12, 00:03:15] |
This query collects span durations for successful requests to the `cartservice` by `trace_id`.
In this case, we gather a list of IP addresses from security logs where the HTTP status is `403` (Forbidden) and group them by the country of origin.
**Query**
```kusto
['sample-http-logs']
| summarize make_list_if(uri, status == '403') by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28uri%2C+status+%3D%3D+%27403%27%29+by+%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | uri\_list |
| ----------- | ---------------------- |
| USA | \['/login', '/admin'] |
| Canada | \['/admin', '/secure'] |
This query collects a list of URIs that resulted in a `403` error, grouped by the country where the request originated.
## List of related aggregations
* [**make\_list**](/apl/aggregation-function/make-list): Aggregates all values into a list without any conditions. Use `make_list` when you don’t need to filter the values based on a condition.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a specific condition. Use `countif` when you need a count of occurrences rather than a list of values.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average of values that meet a specified condition. Use `avgif` for numerical aggregations where you want a conditional average instead of a list.
# make_set
This page explains how to use the make_set aggregation function in APL.
The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It is useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis.
You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `values` function is similar to `make_set` in APL: both return the unique, non-null values of a field. The main difference is that `make_set` collects the values into a dynamic array and lets you cap the number of values with an optional limit.
```sql Splunk example
| stats values(method) by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize make_set(method) by id
```
In ANSI SQL, the `GROUP_CONCAT` or `ARRAY_AGG(DISTINCT)` functions are commonly used to aggregate unique values in a column. `make_set` in APL works similarly by aggregating distinct values from a specific column into an array, but it offers better performance for large datasets.
```sql SQL example
SELECT id, ARRAY_AGG(DISTINCT method)
FROM sample_http_logs
GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize make_set(method) by id
```
## Usage
### Syntax
```kusto
make_set(column, [limit])
```
### Parameters
* `column`: The column from which unique values are aggregated.
* `limit`: (Optional) The maximum number of unique values to return. Defaults to 128 if not specified.
### Returns
An array of unique values from the specified column.
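If you only need a bounded number of distinct values per group, pass the optional limit described above. A minimal sketch that keeps at most 10 distinct methods per user:

```kusto
['sample-http-logs']
| summarize make_set(method, 10) by id
```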
## Use case examples
In this use case, you want to collect all unique HTTP methods used by each user in the log data.
**Query**
```kusto
['sample-http-logs']
| summarize make_set(method) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28method%29+by+id%22%7D)
**Output**
| id | make\_set\_method |
| ------- | ----------------- |
| user123 | \['GET', 'POST'] |
| user456 | \['GET'] |
This query groups the log entries by `id` and returns all unique HTTP methods used by each user.
In this use case, you want to gather the unique service names involved in a trace.
**Query**
```kusto
['otel-demo-traces']
| summarize make_set(['service.name']) by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_set%28%5B%27service.name%27%5D%29+by+trace_id%22%7D)
**Output**
| trace\_id | make\_set\_service.name |
| --------- | -------------------------------- |
| traceA | \['frontend', 'checkoutservice'] |
| traceB | \['cartservice'] |
This query groups the telemetry data by `trace_id` and collects the unique services involved in each trace.
In this use case, you want to collect all unique HTTP status codes for each country where the requests originated.
**Query**
```kusto
['sample-http-logs']
| summarize make_set(status) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28status%29+by+%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | make\_set\_status |
| ----------- | ----------------- |
| USA | \['200', '404'] |
| UK | \['200'] |
This query collects all unique HTTP status codes returned for each country from which requests were made.
## List of related aggregations
* [**make\_list**](/apl/aggregation-function/make-list): Similar to `make_set`, but returns all values, including duplicates, in a list. Use `make_list` if you want to preserve duplicates.
* [**count**](/apl/aggregation-function/count): Counts the number of records in each group. Use `count` when you need the total count rather than the unique values.
* [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values in a column. Use `dcount` when you need the number of unique values, rather than an array of them.
* [**max**](/apl/aggregation-function/max): Finds the maximum value in a group. Use `max` when you are interested in the largest value rather than collecting values.
# make_set_if
This page explains how to use the make_set_if aggregation function in APL.
The `make_set_if` aggregation function in APL allows you to create a set of distinct values from a column based on a condition. You can use this function to aggregate values that meet specific criteria, reducing the data to unique entries that satisfy a conditional filter. This is especially useful when analyzing large datasets to extract relevant, distinct information without duplicates.
You can use `make_set_if` in scenarios where you need to aggregate conditional data points, such as log analysis, tracing information, or security logs, to summarize distinct occurrences based on particular conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you may use `values` with a `where` condition to achieve similar functionality to `make_set_if`. However, in APL, the `make_set_if` function is explicitly designed to create a distinct set of values based on a conditional filter within the aggregation step itself.
```sql Splunk example
| where condition | stats values(field) by another_field
```
```kusto APL equivalent
summarize make_set_if(field, condition) by another_field
```
In ANSI SQL, you would typically use `GROUP BY` in combination with conditional aggregation, such as using `CASE WHEN` inside aggregate functions. In APL, the `make_set_if` function directly aggregates distinct values conditionally without requiring a `CASE WHEN`.
```sql SQL example
SELECT DISTINCT CASE WHEN condition THEN field END
FROM table
GROUP BY another_field
```
```kusto APL equivalent
summarize make_set_if(field, condition) by another_field
```
## Usage
### Syntax
```kusto
make_set_if(column, predicate, [max_size])
```
### Parameters
* `column`: The column from which distinct values will be aggregated.
* `predicate`: A condition that filters the values to be aggregated.
* `[max_size]`: (Optional) Specifies the maximum number of elements in the resulting set. If omitted, the default is 1048576.
### Returns
The `make_set_if` function returns a dynamic array of distinct values from the specified column that satisfy the given condition.
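Without a `by` clause, `make_set_if` returns one set for the whole dataset. A minimal sketch that collects the distinct countries from which server errors originated:

```kusto
['sample-http-logs']
| summarize make_set_if(['geo.country'], status == '500')
```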
## Use case examples
In this use case, you're analyzing HTTP logs and want to get the distinct cities from which requests originated, but only for requests that took longer than 500 ms.
**Query**
```kusto
['sample-http-logs']
| summarize make_set_if(['geo.city'], req_duration_ms > 500) by ['method']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28%5B%27geo.city%27%5D%2C%20req_duration_ms%20%3E%20500%29%20by%20%5B%27method%27%5D%22%7D)
**Output**
| method | make\_set\_if\_geo.city |
| ------ | ------------------------------ |
| GET | \[‘New York’, ‘San Francisco’] |
| POST | \[‘Berlin’, ‘Tokyo’] |
This query returns the distinct cities from which requests took more than 500 ms, grouped by HTTP request method.
Here, you're analyzing OpenTelemetry traces and want to identify the distinct services that processed spans with a duration greater than 1 second, grouped by trace ID.
**Query**
```kusto
['otel-demo-traces']
| summarize make_set_if(['service.name'], duration > 1s) by ['trace_id']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20make_set_if%28%5B%27service.name%27%5D%2C%20duration%20%3E%201s%29%20by%20%5B%27trace_id%27%5D%22%7D)
**Output**
| trace\_id | make\_set\_if\_service.name |
| --------- | ------------------------------------- |
| abc123 | \[‘frontend’, ‘cartservice’] |
| def456 | \[‘checkoutservice’, ‘loadgenerator’] |
This query extracts distinct services that have processed spans longer than 1 second for each trace.
In security log analysis, you may want to find out which HTTP status codes were encountered for each city, but only for POST requests.
**Query**
```kusto
['sample-http-logs']
| summarize make_set_if(status, method == 'POST') by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28status%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | make\_set\_if\_status |
| -------- | --------------------- |
| Berlin | \[‘200’, ‘404’] |
| Tokyo | \[‘500’, ‘403’] |
This query identifies the distinct HTTP status codes for POST requests grouped by the originating city.
## List of related aggregations
* [**make\_list\_if**](/apl/aggregation-function/make-list-if): Similar to `make_set_if`, but returns a list that can include duplicates instead of a distinct set.
* [**make\_set**](/apl/aggregation-function/make-set): Aggregates distinct values without a conditional filter.
* [**countif**](/apl/aggregation-function/countif): Counts rows that satisfy a specific condition, useful for when you need to count rather than aggregate distinct values.
# max
This page explains how to use the max aggregation function in APL.
The `max` aggregation in APL allows you to find the highest value in a specific column of your dataset. This is useful when you need to identify the maximum value of numerical data, such as the longest request duration, highest sales figures, or the latest timestamp in logs. The `max` function is ideal when you are working with large datasets and need to quickly retrieve the largest value, ensuring you're focusing on the most critical or recent data point.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `max` function works similarly, used to find the maximum value in a given field. The syntax in APL, however, requires you to specify the column to aggregate within a query and make use of APL's structured flow.
```sql Splunk example
| stats max(req_duration_ms)
```
```kusto APL equivalent
['sample-http-logs']
| summarize max(req_duration_ms)
```
In ANSI SQL, `MAX` works similarly to APL’s `max`. In SQL, you aggregate over a column using the `MAX` function in a `SELECT` statement. In APL, you achieve the same result using the `summarize` operator followed by the `max` function.
```sql SQL example
SELECT MAX(req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize max(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize max(ColumnName)
```
### Parameters
* `ColumnName`: The column or field from which you want to retrieve the maximum value. The column should contain numerical data, timespans, or dates.
### Returns
The maximum value from the specified column.
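You can also compute `max` alongside other aggregations in one `summarize` to get a quick view of a metric’s spread. A minimal sketch:

```kusto
['sample-http-logs']
| summarize max(req_duration_ms), min(req_duration_ms), avg(req_duration_ms)
```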
## Use case examples
In log analysis, you might want to find the longest request duration to diagnose performance issues.
**Query**
```kusto
['sample-http-logs']
| summarize max(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(req_duration_ms\)%22%7D)
**Output**
| max\_req\_duration\_ms |
| ---------------------- |
| 5400 |
This query returns the highest request duration from the `req_duration_ms` field, which helps you identify the slowest requests.
When analyzing OpenTelemetry traces, you can find the longest span duration to determine performance bottlenecks in distributed services.
**Query**
```kusto
['otel-demo-traces']
| summarize max(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20max\(duration\)%22%7D)
**Output**
| max\_duration |
| ------------- |
| 00:00:07.234 |
This query returns the longest trace span from the `duration` field, helping you pinpoint the most time-consuming operations.
In security log analysis, you may want to identify the most recent event for monitoring threats or auditing activities.
**Query**
```kusto
['sample-http-logs']
| summarize max(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(_time\)%22%7D)
**Output**
| max\_time |
| ------------------- |
| 2024-09-25 12:45:01 |
This query returns the most recent timestamp from your logs, allowing you to monitor the latest security events.
## List of related aggregations
* [**min**](/apl/aggregation-function/min): Retrieves the minimum value from a column, which is useful when you need to find the smallest or earliest value, such as the lowest request duration or first event in a log.
* [**avg**](/apl/aggregation-function/avg): Calculates the average value of a column. This function helps when you want to understand the central tendency, such as the average response time for requests.
* [**sum**](/apl/aggregation-function/sum): Sums all values in a column, making it useful when calculating totals, such as total sales or total number of requests over a period.
* [**count**](/apl/aggregation-function/count): Counts the number of records or non-null values in a column. It’s useful for finding the total number of log entries or transactions.
* [**percentile**](/apl/aggregation-function/percentile): Finds a value below which a specified percentage of data falls. This aggregation is helpful when you need to analyze performance metrics like latency at the 95th percentile.
# maxif
This page explains how to use the maxif aggregation function in APL.
## Introduction
The `maxif` aggregation function in APL is useful when you want to return the maximum value from a dataset based on a conditional expression. This allows you to filter the dataset dynamically and only return the maximum for rows that satisfy the given condition. It’s particularly helpful for scenarios where you want to find the highest value of a specific metric, like response time or duration, but only for a subset of the data (e.g., successful responses, specific users, or requests from a particular geographic location).
You can use the `maxif` function when analyzing logs, monitoring system traces, or inspecting security-related data to get insights into the maximum value under certain conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use the `stats max()` function alongside a conditional filtering step to achieve a similar result. APL’s `maxif` function combines both operations into one, streamlining the query.
```splunk Splunk example
| stats max(req_duration_ms) as max_duration where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
In ANSI SQL, you typically use the `MAX` function in conjunction with a `WHERE` clause. APL’s `maxif` allows you to perform the same operation with a single aggregation function.
```sql SQL example
SELECT MAX(req_duration_ms)
FROM logs
WHERE status = '200';
```
```kusto APL equivalent
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
## Usage
### Syntax
```kusto
summarize maxif(column, condition)
```
### Parameters
* `column`: The column containing the values to aggregate.
* `condition`: The condition that must be true for the values to be considered in the aggregation.
### Returns
The maximum value from `column` for rows that meet the `condition`. If no rows match the condition, it returns `null`.
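Because `maxif` is an ordinary aggregation, you can compute several conditional maximums in one `summarize` statement. The following sketch assumes the `sample-http-logs` dataset used in the examples below and compares the slowest successful and failed requests side by side:

```kusto
['sample-http-logs']
// Slowest successful request vs. slowest failed request in a single pass
| summarize
    max_ok = maxif(req_duration_ms, status == '200'),
    max_failed = maxif(req_duration_ms, status != '200')
```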
## Use case examples
In log analysis, you might want to find the maximum request duration, but only for successful requests.
**Query**
```kusto
['sample-http-logs']
| summarize maxif(req_duration_ms, status == "200")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%22%7D)
**Output**
| max\_req\_duration |
| ------------------ |
| 1250 |
This query returns the maximum request duration (`req_duration_ms`) for HTTP requests with a `200` status.
In OpenTelemetry traces, you might want to find the longest span duration for a specific service type.
**Query**
```kusto
['otel-demo-traces']
| summarize maxif(duration, ['service.name'] == "checkoutservice" and kind == "server")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20maxif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'checkoutservice'%20and%20kind%20%3D%3D%20'server'\)%22%7D)
**Output**
| max\_duration |
| ------------- |
| 2.05s |
This query returns the maximum span duration (`duration`) for server spans in the `checkoutservice`.
For security logs, you might want to identify the longest request duration for any requests originating from a specific country, such as the United States.
**Query**
```kusto
['sample-http-logs']
| summarize maxif(req_duration_ms, ['geo.country'] == "United States")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'\)%22%7D)
**Output**
| max\_req\_duration |
| ------------------ |
| 980 |
This query returns the maximum request duration for requests coming from the United States (`geo.country`).
## List of related aggregations
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value from a column for rows that satisfy a condition. Use `minif` when you're interested in the lowest value under specific conditions.
* [**max**](/apl/aggregation-function/max): Returns the maximum value from a column without filtering. Use `max` when you want the highest value across the entire dataset without conditions.
* [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values for rows that satisfy a condition. Use `sumif` when you want the total value of a column under specific conditions.
* [**avgif**](/apl/aggregation-function/avgif): Returns the average of values for rows that satisfy a condition. Use `avgif` when you want to calculate the mean value based on a filter.
* [**countif**](/apl/aggregation-function/countif): Returns the count of rows that satisfy a condition. Use `countif` when you want to count occurrences that meet certain criteria.
# min
This page explains how to use the min aggregation function in APL.
The `min` aggregation function in APL returns the minimum value from a set of input values. You can use this function to identify the smallest numeric or comparable value in a column of data. This is useful when you want to find the quickest response time, the lowest transaction amount, or the earliest date in log data. It’s ideal for analyzing performance metrics, filtering out abnormal low points in your data, or discovering outliers.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `min` function works similarly to APL's `min` aggregation, allowing you to find the minimum value in a field across your dataset. The main difference is in the query structure and syntax between the two.
```sql Splunk example
| stats min(duration) by id
```
```kusto APL equivalent
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
In ANSI SQL, the `MIN` function works almost identically to the APL `min` aggregation. You use it to return the smallest value in a column of data, grouped by one or more fields.
```sql SQL example
SELECT MIN(duration), id FROM sample_http_logs GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
## Usage
### Syntax
```kusto
summarize min(Expression)
```
### Parameters
* `Expression`: The expression from which to calculate the minimum value. Typically, this is a numeric or date/time field.
### Returns
The function returns the smallest value found in the specified column or expression.
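You can combine `min` with other aggregations in the same `summarize` statement. For example, a minimal sketch using the `sample-http-logs` dataset from the examples below returns the full range of request durations:

```kusto
['sample-http-logs']
// Fastest and slowest request durations in a single result row
| summarize min(req_duration_ms), max(req_duration_ms)
```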
## Use case examples
In this use case, you analyze HTTP logs to find the minimum request duration for each unique user.
**Query**
```kusto
['sample-http-logs']
| summarize min(req_duration_ms) by id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20id%22%7D)
**Output**
| id | min\_req\_duration\_ms |
| --------- | ---------------------- |
| user\_123 | 32 |
| user\_456 | 45 |
This query returns the minimum request duration for each user, helping you identify the fastest responses.
Here, you analyze OpenTelemetry trace data to find the minimum span duration per service.
**Query**
```kusto
['otel-demo-traces']
| summarize min(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20min\(duration\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | min\_duration |
| --------------- | ------------- |
| frontend | 2ms |
| checkoutservice | 5ms |
This query returns the minimum span duration for each service in the trace logs.
In this example, you analyze security logs to find the minimum request duration for each HTTP status code.
**Query**
```kusto
['sample-http-logs']
| summarize min(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | min\_req\_duration\_ms |
| ------ | ---------------------- |
| 200 | 10 |
| 404 | 40 |
This query returns the minimum request duration for each HTTP status code, helping you identify if certain statuses are associated with faster or slower response times.
## List of related aggregations
* [**max**](/apl/aggregation-function/max): Returns the maximum value from a set of values. Use `max` when you need to find the highest value instead of the lowest.
* [**avg**](/apl/aggregation-function/avg): Calculates the average of a set of values. Use `avg` to find the mean value instead of the minimum.
* [**count**](/apl/aggregation-function/count): Counts the number of records or distinct values. Use `count` when you need to know how many records or unique values exist, rather than calculating the minimum.
* [**sum**](/apl/aggregation-function/sum): Adds all values together. Use `sum` when you need the total of a set of values rather than the minimum.
* [**percentile**](/apl/aggregation-function/percentile): Returns the value at a specified percentile. Use `percentile` if you need a value that falls at a certain point in the distribution of your data, rather than the minimum.
# minif
This page explains how to use the minif aggregation function in APL.
## Introduction
The `minif` aggregation in Axiom Processing Language (APL) allows you to calculate the minimum value of a numeric expression, but only for records that meet a specific condition. This aggregation is useful when you want to find the smallest value in a subset of data that satisfies a given predicate. For example, you can use `minif` to find the shortest request duration for successful HTTP requests, or the minimum span duration for a specific service in your OpenTelemetry traces.
The `minif` aggregation is especially useful in scenarios where you need conditional aggregations, such as log analysis, monitoring distributed systems, or examining security-related events.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you might use the `min` function in combination with `where` to filter results. In APL, the `minif` function combines both the filtering condition and the minimum calculation into one step.
```sql Splunk example
| stats min(req_duration_ms) as min_duration where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize minif(req_duration_ms, status == "200") by id
```
In ANSI SQL, you would typically use a `CASE` statement with `MIN` to apply conditional logic for aggregation. In APL, the `minif` function simplifies this by combining both the condition and the aggregation.
```sql SQL example
SELECT MIN(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END) as min_duration
FROM sample_http_logs
GROUP BY id;
```
```kusto APL equivalent
['sample-http-logs']
| summarize minif(req_duration_ms, status == "200") by id
```
## Usage
### Syntax
```kusto
summarize minif(Expression, Predicate)
```
### Parameters
| Parameter | Description |
| ------------ | ------------------------------------------------------------ |
| `Expression` | The numeric expression whose minimum value you want to find. |
| `Predicate` | The condition that determines which records to include. |
### Returns
The `minif` aggregation returns the minimum value of the specified `Expression` for the records that satisfy the `Predicate`.
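You can also bucket a conditional minimum over time. The following sketch, assuming the `sample-http-logs` dataset used in the examples below, tracks the fastest successful request per hour:

```kusto
['sample-http-logs']
// Fastest successful request in each one-hour window
| summarize minif(req_duration_ms, status == '200') by bin(_time, 1h)
```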
## Use case examples
In log analysis, you might want to find the minimum request duration for successful HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize minif(req_duration_ms, status == '200') by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%20by%20%5B'geo.city'%5D%22%7D)
**Output**
| geo.city | min\_duration |
| --------- | ------------- |
| San Diego | 120 |
| New York | 95 |
This query finds the minimum request duration for HTTP requests with a `200` status code, grouped by city.
For distributed tracing, you can use `minif` to find the minimum span duration for a specific service.
**Query**
```kusto
['otel-demo-traces']
| summarize minif(duration, ['service.name'] == 'frontend') by trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20minif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%20by%20trace_id%22%7D)
**Output**
| trace\_id | min\_duration |
| --------- | ------------- |
| abc123 | 50ms |
| def456 | 40ms |
This query returns the minimum span duration for traces from the `frontend` service, grouped by `trace_id`.
In security logs, you can use `minif` to find the minimum request duration for HTTP requests from a specific country.
**Query**
```kusto
['sample-http-logs']
| summarize minif(req_duration_ms, ['geo.country'] == 'US') by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'US'\)%20by%20status%22%7D)
**Output**
| status | min\_duration |
| ------ | ------------- |
| 200 | 95 |
| 404 | 120 |
This query returns the minimum request duration for HTTP requests originating from the United States, grouped by HTTP status code.
## List of related aggregations
* [**maxif**](/apl/aggregation-function/maxif): Finds the maximum value of an expression that satisfies a condition. Use `maxif` when you need the maximum value under a condition, rather than the minimum.
* [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of an expression that meets a specified condition. Useful when you want an average instead of a minimum.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a given condition. Use this for counting records rather than calculating a minimum.
* [**sumif**](/apl/aggregation-function/sumif): Sums the values of an expression for records that meet a condition. Helpful when you're interested in the total rather than the minimum.
# percentile
This page explains how to use the percentile aggregation function in APL.
The `percentile` aggregation function in Axiom Processing Language (APL) allows you to calculate the value below which a given percentage of data points fall. It is particularly useful when you need to analyze distributions and want to summarize the data using specific thresholds, such as the 90th or 95th percentile. This function can be valuable in performance analysis, trend detection, or identifying outliers across large datasets.
You can apply the `percentile` function to various use cases, such as analyzing log data for request durations, OpenTelemetry traces for service latencies, or security logs to assess risk patterns.
The `percentile` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `percentile` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `percentile` function is referred to as `perc` or `percentile`. APL's `percentile` function works similarly, but the syntax is different. The main difference is that APL requires you to explicitly define the column on which you want to apply the percentile and the target percentile value.
```sql Splunk example
| stats perc95(req_duration_ms)
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
In ANSI SQL, you might use the `PERCENTILE_CONT` or `PERCENTILE_DISC` functions to compute percentiles. In APL, the `percentile` function provides a simpler syntax while offering similar functionality.
```sql SQL example
SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
## Usage
### Syntax
```kusto
percentile(column, percentile)
```
### Parameters
* **column**: The name of the column to calculate the percentile on. This must be a numeric field.
* **percentile**: The target percentile value (between 0 and 100).
### Returns
The function returns the value from the specified column that corresponds to the given percentile.
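You can request several percentiles of the same column in one `summarize` statement by repeating the function. A minimal sketch, using the `sample-http-logs` dataset from the examples below:

```kusto
['sample-http-logs']
// Median, 95th, and 99th percentile of request duration in one query
| summarize
    p50 = percentile(req_duration_ms, 50),
    p95 = percentile(req_duration_ms, 95),
    p99 = percentile(req_duration_ms, 99)
```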
## Use case examples
In log analysis, you can use the `percentile` function to identify the 95th percentile of request durations, which gives you an idea of the tail-end latencies of requests in your system.
**Query**
```kusto
['sample-http-logs']
| summarize percentile(req_duration_ms, 95)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentile%28req_duration_ms%2C%2095%29%22%7D)
**Output**
| percentile\_req\_duration\_ms |
| ----------------------------- |
| 1200 |
This query calculates the 95th percentile of request durations, showing that 95% of requests take less than or equal to 1200ms.
For OpenTelemetry traces, you can use the `percentile` function to identify the 90th percentile of span durations for specific services, which helps to understand the performance of different services.
**Query**
```kusto
['otel-demo-traces']
| where ['service.name'] == 'checkoutservice'
| summarize percentile(duration, 90)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20summarize%20percentile%28duration%2C%2090%29%22%7D)
**Output**
| percentile\_duration |
| -------------------- |
| 300ms |
This query calculates the 90th percentile of span durations for the `checkoutservice`, helping to assess high-latency spans.
In security logs, you can use the `percentile` function to calculate the 99th percentile of response times for a specific set of status codes, helping you focus on outliers.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| summarize percentile(req_duration_ms, 99)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20summarize%20percentile%28req_duration_ms%2C%2099%29%22%7D)
**Output**
| percentile\_req\_duration\_ms |
| ----------------------------- |
| 2500 |
This query identifies that 99% of requests resulting in HTTP 500 errors take less than or equal to 2500ms.
## List of related aggregations
* [**avg**](/apl/aggregation-function/avg): Use `avg` to calculate the average of a column, which gives you the central tendency of your data. In contrast, `percentile` provides more insight into the distribution and tail values.
* [**min**](/apl/aggregation-function/min): The `min` function returns the smallest value in a column. Use this when you need the absolute lowest value instead of a specific percentile.
* [**max**](/apl/aggregation-function/max): The `max` function returns the highest value in a column. It’s useful for finding the upper bound, while `percentile` allows you to focus on a specific point in the data distribution.
* [**stdev**](/apl/aggregation-function/stdev): `stdev` calculates the standard deviation of a column, which helps measure data variability. While `stdev` provides insight into overall data spread, `percentile` focuses on specific distribution points.
# percentileif
This page explains how to use the percentileif aggregation function in APL.
The `percentileif` aggregation function calculates the percentile of a numeric column, conditional on a specified boolean predicate. This function is useful for filtering data dynamically and determining percentile values based only on relevant subsets of data.
You can use `percentileif` to gain insights in various scenarios, such as:
* Identifying response time percentiles for HTTP requests from specific regions.
* Calculating percentiles of span durations for specific service types in OpenTelemetry traces.
* Analyzing security events by percentile within defined risk categories.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `percentileif` aggregation in APL works similarly to `percentile` combined with conditional filtering in SPL. However, APL integrates the condition directly into the aggregation for simplicity.
```sql Splunk example
| stats perc95(req_duration_ms) as p95 where geo.country="US"
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentileif(req_duration_ms, 95, ['geo.country'] == 'US')
```
In SQL, you typically calculate percentiles using window functions or aggregate functions combined with a `WHERE` clause. APL simplifies this by embedding the condition directly in the `percentileif` aggregation.
```sql SQL example
SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY req_duration_ms)
FROM sample_http_logs
WHERE geo_country = 'US'
```
```kusto APL equivalent
['sample-http-logs']
| summarize percentileif(req_duration_ms, 95, ['geo.country'] == 'US')
```
## Usage
### Syntax
```kusto
summarize percentileif(Field, Percentile, Predicate)
```
### Parameters
| Parameter | Description |
| ------------ | ---------------------------------------------------------------------- |
| `Field` | The numeric field from which to calculate the percentile. |
| `Percentile` | A number between 0 and 100 that specifies the percentile to calculate. |
| `Predicate` | A Boolean expression that filters rows to include in the calculation. |
### Returns
The function returns a single numeric value representing the specified percentile of the `Field` for rows where the `Predicate` evaluates to `true`.
## Use case examples
You can use `percentileif` to analyze request durations for specific HTTP methods.
**Query**
```kusto
['sample-http-logs']
| summarize post_p90 = percentileif(req_duration_ms, 90, method == "POST"), get_p90 = percentileif(req_duration_ms, 90, method == "GET") by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20post_p90%20%3D%20percentileif\(req_duration_ms%2C%2090%2C%20method%20%3D%3D%20'POST'\)%2C%20get_p90%20%3D%20percentileif\(req_duration_ms%2C%2090%2C%20method%20%3D%3D%20'GET'\)%20by%20bin_auto\(_time\)%22%7D)
**Output**
| post\_p90 | get\_p90 |
| --------- | -------- |
| 1.691 ms | 1.453 ms |
This query calculates the 90th percentile of request durations for HTTP POST and GET methods.
You can use `percentileif` to measure span durations for specific services and operation kinds.
**Query**
```kusto
['otel-demo-traces']
| summarize percentileif(duration, 95, ['service.name'] == 'frontend' and kind == 'server')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20percentileif%28duration%2C%2095%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%20and%20kind%20%3D%3D%20%27server%27%29%22%7D)
**Output**
| Percentile95 |
| ------------ |
| 1.2s |
This query calculates the 95th percentile of span durations for server spans in the `frontend` service.
You can use `percentileif` to calculate response time percentiles for specific HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize percentileif(req_duration_ms, 75, status == '404')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentileif%28req_duration_ms%2C%2075%2C%20status%20%3D%3D%20%27404%27%29%22%7D)
**Output**
| Percentile75 |
| ------------ |
| 350 |
This query calculates the 75th percentile of request durations for HTTP 404 errors.
## List of related aggregations
* [percentile](/apl/aggregation-function/percentile): Calculates the percentile for all rows without any filtering. Use `percentile` when you don’t need conditional filtering.
* [avgif](/apl/aggregation-function/avgif): Calculates the average of a numeric column based on a condition. Use `avgif` for mean calculations instead of percentiles.
* [minif](/apl/aggregation-function/minif): Returns the minimum value of a numeric column where a condition is true. Use `minif` for identifying the lowest values within subsets.
* [maxif](/apl/aggregation-function/maxif): Returns the maximum value of a numeric column where a condition is true. Use `maxif` for identifying the highest values within subsets.
* [sumif](/apl/aggregation-function/sumif): Sums a numeric column based on a condition. Use `sumif` for conditional total calculations.
# rate
This page explains how to use the rate aggregation function in APL.
The `rate` aggregation function in APL (Axiom Processing Language) helps you calculate the rate of change over a specific time interval. This is especially useful for scenarios where you need to monitor how frequently an event occurs or how a value changes over time. For example, you can use the `rate` function to track request rates in web logs or changes in metrics like CPU usage or memory consumption.
The `rate` function is useful for analyzing trends in time series data and identifying unusual spikes or drops in activity. It can help you understand patterns in logs, metrics, and traces over specific intervals, such as per minute, per second, or per hour.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of the `rate` function can be achieved using the `timechart` command with a `per_second` option or by calculating the difference between successive values over time. In APL, the `rate` function simplifies this process by directly calculating the rate over a specified time interval.
```splunk Splunk example
| timechart per_second count by resp_body_size_bytes
```
```kusto APL equivalent
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
In ANSI SQL, calculating rates typically involves using window functions like `LAG` or `LEAD` to calculate the difference between successive rows in a time series. In APL, the `rate` function abstracts this complexity by allowing you to directly compute the rate over time without needing window functions.
```sql SQL example
SELECT resp_body_size_bytes, COUNT(*) / TIMESTAMPDIFF(SECOND, MIN(_time), MAX(_time)) AS rate
FROM http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
## Usage
### Syntax
```kusto
rate(field)
```
### Parameters
* `field`: The numeric field for which you want to calculate the rate.
### Returns
Returns the rate of change or occurrence of the specified `field` over the time interval specified in the query.
Specify the time interval in the query in the following way:
* `| summarize rate(field)` calculates the rate value of the field over the entire query window.
* `| summarize rate(field) by bin(_time, 1h)` calculates the rate value of the field over a one-hour time window.
* `| summarize rate(field) by bin_auto(_time)` calculates the rate value of the field bucketed by an automatic time window computed by `bin_auto()`.
Use two `summarize` statements to visualize the average rate over one minute per hour. For example:
```kusto
['sample-http-logs']
| summarize respBodyRate = rate(resp_body_size_bytes) by bin(_time, 1m)
| summarize avg(respBodyRate) by bin(_time, 1h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20respBodyRate%20%3D%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201m\)%20%7C%20summarize%20avg\(respBodyRate\)%20by%20bin\(_time%2C%201h\)%22%2C%20%22queryOptions%22%3A%7B%22quickRange%22%3A%226h%22%7D%7D)
## Use case examples
In this example, the `rate` aggregation calculates the rate of HTTP response sizes per second.
**Query**
```kusto
['sample-http-logs']
| summarize rate(resp_body_size_bytes) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ------ | ------------------- |
| 854 kB | 2024-01-01 12:00:00 |
| 635 kB | 2024-01-01 12:00:01 |
This query calculates the rate of HTTP response sizes per second.
This example calculates the rate of span duration per second.
**Query**
```kusto
['otel-demo-traces']
| summarize rate(toint(duration)) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20rate\(toint\(duration\)\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ---------- | ------------------- |
| 26,393,768 | 2024-01-01 12:00:00 |
| 19,303,456 | 2024-01-01 12:00:01 |
This query calculates the rate of span duration per second.
In this example, the `rate` aggregation calculates the rate of HTTP request duration per second, which can be useful for detecting an increase in malicious requests.
**Query**
```kusto
['sample-http-logs']
| summarize rate(req_duration_ms) by bin(_time, 1s)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(req_duration_ms\)%20by%20bin\(_time%2C%201s\)%22%7D)
**Output**
| rate | \_time |
| ---------- | ------------------- |
| 240.668 ms | 2024-01-01 12:00:00 |
| 264.17 ms | 2024-01-01 12:00:01 |
This query calculates the rate of HTTP request duration per second.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Returns the total number of records. Use `count` when you want an absolute total instead of a rate over time.
* [**sum**](/apl/aggregation-function/sum): Returns the sum of values in a field. Use `sum` when you want to aggregate the total value, not its rate of change.
* [**avg**](/apl/aggregation-function/avg): Returns the average value of a field. Use `avg` when you want to know the mean value rather than how it changes over time.
* [**max**](/apl/aggregation-function/max): Returns the maximum value of a field. Use `max` when you need to find the peak value instead of how often or quickly something occurs.
* [**min**](/apl/aggregation-function/min): Returns the minimum value of a field. Use `min` when you’re looking for the lowest value rather than a rate.
# Aggregation functions
This section explains how to use and combine different aggregation functions in APL.
The table summarizes the aggregation functions available in APL. Use all these aggregation functions in the context of the [summarize operator](/apl/tabular-operators/summarize-operator).
| Function | Description |
| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| [avg](/apl/aggregation-function/avg) | Returns an average value across the group. |
| [avgif](/apl/aggregation-function/avgif) | Calculates the average value of an expression in records for which the predicate evaluates to true. |
| [count](/apl/aggregation-function/count)                 | Returns the number of records in the group.                                                                                                          |
| [countif](/apl/aggregation-function/countif) | Returns a count of rows for which the predicate evaluates to true. |
| [dcount](/apl/aggregation-function/dcount)               | Returns an estimate for the number of distinct values that are taken by a scalar expression in the summary group.                                    |
| [dcountif](/apl/aggregation-function/dcountif) | Returns an estimate of the number of distinct values of an expression of rows for which the predicate evaluates to true. |
| [histogram](/apl/aggregation-function/histogram) | Returns a timeseries heatmap chart across the group. |
| [make\_list](/apl/aggregation-function/make-list) | Creates a dynamic JSON object (array) of all the values of an expression in the group. |
| [make\_list\_if](/apl/aggregation-function/make-list-if) | Creates a dynamic JSON object (array) of an expression values in the group for which the predicate evaluates to true. |
| [make\_set](/apl/aggregation-function/make-set) | Creates a dynamic JSON array of the set of distinct values that an expression takes in the group. |
| [make\_set\_if](/apl/aggregation-function/make-set-if) | Creates a dynamic JSON object (array) of the set of distinct values that an expression takes in records for which the predicate evaluates to true. |
| [max](/apl/aggregation-function/max) | Returns the maximum value across the group. |
| [maxif](/apl/aggregation-function/maxif) | Calculates the maximum value of an expression in records for which the predicate evaluates to true. |
| [min](/apl/aggregation-function/min) | Returns the minimum value across the group. |
| [minif](/apl/aggregation-function/minif) | Returns the minimum of an expression in records for which the predicate evaluates to true. |
| [percentile](/apl/aggregation-function/percentile) | Calculates the requested percentiles of the group and produces a timeseries chart. |
| [percentileif](/apl/aggregation-function/percentileif) | Calculates the requested percentiles of the field for the rows where the predicate evaluates to true. |
| [rate](/apl/aggregation-function/rate) | Calculates the rate of values in a group per second. |
| [stdev](/apl/aggregation-function/stdev) | Calculates the standard deviation of an expression across the group. |
| [stdevif](/apl/aggregation-function/stdevif) | Calculates the standard deviation of an expression in records for which the predicate evaluates to true. |
| [sum](/apl/aggregation-function/sum) | Calculates the sum of an expression across the group. |
| [sumif](/apl/aggregation-function/sumif) | Calculates the sum of an expression in records for which the predicate evaluates to true. |
| [topk](/apl/aggregation-function/topk)                   | Calculates the top values of an expression across the group in a dataset.                                                                            |
| [variance](/apl/aggregation-function/variance) | Calculates the variance of an expression across the group. |
| [varianceif](/apl/aggregation-function/varianceif) | Calculates the variance of an expression in records for which the predicate evaluates to true. |
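You can combine several of these functions in a single `summarize` statement to build a compact overview of a dataset. The following is a minimal sketch, assuming the `sample-http-logs` dataset used throughout this section, that mixes plain and conditional aggregations and groups the results over time:

```kusto
['sample-http-logs']
// One row per hour with a count, an average, a tail percentile, and a conditional count
| summarize
    requests = count(),
    avg_duration = avg(req_duration_ms),
    p95_duration = percentile(req_duration_ms, 95),
    errors = countif(status == '500')
  by bin(_time, 1h)
```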
# stdev
This page explains how to use the stdev aggregation function in APL.
The `stdev` aggregation in APL computes the standard deviation of a numeric field within a dataset. This is useful for understanding the variability or dispersion of data points around the mean. You can apply this aggregation to various use cases, such as performance monitoring, anomaly detection, and statistical analysis of logs and traces.
Use the `stdev` function to determine how spread out values like request duration, span duration, or response times are. This is particularly helpful when analyzing data trends and identifying inconsistencies, outliers, or abnormal behavior.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `stdev` aggregation works similarly but with different syntax. SPL uses the `stdev` function within the `stats` command, while APL uses `stdev` within `summarize`; the differences are minor.
```sql Splunk example
| stats stdev(duration) as duration_std
```
```kusto APL equivalent
['dataset']
| summarize duration_std = stdev(duration)
```
In ANSI SQL, the standard deviation is computed using the `STDDEV` function. APL's `stdev` function is the direct equivalent of SQL’s `STDDEV`, although APL uses pipes (`|`) for chaining operations and different keyword formatting.
```sql SQL example
SELECT STDDEV(duration) AS duration_std FROM dataset;
```
```kusto APL equivalent
['dataset']
| summarize duration_std = stdev(duration)
```
## Usage
### Syntax
```kusto
stdev(numeric_field)
```
### Parameters
* **`numeric_field`**: The field containing numeric values for which the standard deviation is calculated.
### Returns
The `stdev` aggregation returns a single numeric value representing the standard deviation of the specified numeric field in the dataset.
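Because standard deviation is most useful relative to the mean, you often compute both in the same query. A minimal sketch, using the `sample-http-logs` dataset from the examples below:

```kusto
['sample-http-logs']
// Mean and spread of request durations, side by side
| summarize
    avg_duration = avg(req_duration_ms),
    stdev_duration = stdev(req_duration_ms)
```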
## Use case examples
You can use the `stdev` aggregation to analyze HTTP request durations and identify performance variations across different requests. For instance, you can calculate the standard deviation of request durations to identify potential anomalies.
**Query**
```kusto
['sample-http-logs']
| summarize req_duration_std = stdev(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20req_duration_std%20%3D%20stdev\(req_duration_ms\)%22%7D)
**Output**
| req\_duration\_std |
| ------------------ |
| 345.67 |
This query calculates the standard deviation of the `req_duration_ms` field in the `sample-http-logs` dataset, helping to understand how much variability there is in request durations.
In distributed tracing, calculating the standard deviation of span durations can help identify inconsistent spans that might indicate performance issues or bottlenecks.
**Query**
```kusto
['otel-demo-traces']
| summarize span_duration_std = stdev(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20span_duration_std%20%3D%20stdev\(duration\)%22%7D)
**Output**
| span\_duration\_std |
| ------------------- |
| 0:00:02.456 |
This query computes the standard deviation of span durations in the `otel-demo-traces` dataset, providing insight into how much variation exists between trace spans.
In security logs, the `stdev` function can help analyze the response times of various HTTP requests, potentially identifying patterns that might be related to security incidents or abnormal behavior.
**Query**
```kusto
['sample-http-logs']
| summarize resp_time_std = stdev(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20resp_time_std%20%3D%20stdev\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | resp\_time\_std |
| ------ | --------------- |
| 200 | 123.45 |
| 500 | 567.89 |
This query calculates the standard deviation of request durations grouped by the HTTP status code, providing insight into the performance of different status codes.
## List of related aggregations
* [**avg**](/apl/aggregation-function/avg): Calculates the average value of a numeric field. Use `avg` to understand the central tendency of the data.
* [**min**](/apl/aggregation-function/min): Returns the smallest value in a numeric field. Use `min` when you need to find the minimum value.
* [**max**](/apl/aggregation-function/max): Returns the largest value in a numeric field. Use `max` to identify the peak value in a dataset.
* [**sum**](/apl/aggregation-function/sum): Adds up all the values in a numeric field. Use `sum` to get a total across records.
* [**count**](/apl/aggregation-function/count): Returns the number of records in a dataset. Use `count` when you need the number of occurrences or entries.
# stdevif
This page explains how to use the stdevif aggregation function in APL.
The `stdevif` aggregation function in APL computes the standard deviation of values in a group based on a specified condition. This is useful when you want to calculate variability in data, but only for rows that meet a particular condition. For example, you can use `stdevif` to find the standard deviation of response times in an HTTP log, but only for requests that resulted in a 200 status code.
The `stdevif` function is useful when you want to analyze the spread of data values filtered by specific criteria, such as analyzing request durations in successful transactions or monitoring trace durations of specific services in OpenTelemetry data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `stdev` function is used to calculate the standard deviation, but you need to use an `if` function or a `where` clause to filter data. APL simplifies this by combining both operations in `stdevif`.
```sql Splunk example
| stats stdev(req_duration_ms) as stdev_req where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == "200") by geo.country
```
In ANSI SQL, the `STDDEV` function is used to compute the standard deviation, but it requires the use of a `CASE WHEN` expression to apply a conditional filter. APL integrates the condition directly into the `stdevif` function.
```sql SQL example
SELECT STDDEV(CASE WHEN status = '200' THEN req_duration_ms END)
FROM sample_http_logs
GROUP BY geo_country;
```
```kusto APL equivalent
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == "200") by geo.country
```
## Usage
### Syntax
```kusto
summarize stdevif(column, condition)
```
### Parameters
* **column**: The column that contains the numeric values for which you want to calculate the standard deviation.
* **condition**: The condition that must be true for the values to be included in the standard deviation calculation.
### Returns
The `stdevif` function returns a floating-point number representing the standard deviation of the specified column for the rows that satisfy the condition.
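You can also track a conditional standard deviation over time. A minimal sketch, assuming the `sample-http-logs` dataset used in the examples below:

```kusto
['sample-http-logs']
// Hourly spread of request durations for successful requests only
| summarize stdevif(req_duration_ms, status == '200') by bin(_time, 1h)
```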
## Use case examples
In this example, you calculate the standard deviation of request durations (`req_duration_ms`), but only for successful HTTP requests (status code 200).
**Query**
```kusto
['sample-http-logs']
| summarize stdevif(req_duration_ms, status == '200') by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%20by%20%5B%27geo.country%27%5D%22%7D)
**Output**
| geo.country | stdev\_req\_duration\_ms |
| ----------- | ------------------------ |
| US | 120.45 |
| Canada | 98.77 |
| Germany | 134.92 |
This query calculates the standard deviation of request durations for HTTP 200 responses, grouped by country.
In this example, you calculate the standard deviation of span durations, but only for traces from the `frontend` service.
**Query**
```kusto
['otel-demo-traces']
| summarize stdevif(duration, ['service.name'] == "frontend") by kind
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20stdevif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%20by%20kind%22%7D)
**Output**
| kind | stdev\_duration |
| ------ | --------------- |
| server | 45.78 |
| client | 23.54 |
This query computes the standard deviation of span durations for the `frontend` service, grouped by span type (`kind`).
In this example, you calculate the standard deviation of request durations for security events from specific HTTP methods, filtered by `POST` requests.
**Query**
```kusto
['sample-http-logs']
| summarize stdevif(req_duration_ms, method == "POST") by ['geo.city']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D)
**Output**
| geo.city | stdev\_req\_duration\_ms |
| -------- | ------------------------ |
| New York | 150.12 |
| Berlin | 130.33 |
This query calculates the standard deviation of request durations for `POST` HTTP requests, grouped by the originating city.
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Similar to `stdevif`, but instead of calculating the standard deviation, `avgif` computes the average of values that meet the condition.
* [**sumif**](/apl/aggregation-function/sumif): Computes the sum of values that meet the condition. Use `sumif` when you want to aggregate total values instead of analyzing data spread.
* [**varianceif**](/apl/aggregation-function/varianceif): Returns the variance of values that meet the condition, which is a measure of how spread out the data points are.
* [**countif**](/apl/aggregation-function/countif): Counts the number of rows that satisfy the specified condition.
* [**minif**](/apl/aggregation-function/minif): Retrieves the minimum value that satisfies the given condition, useful when finding the smallest value in filtered data.
# sum
This page explains how to use the sum aggregation function in APL.
The `sum` aggregation in APL is used to compute the total sum of a specific numeric field in a dataset. This aggregation is useful when you want to find the cumulative value for a certain metric, such as the total duration of requests, total sales revenue, or any other numeric field that can be summed.
You can use the `sum` aggregation in a wide range of scenarios, such as analyzing log data, monitoring traces, or examining security logs. It is particularly helpful when you want to get a quick overview of your data in terms of totals or cumulative statistics.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you use the `sum` function in combination with the `stats` command to aggregate data. In APL, the `sum` aggregation works similarly but is structured differently in terms of syntax.
```splunk Splunk example
| stats sum(req_duration_ms) as total_duration
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
In ANSI SQL, the `SUM` function is commonly used with the `GROUP BY` clause to aggregate data by a specific field. In APL, the `sum` function works similarly but can be used without requiring a `GROUP BY` clause for simple summations.
```sql SQL example
SELECT SUM(req_duration_ms) AS total_duration
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize [ColumnAlias =] sum(Expression)
```
### Parameters
* `ColumnAlias`: (Optional) The name you want to assign to the resulting column that contains the sum.
* `Expression`: The field in your dataset that contains the numeric values you want to sum.
### Returns
The `sum` aggregation returns a single row with the sum of the specified numeric field. If used with a `by` clause, it returns multiple rows with the sum per group.
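Adding a `by` clause returns one total per group. For example, a minimal sketch assuming the `sample-http-logs` dataset used below sums the response body size per HTTP method:

```kusto
['sample-http-logs']
// Total bytes sent in response bodies, per HTTP method
| summarize total_bytes = sum(resp_body_size_bytes) by method
```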
## Use case examples
The `sum` aggregation can be used to calculate the total request duration in an HTTP log dataset.
**Query**
```kusto
['sample-http-logs']
| summarize total_duration = sum(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(req_duration_ms\)%22%7D)
**Output**
| total\_duration |
| --------------- |
| 123456 |
This query calculates the total request duration across all HTTP requests in the dataset.
The `sum` aggregation can be applied to OpenTelemetry traces to calculate the total span duration.
**Query**
```kusto
['otel-demo-traces']
| summarize total_duration = sum(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(duration\)%22%7D)
**Output**
| total\_duration |
| --------------- |
| 7890 |
This query calculates the total duration of all spans in the dataset.
You can use the `sum` aggregation to calculate the total number of requests based on a specific HTTP status in security logs.
**Query**
```kusto
['sample-http-logs']
| where status == '200'
| summarize request_count = sum(1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20request_count%20%3D%20sum\(1\)%22%7D)
**Output**
| request\_count |
| -------------- |
| 500 |
This query counts the total number of successful requests (status 200) in the dataset.
## List of related aggregations
* [**count**](/apl/aggregation-function/count): Counts the number of records in a dataset. Use `count` when you want to count the number of rows, not aggregate numeric values.
* [**avg**](/apl/aggregation-function/avg): Computes the average value of a numeric field. Use `avg` when you need to find the mean instead of the total sum.
* [**min**](/apl/aggregation-function/min): Returns the minimum value of a numeric field. Use `min` when you're interested in the lowest value.
* [**max**](/apl/aggregation-function/max): Returns the maximum value of a numeric field. Use `max` when you're interested in the highest value.
* [**sumif**](/apl/aggregation-function/sumif): Sums a numeric field conditionally. Use `sumif` when you only want to sum values that meet a specific condition.
# sumif
This page explains how to use the sumif aggregation function in APL.
The `sumif` aggregation function in Axiom Processing Language (APL) computes the sum of a numeric expression for records that meet a specified condition. This function is useful when you want to filter data based on specific criteria and aggregate the numeric values that match the condition. Use `sumif` when you need to apply conditional logic to sums, such as calculating the total request duration for successful HTTP requests or summing the span durations in OpenTelemetry traces for a specific service.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `sumif` equivalent functionality requires using a `stats` command with a `where` clause to filter the data. In APL, you can use `sumif` to simplify this operation by combining both the condition and the summing logic into one function.
```sql Splunk example
| stats sum(duration) as total_duration where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sumif(req_duration_ms, status == '200')
```
In ANSI SQL, achieving a similar result typically involves using a `CASE` statement inside the `SUM` function to conditionally sum values based on a specified condition. In APL, `sumif` provides a more concise approach by allowing you to filter and sum in a single function.
```sql SQL example
SELECT SUM(CASE WHEN status = '200' THEN duration ELSE 0 END) AS total_duration
FROM http_logs
```
```kusto APL equivalent
['sample-http-logs']
| summarize total_duration = sumif(req_duration_ms, status == '200')
```
## Usage
### Syntax
```kusto
sumif(numeric_expression, condition)
```
### Parameters
* `numeric_expression`: The numeric field or expression you want to sum.
* `condition`: A boolean expression that determines which records contribute to the sum. Only the records that satisfy the condition are considered.
### Returns
`sumif` returns the sum of the values in `numeric_expression` for records where the `condition` is true. If no records meet the condition, the result is 0.
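As with other conditional aggregations, you can compute several `sumif` results in one pass. The following sketch, using the `sample-http-logs` dataset from the examples below, splits total request time into successful and failed requests:

```kusto
['sample-http-logs']
// Total time spent serving successful vs. failed requests
| summarize
    ok_time = sumif(req_duration_ms, status == '200'),
    failed_time = sumif(req_duration_ms, status != '200')
```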
## Use case examples
In this use case, we calculate the total request duration for HTTP requests that returned a `200` status code.
**Query**
```kusto
['sample-http-logs']
| summarize total_req_duration = sumif(req_duration_ms, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration%20%3D%20sumif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%22%7D)
**Output**
| total\_req\_duration |
| -------------------- |
| 145000 |
This query computes the total request duration (in milliseconds) for all successful HTTP requests (those with a status code of `200`).
In this example, we sum the span durations for the `frontend` service in OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| summarize total_duration = sumif(duration, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20total_duration%20%3D%20sumif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%22%7D)
**Output**
| total\_duration |
| --------------- |
| 32000 |
This query sums the span durations for the `frontend` service, providing insight into the total time spent in that service’s spans.
Here, we calculate the total request duration for failed HTTP requests (those with status codes other than `200`).
**Query**
```kusto
['sample-http-logs']
| summarize total_req_duration_failed = sumif(req_duration_ms, status != '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration_failed%20%3D%20sumif%28req_duration_ms%2C%20status%20%21%3D%20%27200%27%29%22%7D)
**Output**
| total\_req\_duration\_failed |
| ---------------------------- |
| 64000 |
This query computes the total request duration for all failed HTTP requests (where the status code is not `200`), which can be useful for security log analysis.
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Computes the average of a numeric expression for records that meet a specified condition. Use `avgif` when you're interested in the average value, not the total sum.
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a condition. Use `countif` when you need to know how many records match a specific criterion.
* [**minif**](/apl/aggregation-function/minif): Returns the minimum value of a numeric expression for records that meet a condition. Useful when you need the smallest value under certain criteria.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a numeric expression for records that meet a condition. Use `maxif` to identify the highest values under certain conditions.
# topk
This page explains how to use the topk aggregation function in APL.
The `topk` aggregation in Axiom Processing Language (APL) allows you to identify the top *k* results based on a specified field. This is especially useful when you want to quickly analyze large datasets and extract the most significant values, such as the top-performing queries, most frequent errors, or highest latency requests.
Use `topk` to find the most common or relevant entries in datasets, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you focus on the most important data points, filtering out the noise.
The `topk` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `topk` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
For completely accurate results, use the [`top` operator](/apl/tabular-operators/top-operator).
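For example, the following sketch returns the exact top five HTTP status codes by request count using the `top` operator instead of the estimated `topk`:
```kusto
['sample-http-logs']
| summarize request_count = count() by status
| top 5 by request_count desc
```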
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL doesn’t have the equivalent of the `topk` function. You can achieve similar results with SPL’s `top` command which is equivalent to APL’s `top` operator. The `topk` function in APL behaves similarly by returning the top `k` values of a specified field, but its syntax is unique to APL.
The main difference between `top` (supported by both SPL and APL) and `topk` (supported only by APL) is that `topk` is estimated. This means that APL’s `topk` is faster and less resource intensive, but less accurate than SPL’s `top`.
```sql Splunk example
| top limit=5 status by method
```
```kusto APL equivalent
['sample-http-logs']
| summarize topk(status, 5) by method
```
In ANSI SQL, identifying the top *k* rows often involves using the `ORDER BY` and `LIMIT` clauses. While the logic remains similar, APL’s `topk` simplifies this process by directly returning the top *k* values of a field in an aggregation.
The main difference between SQL’s solution and APL’s `topk` is that `topk` is estimated. This means that APL’s `topk` is faster and less resource intensive, but less accurate than SQL’s combination of `ORDER BY` and `LIMIT` clauses.
```sql SQL example
SELECT status, COUNT(*)
FROM sample_http_logs
GROUP BY status
ORDER BY COUNT(*) DESC
LIMIT 5;
```
```kusto APL equivalent
['sample-http-logs']
| summarize topk(status, 5)
```
## Usage
### Syntax
```kusto
topk(field, k)
```
### Parameters
* **`field`**: The field or expression to rank the results by.
* **`k`**: The number of top results to return.
### Returns
A subset of the original dataset with the top *k* values based on the specified field.
## Use case examples
When analyzing HTTP logs, you can use the `topk` function to find the top 5 most frequent HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| summarize topk(status, 5)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(status%2C%205\)%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 1500 |
| 404 | 400 |
| 500 | 200 |
| 301 | 150 |
| 302 | 100 |
This query groups the logs by HTTP status and returns the 5 most frequent statuses.
In OpenTelemetry traces, you can use `topk` to find the top five status codes by service.
**Query**
```kusto
['otel-demo-traces']
| summarize topk(['attributes.http.status_code'], 5) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20topk\(%5B'attributes.http.status_code'%5D%2C%205\)%20by%20%5B'service.name'%5D%22%7D)
**Output**
| service.name | attributes.http.status\_code | \_count |
| ------------- | ---------------------------- | ---------- |
| frontendproxy | 200 | 34,862,088 |
| | 203 | 3,095,223 |
| | 404 | 154,417 |
| | 500 | 153,823 |
| | 504 | 3,497 |
This query shows the top five status codes by service.
You can use `topk` in security log analysis to find the top 5 cities generating the most HTTP requests.
**Query**
```kusto
['sample-http-logs']
| summarize topk(['geo.city'], 5)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(%5B'geo.city'%5D%2C%205\)%22%7D)
**Output**
| geo.city | count\_ |
| -------- | ------- |
| New York | 500 |
| London | 400 |
| Paris | 350 |
| Tokyo | 300 |
| Berlin | 250 |
This query returns the top 5 cities based on the number of HTTP requests.
## List of related aggregations
* [**top**](/apl/tabular-operators/top-operator): Returns the top values based on a field without requiring a specific number of results (`k`), making it useful when you're unsure how many top values to retrieve.
* [**sort**](/apl/tabular-operators/sort-operator): Orders the dataset based on one or more fields, which is useful if you need a complete ordered list rather than the top *k* values.
* [**extend**](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, which can be useful in combination with `topk` to create custom rankings.
* [**count**](/apl/aggregation-function/count): Aggregates the dataset by counting occurrences, often used in conjunction with `topk` to find the most common values.
# variance
This page explains how to use the variance aggregation function in APL.
The `variance` aggregation function in APL calculates the variance of a numeric expression across a set of records. Variance is a statistical measurement that represents the spread of data points in a dataset. It's useful for understanding how much variation exists in your data. In scenarios such as performance analysis, network traffic monitoring, or anomaly detection, `variance` helps identify outliers and patterns by showing how data points deviate from the mean.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In SPL, variance is computed using the `stats` command with the `var` function, whereas in APL, you can use `variance` for the same functionality.
```sql Splunk example
| stats var(req_duration_ms) as variance
```
```kusto APL equivalent
['sample-http-logs']
| summarize variance(req_duration_ms)
```
In ANSI SQL, variance is typically calculated using `VAR_POP` or `VAR_SAMP`. APL provides a simpler approach using the `variance` function without needing to specify population or sample.
```sql SQL example
SELECT VAR_POP(req_duration_ms) FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize variance(req_duration_ms)
```
## Usage
### Syntax
```kusto
summarize variance(Expression)
```
### Parameters
* `Expression`: A numeric expression or field for which you want to compute the variance. The expression should evaluate to a numeric data type.
### Returns
The function returns the variance (a numeric value) of the specified expression across the records.
## Use case examples
You can use the `variance` function to measure the variability of request durations, which helps in identifying performance bottlenecks or anomalies in web services.
**Query**
```kusto
['sample-http-logs']
| summarize variance(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%22%7D)
**Output**
| variance\_req\_duration\_ms |
| --------------------------- |
| 1024.5 |
This query calculates the variance of request durations from a dataset of HTTP logs. A high variance indicates greater variability in request durations, potentially signaling performance issues.
For OpenTelemetry traces, `variance` can be used to measure how much span durations differ across service invocations, helping in performance optimization and anomaly detection.
**Query**
```kusto
['otel-demo-traces']
| summarize variance(duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20variance\(duration\)%22%7D)
**Output**
| variance\_duration |
| ------------------ |
| 1287.3 |
This query computes the variance of span durations across traces, which helps in understanding how consistent the service performance is. A higher variance might indicate unstable or inconsistent performance.
You can use the `variance` function on security logs to detect abnormal patterns in request behavior, such as unusual fluctuations in response times, which may point to potential security threats.
**Query**
```kusto
['sample-http-logs']
| summarize variance(req_duration_ms) by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%20by%20status%22%7D)
**Output**
| status | variance\_req\_duration\_ms |
| ------ | --------------------------- |
| 200 | 1534.8 |
| 404 | 2103.4 |
This query calculates the variance of request durations grouped by HTTP status codes. High variance in certain status codes (e.g., 404 errors) can indicate network or application issues.
## List of related aggregations
* [**stdev**](/apl/aggregation-function/stdev): Computes the standard deviation, which is the square root of the variance. Use `stdev` when you need the spread of data in the same units as the original dataset.
* [**avg**](/apl/aggregation-function/avg): Computes the average of a numeric field. Combine `avg` with `variance` to analyze both the central tendency and the spread of data.
* [**count**](/apl/aggregation-function/count): Counts the number of records. Use `count` alongside `variance` to get a sense of data size relative to variance.
* [**percentile**](/apl/aggregation-function/percentile): Returns a value below which a given percentage of observations fall. Use `percentile` for a more detailed distribution analysis.
* [**max**](/apl/aggregation-function/max): Returns the maximum value. Use `max` when you are looking for extreme values in addition to variance to detect anomalies.
# varianceif
This page explains how to use the varianceif aggregation function in APL.
The `varianceif` aggregation in APL calculates the variance of values that meet a specified condition. This is useful when you want to understand the variability of a subset of data without considering all data points. For example, you can use `varianceif` to compute the variance of request durations for HTTP requests that resulted in a specific status code or to track anomalies in trace durations for a particular service.
You can use the `varianceif` aggregation when analyzing logs, telemetry data, or security events where conditions on subsets of the data are critical to your analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you would use the `eval` function to filter data and calculate variance for specific conditions. In APL, `varianceif` combines the filtering and aggregation into a single function, making your queries more concise.
```sql Splunk example
| eval filtered_var=if(status=="200",req_duration_ms,null())
| stats var(filtered_var)
```
```kusto APL equivalent
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
In ANSI SQL, you typically use a `CASE` statement to apply conditional logic and then compute the variance. In APL, `varianceif` simplifies this by combining both the condition and the aggregation.
```sql SQL example
SELECT VARIANCE(CASE WHEN status = '200' THEN req_duration_ms END)
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
## Usage
### Syntax
```kusto
summarize varianceif(Expr, Predicate)
```
### Parameters
* `Expr`: The expression (numeric) for which you want to calculate the variance.
* `Predicate`: A boolean condition that determines which records to include in the calculation.
### Returns
Returns the variance of `Expr` for the records where the `Predicate` is true. If no records match the condition, it returns `null`.
## Use case examples
You can use the `varianceif` function to calculate the variance of HTTP request durations for requests that succeeded (`status == '200'`).
**Query**
```kusto
['sample-http-logs']
| summarize varianceif(req_duration_ms, status == '200')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20status%20%3D%3D%20'200'%29%22%7D)
**Output**
| varianceif\_req\_duration\_ms |
| ----------------------------- |
| 15.6 |
This query calculates the variance of request durations for all HTTP requests that returned a status code of 200 (successful requests).
You can use the `varianceif` function to monitor the variance in span durations for a specific service, such as the `frontend` service.
**Query**
```kusto
['otel-demo-traces']
| summarize varianceif(duration, ['service.name'] == 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20varianceif%28duration%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'%29%22%7D)
**Output**
| varianceif\_duration |
| -------------------- |
| 32.7 |
This query calculates the variance in the duration of spans generated by the `frontend` service.
The `varianceif` function can also be used to track the variance in request durations for requests from a specific geographic region, such as requests from `geo.country == 'United States'`.
**Query**
```kusto
['sample-http-logs']
| summarize varianceif(req_duration_ms, ['geo.country'] == 'United States')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'%29%22%7D)
**Output**
| varianceif\_req\_duration\_ms |
| ----------------------------- |
| 22.9 |
This query calculates the variance in request durations for requests originating from the United States.
## List of related aggregations
* [**avgif**](/apl/aggregation-function/avgif): Computes the average value of an expression for records that match a given condition. Use `avgif` when you want the average instead of variance.
* [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values that meet a specified condition. Use `sumif` when you're interested in totals, not variance.
* [**stdevif**](/apl/aggregation-function/stdevif): Returns the standard deviation of values based on a condition. Use `stdevif` when you want to measure dispersion using standard deviation instead of variance.
# Map fields
This page explains what map fields are and how to query them.
Map fields are a special type of field that can hold a collection of nested key-value pairs within a single field. You can think of the content of a map field as a JSON object.
Currently, Axiom automatically creates map fields in datasets that use [OpenTelemetry](/send-data/opentelemetry). You cannot create map fields yourself.
Support for creating your own map fields is coming in early 2025. To express interest in the feature, [contact Axiom](https://axiom.co/contact).
## Benefits and drawbacks of map fields
The benefit of map fields is that you can store additional attributes without adding more fields. This is particularly useful when the shape of your data is unpredictable (for example, additional attributes added by OpenTelemetry instrumentation). Using map fields means that you can avoid reaching the field limit of a dataset.
The drawbacks of map fields are the following:
* Querying map fields uses more query-hours than querying conventional fields.
* Map fields don’t compress as well as conventional fields. This means datasets with map fields use more storage.
* You don’t have visibility into map fields from the schema. For example, autocomplete doesn’t know the properties inside the map field.
## Custom attributes in tracing datasets
If you use [OpenTelemetry](/send-data/opentelemetry) to send data to Axiom, you find some attributes in the `attributes.custom` map field. The reason is that instrumentation libraries can add hundreds or even thousands of arbitrary attributes to spans. Storing each custom attribute in a separate field would significantly increase the number of fields in your dataset. To keep the number of fields in your dataset under control, Axiom places all custom attributes in the single `attributes.custom` map field.
## Use map fields in queries
The example query below uses the `http.protocol` property inside the `attributes.custom` map field to filter results:
```kusto
['otel-demo-traces']
| where ['attributes.custom']['http.protocol'] == 'HTTP/1.1'
```
[Run in playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7b%22apl%22%3a%22%5b%27otel-demo-traces%27%5d%5cn%7c%20where%20%5b%27attributes.custom%27%5d%5b%27http.protocol%27%5d%20%3d%3d%20%27HTTP%2f1.1%27%22%2c%22queryOptions%22%3a%7b%22quickRange%22%3a%2230d%22%7d%7d)
## Access properties of nested maps
To access the properties of nested maps, use dot notation, index notation, or a mix of the two. If you use index notation for an entity, enclose the entity name in quotation marks (`'` or `"`) and square brackets (`[]`). For example:
* `where map_field.property1.property2 == 14`
* `where ['map_field'].property1.property2 == 14`
* `where ['map_field']['property1']['property2'] == 14`
If an entity name has spaces (` `), dots (`.`), or dashes (`-`), you can only use index notation for that entity. You can use dot notation for the other entities. For example:
* `where ['map.field']['property.name1']['property.name2'] == 14`
* `where ['map.field'].property1.property2 == 14`
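For example, the following sketch mixes index notation with other operators to read a property from the `attributes.custom` map field of the otel-demo-traces dataset. The `app.product.id` property is a hypothetical example; replace it with a custom attribute that exists in your data:
```kusto
['otel-demo-traces']
// 'app.product.id' is a hypothetical custom attribute used for illustration
| extend product_id = ['attributes.custom']['app.product.id']
| where isnotnull(product_id)
| summarize spans = count() by ['service.name']
```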
For more information, see [Entity names](/apl/entities/entity-names#quote-identifiers).
# Null values
This page explains how APL represents missing values.
All scalar data types in APL have a special value that represents a missing value. This value is called the null value, or null.
## Null literals
The null value of a scalar type D is represented in the query language by the null literal D(null). The following query returns a single row full of null values:
```kusto
print bool(null), datetime(null), dynamic(null), int(null), long(null), real(null), double(null), time(null)
```
## Predicates on null values
The scalar function [isnull()](/apl/scalar-functions/string-functions#isnull\(\)) can be used to determine if a scalar value is the null value. The corresponding function [isnotnull()](/apl/scalar-functions/string-functions#isnotnull\(\)) can be used to determine if a scalar value isn’t the null value.
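For example, a minimal sketch that uses these predicates on the sample-http-logs dataset:
```kusto
['sample-http-logs']
| where isnotnull(req_duration_ms)
| extend status_missing = isnull(status)
```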
## Equality and inequality of null values
* Equality (`==`): Applying the equality operator to two null values yields `bool(null)`. Applying the equality operator to a null value and a non-null value yields `bool(false)`.
* Inequality (`!=`): Applying the inequality operator to two null values yields `bool(null)`. Applying the inequality operator to a null value and a non-null value yields `bool(true)`.
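A minimal sketch of these rules using `print`:
```kusto
// null == null yields bool(null); null compared with a non-null value yields false for == and true for !=
print null_eq_null = bool(null) == bool(null), null_eq_value = int(null) == 5, null_neq_value = int(null) != 5
```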
# Scalar data types
This page explains the data types in APL.
Axiom Processing Language supplies a set of system data types that define all the types of data that can be used with APL.
The following table lists the data types supported by APL, alongside additional aliases you can use to refer to them.
| **Type** | **Additional name(s)** | **gettype()** |
| ------------------------------------- | ----------------------------- | ------------------------------------------------------------ |
| [bool()](#the-bool-data-type) | **boolean** | **int8** |
| [datetime()](#the-datetime-data-type) | **date** | **datetime** |
| [dynamic()](#the-dynamic-data-type)   |                               | **array**, **dictionary**, or any of the other values        |
| [int()](#the-int-data-type) | **int** has an alias **long** | **int** |
| [long()](#the-long-data-type) | | **long** |
| [real()](#the-real-data-type) | **double** | **real** |
| [string()](#the-string-data-type) | | **string** |
| [timespan()](#the-timespan-data-type) | **time** | **timespan** |
## The bool data type
The bool (boolean) data type can have one of two states: `true` or `false` (internally encoded as 1 and 0, respectively), as well as the null value.
### bool literals
The bool data type has the following literals:
* true and bool(true): Representing trueness
* false and bool(false): Representing falsehood
* null and bool(null): Representing the null value
### bool operators
The `bool` data type supports the following operators: equality (`==`), inequality (`!=`), logical-and (`and`), and logical-or (`or`).
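For example, a minimal sketch combining these operators with `print`:
```kusto
print is_equal = (1 == 1), both = (1 == 1) and (2 != 3), either = (1 == 2) or (3 == 3)
```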
## The datetime data type
The datetime (date) data type represents an instant in time, typically expressed as a date and time of day. Values range from 00:00:00 (midnight), January 1, 0001 Anno Domini (Common Era) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) in the Gregorian calendar.
### datetime literals
Literals of type **datetime** have the syntax **datetime** (`value`), where a number of formats are supported for value, as indicated by the following table:
| **Example** | **Value** |
| ------------------------------------------------------------ | -------------------------------------------------------------- |
| **datetime(2019-11-30 23:59:59.9)** **datetime(2015-12-31)** | Times are always in UTC. Omitting the date gives a time today. |
| **datetime(null)** | Check out our [null values](/apl/data-types/null-values) |
| **now()** | The current time. |
| **now(-timespan)** | now()-timespan |
| **ago(timespan)** | now()-timespan |
**now()** and **ago()** indicate a `datetime` value compared with the moment in time when APL started to execute the query.
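For example, a sketch that filters the sample-http-logs dataset to events from the last hour using `ago()`:
```kusto
['sample-http-logs']
| where _time >= ago(1h)
| summarize count() by bin(_time, 5m)
```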
### Supported formats
We support the **ISO 8601** format, which is the standard format for representing dates and times in the Gregorian calendar.
### [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html)
| **Format** | **Example** |
| ------------------- | --------------------------- |
| %Y-%m-%dT%H:%M:%s%z | 2016-06-26T08:20:03.123456Z |
| %Y-%m-%dT%H:%M:%s | 2016-06-26T08:20:03.123456 |
| %Y-%m-%dT%H:%M | 2016-06-26T08:20 |
| %Y-%m-%d %H:%M:%s%z | 2016-10-06 15:55:55.123456Z |
| %Y-%m-%d %H:%M:%s | 2016-10-06 15:55:55 |
| %Y-%m-%d %H:%M | 2016-10-06 15:55 |
| %Y-%m-%d | 2014-11-08 |
## The dynamic data type
The **dynamic** scalar data type is special in that it can take on any value of other scalar data types from the list below, as well as arrays and property bags. Specifically, a **dynamic** value can be:
* null
* A value of any of the primitive scalar data types: **bool**, **datetime**, **int**, **long**, **real**, **string**, and **timespan**.
* An array of **dynamic** values, holding zero or more values with zero-based indexing.
* A property bag, holding zero or more key-value pairs.
### Dynamic literals
A literal of type dynamic looks like this:
dynamic (`Value`)
Value can be:
* null, in which case the literal represents the null dynamic value: **dynamic(null)**.
* Another scalar data type literal, in which case the literal represents the **dynamic** literal of the "inner" type. For example, **dynamic(6)** is a dynamic value holding the value 6 of the long scalar data type.
* An array of dynamic or other literals: \[`ListOfValues`]. For example, dynamic(\[3, 4, "bye"]) is a dynamic array of three elements, two **long** values and one **string** value.
* A property bag: \{`Name`=`Value ...`}. For example, `dynamic(\{"a":1, "b":\{"a":2\}\})` is a property bag with two slots, a, and b, with the second slot being another property bag.
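For example, a sketch that builds dynamic literals with `print` and reads values back with dot notation and array indexing:
```kusto
print bag = dynamic({"a": 1, "b": {"a": 2}}), arr = dynamic([3, 4, "bye"])
| extend nested_value = bag.b.a, first_element = arr[0]
```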
## The int data type
The **int** data type represents a signed, 64-bit wide, integer.
The special form **int(null)** represents the [null value.](/apl/data-types/null-values)
**int** has an alias **[long](/apl/data-types/scalar-data-types#the-long-data-type)**
## The long data type
The **long** data type represents a signed, 64-bit wide, integer.
### long literals
Literals of the long data type can be specified in the following syntax:
long(`Value`)
Where Value can take the following forms:
* One or more digits, in which case the literal value is the decimal representation of these digits. For example, **long(11)** is the number eleven of type long.
* A minus (`-`) sign followed by one or more digits. For example, **long(-3)** is the number minus three of type **long**.
* null, in which case this is the [null value](/apl/data-types/null-values) of the **long** data type. Thus, the null value of type **long** is **long(null)**.
## The real data type
The **real** data type represents a 64-bit wide, double-precision, floating-point number.
## The string data type
The **string** data type represents a sequence of zero or more [Unicode](https://home.unicode.org/) characters.
### String literals
There are several ways to encode literals of the **string** data type in a query text:
* Enclose the string in double-quotes (`"`): "This is a string literal. Single quote characters (') don’t require escaping. Double quote characters (") are escaped by a backslash (\\)"
* Enclose the string in single-quotes (`'`): Another string literal. Single quote characters (') require escaping by a backslash (\\). Double quote characters (") do not require escaping.
In the two representations above, the backslash (`\`) character indicates escaping. The backslash is used to escape the enclosing quote characters, tab characters (`\t`), newline characters (`\n`), and itself (`\\`).
### Raw string literals
Raw string literals are also supported. In this form, the backslash character (`\`) stands for itself, and does not denote an escape sequence.
* Enclose in double-quotes (`"`): `@"This is a raw string literal"`
* Enclose in single-quotes (`'`): `@'This is a raw string literal'`
Raw strings are particularly useful for regexes where you can use `@"^[\d]+$"` instead of `"^[\\d]+$"`.
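For example, a sketch that contrasts escaped string literals with a raw string literal:
```kusto
print escaped = "Line one\nLine two", raw_regex = @"^[\d]+$", single_quoted = 'He said "hi"'
```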
## The timespan data type
The **timespan** `(time)` data type represents a time interval.
## timespan literals
Literals of type **timespan** have the syntax **timespan(value)**, where a number of formats are supported for value, as indicated by the following table:
| **Value** | **length of time** |
| ----------------- | ------------------ |
| **2d** | 2 days |
| **1.5h**          | 1.5 hours          |
| **30m** | 30 minutes |
| **10s** | 10 seconds |
| **timespan(15s)** | 15 seconds |
| **0.1s** | 0.1 second |
| **timespan(2d)** | 2 days |
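For example, a sketch that uses a timespan literal to filter recent events and bucket them by time:
```kusto
['sample-http-logs']
| where _time > now() - 2d
| summarize count() by bin(_time, 30m)
```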
## Type conversions
APL provides a set of functions to convert values between different scalar data types. These conversion functions allow you to convert a value from one type to another.
Some of the commonly used conversion functions include:
* `tobool()`: Converts input to boolean representation.
* `todatetime()`: Converts input to datetime scalar.
* `todouble()` or `toreal()`: Converts input to a value of type real.
* `tostring()`: Converts input to a string representation.
* `totimespan()`: Converts input to timespan scalar.
* `tolong()`: Converts input to long (signed 64-bit) number representation.
* `toint()`: Converts input to an integer value (signed 64-bit) number representation.
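For example, a sketch that applies a few of these conversions to fields of the sample-http-logs dataset (`status` is a string field and `req_duration_ms` is numeric):
```kusto
['sample-http-logs']
| extend status_code = toint(status), duration_text = tostring(req_duration_ms), duration_s = todouble(req_duration_ms) / 1000
```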
For a complete list of conversion functions and their detailed descriptions and examples, refer to the [Conversion functions](/apl/scalar-functions/conversion-functions) documentation.
# Entity names
This page explains how to use entity names in your APL query.
APL entities (datasets, tables, columns, and operators) are named. For example, two fields or columns in the same dataset can have the same name if the casing is different, and a table and a dataset may have the same name because they aren’t in the same scope.
## Columns
* Column names are case-sensitive for resolving purposes and they have a specific position in the dataset’s collection of columns.
* Column names are unique within a dataset and table.
* In queries, columns are generally referenced by name only. They can only appear in expressions, and the query operator under which the expression appears determines the table or tabular data stream.
## Identifier naming rules
Axiom uses identifiers to name various entities. Valid identifier names follow these rules:
* Between 1 and 1024 characters long.
* Allowed characters:
* Alphanumeric characters (letters and digits)
* Underscore (`_`)
* Space (` `)
* Dot (`.`)
* Dash (`-`)
Identifier names are case-sensitive.
## Quote identifiers
Quote an identifier in your APL query if any of the following is true:
* The identifier name contains at least one of the following special characters:
* Space (` `)
* Dot (`.`)
* Dash (`-`)
* The identifier name is identical to one of the reserved keywords of the APL query language. For example, `project` or `where`.
If any of the above is true, you must quote the identifier by putting it in quotation marks (`'` or `"`) and square brackets (`[]`). For example, `['my-field']`.
If none of the above is true, you don’t need to quote the identifier in your APL query. For example, `myfield`. In this case, quoting the identifier name is optional.
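For example, a sketch that combines quoted and unquoted identifiers in the same query:
```kusto
['sample-http-logs']
| where ['geo.country'] == 'United States'
| project method, status, ['geo.city']
```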
# Migrate from SQL to APL
This guide helps you migrate from SQL to APL by explaining key differences and providing query examples.
## Introduction
As data grows exponentially, organizations are continuously seeking more efficient and powerful tools to manage and analyze their data. The Query tab, which utilizes the Axiom Processing Language (APL), is one such service that offers fast, scalable, and interactive data exploration capabilities. If you are an SQL user looking to migrate to APL, this guide will provide a gentle introduction to help you make the transition smoothly.
## Introduction to Axiom Processing Language (APL)
Axiom Processing Language (APL) is the language used by the Query tab, a fast and highly scalable data exploration service. APL is optimized for real-time and historical data analytics, making it a suitable choice for various data analysis tasks.
**Tabular operators**: In APL, there are several tabular operators that help you manipulate and filter data, similar to SQL’s SELECT, FROM, WHERE, GROUP BY, and ORDER BY clauses. Some of the commonly used tabular operators are:
* `extend`: Adds new columns to the result set.
* `project`: Selects specific columns from the result set.
* `where`: Filters rows based on a condition.
* `summarize`: Groups and aggregates data similar to the GROUP BY clause in SQL.
* `sort`: Sorts the result set based on one or more columns, similar to ORDER BY in SQL.
## Key differences between SQL and APL
While SQL and APL are query languages, there are some key differences to consider:
* APL is designed for querying large volumes of structured, semi-structured, and unstructured data.
* APL is a pipe-based language, meaning you can chain multiple operations using the pipe operator (`|`) to create a data transformation flow.
* APL doesn’t use SELECT and FROM clauses like SQL. Instead, it uses tabular operators such as `summarize`, `extend`, `where`, and `project`.
* APL is case-sensitive, whereas SQL isn’t.
## Benefits of migrating from SQL to APL
* **Time Series Analysis:** APL is particularly strong when it comes to analyzing time-series data (logs, telemetry data, etc.). It has a rich set of operators designed specifically for such scenarios, making it much easier to handle time-based analysis.
* **Pipelining:** APL uses a pipelining model, much like the UNIX command line. You can chain commands together using the pipe (`|`) symbol, with each command operating on the results of the previous command. This makes it very easy to write complex queries.
* **Easy to Learn:** APL is designed to be simple and easy to learn, especially for those already familiar with SQL. It does not require any knowledge of database schemas or the need to specify joins.
* **Scalability:** APL is built to query large volumes of event data efficiently, so queries stay responsive as your datasets grow.
* **Flexibility:** APL works across structured, semi-structured, and unstructured data, so you can analyze different types of data with the same query language.
* **Features:** APL includes capabilities that go beyond standard SQL, such as real-time analytics and built-in time-based analysis.
## Basic APL Syntax
A basic APL query follows this structure:
```kusto
['dataset-name']
| where <filter condition>
| summarize <aggregation> by <grouping field>
| sort by <field> desc
```
## Query Examples
Let’s see some examples of how to convert SQL queries to APL.
## SELECT with a simple filter
**SQL:**
```sql
SELECT *
FROM [Sample-http-logs]
WHERE method = 'GET';
```
**APL:**
```kusto
['sample-http-logs']
| where method == 'GET'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## COUNT with GROUP BY
**SQL:**
```sql
SELECT method, COUNT(*)
FROM [Sample-http-logs]
GROUP BY method;
```
**APL:**
```kusto
['sample-http-logs']
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Top N results
**SQL:**
```sql
SELECT TOP 10 Status, Method
FROM [Sample-http-logs]
ORDER BY Method DESC;
```
**APL:**
```kusto
['sample-http-logs']
| top 10 by method desc
| project status, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|top%2010%20by%20method%20desc%20\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Simple filtering and projection
**SQL:**
```sql
SELECT method, status, geo.country
FROM [Sample-http-logs]
WHERE resp_header_size_bytes >= 18;
```
**APL:**
```kusto
['sample-http-logs']
| where resp_header_size_bytes >= 18
| project method, status, ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|where%20resp_header_size_bytes%20%3E=18%20\n|%20project%20method,%20status,%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## COUNT with a HAVING clause
**SQL:**
```sql
SELECT geo.country
FROM [Sample-http-logs]
GROUP BY geo.country
HAVING COUNT(*) > 100;
```
**APL:**
```kusto
['sample-http-logs']
| summarize count() by ['geo.country']
| where count_ > 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20\[%27geo.country%27]\n|%20where%20count_%20%3E%20100%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Multiple Aggregations
**SQL:**
```sql
SELECT geo.country,
COUNT(*) AS TotalRequests,
AVG(req_duration_ms) AS AverageRequest,
MIN(req_duration_ms) AS MinRequest,
MAX(req_duration_ms) AS MaxRequest
FROM [Sample-http-logs]
GROUP BY geo.country;
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalRequests = count(),
AverageRequest = avg(req_duration_ms),
MinRequest = min(req_duration_ms),
MaxRequest = max(req_duration_ms) by ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20totalRequests%20=%20count\(\),%20Averagerequest%20=%20avg\(req_duration_ms\),%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%20by%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Sum of a column
**SQL:**
```sql
SELECT SUM(resp_body_size_bytes) AS TotalBytes
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalBytes = sum(resp_body_size_bytes)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalBytes%20=%20sum\(resp_body_size_bytes\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Average of a column
**SQL:**
```sql
SELECT AVG(req_duration_ms) AS AverageRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize AverageRequest = avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20AverageRequest%20=%20avg\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Minimum and Maximum Values of a column
**SQL:**
```sql
SELECT MIN(req_duration_ms) AS MinRequest, MAX(req_duration_ms) AS MaxRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize MinRequest = min(req_duration_ms), MaxRequest = max(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Count distinct values
**SQL:**
```sql
SELECT COUNT(DISTINCT method) AS UniqueMethods
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize UniqueMethods = dcount(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|summarize%20UniqueMethods%20=%20dcount\(method\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Standard deviation of a data
**SQL:**
```sql
SELECT STDDEV(req_duration_ms) AS StdDevRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize StdDevRequest = stdev(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20stdDEVRequest%20=%20stdev\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Variance of a data
**SQL:**
```sql
SELECT VAR(req_duration_ms) AS VarRequest
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize VarRequest = variance(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20VarRequest%20=%20variance\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Multiple aggregation functions
**SQL:**
```sql
SELECT COUNT(*) AS TotalOrders, SUM(req_duration_ms) AS TotalDuration, AVG(req_duration_ms) AS AverageDuration
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalOrders = count(), TotalDuration = sum(req_duration_ms), AverageDuration = avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalOrders%20=%20count\(\),%20TotalDuration%20=%20sum\(req_duration_ms\),%20AverageDuration%20=%20avg\(req_duration_ms\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Aggregation with GROUP BY and ORDER BY
**SQL:**
```sql
SELECT status, COUNT(*) AS TotalStatus, SUM(resp_header_size_bytes) AS TotalRequest
FROM [Sample-http-logs]
GROUP BY status
ORDER BY TotalRequest DESC;
```
**APL:**
```kusto
['sample-http-logs']
| summarize TotalStatus = count(), TotalRequest = sum(resp_header_size_bytes) by status
| order by TotalRequest desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalStatus%20=%20count\(\),%20TotalRequest%20=%20sum\(resp_header_size_bytes\)%20by%20status\n|%20order%20by%20TotalRequest%20desc%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Count with a condition
**SQL:**
```sql
SELECT COUNT(*) AS HighContentStatus
FROM [Sample-http-logs]
WHERE resp_header_size_bytes > 1;
```
**APL:**
```kusto
['sample-http-logs']
| where resp_header_size_bytes > 1
| summarize HighContentStatus = count()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20resp_header_size_bytes%20%3E%201\n|%20summarize%20HighContentStatus%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Aggregation with HAVING
**SQL:**
```sql
SELECT Status
FROM [Sample-http-logs]
GROUP BY Status
HAVING COUNT(*) > 10;
```
**APL:**
```kusto
['sample-http-logs']
| summarize OrderCount = count() by status
| where OrderCount > 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20OrderCount%20=%20count\(\)%20by%20status\n|%20where%20OrderCount%20%3E%2010%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Count occurrences of a value in a field
**SQL:**
```sql
SELECT content_type, COUNT(*) AS RequestCount
FROM [Sample-http-logs]
WHERE content_type = 'text/csv';
```
**APL:**
```kusto
['sample-http-logs']
| where content_type == 'text/csv'
| summarize RequestCount = count()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20==%20%27text/csv%27%20\n|%20summarize%20RequestCount%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## String Functions
## Length of a string
**SQL:**
```sql
SELECT LEN(Status) AS NameLength
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend NameLength = strlen(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20NameLength%20=%20strlen\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Concatenation
**SQL:**
```sql
SELECT CONCAT(content_type, ' ', method) AS FullLength
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend FullLength = strcat(content_type, ' ', method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20FullLength%20=%20strcat\(content_type,%20%27%20%27,%20method\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Substring
**SQL:**
```sql
SELECT SUBSTRING(content_type, 1, 10) AS ShortDescription
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend ShortDescription = substring(content_type, 0, 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ShortDescription%20=%20substring\(content_type,%200,%2010\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Left and Right
**SQL:**
```sql
SELECT LEFT(content_type, 3) AS LeftTitle, RIGHT(content_type, 3) AS RightTitle
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend LeftTitle = substring(content_type, 0, 3), RightTitle = substring(content_type, strlen(content_type) - 3, 3)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20LeftTitle%20=%20substring\(content_type,%200,%203\),%20RightTitle%20=%20substring\(content_type,%20strlen\(content_type\)%20-%203,%203\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Replace
**SQL:**
```sql
SELECT REPLACE(status, 'old', 'new') AS UpdatedStatus
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend UpdatedStatus = replace('old', 'new', status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20UpdatedStatus%20=%20replace\(%27old%27,%20%27new%27,%20status\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}})
## Upper and Lower
**SQL:**
```sql
SELECT UPPER(content_type) AS UpperContentType, LOWER(status) AS LowerStatus
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| project UpperContentType = toupper(content_type), LowerStatus = tolower(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20upperFirstName%20=%20toupper\(content_type\),%20LowerLastNmae%20=%20tolower\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## LTrim and RTrim
**SQL:**
```sql
SELECT LTRIM(content_type) AS LeftTrimmedFirstName, RTRIM(content_type) AS RightTrimmedLastName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend LeftTrimmedFirstName = trim_start(' ', content_type), RightTrimmedLastName = trim_end(' ', content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20LeftTrimmedFirstName%20=%20trim_start\(%27%27,%20content_type\),%20RightTrimmedLastName%20=%20trim_end\(%27%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Trim
**SQL:**
```sql
SELECT TRIM(content_type) AS TrimmedFirstName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend TrimmedFirstName = trim(' ', content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20TrimmedFirstName%20=%20trim\(%27%20%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Reverse
**SQL:**
```sql
SELECT REVERSE(Method) AS ReversedFirstName
FROM [Sample-http-logs];
```
**APL:**
```kusto
['sample-http-logs']
| extend ReversedFirstName = reverse(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20ReservedFirstnName%20=%20reverse\(method\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Case-insensitive search
**SQL:**
```sql
SELECT Status, Method
FROM [Sample-http-logs]
WHERE LOWER(Method) LIKE '%get%';
```
**APL:**
```kusto
['sample-http-logs']
| where tolower(method) contains 'get'
| project status, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20tolower\(method\)%20contains%20%27GET%27\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Take the First Step Today: Dive into APL
The journey from SQL to APL might seem daunting at first, but with the right approach, it can become an empowering transition. It is about expanding your data query capabilities to leverage the advanced, versatile, and fast querying infrastructure that APL provides. In the end, the goal is to enable you to draw more value from your data, make faster decisions, and ultimately propel your business forward.
Try converting some of your existing SQL queries to APL and observe the performance difference. Explore the Axiom Processing Language and start experimenting with its unique features.
**Happy querying!**
# Migrate from Sumo Logic Query Language to APL
This guide dives into why APL could be a superior choice for your data needs, and the differences between Sumo Logic and APL.
## Introduction
In the sphere of data analytics and log management, being able to query data efficiently and effectively is of paramount importance.
This guide dives into why APL could be a superior choice for your data needs, the differences between Sumo Logic and APL, and the potential benefits you could reap from migrating from Sumo Logic to APL. Let’s explore the compelling case for APL as a robust, powerful tool for handling your complex data querying requirements.
APL is powerful and flexible and uses a pipe (`|`) operator for chaining commands, and it provides a richer set of functions and operators for more complex queries.
## Benefits of Migrating from Sumo Logic to APL
* **Scalability and Performance:** APL was built with scalability in mind. It handles very large volumes of data more efficiently and provides quicker query execution compared to Sumo Logic, making it a suitable choice for organizations with extensive data requirements. APL is designed for high-speed data ingestion, real-time analytics, and providing insights across structured, semi-structured data. It’s also optimized for time-series data analysis, making it highly efficient for log and telemetry data.
* **Advanced Analytics Capabilities:** With APL’s support for aggregation and conversion functions and more advanced statistical visualization, organizations can derive more sophisticated insights from their data.
## Query Examples
Let’s see some examples of how to convert Sumo Logic queries to APL.
## Parse and Extract Operators
Extract `from` and `to` fields. For example, if a raw event contains `From: Jane To: John`, then `from=Jane` and `to=John`.
**Sumo Logic:**
```bash
* | parse "From: * To: *" as (from, to)
```
**APL:**
```kusto
['sample-http-logs']
| extend ['from'] = extract("From: (.*?) To: (.*)", 1, method), ['to'] = extract("From: (.*?) To: (.*)", 2, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20\(method\)%20==%20extract\(%22From:%20\(.*?\)%20To:%20\(.*\)%22,%201,%20method\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Source IP with Regex
In this section, we will utilize a regular expression to identify the four octets of an IP address. This will help us efficiently extract the source IP addresses from the data.
**Sumo Logic:**
```bash
*| parse regex "(\\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
```
**APL:**
```kusto
['sample-http-logs']
| extend ip = extract("(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})", 1, "23.45.67.90")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ip%20=%20extract\(%22\(\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\)%22,%201,%20%2223.45.67.90%22\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Visited URLs
This section focuses on identifying all URL addresses visited and extracting them to populate the "url" field. This method provides an organized way to track user activity using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| where method == "GET"
| project url = extract(@"(\w+)", 1, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%5Cn%7C%20project%20url%20%3D%20extract\(%40%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Extract Data from Source Category Traffic
This section aims to identify and analyze traffic originating from the Source Category. We will extract critical information including the source addresses, the sizes of messages transmitted, and the URLs visited, providing valuable insights into the nature of the traffic using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| extend src_IP = extract("^(\\S+)", 0, uri)
| extend size = extract("^(\\S+)", 1, status)
| extend url = extract("^(\\S+)", 1, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%200%2C%20uri\)%5Cn%7C%20extend%20size%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20status\)%5Cn%7C%20extend%20url%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Calculate Bytes Transferred per Source IP
In this part, we will compute the total number of bytes transferred to each source IP address. This will allow us to gauge the data volume associated with each source using APL.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size
| count, sum(size) by src_IP
```
**APL:**
```kusto
['sample-http-logs']
| extend src_IP = extract("^(\\S+)", 1, uri)
| extend size = toint(extract("200", 0, status))
| summarize count(), sum(size) by src_IP
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20toint\(extract\(%22200%22,%200,%20status\)\)\n|%20summarize%20count\(\),%20sum\(size\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Compute Average HTTP Response Size
In this section, we will calculate the average size of all successful HTTP responses. This metric helps us to understand the typical data load associated with successful server responses.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse " 200 * " as size
| avg(size)
```
**APL:**
Get the average value from a string:
```kusto
['sample-http-logs']
| extend number = todouble(extract("\\d+(\\.\\d+)?", 0, status))
| summarize Average = avg(number)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20number%20=%20todouble\(status\)\n|%20summarize%20Average%20=%20avg\(number\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Data with Missing Size Field (NoDrop)
This section focuses on extracting key parameters like `src`, `size`, and `URL`, even when the `size` field may be absent from the log message.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_IP
| parse " 200 * " as size nodrop
| parse "GET * " as url
```
**APL:**
```kusto
['sample-http-logs']
| where content_type == "text/css"
| extend src_IP = extract("^(\\S+)", 1, ['id'])
| extend size = toint(extract("(\\w+)", 1, status))
| extend url = extract("GET", 0, method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20content_type%20%3D%3D%20%5C%22text%2Fcss%5C%22%20%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%5C%5CS%2B\)%5C%22%2C%201%2C%20%5B%27id%27%5D\)%20%7C%20extend%20size%20%3D%20toint\(extract\(%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20status\)\)%20%7C%20extend%20url%20%3D%20extract\(%5C%22GET%5C%22%2C%200%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Count URL Visits
This section is dedicated to identifying the frequency of visits to a specific URL. By counting these occurrences, we can gain insights into website popularity and user behavior.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "GET * " as url
| count by url
```
**APL:**
```kusto
['sample-http-logs']
| extend url = extract("^(\\S+)", 1, method)
| summarize Count = count() by url
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?qid=RsnK4jahgNC-rviz3s)
## Page Count by Source IP
In this section, we will identify the total number of pages associated with each source IP address. This analysis will allow us to understand the volume of content generated or hosted by each source.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* -" as src_ip
| count by src_ip
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract(".*", 0, ['id'])
| summarize Count = count() by src_ip
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20%20\[%27id%27]\)\n|%20summarize%20Count%20=%20count\(\)%20by%20src_ip%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Reorder Pages by Load Frequency
We aim to identify the total number of pages per source IP address in this section. Following this, the pages will be reordered based on the frequency of loads, which will provide insights into the most accessed content.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse "* " as src_ip
| parse "GET * " as url
| count by url
| sort by _count
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract(".*", 0, ['id'])
| extend url = extract("(GET)", 1, method)
| where isnotnull(url)
| summarize _count = count() by url, src_ip
| order by _count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20\[%27id%27]\)\n|%20extend%20url%20=%20extract\(%22\(GET\)%22,%201,%20method\)\n|%20where%20isnotnull\(url\)\n|%20summarize%20_count%20=%20count\(\)%20by%20url,%20src_ip\n|%20order%20by%20_count%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Identify the Top 10 Requested Pages
**Sumo Logic:**
```bash
* | parse "GET * " as url
| count by url
| top 10 url by _count
```
**APL:**
```kusto
['sample-http-logs']
| where method == "GET"
| top 10 by method desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22GET%22\n|%20top%2010%20by%20method%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Top 10 IPs by Bandwidth Usage
In this section, we aim to identify the top 10 source IP addresses based on their bandwidth consumption.
**Sumo Logic:**
```bash
_sourceCategory=apache
| parse " 200 * " as size
| parse "* -" as src_ip
| sum(size) as total_bytes by src_ip
| top 10 src_ip by total_bytes
```
**APL:**
```kusto
['sample-http-logs']
| extend size = req_duration_ms
| summarize total_bytes = sum(size) by ['id']
| top 10 by total_bytes desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20req_duration_ms\n|%20summarize%20total_bytes%20=%20sum\(size\)%20by%20\[%27id%27]\n|%20top%2010%20by%20total_bytes%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Top 6 IPs by Number of Hits
This section focuses on identifying the top six source IP addresses according to the number of hits they generate. This will provide insight into the most frequently accessed or active sources in the network.
**Sumo Logic**
```bash
_sourceCategory=apache
| parse "* -" as src_ip
| count by src_ip
| top 6 src_ip by _count
```
**APL:**
```kusto
['sample-http-logs']
| extend src_ip = extract("^(\\S+)", 1, user_agent)
| summarize _count = count() by src_ip
| top 6 by _count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20_count%20=%20count\(\)%20by%20user_agent\n|%20order%20by%20_count%20desc\n|%20limit%206%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Timeslice and Transpose
For the Source Category "apache", count by status\_code and timeslice of 1 hour.
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 1h
| count by _timeslice, status_code
```
**APL:**
```kusto
['sample-http-logs']
| extend status_code = extract("^(\\S+)", 1, method)
| where status_code == "POST"
| summarize count() by status_code, bin(_time, 1h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22POST%22\n|%20summarize%20count\(\)%20by%20method,%20bin\(_time,%201h\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Hourly Status Code Count for "Text" Source
In this section, we aim to count instances by `status_code`, grouped into one-hour timeslices, and then transpose `status_code` to column format. This will help us understand the frequency and timing of different status codes.
**Sumo Logic:**
```bash
_sourceCategory=text*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 1h
| count by _timeslice, status_code
| transpose row _timeslice column status_code
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend status_code = status
| summarize count() by bin(_time, 1h), content_type, status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%201h\),%20content_type,%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Status Code Count in 5 Time Buckets
In this example, we will perform a count by 'status\_code', sliced into five time buckets across the search results. This will help analyze the distribution and frequency of status codes over specific time intervals.
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| timeslice 5 buckets
| count by _timeslice, status_code
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend p = ("HTTP/1.1\" * * \""), tostring(is_tls)
| extend status_code = status
| summarize count() by bin(_time, 12m), status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p=\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\),%20tostring\(is_tls\)\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%2012m\),%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Grouped Status Code Count
In this example, we count messages by status code categories. We group all messages with status codes in the `200s`, `300s`, `400s`, and `500s` together, and we also group the method requests by the `GET`, `POST`, `PUT`, and `DELETE` attributes. This provides an overview of the response status distribution.
**Sumo Logic:**
```bash
_sourceCategory=Apache/Access
| timeslice 15m
| if (status_code matches "20*",1,0) as resp_200
| if (status_code matches "30*",1,0) as resp_300
| if (status_code matches "40*",1,0) as resp_400
| if (status_code matches "50*",1,0) as resp_500
| if (!(status_code matches "20*" or status_code matches "30*" or status_code matches "40*" or status_code matches "50*"),1,0) as resp_others
| count(*), sum(resp_200) as tot_200, sum(resp_300) as tot_300, sum(resp_400) as tot_400, sum(resp_500) as tot_500, sum(resp_others) as tot_others by _timeslice
```
**APL:**
```kusto
['sample-http-logs']
| extend MethodCategory = case(
method == "GET", "GET Requests",
method == "POST", "POST Requests",
method == "PUT", "PUT Requests",
method == "DELETE", "DELETE Requests",
"Other Methods")
| extend StatusCodeCategory = case(
status startswith "2", "Success",
status startswith "3", "Redirection",
status startswith "4", "Client Error",
status startswith "5", "Server Error",
"Unknown Status")
| extend ContentTypeCategory = case(
content_type == "text/csv", "CSV",
content_type == "application/json", "JSON",
content_type == "text/html", "HTML",
"Other Types")
| summarize Count=count() by bin_auto(_time), StatusCodeCategory, MethodCategory, ContentTypeCategory
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20MethodCategory%20=%20case\(\n%20%20%20method%20==%20%22GET%22,%20%22GET%20Requests%22,\n%20%20%20method%20==%20%22POST%22,%20%22POST%20Requests%22,\n%20%20%20method%20==%20%22PUT%22,%20%22PUT%20Requests%22,\n%20%20%20method%20==%20%22DELETE%22,%20%22DELETE%20Requests%22,\n%20%20%20%22Other%20Methods%22\)\n|%20extend%20StatusCodeCategory%20=%20case\(\n%20%20%20status%20startswith%20%222%22,%20%22Success%22,\n%20%20%20status%20startswith%20%223%22,%20%22Redirection%22,\n%20%20%20status%20startswith%20%224%22,%20%22Client%20Error%22,\n%20%20%20status%20startswith%20%225%22,%20%22Server%20Error%22,\n%20%20%20%22Unknown%20Status%22\)\n|%20extend%20ContentTypeCategory%20=%20case\(\n%20%20%20content_type%20==%20%22text/csv%22,%20%22CSV%22,\n%20%20%20content_type%20==%20%22application/json%22,%20%22JSON%22,\n%20%20%20content_type%20==%20%22text/html%22,%20%22HTML%22,\n%20%20%20%22Other%20Types%22\)\n|%20summarize%20Count=count\(\)%20by%20bin_auto\(_time\),%20StatusCodeCategory,%20MethodCategory,%20ContentTypeCategory%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Conditional Operators
For the Source Category "apache", find all messages with a client error status code (40\*):
**Sumo Logic:**
```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| where status_code matches "40*"
```
**APL:**
```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend p = ("HTTP/1.1\" * * \"")
| where status startswith "4"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p%20=%20\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\)\n|%20where%20status%20==%20%22200%22%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Browser-based Hit Count
In this query example, we aim to count the number of hits by browser. This analysis will provide insights into the different browsers used to access the source and their respective frequencies.
**Sumo Logic:**
```bash
_sourceCategory=Apache/Access
| extract "\"[A-Z]+ \S+ HTTP/[\d\.]+\" \S+ \S+ \S+ \"(?<agent>[^\"]+?)\""
| if (agent matches "*MSIE*",1,0) as ie
| if (agent matches "*Firefox*",1,0) as firefox
| if (agent matches "*Safari*",1,0) as safari
| if (agent matches "*Chrome*",1,0) as chrome
| sum(ie) as ie, sum(firefox) as firefox, sum(safari) as safari, sum(chrome) as chrome
```
**APL:**
```kusto
['sample-http-logs']
| extend ie = case(tolower(user_agent) contains "msie", 1, 0)
| extend firefox = case(tolower(user_agent) contains "firefox", 1, 0)
| extend safari = case(tolower(user_agent) contains "safari", 1, 0)
| extend chrome = case(tolower(user_agent) contains "chrome", 1, 0)
| summarize ie = sum(ie), firefox = sum(firefox), safari = sum(safari), chrome = sum(chrome)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ie%20=%20case\(tolower\(user_agent\)%20contains%20%22msie%22,%201,%200\)\n|%20extend%20firefox%20=%20case\(tolower\(user_agent\)%20contains%20%22firefox%22,%201,%200\)\n|%20extend%20safari%20=%20case\(tolower\(user_agent\)%20contains%20%22safari%22,%201,%200\)\n|%20extend%20chrome%20=%20case\(tolower\(user_agent\)%20contains%20%22chrome%22,%201,%200\)\n|%20summarize%20data%20=%20sum\(ie\),%20lima%20=%20sum\(firefox\),%20lo%20=%20sum\(safari\),%20ce%20=%20sum\(chrome\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Use the where Operator to Match Only Weekend Days
**Sumo Logic:**
```bash
* | parse "day=*:" as day_of_week
| where day_of_week in ("Saturday","Sunday")
```
**APL:**
```kusto
['sample-http-logs']
| extend day_of_week = dayofweek(_time)
| where day_of_week == 1 or day_of_week == 0
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20day_of_week%20=%20dayofweek\(_time\)\n|%20where%20day_of_week%20==%201%20or%20day_of_week%20==%200%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Extract Numeric Version Numbers
In this section, we will identify version numbers that match the numeric values 2, 3, or 6. We will use the `num` operator to convert these strings into numerical format, facilitating easier analysis and comparison.
**Sumo Logic:**
```bash
* | parse "Version=*." as number | num(number)
| where number in (2,3,6)
```
**APL:**
```kusto
['sample-http-logs']
| extend p = req_duration_ms
| extend number = toint(p)
| where number in (2,3,6)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20p=%20\(req_duration_ms\)\n|%20extend%20number=toint\(p\)\n|%20where%20number%20in%20\(2,3,6\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})
## Making the Leap: Transform Your Data Analytics with APL
As we've navigated through the process of migrating from Sumo Logic to APL, we hope you've found the insights valuable. The powerful capabilities of Axiom Processing Language are now within your reach, ready to empower your data analytics journey.
Ready to take the next step in your data analytics journey? Dive deeper into APL and discover how it can unlock even more potential in your data. Check out our APL [learning resources](/apl/guides/migrating-from-sql-to-apl) and [tutorials](/apl/tutorial) to become proficient in APL, and join our [community forums](http://axiom.co/discord) to engage with other APL users. Together, we can redefine what’s possible in data analytics. Remember, the migration to APL is not just a change, it’s an upgrade. Embrace the change, because better data analytics await you.
Begin your APL journey today!
# Migrate from Splunk SPL to APL
This step-by-step guide provides a high-level mapping from Splunk SPL to APL.
Splunk and Axiom are powerful tools for log analysis and data exploration. The data explorer interface uses Axiom Processing Language (APL). There are some differences between the query languages for Splunk and Axiom. When transitioning from Splunk to APL, you will need to understand how to convert your Splunk SPL queries into APL.
**This guide provides a high-level mapping from Splunk to APL.**
## Basic Searching
Splunk uses a `search` command for basic searching, while in APL, simply specify the dataset name followed by a filter.
**Splunk:**
```bash
search index="myIndex" error
```
**APL:**
```kusto
['myDataset']
| where FieldName contains "error"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20contains%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Filtering
In Splunk, perform filtering using the `search` command, usually specifying field names and their desired values. In APL, perform filtering by using the `where` operator.
**Splunk:**
```bash
search index="myIndex" error
| stats count
```
**APL:**
```kusto
['myDataset']
| where fieldName contains "error"
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20contains%20%27text%27\n|%20count\n|%20limit%2010%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Aggregation
In Splunk, the `stats` command is used for aggregation. In APL, perform aggregation using the `summarize` operator.
**Splunk:**
```bash
search index="myIndex"
| stats count by status
```
**APL:**
```kusto
['myDataset']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Time Frames
In Splunk, select a time range for a search in the time picker on the search page. In APL, filter by a time range using the `where` operator and the `_time` field of the dataset.
**Splunk:**
```bash
search index="myIndex" earliest=-1d@d latest=now
```
**APL:**
```kusto
['myDataset']
| where _time >= ago(1d) and _time <= now()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20_time%20%3E=%20ago\(1d\)%20and%20_time%20%3C=%20now\(\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Sorting
In Splunk, the `sort` command is used to order the results of a search. In APL, perform sorting by using the `sort by` operator.
**Splunk:**
```bash
search index="myIndex"
| sort - content_type
```
**APL:**
```kusto
['myDataset']
| sort by content_type desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20sort%20by%20content_type%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Selecting Fields
In Splunk, use the `fields` command to specify which fields to include or exclude in the search results. In APL, use the `project`, `project-away`, or `project-keep` operators to specify which fields to include or exclude in the query results.
**Splunk:**
```bash
index=main sourcetype=mySourceType
| fields status, responseTime
```
**APL:**
```kusto
['myDataset']
| project status, responseTime
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20newStatus%20=%20status%20\n|%20project-away%20status%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Renaming Fields
In Splunk, rename fields using the `rename` command, while in APL you rename fields using the `extend` and `project` operators. Here is the general syntax:
**Splunk:**
```bash
index="myIndex" sourcetype="mySourceType"
| rename oldFieldName AS newFieldName
```
**APL:**
```kusto
['myDataset']
| where method == "GET"
| extend new_field_name = content_type
| project-away content_type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27\n|%20extend%20new_field_name%20=%20content_type\n|%20project-away%20content_type%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Calculated Fields
In Splunk, use the `eval` command to create calculated fields based on the values of other fields, while in APL use the `extend` operator to create calculated fields based on the values of other fields.
**Splunk**
```bash
search index="myIndex"
| eval newField=field1+field2
```
**APL:**
```kusto
['myDataset']
| extend newField = field1 + field2
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20calculatedFields%20=%20req_duration_ms%20%2b%20resp_body_size_bytes%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})
## Structure and Concepts
The following table compares concepts and data structures between Splunk and APL logs.
| Concept | Splunk | APL | Comment |
| ------------------------- | -------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| data caches | buckets | caching and retention policies | Controls the period and caching level for the data. This setting directly affects the performance of queries. |
| logical partition of data | index | dataset | Allows logical separation of the data. |
| structured event metadata | N/A | dataset | Splunk doesn’t expose the concept of metadata to the search language. APL logs have the concept of a dataset, which has fields and columns. Each event instance is mapped to a row. |
| data record | event | row | Terminology change only. |
| types | datatype | datatype | APL data types are more explicit because they are set on the fields. Both have the ability to work dynamically with data types and roughly equivalent sets of data types. |
| query and search | search | query | Concepts are essentially the same between APL and Splunk. |
## Functions
The following table specifies functions in APL that are equivalent to Splunk Functions.
| Splunk | APL |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| strcat | strcat() |
| split | split() |
| if | iff() |
| tonumber | todouble(), tolong(), toint() |
| upper, lower | toupper(), tolower() |
| replace | replace\_string() or replace\_regex() |
| substr | substring() |
| tolower | tolower() |
| toupper | toupper() |
| match | matches regex |
| regex | matches regex **(In Splunk, `regex` is an operator. In APL, it’s a relational operator.)** |
| searchmatch | == **(In Splunk, `searchmatch` allows searching the exact string.)** |
| random | rand(), rand(n) **(Splunk’s function returns a number between 0 and 2^31-1. APL returns a number between 0.0 and 1.0, or if a parameter is provided, between 0 and n-1.)** |
| now | now() |
In Splunk, you invoke these functions with the `eval` operator. In APL, use them as part of the `extend` or `project` operators, or with the `where` operator.
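For example, here’s a minimal sketch, assuming the `sample-http-logs` dataset, that applies `tolower()` inside `extend` and reuses the result in a `where` filter:
```kusto
['sample-http-logs']
// Apply a scalar function inside extend
| extend method_lower = tolower(method)
// Reuse the computed field in a filter
| where method_lower == 'get'
| project _time, method_lower, status
```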
## Filter
APL log queries start from a tabular result set in which a filter is applied. In Splunk, filtering is the default operation on the current index. You may also use the where operator in Splunk, but we don’t recommend it.
| Product | Operator | Example |
| :------ | :--------- | :------------------------------------------------------------------------- |
| Splunk | **search** | Sample.Logs="330009.2" method="GET" \_indextime>-24h |
| APL | **where** | \['sample-http-logs'] \| where method == "GET" and \_time > ago(24h) |
## Get n events or rows for inspection
APL log queries also support `take` as an alias to `limit`. In Splunk, if the results are ordered, `head` returns the first n results. In APL, `limit` isn’t ordered, but it returns the first n rows that are found.
| Product | Operator | Example |
| ------- | -------- | ---------------------------------------- |
| Splunk | head | Sample.Logs=330009.2 \| head 100 |
| APL | limit | \['sample-http-logs'] \| limit 100 |
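As a quick illustration, the following sketch assumes the `sample-http-logs` dataset and uses `take` as an alias for `limit`:
```kusto
['sample-http-logs']
// take behaves like limit: returns the first 100 rows found, in no particular order
| take 100
```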
## Get the first *n* events or rows ordered by a field or column
For the bottom results, in Splunk, use `tail`. In APL, specify ordering direction by using `asc`.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------------ |
| Splunk | head | Sample.Logs="33009.2" \| sort Event.Sequence \| head 20 |
| APL | top | \['sample-http-logs'] \| top 20 by method |
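For example, here’s a minimal sketch, assuming the `sample-http-logs` dataset, that returns the 20 rows with the smallest request durations by ordering with `asc`:
```kusto
['sample-http-logs']
// asc returns the bottom results instead of the top results
| top 20 by req_duration_ms asc
```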
## Extend the result set with new fields or columns
Splunk has an `eval` operator, but it’s not fully comparable to the `extend` operator in APL. Both the `eval` operator in Splunk and the `extend` operator in APL support only scalar functions and arithmetic operators.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------------------------------ |
| Splunk | eval | Sample.Logs=330009.2 \| eval state= if(Data.Exception = "0", "success", "error") |
| APL | extend | \['sample-http-logs'] \| extend Grade = iff(req\_duration\_ms >= 80, "A", "B") |
## Rename
APL uses the `project` operator to rename a field. In the `project` operator, a query can take advantage of any indexes that are prebuilt for a field. Splunk has a `rename` operator that does the same.
| Product | Operator | Example |
| :------ | :------- | :-------------------------------------------------------------- |
| Splunk | rename | Sample.Logs=330009.2 \| rename Date.Exception as exception |
| APL | project | \['sample-http-logs'] \| project updated\_status = status |
## Format results and projection
Splunk uses the `table` command to select which columns to include in the results. APL has a `project` operator that does the same and [more](/apl/tabular-operators/project-operator).
| Product | Operator | Example |
| :------ | :------- | :--------------------------------------------------- |
| Splunk | table | Event.Rule=330009.2 \| table rule, state |
| APL | project | \['sample-http-logs'] \| project status, method |
Splunk uses the `fields -` command to select which columns to exclude from the results. APL has a `project-away` operator that does the same.
| Product | Operator | Example |
| :------ | :--------------- | :-------------------------------------------------------------- |
| Splunk | **fields -** | Sample.Logs=330009.2 \| fields - quota, highest\_seller |
| APL | **project-away** | \['sample-http-logs'] \| project-away method, status |
## Aggregation
See the [list of summarize aggregations functions](/apl/aggregation-function/statistical-functions) that are available.
| Splunk operator | Splunk example | APL operator | APL example |
| :-------------- | :------------------------------------------------------------- | :----------- | :----------------------------------------------------------------------- |
| **stats** | search (Rule=120502.\*) \| stats count by OSEnv, Audience | summarize | \['sample-http-logs'] \| summarize count() by content\_type, status |
## Sort
In Splunk, to sort in ascending order, you must use the `reverse` operator. APL also supports defining where to put nulls, either at the beginning or at the end.
| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------- |
| Splunk | sort | Sample.logs=120103 \| sort Data.Hresult \| reverse |
| APL | order by | \['sample-http-logs'] \| order by status desc |
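The following sketch assumes Kusto-style `nulls first` and `nulls last` keywords on the `order by` operator and uses the `sample-http-logs` dataset:
```kusto
['sample-http-logs']
// Assumes Kusto-style nulls placement keywords are supported
| order by req_duration_ms desc nulls last
```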
Whether you’re just starting your transition or you’re in the thick of it, this guide can serve as a helpful roadmap to assist you in your journey from Splunk to Axiom Processing Language.
Dive into the Axiom Processing Language, start converting your Splunk queries to APL, and explore the rich capabilities of the Query tab. Embrace the learning curve, and remember, every complex query you master is another step forward in your data analytics journey.
# Axiom Processing Language (APL)
This section explains how to use the Axiom Processing Language to get deeper insights from your data.
## Introduction
The Axiom Processing Language (APL) is a query language that is perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
## Get started
Go to the Query tab and click one of your datasets to get started. The APL editor has full auto-completion so you can poke around or you can get a better understanding of all the features by using the reference menu to the left of this page.
## APL query structure
At a minimum, a query consists of source data reference (name of a dataset) and zero or more query operators applied in sequence. Individual operators are delimited using the pipe character (`|`).
APL query has the following structure:
```kusto
DataSource
| operator ...
| operator ...
```
Where:
* DataSource is the name of the dataset you want to query
* Operator is a function that will be applied to the data
Let’s look at an example query.
```kusto
['github-issue-comment-event']
| extend bot = actor contains "-bot" or actor contains "[bot]"
| where bot == true
| summarize count() by bin_auto(_time), actor
```
The query above begins with a reference to a dataset called **github-issue-comment-event** and contains several operators, [extend](/apl/tabular-operators/extend-operator), [where](/apl/tabular-operators/where-operator), and [summarize](/apl/tabular-operators/summarize-operator), each separated by a pipe. The `extend` operator creates the **bot** column in the returned result and sets its values depending on the value of the `actor` column. The `where` operator keeps only the rows where **bot** is true, and the `summarize` operator aggregates the results to produce a chart.
The most common kind of query statement is a tabular expression statement. Tabular statements contain operators, each of which starts with a tabular `input` and returns a tabular `output`.
* Explore the [tabular operators](/apl/tabular-operators/extend-operator) we support.
* Check out our [entity names and identifier naming rules](/apl/entities/entity-names).
Axiom Processing Language supplies a set of system [data types](/apl/data-types/scalar-data-types) that define all the types of [data](/apl/data-types/null-values) that can be used with Axiom Processing Language.
# Set statement
The set statement is used to set a query option in your APL query.
The `set` statement is used to set a query option. Options enabled with the `set` statement only have effect for the duration of the query.
The `set` statement you specify affects how your query is processed and what results are returned.
## Syntax
```kusto
set OptionName=OptionValue
```
## Strict types
The `stricttypes` query option requires the data types in your query to match the declared field types exactly. If they don’t, the query fails with a **QueryFailed** error.
## Example
```kusto
set stricttypes;
['Dataset']
| where number == 5
```
# Special field attributes
This page explains how to implement special fields within APL queries to enhance the functionality and interactivity of datasets. Use these fields in APL queries to add unique behaviors to the Axiom user interface.
## Add link to table
* Name: `_row_url`
* Type: string
* Description: Define the URL to which the entire table links.
* APL query example: `extend _row_url = 'https://axiom.co/'`
* Expected behavior: Make rows clickable. When clicked, go to the specified URL.
If you specify a static string as the URL, all rows link to that page. To specify a different URL for each row, use a dynamic expression like `extend _row_url = strcat('https://axiom.co/', uri)` where `uri` is a field in your data.
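As a minimal sketch, assuming the `uri` field of the `sample-http-logs` dataset, the following query makes each row link to a different page:
```kusto
['sample-http-logs']
| project _time, uri, status
// Each row links to a URL built from its own uri value
| extend _row_url = strcat('https://axiom.co/', uri)
```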
## Add link to values in a field
* Name: `_FIELDNAME_url`
* Type: string
* Description: Define a URL to which values in a field link.
* APL query example: `extend _website_url = 'https://axiom.co/'`
* Expected behavior: Make values in the `website` field clickable. When clicked, go to the specified URL.
Replace `FIELDNAME` with the actual name of the field.
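For example, the following sketch uses the hypothetical destination `https://example.com` to make values in the `uri` field of the `sample-http-logs` dataset clickable:
```kusto
['sample-http-logs']
| project _time, uri
// example.com is a hypothetical destination used for illustration
| extend _uri_url = strcat('https://example.com', uri)
```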
## Add tooltip to values in a field
* Name: `_FIELDNAME_tooltip`
* Type: string
* Description: Define text to be displayed when hovering over values in a field.
* Example Usage: `extend _errors_tooltip = 'Number of errors'`
* Expected behavior: Display a tooltip with the specified text when the user hovers over values in a field.
Replace `FIELDNAME` with the actual name of the field.
## Add description to values in a field
* Name: `_FIELDNAME_description`
* Type: string
* Description: Define additional information to be displayed under the values in a field.
* Example Usage: `extend _diskusage_description = 'Current disk usage'`
* Expected behavior: Display additional text under the values in a field for more context.
Replace `FIELDNAME` with the actual name of the field.
## Add unit of measurement
* Name: `_FIELDNAME_unit`
* Type: string
* Description: Specify the unit of measurement for another field’s value, allowing for proper formatting and display.
* APL query example: `extend _size_unit = "gbytes"`
* Expected behavior: Format the value in the `size` field according to the unit specified in the `_size_unit` field.
Replace `FIELDNAME` with the actual name of the field you want to format. For example, for a field named `size`, use `_size_unit = "gbytes"` to display its values in gigabytes in the query results.
The supported units are the following:
**Percentage**
| Unit name | APL syntax |
| ----------------- | ---------- |
| percent (0-100) | percent100 |
| percent (0.0-1.0) | percent |
**Currency**
| Unit name | APL syntax |
| ------------ | --------- |
| Dollars (\$) | curusd |
| Pounds (£) | curgbp |
| Euro (€) | cureur |
| Bitcoin (฿) | curbtc |
**Data (IEC)**
| Unit name | APL syntax |
| ---------- | --------- |
| bits(IEC) | bits |
| bytes(IEC) | bytes |
| kibibytes | kbytes |
| mebibytes | mbytes |
| gibibytes | gbytes |
| tebibytes | tbytes |
| pebibytes | pbytes |
**Data (metric)**
| Unit name | APL syntax |
| ------------- | --------- |
| bits(Metric) | decbits |
| bytes(Metric) | decbytes |
| kilobytes | deckbytes |
| megabytes | decmbytes |
| gigabytes | decgbytes |
| terabytes | dectbytes |
| petabytes | decpbytes |
**Data rate**
| Unit name | APL syntax |
| ------------- | --------- |
| packets/sec | pps |
| bits/sec | bps |
| bytes/sec | Bps |
| kilobytes/sec | KBs |
| kilobits/sec | Kbits |
| megabytes/sec | MBs |
| megabits/sec | Mbits |
| gigabytes/sec | GBs |
| gigabits/sec | Gbits |
| terabytes/sec | TBs |
| terabits/sec | Tbits |
| petabytes/sec | PBs |
| petabits/sec | Pbits |
**Datetime**
| Unit name | APL syntax |
| ----------------- | --------- |
| Hertz (1/s) | hertz |
| nanoseconds (ns) | ns |
| microseconds (µs) | µs |
| milliseconds (ms) | ms |
| seconds (s) | secs |
| minutes (m) | mins |
| hours (h) | hours |
| days (d) | days |
| ago | ago |
**Throughput**
| Unit name | APL syntax |
| ------------------ | --------- |
| counts/sec (cps) | cps |
| ops/sec (ops) | ops |
| requests/sec (rps) | reqps |
| reads/sec (rps) | rps |
| writes/sec (wps) | wps |
| I/O ops/sec (iops) | iops |
| counts/min (cpm) | cpm |
| ops/min (opm) | opm |
| requests/min (rpm) | reqpm |
| reads/min (rpm) | rpm |
| writes/min (wpm) | wpm |
## Example
The example APL query below adds a tooltip and a description to the values of the `status` field. Clicking one of the values in this field leads to a page about status codes. The query adds the new field `resp_body_size_bits` that displays the size of the response body in the unit of bits.
```apl
['sample-http-logs']
| extend _status_tooltip = 'The status of the HTTP request is the response code from the server. It shows if an HTTP request has been successfully completed.'
| extend _status_description = 'This is the status of the HTTP request.'
| extend _status_url = 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status'
| extend resp_body_size_bits = resp_body_size_bytes * 8
| extend _resp_body_size_bits_unit = 'bits'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20_status_tooltip%20%3D%20'The%20status%20of%20the%20HTTP%20request%20is%20the%20response%20code%20from%20the%20server.%20It%20shows%20if%20an%20HTTP%20request%20has%20been%20successfully%20completed.'%20%7C%20extend%20_status_description%20%3D%20'This%20is%20the%20status%20of%20the%20HTTP%20request.'%20%7C%20extend%20_status_url%20%3D%20'https%3A%2F%2Fdeveloper.mozilla.org%2Fen-US%2Fdocs%2FWeb%2FHTTP%2FStatus'%20%7C%20extend%20resp_body_size_bits%20%3D%20resp_body_size_bytes%20*%208%20%7C%20extend%20_resp_body_size_bits_unit%20%3D%20'bits'%22%7D)
# Array functions
This section explains how to use array functions in APL.
The table summarizes the array functions available in APL.
| Function | Description |
| -------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| [array\_concat](/apl/scalar-functions/array-functions/array-concat) | Concatenates a number of dynamic arrays to a single array. |
| [array\_iff](/apl/scalar-functions/array-functions/array-iff) | Returns a new array containing elements from the input array that satisfy the condition. |
| [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of) | Searches the array for the specified item, and returns its position. |
| [array\_length](/apl/scalar-functions/array-functions/array-length) | Calculates the number of elements in a dynamic array. |
| [array\_reverse](/apl/scalar-functions/array-functions/array-reverse) | Reverses the order of the elements in a dynamic array. |
| [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left) | Rotates values inside a dynamic array to the left. |
| [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right) | Rotates values inside a dynamic array to the right. |
| [array\_select\_dict](/apl/scalar-functions/array-functions/array-select-dict) | Selects a dictionary from an array of dictionaries. |
| [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left) | Shifts the values inside a dynamic array to the left. |
| [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right) | Shifts values inside an array to the right. |
| [array\_slice](/apl/scalar-functions/array-functions/array-slice) | Extracts a slice of a dynamic array. |
| [array\_split](/apl/scalar-functions/array-functions/array-split) | Splits an array to multiple arrays according to the split indices and packs the generated array in a dynamic array. |
| [array\_sum](/apl/scalar-functions/array-functions/array-sum) | Calculates the sum of elements in a dynamic array. |
| [isarray](/apl/scalar-functions/array-functions/isarray) | Checks whether a value is an array. |
| [pack\_array](/apl/scalar-functions/array-functions/pack-array) | Packs all input values into a dynamic array. |
| [strcat\_array](/apl/scalar-functions/array-functions/strcat-array) | Takes an array and returns a single concatenated string with the array’s elements separated by the specified delimiter. |
## Dynamic arrays
Most array functions accept a dynamic array as their parameter. Dynamic arrays allow you to add or remove elements. You can change a dynamic array with an array function.
A dynamic array expands as you add more elements. This means that you don’t need to determine the size in advance.
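As a minimal sketch of how a dynamic array grows, the following query builds an array literal with `dynamic`, appends elements with `array_concat` and `pack_array`, and measures the result with `array_length`:
```kusto
// Start from an array literal, then grow it
print arr = dynamic([1, 2, 3])
| extend extended_arr = array_concat(arr, pack_array(4, 5))
| extend extended_len = array_length(extended_arr)
```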
# array_concat
This page explains how to use the array_concat function in APL.
The `array_concat` function in APL (Axiom Processing Language) concatenates two or more arrays into a single array. Use this function when you need to merge multiple arrays into a single array structure. It’s particularly useful for situations where you need to handle and combine collections of elements across different fields or sources, such as log entries, OpenTelemetry trace data, or security logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In SPL, you typically use the `mvappend` function to concatenate multiple fields or arrays into a single array. In APL, the equivalent is `array_concat`, which also combines arrays but requires you to specify each array as a parameter.
```sql Splunk example
| eval combined_array = mvappend(array1, array2, array3)
```
```kusto APL equivalent
| extend combined_array = array_concat(array1, array2, array3)
```
ANSI SQL doesn’t natively support an array concatenation function across different arrays. Instead, you typically use `UNION` to combine results from multiple arrays or collections. In APL, `array_concat` allows you to directly concatenate multiple arrays, providing a more straightforward approach.
```sql SQL example
SELECT array1 UNION ALL array2 UNION ALL array3
```
```kusto APL equivalent
| extend combined_array = array_concat(array1, array2, array3)
```
## Usage
### Syntax
```kusto
array_concat(array1, array2, ...)
```
### Parameters
* `array1`: The first array to concatenate.
* `array2`: The second array to concatenate.
* `...`: Additional arrays to concatenate.
### Returns
An array containing all elements from the input arrays in the order they are provided.
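To see the behavior in isolation, here’s a minimal sketch that concatenates array literals without querying a dataset:
```kusto
// Concatenate three array literals into one array
print combined = array_concat(dynamic([1, 2]), dynamic([3, 4]), dynamic([5]))
```
Based on the description above, the expected result is a single array, `[1, 2, 3, 4, 5]`.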
## Use case examples
In log analysis, you can use `array_concat` to merge collections of user requests into a single array to analyze request patterns across different endpoints.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize combined_requests = array_concat(pack_array(uri), pack_array(method))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20combined_requests%20%3D%20array_concat\(pack_array\(uri\)%2C%20pack_array\(method\)\)%22%7D)
**Output**
| \_time | uri | method | combined\_requests |
| ------------------- | ----------------------- | ------ | ------------------------------------ |
| 2024-10-28T12:30:00 | /api/v1/textdata/cnfigs | POST | \["/api/v1/textdata/cnfigs", "POST"] |
This example concatenates the `uri` and `method` values into a single array for each log entry, allowing for combined analysis of access patterns and request methods in log data.
In OpenTelemetry traces, use `array_concat` to join span IDs and trace IDs for a comprehensive view of trace behavior across services.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize combined_ids = array_concat(pack_array(span_id), pack_array(trace_id))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20summarize%20combined_ids%20%3D%20array_concat\(pack_array\(span_id\)%2C%20pack_array\(trace_id\)\)%22%7D)
**Output**
| \_time | trace\_id | span\_id | combined\_ids |
| ------------------- | ------------- | --------- | ------------------------------- |
| 2024-10-28T12:30:00 | trace\_abc123 | span\_001 | \["trace\_abc123", "span\_001"] |
This example creates an array containing both `span_id` and `trace_id` values, offering a unified view of the trace journey across services.
In security logs, `array_concat` can consolidate multiple IP addresses or user IDs to detect potential attack patterns involving different locations or users.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| take 50
| summarize failed_attempts = array_concat(pack_array(id), pack_array(['geo.city']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'500'%20%7C%20take%2050%20%7C%20summarize%20failed_attempts%20%3D%20array_concat\(pack_array\(id\)%2C%20pack_array\(%5B'geo.city'%5D\)\)%22%7D)
**Output**
| \_time | id | geo.city | failed\_attempts |
| ------------------- | ------------------------------------ | -------- | --------------------------------------------------- |
| 2024-10-28T12:30:00 | fc1407f5-04ca-4f4e-ad01-f72063736e08 | Avenal | \["fc1407f5-04ca-4f4e-ad01-f72063736e08", "Avenal"] |
This query combines failed user IDs and cities where the request originated, allowing security analysts to detect suspicious patterns or brute force attempts from different regions.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
# array_iff
This page explains how to use the array_iff function in APL.
The `array_iff` function in Axiom Processing Language (APL) allows you to create arrays based on a condition. It returns an array with elements from two specified arrays, choosing each element from the first array when a condition is met and from the second array otherwise. This function is useful for scenarios where you need to evaluate a series of conditions across multiple datasets, especially in log analysis, trace data, and other applications requiring conditional element selection within arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, array manipulation based on conditions typically requires using conditional functions or eval expressions. APL’s `array_iff` function lets you directly select elements from one array or another based on a condition, offering more streamlined array manipulation.
```sql Splunk example
eval selected_array=if(condition, array1, array2)
```
```kusto APL equivalent
array_iff(condition_array, array1, array2)
```
In ANSI SQL, conditionally selecting elements from arrays often requires complex `CASE` statements or functions. With APL’s `array_iff` function, you can directly compare arrays and conditionally populate them, simplifying array-based operations.
```sql SQL example
CASE WHEN condition THEN array1 ELSE array2 END
```
```kusto APL equivalent
array_iff(condition_array, array1, array2)
```
## Usage
### Syntax
```kusto
array_iff(condition_array, array1, array2)
```
### Parameters
* `condition_array`: An array of boolean values, where each element determines whether to choose the corresponding element from `array1` or `array2`.
* `array1`: The array to select elements from when the corresponding `condition_array` element is `true`.
* `array2`: The array to select elements from when the corresponding `condition_array` element is `false`.
### Returns
An array where each element is selected from `array1` if the corresponding `condition_array` element is `true`, and from `array2` otherwise.
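As a minimal sketch of the element-wise selection, independent of any dataset:
```kusto
// Each boolean in the condition array picks from array1 (true) or array2 (false)
print result = array_iff(dynamic([true, false, true]), dynamic([1, 2, 3]), dynamic([4, 5, 6]))
```
Based on the behavior described above, the expected result is `[1, 5, 3]`: the first and third elements come from the first array, and the second element comes from the second array.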
## Use case examples
The `array_iff` function can help filter log data conditionally, such as choosing specific durations based on HTTP status codes.
**Query**
```kusto
['sample-http-logs']
| order by _time desc
| limit 1000
| summarize is_ok = make_list(status == '200'), request_duration = make_list(req_duration_ms)
| project ok_request_duration = array_iff(is_ok, request_duration, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20order%20by%20_time%20desc%20%7C%20limit%201000%20%7C%20summarize%20is_ok%20%3D%20make_list\(status%20%3D%3D%20'200'\)%2C%20request_duration%20%3D%20make_list\(req_duration_ms\)%20%7C%20project%20ok_request_duration%20%3D%20array_iff\(is_ok%2C%20request_duration%2C%200\)%22%7D)
**Output**
| ok\_request\_duration |
| -------------------------------------------------------------------- |
| \[0.3150485097707766, 0, 0.21691408087847264, 0, 0.2757618582190533] |
This example filters the `req_duration_ms` field to include only durations for the most recent 1,000 requests with status `200`, replacing others with `0`.
With OpenTelemetry trace data, you can use `array_iff` to filter spans based on the service type, such as selecting durations for `server` spans and setting others to zero.
**Query**
```kusto
['otel-demo-traces']
| order by _time desc
| limit 1000
| summarize is_server = make_list(kind == 'server'), duration_list = make_list(duration)
| project server_durations = array_iff(is_server, duration_list, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20order%20by%20_time%20desc%20%7C%20limit%201000%20%7C%20summarize%20is_server%20%3D%20make_list\(kind%20%3D%3D%20'server'\)%2C%20duration_list%20%3D%20make_list\(duration\)%20%7C%20project%20%20server_durations%20%3D%20array_iff\(is_server%2C%20duration_list%2C%200\)%22%7D)
**Output**
| server\_durations |
| ---------------------------------------- |
| \["45.632µs", "54.622µs", 0, "34.051µs"] |
In this example, `array_iff` selects durations only for `server` spans, setting non-server spans to `0`.
In security logs, `array_iff` can be used to focus on specific cities in which HTTP requests originated, such as showing response durations for certain cities and excluding others.
**Query**
```kusto
['sample-http-logs']
| limit 1000
| summarize is_london = make_list(['geo.city'] == "London"), request_duration = make_list(req_duration_ms)
| project london_duration = array_iff(is_london, request_duration, 0)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20limit%201000%20%7C%20summarize%20is_london%20%3D%20make_list\(%5B'geo.city'%5D%20%3D%3D%20'London'\)%2C%20request_duration%20%3D%20make_list\(req_duration_ms\)%20%7C%20project%20london_duration%20%3D%20array_iff\(is_london%2C%20request_duration%2C%200\)%22%7D)
**Output**
| london\_duration |
| ---------------- |
| \[100, 0, 250] |
This example filters the `req_duration_ms` array to show durations for requests from London, with non-matching cities having `0` as duration.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
# array_index_of
This page explains how to use the array_index_of function in APL.
The `array_index_of` function in APL returns the zero-based index of the first occurrence of a specified value within an array. If the value isn’t found, the function returns `-1`. Use this function when you need to identify the position of a specific item within an array, such as finding the location of an error code in a sequence of logs or pinpointing a particular value within telemetry data arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `mvfind` function retrieves the position of an element within an array, similar to how `array_index_of` operates in APL. However, note that APL uses a zero-based index for results, while SPL is one-based.
```sql Splunk example
| eval index=mvfind(array, "value")
```
```kusto APL equivalent
let index = array_index_of(array, 'value')
```
ANSI SQL doesn’t have a direct equivalent for finding the index of an element within an array. Typically, you would use a combination of array and search functions if supported by your SQL variant.
```sql SQL example
SELECT POSITION('value' IN ARRAY[...])
```
```kusto APL equivalent
let index = array_index_of(array, 'value')
```
## Usage
### Syntax
```kusto
array_index_of(array, lookup_value, [start], [length], [occurrence])
```
### Parameters
| Name | Type | Required | Description |
| ------------- | ------ | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| array | array | Yes | Input array to search. |
| lookup\_value | scalar | Yes | Scalar value to search for in the array. Accepted data types: long, integer, double, datetime, timespan, or string. |
| start\_index | number | No | The index where to start the search. A negative value offsets the starting search value from the end of the array by `abs(start_index)` steps. |
| length | number | No | Number of values to examine. A value of `-1` means unlimited length. |
| occurrence | number | No | The number of the occurrence. By default `1`. |
### Returns
`array_index_of` returns the zero-based index of the first occurrence of the specified `lookup_value` in `array`. If `lookup_value` doesn’t exist in the array, it returns `-1`.
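As a minimal sketch of the lookup behavior, including the optional `occurrence` parameter, independent of any dataset:
```kusto
// first_b finds the first occurrence of 'b'; second_b asks for the second occurrence
print first_b = array_index_of(dynamic(['a', 'b', 'c', 'b']), 'b'),
      second_b = array_index_of(dynamic(['a', 'b', 'c', 'b']), 'b', 0, -1, 2)
```
Given the parameters described above, `first_b` is expected to be `1` and `second_b` to be `3`, the zero-based index of the second `'b'`.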
## Use case examples
You can use `array_index_of` to find the position of a specific HTTP status code within an array of codes in your log analysis.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize status_array = make_list(status)
| extend index_500 = array_index_of(status_array, '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20status_array%20%3D%20make_list\(status\)%20%7C%20extend%20index_500%20%3D%20array_index_of\(status_array%2C%20'500'\)%22%7D)
**Output**
| status\_array | index\_500 |
| ---------------------- | ---------- |
| \["200", "404", "500"] | 2 |
This query creates an array of `status` codes and identifies the position of the first occurrence of the `500` status.
In OpenTelemetry traces, you can find the position of a specific `service.name` within an array of service names to detect when a particular service appears.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize service_array = make_list(['service.name'])
| extend frontend_index = array_index_of(service_array, 'frontend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20%20service_array%20%3D%20make_list\(%5B'service.name'%5D\)%20%7C%20extend%20frontend_index%20%3D%20array_index_of\(service_array%2C%20'frontend'\)%22%7D)
**Output**
| service\_array | frontend\_index |
| ---------------------------- | --------------- |
| \["frontend", "cartservice"] | 0 |
This query collects the array of services and determines where the `frontend` service first appears.
When working with security logs, `array_index_of` can help identify the index of a particular error or status code, such as `500`, within an array of `status` codes.
**Query**
```kusto
['sample-http-logs']
| take 50
| summarize status_array = make_list(status)
| extend index_500 = array_index_of(status_array, '500')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20summarize%20status_array%20%3D%20make_list\(status\)%20%7C%20extend%20index_500%20%3D%20array_index_of\(status_array%2C%20'500'\)%22%7D)
**Output**
| status\_array | index\_500 |
| ---------------------- | ---------- |
| \["200", "404", "500"] | 2 |
This query helps identify at what index the `500` status code appears.
## List of related functions
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
# array_length
This page explains how to use the array_length function in APL.
The `array_length` function in APL (Axiom Processing Language) returns the length of an array. You can use this function to analyze and filter data by array size, such as identifying log entries with specific numbers of entries or events with multiple tags. This function is useful for analyzing structured data fields that contain arrays, such as lists of error codes, tags, or IP addresses.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use the `mvcount` function to determine the length of a multivalue field. In APL, `array_length` serves the same purpose by returning the size of an array within a column.
```sql Splunk example
| eval array_size = mvcount(array_field)
```
```kusto APL equivalent
['sample-http-logs']
| extend array_size = array_length(array_field)
```
In ANSI SQL, you would use functions such as `CARDINALITY` or `ARRAY_LENGTH` (in databases that support arrays) to get the length of an array. In APL, the `array_length` function is straightforward and works directly with array fields in any dataset.
```sql SQL example
SELECT CARDINALITY(array_field) AS array_size
FROM sample_table
```
```kusto APL equivalent
['sample-http-logs']
| extend array_size = array_length(array_field)
```
## Usage
### Syntax
```kusto
array_length(array_expression)
```
### Parameters
* array\_expression: An expression representing the array to measure.
### Returns
The function returns an integer representing the number of elements in the specified array.
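As a minimal sketch, independent of any dataset:
```kusto
// Count the elements in an array literal
print len = array_length(dynamic(['a', 'b', 'c']))
```
The expected result is `3`.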
## Use case example
In OpenTelemetry traces, `array_length` can reveal the number of events associated with a span.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend event_count = array_length(events)
| where event_count > 2
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20event_count%20%3D%20array_length\(events\)%20%7C%20where%20event_count%20%3E%202%22%7D)
**Output**
| \_time | trace\_id | span\_id | service.name | event\_count |
| ------------------- | ------------- | --------- | ------------ | ------------ |
| 2024-10-28T12:30:00 | trace\_abc123 | span\_001 | frontend | 3 |
This query finds spans associated with at least three events.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.
# array_reverse
This page explains how to use the array_reverse function in APL.
Use the `array_reverse` function in APL to reverse the order of elements in an array. This function is useful when you need to transform data where the sequence matters, such as reversing a list of events for chronological analysis or processing lists in descending order.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, there is no built-in function for reversing an array, so you typically manipulate the data manually or use workarounds. In APL, `array_reverse` simplifies this process by reversing the array directly.
```sql Splunk example
# SPL does not have a direct array_reverse equivalent.
```
```kusto APL equivalent
let arr = dynamic([1, 2, 3, 4, 5]);
print reversed_arr = array_reverse(arr)
```
Standard ANSI SQL lacks an explicit function to reverse an array; you generally need to create a custom solution. APL’s `array_reverse` makes reversing an array straightforward.
```sql SQL example
-- ANSI SQL lacks a built-in array reverse function.
```
```kusto APL equivalent
let arr = dynamic([1, 2, 3, 4, 5]);
print reversed_arr = array_reverse(arr)
```
## Usage
### Syntax
```kusto
array_reverse(array_expression)
```
### Parameters
* `array_expression`: The array you want to reverse. This array must be of a dynamic type.
### Returns
Returns the input array with its elements in reverse order.
## Use case examples
Use `array_reverse` to inspect the sequence of actions in log entries, reversing the order to understand the initial steps of a user's session.
**Query**
```kusto
['sample-http-logs']
| summarize paths = make_list(uri) by id
| project id, reversed_paths = array_reverse(paths)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20paths%20%3D%20make_list\(uri\)%20by%20id%20%7C%20project%20id%2C%20reversed_paths%20%3D%20array_reverse\(paths\)%22%7D)
**Output**
| id | reversed\_paths |
| ----- | ------------------------------------ |
| U1234 | \['/home', '/cart', '/product', '/'] |
| U5678 | \['/login', '/search', '/'] |
This example identifies a user’s navigation sequence in reverse, showing their entry point into the system.
Use `array_reverse` to analyze trace data by reversing the sequence of span events for each trace, allowing you to trace back the sequence of service calls.
**Query**
```kusto
['otel-demo-traces']
| summarize spans = make_list(span_id) by trace_id
| project trace_id, reversed_spans = array_reverse(spans)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20spans%20%3D%20make_list\(span_id\)%20by%20trace_id%20%7C%20project%20trace_id%2C%20reversed_spans%20%3D%20array_reverse\(spans\)%22%7D)
**Output**
| trace\_id | reversed\_spans |
| --------- | ------------------------- |
| T12345 | \['S4', 'S3', 'S2', 'S1'] |
| T67890 | \['S7', 'S6', 'S5'] |
This example reveals the order in which service calls were made in a trace, but in reverse, aiding in backtracking issues.
Apply `array_reverse` to examine security events, like login attempts or permission checks, in reverse order to identify unusual access patterns or last actions.
**Query**
```kusto
['sample-http-logs']
| where status == '403'
| summarize blocked_uris = make_list(uri) by id
| project id, reversed_blocked_uris = array_reverse(blocked_uris)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'403'%20%7C%20summarize%20blocked_uris%20%3D%20make_list\(uri\)%20by%20id%20%7C%20project%20id%2C%20reversed_blocked_uris%20%3D%20array_reverse\(blocked_uris\)%22%7D)
**Output**
| id | reversed\_blocked\_uris |
| ----- | ------------------------------------- |
| U1234 | \['/admin', '/settings', '/login'] |
| U5678 | \['/account', '/dashboard', '/login'] |
This example helps identify the sequence of unauthorized access attempts by each user.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements to the left by a specified number of positions.
# array_rotate_left
This page explains how to use the array_rotate_left function in APL.
The `array_rotate_left` function in Axiom Processing Language (APL) rotates the elements of an array to the left by a specified number of positions. It’s useful when you want to reorder elements in a fixed-length array, shifting elements to the left while moving the leftmost elements to the end. For instance, this function can help analyze sequences where relative order matters but the starting position doesn’t, such as rotating network logs, error codes, or numeric arrays in data for pattern identification.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In APL, `array_rotate_left` allows for direct rotation within the array. Splunk SPL does not have a direct equivalent, so you may need to combine multiple SPL functions to achieve a similar rotation effect.
```sql Splunk example
| eval rotated_array = mvindex(array, 1) . "," . mvindex(array, 0)
```
```kusto APL equivalent
print rotated_array = array_rotate_left(dynamic([1,2,3,4]), 1)
```
ANSI SQL lacks a direct equivalent for array rotation within arrays. A similar transformation can be achieved using array functions if available or by restructuring the array through custom logic.
```sql SQL example
SELECT array_column[2], array_column[3], array_column[0], array_column[1] FROM table
```
```kusto APL equivalent
print rotated_array = array_rotate_left(dynamic([1,2,3,4]), 2)
```
## Usage
### Syntax
```kusto
array_rotate_left(array, positions)
```
### Parameters
* `array`: The array to be rotated. Use a dynamic data type.
* `positions`: An integer specifying the number of positions to rotate the array to the left.
### Returns
A new array where the elements have been rotated to the left by the specified number of positions.
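To see the wrap-around behavior on its own, you can rotate an array literal with `print`. The values are illustrative, and the comment shows the expected result:
```kusto
print rotated = array_rotate_left(dynamic([1, 2, 3, 4, 5]), 2)
// expected output: [3, 4, 5, 1, 2]
```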
## Use case example
Analyze traces by rotating the order of span events for visualization or pattern matching.
**Query**
```kusto
['otel-demo-traces']
| extend rotated_sequence = array_rotate_left(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20rotated_sequence%20%3D%20array_rotate_left\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1733997117722909000
},
{
"timestamp": 1733997117722911700,
"name": "Sent"
},
{
"name": "ResponseReceived",
"timestamp": 1733997117723591400
}
]
```
```json rotated_sequence
[
{
"timestamp": 1733997117722911700,
"name": "Sent"
},
{
"name": "ResponseReceived",
"timestamp": 1733997117723591400
},
{
"timestamp": 1733997117722909000,
"name": "Enqueued"
}
]
```
This example rotates the span events within each trace, which can help you spot variations in trace data when it is visualized differently.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
# array_rotate_right
This page explains how to use the array_rotate_right function in APL.
The `array_rotate_right` function in APL allows you to rotate the elements of an array to the right by a specified number of positions. This function is useful when you need to reorder data within arrays, either to shift recent events to the beginning, reorder log entries, or realign elements based on specific processing logic.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
Splunk SPL has no direct equivalent to `array_rotate_right`; you typically approximate it with `mvindex` or other multivalue functions. In APL, the rotation shifts all elements to the right by a set count, and elements that move past the end wrap around to the front, preserving their relative order.
```sql Splunk example
| eval rotated_array=mvindex(array, -3)
```
```kusto APL equivalent
| extend rotated_array = array_rotate_right(array, 3)
```
ANSI SQL lacks a direct function for rotating elements within arrays. In APL, the `array_rotate_right` function offers a straightforward way to accomplish this by specifying a rotation count, while SQL users typically require a more complex use of `CASE` statements or custom functions to achieve the same.
```sql SQL example
-- No direct ANSI SQL equivalent for array rotation
```
```kusto APL equivalent
| extend rotated_array = array_rotate_right(array_column, 3)
```
## Usage
### Syntax
```kusto
array_rotate_right(array, count)
```
### Parameters
* `array`: An array to rotate.
* `count`: An integer specifying the number of positions to rotate the array to the right.
### Returns
An array where the elements are rotated to the right by the specified `count`.
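As a quick sketch with an array literal, rotating to the right moves the last elements to the front. The expected result is shown as a comment:
```kusto
print rotated = array_rotate_right(dynamic([1, 2, 3, 4, 5]), 2)
// expected output: [4, 5, 1, 2, 3]
```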
## Use case example
In OpenTelemetry traces, rotating an array of span details can help you reorder trace information for performance tracking or troubleshooting.
**Query**
```kusto
['otel-demo-traces']
| extend rotated_sequence = array_rotate_right(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20rotated_sequence%20%3D%20array_rotate_right\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1733997421220380700
},
{
"name": "Sent",
"timestamp": 1733997421220390400,
"attributes": null
},
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1733997421221118500
}
]
```
```json rotated_sequence
[
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1733997421221118500
},
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1733997421220380700
},
{
"name": "Sent",
"timestamp": 1733997421220390400,
"attributes": null
}
]
```
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
# array_select_dict
This page explains how to use the array_select_dict function in APL.
The `array_select_dict` function in APL allows you to retrieve a dictionary from an array of dictionaries based on a specified key-value pair. This function is useful when you need to filter arrays and extract specific dictionaries for further processing. If no match exists, it returns `null`. Non-dictionary values in the input array are ignored.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `array_select_dict` function in APL is similar to filtering objects in an array based on conditions in Splunk SPL. However, unlike Splunk, where filtering often applies directly to JSON structures, `array_select_dict` specifically targets arrays of dictionaries.
```sql Splunk example
| eval selected = mvfilter(array, 'key' == 5)
```
```kusto APL equivalent
| project selected = array_select_dict(array, "key", 5)
```
In ANSI SQL, filtering typically involves table rows rather than nested arrays. The APL `array_select_dict` function applies a similar concept to array elements, allowing you to extract dictionaries from arrays using a condition.
```sql SQL example
SELECT *
FROM my_table
WHERE JSON_CONTAINS(array_column, '{"key": 5}')
```
```kusto APL equivalent
| project selected = array_select_dict(array_column, "key", 5)
```
## Usage
### Syntax
```kusto
array_select_dict(array, key, value)
```
### Parameters
| Name | Type | Description |
| ----- | ------- | ------------------------------------- |
| array | dynamic | Input array of dictionaries. |
| key | string | Key to match in each dictionary. |
| value | scalar | Value to match for the specified key. |
### Returns
The function returns the first dictionary in the array that matches the specified key-value pair. If no match exists, it returns `null`. Non-dictionary elements in the array are ignored.
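For illustration, the following standalone query matches the second dictionary in a literal array. The values are illustrative; if no dictionary contained `"key": 5`, the result would be `null`:
```kusto
print matched = array_select_dict(dynamic([{"key": 1}, {"key": 5, "name": "example"}]), "key", 5)
// expected output: {"key": 5, "name": "example"}
```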
## Use case example
This example demonstrates how to use `array_select_dict` to extract a dictionary where the key `service.name` has the value `frontend`.
**Query**
```kusto
['sample-http-logs']
| extend array = dynamic([{"service.name": "frontend", "status_code": "200"}, {"service.name": "backend", "status_code": "500"}])
| project selected = array_select_dict(array, "service.name", "frontend")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20array%20%3D%20dynamic\(%5B%7B'service.name'%3A%20'frontend'%2C%20'status_code'%3A%20'200'%7D%2C%20%7B'service.name'%3A%20'backend'%2C%20'status_code'%3A%20'500'%7D%5D\)%20%7C%20project%20selected%20%3D%20array_select_dict\(array%2C%20'service.name'%2C%20'frontend'\)%22%7D)
**Output**
`{"service.name": "frontend", "status_code": "200"}`
This query selects the first dictionary in the array where `service.name` equals `frontend` and returns it.
## List of related functions
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
# array_shift_left
This page explains how to use the array_shift_left function in APL.
The `array_shift_left` function in APL shifts the elements of an array to the left by a specified number of positions. Elements shifted past the start of the array are dropped, and the vacated positions at the end are filled with null values. This function is useful when you need to realign or reorder elements for pattern analysis, comparisons, or other array transformations.
For example, you can use `array_shift_left` to:
* Align time-series data for comparative analysis.
* Rotate log entries for cyclic pattern detection.
* Reorganize multi-dimensional datasets in your queries.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, there is no direct equivalent to `array_shift_left`, but you can achieve similar results using custom code or by manipulating arrays manually. In APL, `array_shift_left` simplifies this operation by providing a built-in, efficient implementation.
```sql Splunk example
| eval rotated_array = mvindex(array, 1) . mvindex(array, 0)
```
```kusto APL equivalent
['array_shift_left'](array, 1)
```
ANSI SQL does not have a native function equivalent to `array_shift_left`. Typically, you would use procedural SQL to write custom logic for this transformation. In APL, the `array_shift_left` function provides an elegant, concise solution.
```sql SQL example
-- Pseudo code in SQL
SELECT ARRAY_SHIFT_LEFT(array_column, shift_amount)
```
```kusto APL equivalent
['array_shift_left'](array_column, shift_amount)
```
## Usage
### Syntax
```kusto
['array_shift_left'](array, shift_amount)
```
### Parameters
| Parameter | Type | Description |
| -------------- | ------- | ------------------------------------------------------ |
| `array` | Array | The array to shift. |
| `shift_amount` | Integer | The number of positions to shift elements to the left. |
### Returns
An array with elements shifted to the left by the specified `shift_amount`. Elements shifted out are dropped, and the vacated positions at the end of the array are filled with null values.
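A minimal sketch with an array literal shows how dropped elements and null padding interact. The values are illustrative, and the comment shows the expected result:
```kusto
print shifted = array_shift_left(dynamic([1, 2, 3, 4, 5]), 2)
// expected output: [3, 4, 5, null, null]
```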
## Use case example
Reorganize span events to analyze dependencies in a different sequence.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend shifted_events = array_shift_left(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20shifted_events%20%3D%20array_shift_left\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1734001111273917000,
"attributes": null
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001111273925400
},
{
"name": "ResponseReceived",
"timestamp": 1734001111274167300,
"attributes": null
}
]
```
```json shifted_events
[
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001111273925400
},
{
"name": "ResponseReceived",
"timestamp": 1734001111274167300,
"attributes": null
},
null
]
```
This query shifts each span’s events one position to the left to analyze the adjusted sequence.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
# array_shift_right
This page explains how to use the array_shift_right function in APL.
The `array_shift_right` function in Axiom Processing Language (APL) shifts the elements of an array to the right by a specified number of positions. Elements shifted past the end of the array are dropped, and the vacated positions at the start are filled with null values. You can use this function to reorder elements, offset time-series data stored in arrays, or preprocess arrays for specific analytical needs.
### When to use the function
* To shift and realign data within arrays.
* To offset sequences for comparison or transformation.
* To manipulate array data structures in log analysis or telemetry contexts.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality might be achieved using custom code to shift array elements, as there is no direct equivalent to `array_shift_right`. APL provides this functionality natively, making it easier to work with arrays directly.
```sql Splunk example
| eval shifted_array=mvappend(mvindex(array,-1),mvindex(array,0,len(array)-1))
```
```kusto APL equivalent
['dataset.name']
| extend shifted_array = array_shift_right(array, 1)
```
ANSI SQL does not have a built-in function for shifting arrays. In SQL, achieving this would involve user-defined functions or complex subqueries. In APL, `array_shift_right` simplifies this operation significantly.
```sql SQL example
WITH shifted AS (
SELECT
array_column[ARRAY_LENGTH(array_column)] AS first_element,
array_column[1:ARRAY_LENGTH(array_column)-1] AS rest_of_elements
FROM table
)
SELECT ARRAY_APPEND(first_element, rest_of_elements) AS shifted_array
FROM shifted
```
```kusto APL equivalent
['dataset.name']
| extend shifted_array = array_shift_right(array, 1)
```
## Usage
### Syntax
```kusto
array_shift_right(array, shift_amount)
```
### Parameters
| Parameter      | Type    | Description                                              |
| -------------- | ------- | -------------------------------------------------------- |
| `array`        | array   | The input array whose elements are shifted.              |
| `shift_amount` | integer | The number of positions to shift elements to the right.  |
### Returns
An array with elements shifted to the right by the specified `shift_amount`. Elements shifted out are dropped, and the vacated positions at the start of the array are filled with null values.
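As an illustrative example with an array literal, shifting right pads the start of the array with null values:
```kusto
print shifted = array_shift_right(dynamic([1, 2, 3, 4, 5]), 2)
// expected output: [null, null, 1, 2, 3]
```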
## Use case example
Reorganize span events in telemetry data for visualization or debugging.
**Query**
```kusto
['otel-demo-traces']
| take 50
| extend shifted_events = array_shift_right(events, 1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20extend%20shifted_events%20%3D%20array_shift_right\(events%2C%201\)%22%7D)
**Output**
```json events
[
{
"name": "Enqueued",
"timestamp": 1734001215487927300,
"attributes": null
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001215487937000
},
{
"timestamp": 1734001215488191000,
"attributes": null,
"name": "ResponseReceived"
}
]
```
```json shifted_events
[
null,
{
"timestamp": 1734001215487927300,
"attributes": null,
"name": "Enqueued"
},
{
"attributes": null,
"name": "Sent",
"timestamp": 1734001215487937000
}
]
```
The query shifts span events one position to the right for better trace debugging.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_rotate\_left](/apl/scalar-functions/array-functions/array-rotate-left): Rotates elements of an array to the left.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements to the left by a specified number of positions.
# array_slice
This page explains how to use the array_slice function in APL.
The `array_slice` function in APL extracts a subset of elements from an array, based on specified start and end indices. This function is useful when you want to analyze or transform a portion of data within arrays, such as trimming logs, filtering specific events, or working with trace data in OpenTelemetry logs.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you can use `mvindex` to extract elements from an array. APL's `array_slice` is similar but more expressive, allowing you to specify slices with optional bounds.
```sql Splunk example
| eval sliced_array=mvindex(my_array, 1, 3)
```
```kusto APL equivalent
T | extend sliced_array = array_slice(my_array, 1, 3)
```
In ANSI SQL, arrays are often handled using JSON functions or window functions, requiring workarounds to slice arrays. In APL, `array_slice` directly handles arrays, making operations more concise.
```sql SQL example
SELECT JSON_EXTRACT(my_array, '$[1:3]') AS sliced_array FROM my_table
```
```kusto APL equivalent
T | extend sliced_array = array_slice(my_array, 1, 3)
```
## Usage
### Syntax
```kusto
array_slice(array, start, end)
```
### Parameters
| Parameter | Description |
| --------- | -------------------------------------------------------------------------------------------------- |
| `array` | The input array to slice. |
| `start` | The starting index of the slice (inclusive). If negative, it is counted from the end of the array. |
| `end` | The ending index of the slice (exclusive). If negative, it is counted from the end of the array. |
### Returns
An array containing the elements from the specified slice. If the indices are out of bounds, it adjusts to return valid elements without error.
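Given the inclusive start and exclusive end described above, a quick sketch with an array literal looks like this (the values and expected output are illustrative):
```kusto
print sliced = array_slice(dynamic([1, 2, 3, 4, 5]), 1, 3)
// expected output: [2, 3]
```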
## Use case example
Filter spans from trace data to analyze a specific range of events.
**Query**
```kusto
['otel-demo-traces']
| where array_length(events) > 4
| extend sliced_events = array_slice(events, -3, -1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3E%204%20%7C%20extend%20sliced_events%20%3D%20array_slice\(events%2C%20-3%2C%20-1\)%22%7D)
**Output**
```json events
[
{
"timestamp": 1734001336443987200,
"attributes": null,
"name": "prepared"
},
{
"attributes": {
"feature_flag.provider_name": "flagd",
"feature_flag.variant": "off",
"feature_flag.key": "paymentServiceUnreachable"
},
"name": "feature_flag",
"timestamp": 1734001336444001800
},
{
"name": "charged",
"timestamp": 1734001336445970200,
"attributes": {
"custom": {
"app.payment.transaction.id": "49567406-21f4-41aa-bab2-69911c055753"
}
}
},
{
"name": "shipped",
"timestamp": 1734001336446488600,
"attributes": {
"custom": {
"app.shipping.tracking.id": "9a3b7a5c-aa41-4033-917f-50cb7360a2a4"
}
}
},
{
"attributes": {
"feature_flag.variant": "off",
"feature_flag.key": "kafkaQueueProblems",
"feature_flag.provider_name": "flagd"
},
"name": "feature_flag",
"timestamp": 1734001336461096700
}
]
```
```json sliced_events
[
{
"name": "charged",
"timestamp": 1734001336445970200,
"attributes": {
"custom": {
"app.payment.transaction.id": "49567406-21f4-41aa-bab2-69911c055753"
}
}
},
{
"name": "shipped",
"timestamp": 1734001336446488600,
"attributes": {
"custom": {
"app.shipping.tracking.id": "9a3b7a5c-aa41-4033-917f-50cb7360a2a4"
}
}
}
]
```
This query slices the `events` array from the third-from-last element up to, but not including, the final element, returning the two events that precede the last one.
## List of related functions
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
* [array\_shift\_right](/apl/scalar-functions/array-functions/array-shift-right): Shifts array elements to the right.
# array_split
This page explains how to use the array_split function in APL.
The `array_split` function in APL splits an array into smaller subarrays based on specified split indices and packs the generated subarrays into a dynamic array. This function is useful when you want to partition data for analysis, batch processing, or distributing workloads across smaller units.
You can use `array_split` to:
* Divide large datasets into manageable chunks for processing.
* Create segments for detailed analysis or visualization.
* Handle nested data structures for targeted processing.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, array manipulation is achieved through functions like `mvzip` and `mvfilter`, but there is no direct equivalent to `array_split`. APL provides a more explicit approach for splitting arrays.
```sql Splunk example
| eval split_array = mvzip(array_field, "2")
```
```kusto APL equivalent
['otel-demo-traces']
| extend split_array = array_split(events, 2)
```
ANSI SQL does not have built-in functions for directly splitting arrays. APL provides this capability natively, making it easier to handle array operations within queries.
```sql SQL example
-- SQL typically requires custom functions or JSON manipulation.
SELECT * FROM dataset WHERE JSON_ARRAY_LENGTH(array_field) > 0;
```
```kusto APL equivalent
['otel-demo-traces']
| extend split_array = array_split(events, 2)
```
## Usage
### Syntax
```kusto
array_split(array, index)
```
### Parameters
| Parameter | Description | Type |
| --------- | -------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| `array` | The array to split. | Dynamic |
| `index`   | An integer or dynamic array of integers. These zero-based split indices indicate the locations at which to split the array. | Integer or Dynamic |
### Returns
Returns a dynamic array containing N+1 arrays where N is the number of input indices. The original array is split at the input indices.
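As a minimal illustration with an array literal, splitting at a single index produces two subarrays. The values are illustrative, and the comment shows the expected result:
```kusto
print parts = array_split(dynamic([1, 2, 3, 4, 5]), 2)
// expected output: [[1, 2], [3, 4, 5]]
```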
## Use case examples
### Single split index
Split large event arrays into manageable chunks for analysis.
**Query**
```kusto
['otel-demo-traces']
| where array_length(events) == 3
| extend split_events = array_split(events, 2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3D%3D%203%20%7C%20extend%20span_chunks%20%3D%20array_split\(events%2C%202\)%22%7D)
**Output**
```json events
[
{
"timestamp": 1734033733465219300,
"name": "Enqueued"
},
{
"name": "Sent",
"timestamp": 1734033733465228500
},
{
"timestamp": 1734033733465455900,
"name": "ResponseReceived"
}
]
```
```json split_events
[
[
{
"timestamp": 1734033733465219300,
"name": "Enqueued"
},
{
"name": "Sent",
"timestamp": 1734033733465228500
}
],
[
{
"timestamp": 1734033733465455900,
"name": "ResponseReceived"
}
]
]
```
This query splits the `events` array at index `2` into two subarrays for further processing.
### Multiple split indices
Divide traces into fixed-size segments for better debugging.
**Query**
```kusto
['otel-demo-traces']
| where array_length(events) == 3
| extend split_events = array_split(events, dynamic([1,2]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20array_length\(events\)%20%3D%3D%203%20%7C%20extend%20span_chunks%20%3D%20array_split\(events%2C%20dynamic\(%5B1%2C2%5D\)\)%22%7D)
**Output**
```json events
[
{
"attributes": null,
"name": "Enqueued",
"timestamp": 1734034755085206000
},
{
"name": "Sent",
"timestamp": 1734034755085215500,
"attributes": null
},
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1734034755085424000
}
]
```
```json split_events
[
[
{
"timestamp": 1734034755085206000,
"attributes": null,
"name": "Enqueued"
}
],
[
{
"timestamp": 1734034755085215500,
"attributes": null,
"name": "Sent"
}
],
[
{
"attributes": null,
"name": "ResponseReceived",
"timestamp": 1734034755085424000
}
]
]
```
This query splits the `events` array into three subarrays based on the indices `[1,2]`.
## List of related functions
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements to the left by a specified number of positions.
# array_sum
This page explains how to use the array_sum function in APL.
The `array_sum` function in APL computes the sum of all numerical elements in an array. This function is particularly useful when you want to aggregate numerical values stored in an array field, such as durations, counts, or measurements, across events or records. Use `array_sum` when your dataset includes array-type fields, and you need to quickly compute their total.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might need to use commands or functions such as `mvsum` for similar operations. In APL, `array_sum` provides a direct method to compute the sum of numerical arrays.
```sql Splunk example
| eval total_duration = mvsum(duration_array)
```
```kusto APL equivalent
['dataset.name']
| extend total_duration = array_sum(duration_array)
```
ANSI SQL does not natively support array operations like summing array elements. However, you can achieve similar results with `UNNEST` and `SUM`. In APL, `array_sum` simplifies this by handling array summation directly.
```sql SQL example
SELECT SUM(value) AS total_duration
FROM UNNEST(duration_array) AS value;
```
```kusto APL equivalent
['dataset.name']
| extend total_duration = array_sum(duration_array)
```
## Usage
### Syntax
```kusto
array_sum(array_expression)
```
### Parameters
| Parameter | Type | Description |
| ------------------ | ----- | ------------------------------------------ |
| `array_expression` | array | An array of numerical values to be summed. |
### Returns
The function returns the sum of all numerical values in the array. If the array is empty or contains no numerical values, the result is `null`.
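For a quick check outside the sample datasets, you can sum an array literal with `print`. The values are illustrative:
```kusto
print total = array_sum(dynamic([1, 2, 3.5]))
// expected output: 6.5
```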
## Use case example
Summing the duration of all events in an array field.
**Query**
```kusto
['otel-demo-traces']
| summarize event_duration = make_list(duration) by ['service.name']
| extend total_event_duration = array_sum(event_duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20event_duration%20%3D%20make_list\(duration\)%20by%20%5B'service.name'%5D%20%7C%20extend%20total_event_duration%20%3D%20array_sum\(event_duration\)%22%7D)
**Output**
| service.name | total\_event\_duration |
| --------------- | ---------------------- |
| frontend | 1667269530000 |
| checkoutservice | 3801404276900 |
The query calculates the total duration of all events for each service.
## List of related functions
* [array\_rotate\_right](/apl/scalar-functions/array-functions/array-rotate-right): Rotates array elements to the right by a specified number of positions.
* [array\_reverse](/apl/scalar-functions/array-functions/array-reverse): Reverses the order of array elements.
* [array\_shift\_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements to the left by a specified number of positions.
# isarray
This page explains how to use the isarray function in APL.
The `isarray` function in APL checks whether a specified value is an array. Use this function to validate input data, handle dynamic schemas, or filter for records where a field is explicitly an array. It is particularly useful when working with data that contains fields with mixed data types or optional nested arrays.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality is achieved by analyzing the data structure manually, as SPL does not have a direct equivalent to `isarray`. APL simplifies this task by providing the `isarray` function to directly evaluate whether a value is an array.
```sql Splunk example
| eval is_array=if(isnotnull(mvcount(field)), "true", "false")
```
```kusto APL equivalent
['dataset.name']
| extend is_array=isarray(field)
```
In ANSI SQL, there is no built-in function for directly checking if a value is an array. You might need to rely on JSON functions or structural parsing. APL provides the `isarray` function as a more straightforward solution.
```sql SQL example
SELECT CASE
WHEN JSON_TYPE(field) = 'ARRAY' THEN TRUE
ELSE FALSE
END AS is_array
FROM dataset_name;
```
```kusto APL equivalent
['dataset.name']
| extend is_array=isarray(field)
```
## Usage
### Syntax
```kusto
isarray(value)
```
### Parameters
| Parameter | Description |
| --------- | ------------------------------------- |
| `value` | The value to check if it is an array. |
### Returns
A boolean value:
* `true` if the specified value is an array.
* `false` otherwise.
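As a minimal sketch with literal values, an array literal returns `true`, while a scalar such as a string returns `false`:
```kusto
print is_array = isarray(dynamic([1, 2, 3]))
// expected output: true
// isarray("hello") would return false
```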
## Use case example
Filter for records where the `events` field contains an array.
**Query**
```kusto
['otel-demo-traces']
| take 50
| summarize events_array = make_list(events)
| extend is_array = isarray(events_array)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%2050%20%7C%20summarize%20events_array%20%3D%20make_list\(events\)%20%7C%20extend%20is_array%20%3D%20isarray\(events_array\)%22%7D)
**Output**
| is\_array |
| --------- |
| true |
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
# pack_array
This page explains how to use the pack_array function in APL.
The `pack_array` function in APL creates an array from individual values or expressions. You can use this function to group related data into a single field, which can simplify handling and querying of data collections. It is especially useful when working with nested data structures or aggregating data into arrays for further processing.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you typically use functions like `mvappend` to create multi-value fields. In APL, the `pack_array` function serves a similar purpose by combining values into an array.
```sql Splunk example
| eval array_field = mvappend(value1, value2, value3)
```
```kusto APL equivalent
| extend array_field = pack_array(value1, value2, value3)
```
In ANSI SQL, arrays are often constructed using functions like `ARRAY`. The `pack_array` function in APL performs a similar operation, creating an array from specified values.
```sql SQL example
SELECT ARRAY[value1, value2, value3] AS array_field;
```
```kusto APL equivalent
| extend array_field = pack_array(value1, value2, value3)
```
## Usage
### Syntax
```kusto
pack_array(value1, value2, ..., valueN)
```
### Parameters
| Parameter | Description |
| --------- | ------------------------------------------ |
| `value1` | The first value to include in the array. |
| `value2` | The second value to include in the array. |
| `...` | Additional values to include in the array. |
| `valueN` | The last value to include in the array. |
### Returns
An array containing the specified values in the order they are provided.
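A quick illustration with literal values shows how mixed types are packed into a single array (the expected result is shown as a comment):
```kusto
print packed = pack_array(1, "two", 3.5)
// expected output: [1, "two", 3.5]
```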
## Use case example
Use `pack_array` to consolidate span data into an array for a trace summary.
**Query**
```kusto
['otel-demo-traces']
| extend span_summary = pack_array(['service.name'], kind, duration)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20span_summary%20%3D%20pack_array\(%5B'service.name'%5D%2C%20kind%2C%20duration\)%22%7D)
**Output**
| service.name | kind | duration | span\_summary |
| ------------ | ------ | -------- | -------------------------------- |
| frontend | server | 123ms | \["frontend", "server", "123ms"] |
This query creates a concise representation of span details.
## List of related functions
* [array\_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
# strcat_array
This page explains how to use the strcat_array function in APL.
The `strcat_array` function in Axiom Processing Language (APL) allows you to concatenate the elements of an array into a single string, with an optional delimiter separating each element. This function is useful when you need to transform a set of values into a readable or exportable format, such as combining multiple log entries, tracing IDs, or security alerts into a single output for further analysis or reporting.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, concatenation typically involves transforming fields into a string using the `eval` command with the `+` operator or `mvjoin()` for arrays. In APL, `strcat_array` simplifies array concatenation by natively supporting array input with a delimiter.
```sql Splunk example
| eval concatenated=mvjoin(array_field, ", ")
```
```kusto APL equivalent
dataset
| extend concatenated = strcat_array(array_field, ', ')
```
In ANSI SQL, concatenation involves functions like `STRING_AGG()` or manual string building using `CONCAT()`. APL’s `strcat_array` is similar to `STRING_AGG()`, but focuses on array input directly with a customizable delimiter.
```sql SQL example
SELECT STRING_AGG(column_name, ', ') AS concatenated FROM table;
```
```kusto APL equivalent
dataset
| summarize values = make_list(column_name)
| extend concatenated = strcat_array(values, ', ')
```
## Usage
### Syntax
```kusto
strcat_array(array, delimiter)
```
### Parameters
| Parameter | Type | Description |
| ----------- | ------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `array` | dynamic | The array of values to concatenate. |
| `delimiter` | string | The string used to separate each element in the concatenated result. Optional. Defaults to an empty string if not specified. |
### Returns
A single concatenated string with the array’s elements separated by the specified delimiter.
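As an illustrative standalone example, concatenating a literal array with a custom delimiter looks like this:
```kusto
print joined = strcat_array(dynamic(["a", "b", "c"]), "-")
// expected output: "a-b-c"
```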
## Use case example
You can use `strcat_array` to combine HTTP methods and URLs for a quick summary of unique request paths.
**Query**
```kusto
['sample-http-logs']
| take 50
| extend combined_requests = strcat_delim(' ', method, uri)
| summarize requests_list = make_list(combined_requests)
| extend paths = strcat_array(requests_list, ', ')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20extend%20combined_requests%20%3D%20strcat_delim\('%20'%2C%20method%2C%20uri\)%20%7C%20summarize%20requests_list%20%3D%20make_list\(combined_requests\)%20%7C%20extend%20paths%20%3D%20strcat_array\(requests_list%2C%20'%2C%20'\)%22%7D)
**Output**
| paths |
| ------------------------------------ |
| GET /index, POST /submit, GET /about |
This query summarizes unique HTTP method and URL combinations into a single, readable string.
## List of related functions
* [array\_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
* [array\_index\_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
* [array\_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
# Conditional functions
Learn how to use and combine different conditional functions in APL
## Conditional functions
| **Function Name** | **Description** |
| ----------------- | ----------------------------------------------------------------------------------------------------------- |
| [case()](#case) | Evaluates a list of conditions and returns the first result expression whose condition is satisfied. |
| [iff()](#iff) | Evaluates the first argument (the predicate), and returns the value of either the second or third arguments |
## case()
Evaluates a list of conditions and returns the first result whose condition is satisfied.
### Arguments
* condition: An expression that evaluates to a Boolean.
* result: An expression that Axiom evaluates and returns the value if its condition is the first that evaluates to true.
* nothingMatchedResult: An expression that Axiom evaluates and returns the value if none of the conditional expressions evaluates to true.
### Returns
Axiom returns the value of the first result whose condition evaluates to true. If none of the conditions is satisfied, Axiom returns the value of `nothingMatchedResult`.
### Example
```kusto
case(condition1, result1, condition2, result2, condition3, result3, ..., nothingMatchedResult)
```
```kusto
['sample-http-logs'] |
extend status_human_readable = case(
status_int == 200,
'OK',
status_int == 201,
'Created',
status_int == 301,
'Moved Permanently',
status_int == 500,
'Internal Server Error',
'Other'
)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20status_code%20%3D%20case\(status_int%20%3D%3D%20200%2C%20'OK'%2C%20status_int%20%3D%3D%20201%2C%20'Created'%2C%20status_int%20%3D%3D%20301%2C%20'Moved%20Permanently'%2C%20status_int%20%3D%3D%20500%2C%20'Internal%20Server%20Error'%2C%20'Other'\)%22%7D)
## iff()
Evaluates the first argument (the predicate), and returns the value of either the second or third arguments. The second and third arguments must be of the same type.
### Arguments
* predicate: An expression that evaluates to a boolean value.
* ifTrue: An expression that gets evaluated and its value returned from the function if predicate evaluates to `true`.
* ifFalse: An expression that gets evaluated and its value returned from the function if predicate evaluates to `false`.
### Returns
This function returns the value of ifTrue if predicate evaluates to true, or the value of ifFalse otherwise.
### Examples
```kusto
iff(predicate, ifTrue, ifFalse)
```
```kusto
['sample-http-logs']
| project Status = iff(req_duration_ms == 1, "numeric", "Inactive")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20Status%20%3D%20iff%28req_duration_ms%20%3D%3D%201%2C%20%5C%22numeric%5C%22%2C%20%5C%22Inactive%5C%22%29%22%7D)
# Conversion functions
Learn how to use and combine different conversion functions in APL
## Conversion functions
| **Function Name** | **Description** |
| --------------------------------------------- | ------------------------------------------------------------------------------------------ |
| [ensure\_field()](#ensure-field) | Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist. |
| [tobool()](#tobool) | Converts input to boolean (signed 8-bit) representation. |
| [todatetime()](#todatetime) | Converts input to datetime scalar. |
| [todouble(), toreal()](#todouble\(\),-toreal) | Converts the input to a value of type `real`. `todouble()` and `toreal()` are synonyms. |
| [tostring()](#tostring) | Converts input to a string representation. |
| [totimespan()](#totimespan) | Converts input to timespan scalar. |
| [tohex()](#tohex) | Converts input to a hexadecimal string. |
| [tolong()](#tolong) | Converts input to long (signed 64-bit) number representation. |
| [dynamic\_to\_json()](#dynamic-to-json) | Converts a scalar value of type dynamic to a canonical string representation. |
| [isbool()](#isbool)                           | Returns `true` if the value is a boolean, `false` otherwise.                                 |
| [toint()](#toint) | Converts the input to an integer value (signed 64-bit) number representation. |
## ensure\_field()
Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist.
### Arguments
| **name** | **type** | **description** |
| ----------- | -------- | ------------------------------------------------------------------------------------------------------ |
| field\_name | string | The name of the field to ensure exists. |
| field\_type | type | The type of the field. See [scalar data types](/apl/data-types/scalar-data-types) for supported types. |
### Returns
This function returns the value of the specified field if it exists, otherwise it returns a typed nil.
### Examples
```kusto
ensure_field(field_name, field_type)
```
### Handle missing fields
In this example, the value of `show_field` is nil because the `myfield` field doesn’t exist.
```kusto
['sample-http-logs']
| extend show_field = ensure_field("myfield", typeof(string))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20show_field%20%3D%20ensure_field%28%27myfield%27%2C%20typeof%28string%29%29%22%7D)
### Access existing fields
In this example, the value of `newstatus` is the value of `status` because the `status` field exists.
```kusto
['sample-http-logs']
| extend newstatus = ensure_field("status", typeof(string))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20newstatus%20%3D%20ensure_field%28%27status%27%2C%20typeof%28string%29%29%22%7D)
### Future-proof queries
In this example, the query is prepared for a field named `upcoming_field` that is expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available.
```kusto
['sample-http-logs']
| extend new_field = ensure_field("upcoming_field", typeof(int))
| where new_field > 100
```
## tobool()
Converts input to boolean (signed 8-bit) representation.
### Arguments
* Expr: Expression that will be converted to boolean.
### Returns
* If the conversion is successful, the result is a boolean. If the conversion isn’t successful, the result is `false`.
### Examples
```kusto
tobool(Expr)
toboolean(Expr) (alias)
```
```kusto
tobool("true") == true
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tobool%28%5C%22true%5C%22%29%20%3D%3D%20true%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": true
}
```
## todatetime()
Converts input to datetime scalar.
### Arguments
* Expr: Expression that will be converted to datetime.
### Returns
If the conversion is successful, the result is a datetime value. Otherwise, the result is `false`.
### Examples
```kusto
todatetime(Expr)
```
```kusto
todatetime("2022-11-13")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20todatetime%28%5C%222022-11-13%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"boo": "2022-11-13T00:00:00Z"
}
```
## todouble(), toreal()
Converts the input to a value of type `real`. `todouble()` and `toreal()` are synonyms.
### Arguments
* Expr: An expression whose value will be converted to a value of type `real`.
### Returns
If conversion is successful, the result is a value of type real. If conversion is not successful, the result returns false.
### Examples
```kusto
toreal(Expr)
todouble(Expr)
```
```kusto
toreal("1567") == 1567
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20toreal%28%5C%221567%5C%22%29%20%3D%3D%201567%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": true
}
```
## tostring()
Converts input to a string representation.
### Arguments
* `Expr:` Expression that will be converted to string.
### Returns
If the Expression value is non-null, the result will be a string representation of the Expression. If the Expression value is null, the result will be an empty string.
### Examples
```kusto
tostring(Expr)
```
```kusto
tostring("axiom") == "axiom"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tostring%28%5C%22axiom%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "axiom"
}
```
## totimespan()
Converts input to timespan scalar.
### Arguments
* `Expr:` Expression that will be converted to timespan.
### Returns
If the conversion is successful, the result is a timespan value. Otherwise, the result is `false`.
### Examples
```kusto
totimespan(Expr)
```
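As an illustrative example, assuming the standard `days.hours:minutes:seconds` timespan string format, the following expression converts a string to a one-minute timespan:
```kusto
totimespan("0.00:01:00")
```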
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20totimespan%282022-11-13%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"conversion_function": "1.998µs"
}
```
## tohex()
Converts input to a hexadecimal string.
### Arguments
* Expr: int or long value that will be converted to a hex string. Other types are not supported.
### Returns
If conversion is successful, result will be a string value. If conversion is not successful, result will be false.
### Examples
```kusto
tohex(value)
```
```kusto
tohex(-546) == 'fffffffffffffdde'
```
```kusto
tohex(546) == '222'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tohex%28-546%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "fffffffffffffdde"
}
```
## tolong()
Converts input to long (signed 64-bit) number representation.
### Arguments
* Expr: Expression that will be converted to long.
### Returns
If conversion is successful, result will be a long number. If conversion is not successful, result will be false.
### Examples
```kusto
tolong(Expr)
```
```kusto
tolong("241") == 241
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tolong%28%5C%22241%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": 241
}
```
## dynamic\_to\_json()
Converts a scalar value of type `dynamic` to a canonical `string` representation.
### Arguments
* Expr: A value of type `dynamic`. The function accepts one argument.
### Returns
Returns a canonical representation of the input as a value of type `string`, according to the following rules:
* If the input is a scalar value of a type other than `dynamic`, the output is the result of applying `tostring()` to that value.
* If the input is an array of values, the output is composed of the characters `[`, `,`, and `]` interspersed with the canonical representation described here of each array element.
* If the input is a property bag, the output is composed of the characters `{`, `,`, and `}` interspersed with the colon (`:`)-delimited name/value pairs of the properties. The pairs are sorted by name, and each value appears in the canonical representation described here.
### Examples
```kusto
dynamic_to_json(dynamic)
```
```kusto
['sample-http-logs']
| project conversion_function = dynamic_to_json(dynamic([1,2,3]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20dynamic_to_json%28dynamic%28%5B1%2C2%2C3%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": "[1,2,3]"
}
```
## isbool()
Checks whether a value is a boolean and returns `true` or `false` accordingly.
### Arguments
* Expr: The expression to evaluate. The function accepts one argument.
### Returns
Returns `true` if expression value is a bool, `false` otherwise.
### Examples
```kusto
isbool(expression)
```
```kusto
isbool("pow") == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20isbool%28%5C%22pow%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"conversion_function": false
}
```
***
## toint()
Converts the input to an integer value (signed 64-bit) number representation.
### Arguments
* Value: The value to convert to an [integer](/apl/data-types/scalar-data-types#the-int-data-type).
### Returns
If the conversion is successful, the result will be an integer. Otherwise, the result will be `null`.
### Examples
```kusto
toint(value)
```
```kusto
| project toint("456") == 456
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20toint%28%5C%22456%5C%22%29%20%3D%3D%20456%22%7D)
# Datetime functions
Learn how to use and combine different timespan functions in APL
## DateTime/ Timespan functions
| **Function Name** | **Description** |
| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| [ago()](#ago) | Subtracts the given timespan from the current UTC clock time. |
| [datetime\_add()](#datetime-add) | Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime. |
| [datetime\_part()](#datetime-part) | Extracts the requested date part as an integer value. |
| [datetime\_diff()](#datetime-diff) | Calculates calendarian difference between two datetime values. |
| [dayofmonth()](#dayofmonth) | Returns the integer number representing the day number of the given month |
| [dayofweek()](#dayofweek)          | Returns the number of days since the preceding Sunday, as a timespan.                                                 |
| [dayofyear()](#dayofyear)          | Returns the integer number representing the day number of the given year.                                             |
| [endofyear()](#endofyear) | Returns the end of the year containing the date |
| [getmonth()](#getmonth) | Get the month number (1-12) from a datetime. |
| [getyear()](#getyear) | Returns the year part of the `datetime` argument. |
| [hourofday()](#hourofday) | Returns the integer number representing the hour number of the given date |
| [endofday()](#endofday) | Returns the end of the day containing the date |
| [now()](#now) | Returns the current UTC clock time, optionally offset by a given timespan. |
| [endofmonth()](#endofmonth) | Returns the end of the month containing the date |
| [endofweek()](#endofweek) | Returns the end of the week containing the date. |
| [monthofyear()](#monthofyear)      | Returns the integer number representing the month number of the given year.                                           |
| [startofday()](#startofday) | Returns the start of the day containing the date |
| [startofmonth()](#startofmonth) | Returns the start of the month containing the date |
| [startofweek()](#startofweek) | Returns the start of the week containing the date |
| [startofyear()](#startofyear) | Returns the start of the year containing the date |
* APL supports the ISO 8601 format, the standard format for representing dates and times in the Gregorian calendar. [See the supported formats](/apl/data-types/scalar-data-types#supported-formats).
## ago()
Subtracts the given timespan from the current UTC clock time.
### Arguments
* Interval to subtract from the current UTC clock time
### Returns
now() - a\_timespan
### Example
```kusto
ago(6h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%286h%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"date_time_functions": "2023-09-11T20:12:39Z"
}
```
```kusto
ago(3d)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%283d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"date_time_functions": "2023-09-09T02:13:29Z"
}
```
## datetime\_add()
Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime.
### Arguments
* period: string.
* amount: integer.
* datetime: datetime value.
### Returns
A datetime value with the specified time/date interval added.
### Example
```kusto
datetime_add(period,amount,datetime)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_add( "month", 1, datetime(2016-10-06))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_add%28%20%5C%22month%5C%22%2C%201%2C%20datetime%282016-10-06%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": "2016-11-06T00:00:00Z"
}
```
## datetime\_part()
Extracts the requested date part as an integer value.
### Arguments
* date: datetime
* part: string
### Returns
An integer representing the extracted part.
### Examples
```kusto
datetime_part(part,datetime)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_part("Day", datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_part%28%5C%22Day%5C%22%2C%20datetime%282016-06-26T08%3A20%3A03.123456Z%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 26
}
```
## datetime\_diff()
Calculates the calendar difference between two datetime values.
### Arguments
* period: string.
* datetime\_1: datetime value.
* datetime\_2: datetime value.
### Returns
An integer representing the number of periods in the result of the subtraction (datetime\_1 - datetime\_2).
### Example
```kusto
datetime_diff(period,datetime_1,datetime_2)
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_diff("week", datetime(2019-06-26T08:20:03.123456Z), datetime(2014-06-26T08:19:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222019-06-26T08%3A20%3A03.123456Z%5C%22%29%2C%20datetime%28%5C%222014-06-26T08%3A19%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 260
}
```
```kusto
['sample-http-logs']
| project new_datetime = datetime_diff("week", datetime(2015-11-08), datetime(2014-11-08))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222014-11-08%5C%22%29%2C%20datetime%28%5C%222014-11-08%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"new_datetime": 52
}
```
## dayofmonth()
Returns the integer representing the day of the month for the given datetime.
### Arguments
* `a_date`: A `datetime`.
### Returns
The day number of the given month.
### Example
```kusto
dayofmonth(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_month = dayofmonth(datetime(2017-11-30))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_month%20%3D%20dayofmonth%28datetime%28%5C%222017-11-30%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_month": 30
}
```
## dayofweek()
Returns the integer number of days since the preceding Sunday, as a timespan.
### Arguments
* a\_date: A datetime.
### Returns
The `timespan` since midnight at the beginning of the preceding Sunday, rounded down to an integer number of days.
### Example
```kusto
dayofweek(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_week = dayofweek(datetime(2019-05-18))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_week%20%3D%20dayofweek%28datetime%28%5C%222019-05-18%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_week": 6
}
```
## dayofyear()
Returns the integer representing the day of the year for the given datetime.
### Arguments
* `a_date`: A `datetime`.
### Returns
The day number of the given year.
### Example
```kusto
dayofyear(a_date)
```
```kusto
['sample-http-logs']
| project day_of_the_year = dayofyear(datetime(2020-07-20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_year%20%3D%20dayofyear%28datetime%28%5C%222020-07-20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"day_of_the_year": 202
}
```
## endofyear()
Returns the end of the year containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the year for the given date value.
### Example
```kusto
endofyear(date)
```
```kusto
['sample-http-logs']
| extend end_of_the_year = endofyear(datetime(2016-06-26T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20%20end_of_the_year%20%3D%20endofyear%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_year": "2016-12-31T23:59:59.999999999Z"
}
```
## getmonth()
Gets the month number (1-12) from a datetime.
```kusto
['sample-http-logs']
| extend get_specific_month = getmonth(datetime(2020-07-26T08:20))
```
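* Expected result for the July date above:
```json
{
  "get_specific_month": 7
}
```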
## getyear()
Returns the year part of the `datetime` argument.
### Example
```kusto
getyear(datetime())
```
```kusto
['sample-http-logs']
| project get_specific_year = getyear(datetime(2020-07-26))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_year%20%3D%20getyear%28datetime%28%5C%222020-07-26%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"get_specific_year": 2020
}
```
## hourofday()
Returns the integer representing the hour of the day for the given datetime.
### Arguments
* a\_date: A datetime.
### Returns
The hour number of the day (0-23).
### Example
```kusto
hourofday(a_date)
```
```kusto
['sample-http-logs']
| project get_specific_hour = hourofday(datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_hour%20%3D%20hourofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"get_specific_hour": 8
}
```
## endofday()
Returns the end of the day containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the day for the given date value.
### Example
```kusto
endofday(date)
```
```kusto
['sample-http-logs']
| project end_of_day_series = endofday(datetime(2016-06-26T08:20:03.123456Z))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_day_series%20%3D%20endofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_day_series": "2016-06-26T23:59:59.999999999Z"
}
```
## now()
Returns the current UTC clock time, optionally offset by a given timespan. This function can be used multiple times in a statement and the clock time being referenced will be the same for all instances.
### Arguments
* offset: A timespan, added to the current UTC clock time. Default: 0.
### Returns
The current UTC clock time as a datetime.
### Example
```kusto
now([offset])
```
```kusto
['sample-http-logs']
| project returns_clock_time = now(-5d)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20returns_clock_time%20%3D%20now%28-5d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"returns_clock_time": "2023-09-07T02:54:50Z"
}
```
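To illustrate that multiple `now()` calls within a single statement refer to the same clock time, here is a minimal sketch (the column aliases are illustrative):
```kusto
['sample-http-logs']
| extend first_call = now(), second_call = now()
| extend same_time = first_call == second_call
```
Because both calls reference the same clock time, `same_time` is expected to be `true` for every row.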
## endofmonth()
Returns the end of the month containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the month for the given date value.
### Example
```kusto
endofmonth(date)
```
```kusto
['sample-http-logs']
| project end_of_the_month = endofmonth(datetime(2016-06-26T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_month%20%3D%20endofmonth%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_month": "2016-06-30T23:59:59.999999999Z"
}
```
## endofweek()
Returns the end of the week containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the end of the week for the given date value.
### Example
```kusto
endofweek(date)
```
```kusto
['sample-http-logs']
| project end_of_the_week = endofweek(datetime(2019-04-18T08:20))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_week%20%3D%20endofweek%28datetime%28%5C%222019-04-18T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"end_of_the_week": "2019-04-20T23:59:59.999999999Z"
}
```
## monthofyear()
Returns the integer representing the month of the year for the given datetime.
### Arguments
* `date`: A datetime.
### Returns
The month number of the given date.
### Example
```kusto
monthofyear(datetime("2018-11-21"))
```
```kusto
['sample-http-logs']
| project month_of_the_year = monthofyear(datetime(2018-11-11))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20month_of_the_year%20%3D%20monthofyear%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"month_of_the_year": 11
}
```
## startofday()
Returns the start of the day containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the day for the given date value.
### Examples
```kusto
startofday(datetime(2020-08-31))
```
```kusto
['sample-http-logs']
| project start_of_the_day = startofday(datetime(2018-11-11))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20start_of_the_day%20%3D%20startofday%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_day": "2018-11-11T00:00:00Z"
}
```
## startofmonth()
Returns the start of the month containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the month for the given date value.
### Example
```kusto
['github-issues-event']
| project start_of_the_month = startofmonth(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_month": "2020-08-01T00:00:00Z"
}
```
```kusto
['hackernews']
| extend start_of_the_month = startofmonth(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_month": "2020-08-01T00:00:00Z"
}
```
## startofweek()
Returns the start of the week containing the date. The start of the week is considered to be Sunday.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the week for the given date value.
### Examples
```kusto
['github-issues-event']
| extend start_of_the_week = startofweek(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_week": "2020-07-26T00:00:00Z"
}
```
```kusto
['hackernews']
| extend start_of_the_week = startofweek(datetime(2020-08-01))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"start_of_the_week": "2020-07-26T00:00:00Z"
}
```
```kusto
['sample-http-logs']
| extend start_of_the_week = startofweek(datetime(2018-06-11T00:00:00Z))
```
## startofyear()
Returns the start of the year containing the date.
### Arguments
* date: The input date.
### Returns
A datetime representing the start of the year for the given date value.
### Examples
```kusto
['sample-http-logs']
| project yearStart = startofyear(datetime(2019-04-03))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-04-03%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"yearStart": "2019-01-01T00:00:00Z"
}
```
```kusto
['sample-http-logs']
| project yearStart = startofyear(datetime(2019-10-09 01:00:00.0000000))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-10-09%2001%3A00%3A00.0000000%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result:
```json
{
"yearStart": "2019-01-01T00:00:00Z"
}
```
# Hash functions
Learn how to use and combine various hash functions in APL
## Hash functions
| **Function Name**              | **Description**                                  |
| ------------------------------ | ------------------------------------------------ |
| [hash\_md5()](#hash-md5)       | Returns an MD5 hash value for the input value.   |
| [hash\_sha1()](#hash-sha1)     | Returns a SHA1 hash value for the input value.   |
| [hash\_sha256()](#hash-sha256) | Returns a SHA256 hash value for the input value. |
| [hash\_sha512()](#hash-sha512) | Returns a SHA512 hash value for the input value. |
## hash\_md5()
Returns an MD5 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The MD5 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single Hex number between 0 and 255).
### Examples
```kusto
hash_md5(source)
```
```kusto
['sample-http-logs']
| project md5_hash_value = hash_md5(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20md5_hash_value%20%3D%20hash_md5%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"md5_hash_value": "b980a9c041dbd33d5893fad65d33284b"
}
```
## hash\_sha1()
Returns a SHA1 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA1 hash value of the given scalar, encoded as a hex string.
### Examples
```kusto
hash_sha1(source)
```
```kusto
['sample-http-logs']
| project sha1_hash_value = hash_sha1(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha1_hash_value%20%3D%20hash_sha1%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha1_hash_value": "9f9af029585ba014e07cd3910ca976cf56160616"
}
```
## hash\_sha256()
Returns a SHA256 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA256 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single Hex number between 0 and 255).
### Examples
```kusto
hash_sha256(source)
```
```kusto
['sample-http-logs']
| project sha256_hash_value = hash_sha256(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha256_hash_value%20%3D%20hash_sha256%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha256_hash_value": "bb4770ff4ac5b7d2be41a088cb27d8bcaad53b574b6f27941e8e48e9e10fc25a"
}
```
## hash\_sha512()
Returns a SHA512 hash value for the input value.
### Arguments
* source: The value to be hashed.
### Returns
The SHA512 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single Hex number between 0 and 255).
### Examples
```kusto
hash_sha512(source)
```
```kusto
['sample-http-logs']
| project sha512_hash_value = hash_sha512(status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28status%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha512_hash_value": "0878a61b503dd5a9fe9ea3545d6d3bd41c3b50a47f3594cb8bbab3e47558d68fc8fcc409cd0831e91afc4e609ef9da84e0696c50354ad86b25f2609efef6a834"
}
```
***
```kusto
['sample-http-logs']
| project sha512_hash_value = hash_sha512(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
* Result
```json
{
"sha512_hash_value": "95c6eacdd41170b129c3c287cfe088d4fafea34e371422b94eb78b9653a89d4132af33ef39dd6b3d80e18c33b21ae167ec9e9c2d820860689c647ffb725498c4"
}
```
# IP functions
This section explains how to use IP functions in APL.
The table summarizes the IP functions available in APL.
| Function | Description |
| ------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
| [format\_ipv4](/apl/scalar-functions/ip-functions/format-ipv4) | Parses the input with a netmask and returns a string representing the IPv4 address. |
| [geo\_info\_from\_ip\_address](/apl/scalar-functions/ip-functions/geo-info-from-ip-address) | Extracts geographical, geolocation, and network information from IP addresses. |
| [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4) | Returns a Boolean value indicating whether the specified column contains any of the given IPv4 addresses. |
| [has\_any\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-any-ipv4-prefix) | Returns a Boolean value indicating whether the IPv4 address matches any of the specified prefixes. |
| [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4) | Returns a Boolean value indicating whether the given IPv4 address is valid and found in the source text. |
| [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix) | Returns a Boolean value indicating whether the given IPv4 address starts with a specified prefix. |
| [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare) | Compares two IPv4 addresses. |
| [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range) | Checks if an IPv4 string address is in the given IPv4-prefix notation range. |
| [ipv4\_is\_in\_any\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-any-range) | Returns a Boolean value indicating whether the given IPv4 address is in any specified range. |
| [ipv4\_is\_match](/apl/scalar-functions/ip-functions/ipv4-is-match) | Returns a Boolean value indicating whether the given IPv4 matches the specified pattern. |
| [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private) | Checks if an IPv4 string address belongs to a set of private network IPs. |
| [ipv4\_netmask\_suffix](/apl/scalar-functions/ip-functions/ipv4-netmask-suffix) | Returns the value of the IPv4 netmask suffix from an IPv4 string address. |
| [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4) | Converts the input to a long (signed 64-bit) number representation. |
| [parse\_ipv4\_mask](/apl/scalar-functions/ip-functions/parse-ipv4-mask) | Converts the input string and IP-prefix mask to a long (signed 64-bit) number representation. |
## IP-prefix notation
You can define IP addresses with IP-prefix notation using a slash (`/`) character. The IP address to the left of the slash is the base IP address. The number (1 to 32) to the right of the slash is the number of contiguous bits in the netmask. For example, `192.168.2.0/24` has an associated net/subnetmask containing 24 contiguous bits or `255.255.255.0` in dotted decimal format.
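For example, the following sketch uses this notation with the `ipv4_is_in_range` function listed in the table above to check whether an address falls inside the `192.168.2.0/24` block (the column alias is illustrative):
```kusto
['sample-http-logs']
| extend in_block = ipv4_is_in_range('192.168.2.1', '192.168.2.0/24')
```
Because `192.168.2.1` shares the first 24 bits with the base address, `in_block` is expected to be `true`.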
# format_ipv4
This page explains how to use the format_ipv4 function in APL.
The `format_ipv4` function in APL converts a numeric representation of an IPv4 address into its standard dotted-decimal format. This function is particularly useful when working with logs or datasets where IP addresses are stored as integers, making them hard to interpret directly.
You can use `format_ipv4` to enhance log readability, enrich security logs, or convert raw telemetry data for analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, IPv4 address conversion is typically not a built-in function. You may need to use custom scripts or calculations. APL simplifies this process with the `format_ipv4` function.
```sql Splunk example
No direct equivalent
```
```kusto APL equivalent
datatable(ip: long)
[
3232235776
]
| extend formatted_ip = format_ipv4(ip)
```
ANSI SQL doesn’t have a built-in function for IPv4 formatting. You’d often use string manipulation or external utilities to achieve the same result. In APL, `format_ipv4` offers a straightforward solution.
```sql SQL example
-- No direct equivalent in SQL
```
```kusto APL equivalent
datatable(ip: long)
[
3232235776
]
| extend formatted_ip = format_ipv4(ip)
```
## Usage
### Syntax
```kusto
format_ipv4(ip: long) -> string
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | --------------------------------------------- |
| `ip` | `long` | A numeric IPv4 address in network byte order. |
### Returns
| Return type | Description |
| ----------- | ------------------------------------------ |
| `string` | The IPv4 address in dotted-decimal format. |
## Use case example
When analyzing HTTP request logs, you can convert IP addresses stored as integers into a readable format to identify client locations or troubleshoot issues.
**Query**
```kusto
['sample-http-logs']
| extend formatted_ip = format_ipv4(3232235776)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20formatted_ip%20%3D%20format_ipv4\(3232235776\)%22%7D)
**Output**
| \_time | formatted\_ip | status | uri | method |
| ------------------- | ------------- | ------ | ------------- | ------ |
| 2024-11-14 10:00:00 | 192.168.1.0 | 200 | /api/products | GET |
This query decodes raw IP addresses into a human-readable format for easier analysis.
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# geo_info_from_ip_address
This page explains how to use the geo_info_from_ip_address function in APL.
The `geo_info_from_ip_address` function in APL retrieves geographic information based on an IP address. It maps an IP address to attributes such as city, region, and country, allowing you to perform location-based analytics on your datasets. This function is particularly useful for analyzing web logs, security events, and telemetry data to uncover geographic trends or detect anomalies based on location.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the equivalent process often involves using lookup tables or add-ons to resolve IP addresses into geographic details. In APL, `geo_info_from_ip_address` performs the resolution natively within the query, streamlining the workflow.
```sql Splunk example
| eval geo_info = iplocation(client_ip)
```
```kusto APL equivalent
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address(client_ip)
```
In SQL, geographic information retrieval typically requires a separate database or API integration. In APL, the `geo_info_from_ip_address` function directly provides geographic details, simplifying the query process.
```sql SQL example
SELECT ip_to_location(client_ip) AS geo_info
FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address(client_ip)
```
## Usage
### Syntax
```kusto
geo_info_from_ip_address(ip_address)
```
### Parameters
| Parameter | Type | Description |
| ------------ | ------ | ------------------------------------------------------------ |
| `ip_address` | string | The IP address for which to retrieve geographic information. |
### Returns
A dynamic object containing the IP address’s geographic attributes (if available). The object contains the following fields:
| Name | Type | Description |
| ------------ | ------ | -------------------------------------------- |
| country | string | Country name |
| state | string | State (subdivision) name |
| city | string | City name |
| latitude | real | Latitude coordinate |
| longitude | real | Longitude coordinate |
| country\_iso | string | ISO code of the country |
| time\_zone | string | Time zone in which the IP address is located |
## Use case example
Use geographic data to analyze web log traffic.
**Query**
```kusto
['sample-http-logs']
| extend geo_info = geo_info_from_ip_address('172.217.22.14')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20geo_info%20%3D%20geo_info_from_ip_address\('172.217.22.14'\)%22%7D)
**Output**
```json geo_info
{
"state": "",
"longitude": -97.822,
"latitude": 37.751,
"country_iso": "US",
"country": "United States",
"city": "",
"time_zone": "America/Chicago"
}
```
This query identifies the geographic location of the IP address `172.217.22.14`.
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
## IPv4 Examples
### Extract geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('172.217.11.4')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%27172.217.11.4%27%29%22%7D)
### Project geolocation information from IPv4 address
```kusto
['sample-http-logs']
| project ip_location=geo_info_from_ip_address('20.53.203.50')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%2720.53.203.50%27%29%22%7D)
### Filter geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('20.53.203.50')
| where ip_location.country == "Australia" and ip_location.country_iso == "AU" and ip_location.state == "New South Wales"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22Australia%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22AU%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22New%20South%20Wales%5C%22%22%7D)
### Group geolocation information from IPv4 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('20.53.203.50')
| summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D)
## IPv6 Examples
### Extract geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2607:f8b0:4005:805::200e')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272607%3Af8b0%3A4005%3A805%3A%3A200e%27%29%22%7D)
### Project geolocation information from IPv6 address
```kusto
['sample-http-logs']
| project ip_location=geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%22%7D)
### Filter geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
| where ip_location.country == "United States" and ip_location.country_iso == "US" and ip_location.state == "Florida"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22United%20States%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22US%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22Florida%5C%22%22%7D)
### Group geolocation information from IPv6 address
```kusto
['sample-http-logs']
| extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de')
| summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D)
# has_any_ipv4
This page explains how to use the has_any_ipv4 function in APL.
The `has_any_ipv4` function in Axiom Processing Language (APL) allows you to check whether a specified column contains any IPv4 addresses from a given set of IPv4 addresses or CIDR ranges. This function is useful when analyzing logs, tracing OpenTelemetry data, or investigating security events to quickly filter records based on a predefined list of IP addresses or subnets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, you typically use the `cidrmatch` or similar functions for working with IP ranges. In APL, `has_any_ipv4` offers similar functionality by matching any IPv4 address in a column against multiple values or ranges.
```sql Splunk example
| where cidrmatch("192.168.1.0/24", ip_field)
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4('ip_field', dynamic(['192.168.1.0/24']))
```
SQL does not natively support CIDR matching or IP address comparison out of the box. In APL, the `has_any_ipv4` function is designed to simplify these checks with concise syntax.
```sql SQL example
SELECT * FROM logs WHERE ip_field = '192.168.1.1' OR ip_field = '192.168.1.2';
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4('ip_field', dynamic(['192.168.1.1', '192.168.1.2']))
```
## Usage
### Syntax
```kusto
has_any_ipv4(column, ip_list)
```
### Parameters
| Parameter | Description | Type |
| --------- | ---------------------------------------- | --------- |
| `column` | The column to evaluate. | `string` |
| `ip_list` | A list of IPv4 addresses or CIDR ranges. | `dynamic` |
### Returns
A boolean value indicating whether the specified column contains any of the given IPv4 addresses or matches any of the CIDR ranges in `ip_list`.
## Use case example
When analyzing logs, you can use `has_any_ipv4` to filter requests from specific IPv4 addresses or subnets.
**Query**
```kusto
['sample-http-logs']
| extend has_ip = has_any_ipv4('192.168.1.1', dynamic(['192.168.1.1', '192.168.0.0/16']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip%20%3D%20has_any_ipv4\('192.168.1.1'%2C%20dynamic\(%5B'192.168.1.1'%2C%20'192.168.0.0%2F16'%5D\)\)%22%7D)
**Output**
| \_time | has\_ip | status |
| ------------------- | ------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
This query identifies log entries from specific IPs or subnets.
## List of related functions
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# has_any_ipv4_prefix
This page explains how to use the has_any_ipv4_prefix function in APL.
The `has_any_ipv4_prefix` function in APL lets you determine if an IPv4 address starts with any prefix in a list of specified prefixes. This function is particularly useful for filtering, segmenting, and analyzing data involving IP addresses, such as log data, network traffic, or security events. By efficiently checking prefixes, you can identify IP ranges of interest for purposes like geolocation, access control, or anomaly detection.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, checking if an IP address matches a prefix requires custom search logic with pattern matching or conditional expressions. In APL, `has_any_ipv4_prefix` provides a direct and optimized way to perform this check.
```sql Splunk example
| eval is_in_range=if(match(ip, "10.*") OR match(ip, "192.168.*"), 1, 0)
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4_prefix(uri, dynamic(['10.', '192.168.']))
```
In ANSI SQL, you need to use `LIKE` clauses combined with `OR` operators to check prefixes. In APL, the `has_any_ipv4_prefix` function simplifies this process by accepting a dynamic list of prefixes.
```sql SQL example
SELECT * FROM logs
WHERE ip LIKE '10.%' OR ip LIKE '192.168.%';
```
```kusto APL equivalent
['sample-http-logs']
| where has_any_ipv4_prefix(uri, dynamic(['10.', '192.168.']))
```
## Usage
### Syntax
```kusto
has_any_ipv4_prefix(ip_column, prefixes)
```
### Parameters
| Parameter | Type | Description |
| ----------- | --------- | ----------------------------------------- |
| `ip_column` | `string` | The column containing the IPv4 address. |
| `prefixes` | `dynamic` | A list of IPv4 prefixes to check against. |
### Returns
* `true` if the IPv4 address matches any of the specified prefixes.
* `false` otherwise.
## Use case example
Detect requests from specific IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend has_ip_prefix = has_any_ipv4_prefix('192.168.0.1', dynamic(['172.16.', '192.168.']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip_prefix%20%3D%20has_any_ipv4_prefix\('192.168.0.1'%2C%20dynamic\(%5B'172.16.'%2C%20'192.168.'%5D\)\)%22%7D)
**Output**
| \_time | has\_ip\_prefix | status |
| ------------------- | --------------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# has_ipv4
This page explains how to use the has_ipv4 function in APL.
## Introduction
The `has_ipv4` function in Axiom Processing Language (APL) allows you to check if a specified IPv4 address appears in a given text. The function is useful for tasks such as analyzing logs, monitoring security events, and processing network data where you need to identify or filter entries based on IP addresses.
To use `has_ipv4`, ensure that IP addresses in the text are properly delimited with non-alphanumeric characters. For example:
* **Valid:** `192.168.1.1` in `"Requests from: 192.168.1.1, 10.1.1.115."`
* **Invalid:** `192.168.1.1` in `"192.168.1.1ThisText"`
The function returns `true` if the IP address is valid and present in the text; otherwise, it returns `false`.
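As a quick sketch of the delimiting rule, the following query evaluates both the valid and the invalid case shown above (the column aliases are illustrative):
```kusto
['sample-http-logs']
| extend delimited = has_ipv4('Requests from: 192.168.1.1, 10.1.1.115.', '192.168.1.1'), not_delimited = has_ipv4('192.168.1.1ThisText', '192.168.1.1')
```
Here, `delimited` is expected to be `true` and `not_delimited` to be `false`.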
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use `match` or similar regex-based functions to locate IPv4 addresses in a string. In APL, `has_ipv4` provides a simpler and more efficient alternative for detecting specific IPv4 addresses.
```sql Splunk example
search sourcetype=access_combined | eval isPresent=match(_raw, "192\.168\.1\.1")
```
```kusto APL equivalent
print result=has_ipv4('05:04:54 192.168.1.1 GET /favicon.ico 404', '192.168.1.1')
```
In ANSI SQL, locating IPv4 addresses often involves string manipulation or pattern matching with `LIKE` or regular expressions. APL’s `has_ipv4` function provides a more concise and purpose-built approach.
```sql SQL example
SELECT CASE WHEN column_text LIKE '%192.168.1.1%' THEN TRUE ELSE FALSE END AS result
FROM log_table;
```
```kusto APL equivalent
print result=has_ipv4('05:04:54 192.168.1.1 GET /favicon.ico 404', '192.168.1.1')
```
## Usage
### Syntax
```kusto
has_ipv4(source, ip_address)
```
### Parameters
| Name | Type | Description |
| ------------ | ------ | --------------------------------------------------- |
| `source`     | string | The source text in which to search for the IP address. |
| `ip_address` | string | The IP address to look for in the source. |
### Returns
* `true` if `ip_address` is a valid IP address and is found in `source`.
* `false` otherwise.
## Use case example
Identify requests coming from a specific IP address in HTTP logs.
**Query**
```kusto
['sample-http-logs']
| extend has_ip = has_ipv4('Requests from: 192.168.1.1, 10.1.1.115.', '192.168.1.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_ip%20%3D%20has_ipv4\('Requests%20from%3A%20192.168.1.1%2C%2010.1.1.115.'%2C%20'192.168.1.1'\)%22%7D)
**Output**
| \_time | has\_ip | status |
| ------------------- | ------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
# has_ipv4_prefix
This page explains how to use the has_ipv4_prefix function in APL.
The `has_ipv4_prefix` function checks if an IPv4 address starts with a specified prefix. Use this function to filter or match IPv4 addresses efficiently based on their prefixes. It is particularly useful when analyzing network traffic, identifying specific address ranges, or working with CIDR-based IP filtering in datasets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use string-based matching or CIDR functions for IP comparison. In APL, `has_ipv4_prefix` simplifies the process by directly comparing an IP against a prefix.
```sql Splunk example
| eval is_match = if(cidrmatch("192.168.0.0/24", ip), true, false)
```
```kusto APL equivalent
['sample-http-logs']
| where has_ipv4_prefix(uri, "192.168.0")
```
In ANSI SQL, there is no direct equivalent to `has_ipv4_prefix`. You would typically use substring or LIKE operators for partial matching. APL provides a dedicated function for this purpose, ensuring simplicity and accuracy.
```sql SQL example
SELECT *
FROM sample_http_logs
WHERE ip LIKE '192.168.0%'
```
```kusto APL equivalent
['sample-http-logs']
| where has_ipv4_prefix(uri, "192.168.0")
```
## Usage
### Syntax
```kusto
has_ipv4_prefix(column_name, prefix)
```
### Parameters
| Parameter | Type | Description |
| ------------- | ------ | --------------------------------------------------------------- |
| `column_name` | string | The column containing the IPv4 addresses to evaluate. |
| `prefix` | string | The prefix to check for, expressed as a string (e.g., "192.0"). |
### Returns
* Returns a Boolean (`true` or `false`) indicating whether the IPv4 address starts with the specified prefix.
## Use case example
Use `has_ipv4_prefix` to filter logs for requests originating from a specific IP range.
**Query**
```kusto
['sample-http-logs']
| extend has_prefix = has_ipv4_prefix('192.168.0.1', '192.168.')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20has_prefix%3D%20has_ipv4_prefix\('192.168.0.1'%2C%20'192.168.'\)%22%7D)
**Output**
| \_time | has\_prefix | status |
| ------------------- | ----------- | ------ |
| 2024-11-14T10:00:00 | true | 200 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
# ipv4_compare
This page explains how to use the ipv4_compare function in APL.
The `ipv4_compare` function in APL allows you to compare two IPv4 addresses lexicographically or numerically. This is useful for sorting IP addresses, validating CIDR ranges, or detecting overlaps between IP ranges. It’s particularly helpful in analyzing network logs, performing security investigations, and managing IP-based filters or rules.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality can be achieved using `sort` or custom commands. In APL, `ipv4_compare` is a dedicated function for comparing two IPv4 addresses.
```sql Splunk example
| eval comparison = if(ip1 < ip2, -1, if(ip1 == ip2, 0, 1))
```
```kusto APL equivalent
| extend comparison = ipv4_compare(ip1, ip2)
```
In ANSI SQL, you might manually parse or order IP addresses as strings. In APL, `ipv4_compare` simplifies this task with built-in support for IPv4 comparison.
```sql SQL example
SELECT CASE
WHEN ip1 < ip2 THEN -1
WHEN ip1 = ip2 THEN 0
ELSE 1
END AS comparison
FROM ips;
```
```kusto APL equivalent
['sample-http-logs']
| extend comparison = ipv4_compare(ip1, ip2)
```
## Usage
### Syntax
```kusto
ipv4_compare(ip1: string, ip2: string)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | ----------------------------------- |
| `ip1` | string | The first IPv4 address to compare. |
| `ip2` | string | The second IPv4 address to compare. |
### Returns
* `-1` if `ip1` is less than `ip2`
* `0` if `ip1` is equal to `ip2`
* `1` if `ip1` is greater than `ip2`
## Use case example
You can use `ipv4_compare` to sort logs based on IP addresses or to identify connections between specific IPs.
**Query**
```kusto
['sample-http-logs']
| extend ip1 = '192.168.1.1', ip2 = '192.168.1.10'
| extend comparison = ipv4_compare(ip1, ip2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20ip1%20%3D%20%27192.168.1.1%27%2C%20ip2%20%3D%20%27192.168.1.10%27%20%7C%20extend%20comparison%20%3D%20ipv4_compare\(ip1%2C%20ip2\)%22%7D)
**Output**
| ip1 | ip2 | comparison |
| ----------- | ------------ | ---------- |
| 192.168.1.1 | 192.168.1.10 | -1 |
This query compares two hardcoded IP addresses. It returns `-1`, indicating that `192.168.1.1` is lexicographically less than `192.168.1.10`.
## List of related functions
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_in_any_range
This page explains how to use the ipv4_is_in_any_range function in APL.
The `ipv4_is_in_any_range` function checks whether a given IPv4 address belongs to any range of IPv4 subnets. You can use it to evaluate whether an IP address falls within a set of CIDR blocks or IP ranges, which is useful for filtering, monitoring, or analyzing network traffic in your datasets.
This function is particularly helpful for security monitoring, analyzing log data for specific geolocated traffic, or validating access based on allowed IP ranges.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use `cidrmatch` to check if an IP belongs to a range. In APL, `ipv4_is_in_any_range` is equivalent, but it supports evaluating against multiple ranges simultaneously.
```sql Splunk example
| eval is_in_range = cidrmatch("192.168.0.0/24", ip_address)
```
```kusto APL equivalent
['dataset']
| extend is_in_range = ipv4_is_in_any_range(ip_address, dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
ANSI SQL does not have a built-in function for checking IP ranges. Instead, you use custom functions or comparisons. APL’s `ipv4_is_in_any_range` simplifies this by handling multiple CIDR blocks and ranges in a single function.
```sql SQL example
SELECT *,
CASE WHEN ip_address BETWEEN '192.168.0.0' AND '192.168.0.255' THEN 1 ELSE 0 END AS is_in_range
FROM dataset;
```
```kusto APL equivalent
['dataset']
| extend is_in_range = ipv4_is_in_any_range(ip_address, dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
## Usage
### Syntax
```kusto
ipv4_is_in_any_range(ip_address: string, ranges: dynamic)
```
### Parameters
| Parameter | Type | Description |
| ------------ | ------- | --------------------------------------------------------------------------- |
| `ip_address` | string | The IPv4 address to evaluate. |
| `ranges` | dynamic | A list of IPv4 ranges or CIDR blocks to check against (in JSON array form). |
### Returns
* `true` if the IP address is in any specified range.
* `false` otherwise.
* `null` if the conversion of a string wasn’t successful.
## Use case example
Identify log entries from specific subnets, such as local office IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend is_in_range = ipv4_is_in_any_range('192.168.0.0', dynamic(['192.168.0.0/24', '10.0.0.0/8']))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20extend%20is_in_range%20%3D%20ipv4_is_in_any_range\('192.168.0.0'%2C%20dynamic\(%5B'192.168.0.0%2F24'%2C%20'10.0.0.0%2F8'%5D\)\)%22%7D)
**Output**
| \_time | id | method | uri | status | is\_in\_range |
| ------------------- | ------- | ------ | ----- | ------ | ------------- |
| 2024-11-14 10:00:00 | user123 | GET | /home | 200 | true |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_in_range
This page explains how to use the ipv4_is_in_range function in APL.
The `ipv4_is_in_range` function in Axiom Processing Language (APL) determines whether an IPv4 address falls within a specified range of addresses. This function is particularly useful for filtering or grouping logs based on geographic regions, network blocks, or security zones.
You can use this function to:
* Analyze logs for requests originating from specific IP address ranges.
* Detect unauthorized or suspicious activity by isolating traffic outside trusted IP ranges.
* Aggregate metrics for specific IP blocks or subnets.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `ipv4_is_in_range` function in APL operates similarly to the `cidrmatch` function in Splunk SPL. Both determine whether an IP address belongs to a specified range, but APL uses a different syntax and format.
```sql Splunk example
| eval in_range = cidrmatch("192.168.0.0/24", ip_address)
```
```kusto APL equivalent
['sample-http-logs']
| extend in_range = ipv4_is_in_range(ip_address, '192.168.0.0/24')
```
ANSI SQL doesn’t have a built-in equivalent for determining if an IP address belongs to a CIDR range. In SQL, you would typically need custom functions or expressions to achieve this. APL’s `ipv4_is_in_range` provides a concise way to perform this operation.
```sql SQL example
SELECT CASE
WHEN ip_address BETWEEN '192.168.0.0' AND '192.168.0.255' THEN 1
ELSE 0
END AS in_range
FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend in_range = ipv4_is_in_range(ip_address, '192.168.0.0/24')
```
## Usage
### Syntax
```kusto
ipv4_is_in_range(ip: string, range: string)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | --------------------------------------------------------- |
| `ip` | string | The IPv4 address to evaluate. |
| `range` | string | The IPv4 range in CIDR notation (e.g., `192.168.1.0/24`). |
### Returns
* `true` if the IPv4 address is in the range.
* `false` otherwise.
* `null` if the conversion of a string wasn’t successful.
## Use case example
You can use `ipv4_is_in_range` to identify traffic from specific geographic regions or service provider IP blocks.
**Query**
```kusto
['sample-http-logs']
| extend in_range = ipv4_is_in_range('192.168.1.0', '192.168.1.0/24')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20in_range%20%3D%20ipv4_is_in_range\('192.168.1.0'%2C%20'192.168.1.0%2F24'\)%22%7D)
**Output**
| geo.city | in\_range |
| -------- | --------- |
| Seattle | true |
| Denver | true |
This query checks whether the IP address `192.168.1.0` falls within the `192.168.1.0/24` range.
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_is_match
This page explains how to use the ipv4_is_match function in APL.
The `ipv4_is_match` function in APL helps you determine whether a given IPv4 address matches a specific IPv4 pattern. This function is especially useful for tasks that involve IP address filtering, including network security analyses, log file inspections, and geo-locational data processing. By specifying patterns that include wildcards or CIDR notations, you can efficiently check if an IP address falls within defined ranges or meets specific conditions.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `ipv4_is_match` function in APL resembles the `cidrmatch` function in Splunk SPL. Both functions assess whether an IP address falls within a designated CIDR range, but `ipv4_is_match` also supports wildcard pattern matching, providing additional flexibility.
```sql Splunk example
cidrmatch("192.168.1.0/24", ip)
```
```kusto APL equivalent
ipv4_is_match(ip, "192.168.1.0/24")
```
ANSI SQL lacks a direct equivalent to the `ipv4_is_match` function, but you can replicate similar functionality with a combination of `LIKE` and range checking. However, these approaches can be complex and less efficient than `ipv4_is_match`, which simplifies CIDR and wildcard-based IP matching.
```sql SQL example
ip LIKE '192.168.1.0'
```
```kusto APL equivalent
ipv4_is_match(ip, "192.168.1.0")
```
## Usage
### Syntax
```kusto
ipv4_is_match(ipaddress1, ipaddress2, prefix)
```
### Parameters
* **ipaddress1**: A string representing the first IPv4 address you want to evaluate. Use CIDR notation (for example, `192.168.1.0/24`).
* **ipaddress2**: A string representing the second IPv4 address you want to evaluate. Use CIDR notation (for example, `192.168.1.0/24`).
* **prefix**: Optionally, a number between 0 and 32 that specifies the number of most-significant bits taken into account.
### Returns
* `true` if the IPv4 addresses match.
* `false` otherwise.
* `null` if the conversion of an IPv4 string wasn’t successful.
## Use case example
The `ipv4_is_match` function allows you to identify traffic based on IP addresses, enabling faster identification of traffic patterns and potential issues.
**Query**
```kusto
['sample-http-logs']
| extend is_match = ipv4_is_match('203.0.113.112', '203.0.113.112')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_match%20%3D%20ipv4_is_match\('203.0.113.112'%2C%20'203.0.113.112'\)%22%7D)
**Output**
| \_time | id | status | method | uri | is\_match |
| ------------------- | ------------- | ------ | ------ | ----------- | --------- |
| 2023-11-11T13:20:14 | 203.0.113.45 | 403 | GET | /admin | true |
| 2023-11-11T13:30:32 | 203.0.113.101 | 401 | POST | /restricted | true |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
# ipv4_is_private
This page explains how to use the ipv4_is_private function in APL.
The `ipv4_is_private` function determines if an IPv4 address belongs to a private range, as defined by [RFC 1918](https://www.rfc-editor.org/rfc/rfc1918). You can use this function to filter private addresses in datasets such as server logs, network traffic, and other IP-based data.
This function is especially useful in scenarios where you want to:
* Exclude private IPs from logs to focus on public traffic.
* Identify traffic originating from within an internal network.
* Simplify security analysis by categorizing IP addresses.
The private IPv4 addresses reserved for private networks by the Internet Assigned Numbers Authority (IANA) are the following:
| IP address range | Number of addresses | Largest CIDR block (subnet mask) |
| ----------------------------- | ------------------- | -------------------------------- |
| 10.0.0.0 – 10.255.255.255 | 16777216 | 10.0.0.0/8 (255.0.0.0) |
| 172.16.0.0 – 172.31.255.255 | 1048576 | 172.16.0.0/12 (255.240.0.0) |
| 192.168.0.0 – 192.168.255.255 | 65536 | 192.168.0.0/16 (255.255.0.0) |
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you might use a combination of CIDR matching functions or regex to check for private IPs. In APL, the `ipv4_is_private` function offers a built-in and concise way to achieve the same result.
```sql Splunk example
eval is_private=if(cidrmatch("10.0.0.0/8", ip) OR cidrmatch("172.16.0.0/12", ip) OR cidrmatch("192.168.0.0/16", ip), 1, 0)
```
```kusto APL equivalent
['sample-http-logs']
| extend is_private=ipv4_is_private(client_ip)
```
In ANSI SQL, you might use `CASE` statements with CIDR-based checks or regex patterns to detect private IPs. In APL, the `ipv4_is_private` function simplifies this with a single call.
```sql SQL example
SELECT ip,
CASE
WHEN ip LIKE '10.%' OR ip LIKE '172.16.%' OR ip LIKE '192.168.%' THEN 'true'
ELSE 'false'
END AS is_private
FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend is_private=ipv4_is_private(client_ip)
```
## Usage
### Syntax
```kusto
ipv4_is_private(ip: string)
```
### Parameters
| Parameter | Type | Description |
| --------- | ------ | ------------------------------------------------------ |
| `ip` | string | The IPv4 address to evaluate for private range status. |
### Returns
* `true`: The input IP address is private.
* `false`: The input IP address is not private.
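### Example
A minimal sketch with an assumed literal value:
```kusto
print is_private = ipv4_is_private('10.0.0.5')
```
This returns `true`. The same call with a public address such as `8.8.8.8` returns `false`.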
## Use case example
You can use `ipv4_is_private` to filter logs and focus on public traffic for external analysis.
**Query**
```kusto
['sample-http-logs']
| extend is_private = ipv4_is_private('192.168.0.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20is_private%20%3D%20ipv4_is_private\('192.168.0.1'\)%22%7D)
**Output**
| geo.country | is\_private |
| ----------- | ----------- |
| USA | true |
| UK | true |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# ipv4_netmask_suffix
This page explains how to use the ipv4_netmask_suffix function in APL.
The `ipv4_netmask_suffix` function in APL extracts the netmask suffix from an IPv4 address. The netmask suffix, also known as the subnet prefix length, specifies how many bits are used for the network portion of the address.
This function is useful for network log analysis, security auditing, and infrastructure monitoring. It helps you categorize IP addresses by their subnets, enabling you to detect patterns or anomalies in network traffic or to manage IP allocations effectively.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, netmask suffix extraction typically requires manual parsing or custom scripts. In APL, the `ipv4_netmask_suffix` function simplifies this task by directly extracting the suffix from an IPv4 address in CIDR notation.
```sql Splunk example
eval netmask = replace(ip, "^.*?/", "")
```
```kusto APL equivalent
extend netmask = ipv4_netmask_suffix(ip)
```
In ANSI SQL, extracting the netmask suffix often involves using string functions like `SUBSTRING` or `CHARINDEX`. In APL, the `ipv4_netmask_suffix` function provides a direct and efficient alternative.
```sql SQL example
SELECT SUBSTRING(ip, CHARINDEX('/', ip) + 1, LEN(ip)) AS netmask FROM logs;
```
```kusto APL equivalent
extend netmask = ipv4_netmask_suffix(ip)
```
## Usage
### Syntax
```kusto
ipv4_netmask_suffix(ipv4address)
```
### Parameters
| Parameter | Type | Description |
| ------------- | ------ | ----------------------------------------------------------- |
| `ipv4address` | string | The IPv4 address in CIDR notation (e.g., `192.168.1.1/24`). |
### Returns
* An integer representing the netmask suffix. For example, `24` for `192.168.1.1/24`.
* Returns `null` if the input is not a valid IPv4 address in CIDR notation.
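### Example
A minimal sketch with an assumed literal value in CIDR notation:
```kusto
print suffix = ipv4_netmask_suffix('10.0.0.0/8')
```
This returns `8`, the number of bits in the network prefix.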
## Use case example
When analyzing network traffic logs, you can extract the netmask suffix to group or filter traffic by subnets.
**Query**
```kusto
['sample-http-logs']
| extend netmask = ipv4_netmask_suffix('192.168.1.1/24')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20netmask%20%3D%20ipv4_netmask_suffix\('192.168.1.1%2F24'\)%22%7D)
**Output**
| geo.country | netmask |
| ----------- | ------- |
| USA | 24 |
| UK | 24 |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# parse_ipv4
This page explains how to use the parse_ipv4 function in APL.
The `parse_ipv4` function in APL converts a dotted-decimal IPv4 address into its numeric representation. You can use this function to turn IPv4 addresses into a form that supports efficient comparison, sorting, and range checks. It is especially useful for tasks like analyzing network traffic logs, identifying trends in IP address usage, or performing security-related queries.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, extracting IPv4 components requires using regular expressions or string manipulation. APL simplifies this process with the dedicated `parse_ipv4` function.
```sql Splunk example
| eval octets=split("192.168.1.1", ".") | table octets
```
```kusto APL equivalent
['sample-http-logs']
| extend octets = parse_ipv4('192.168.1.1')
| project octets
```
In ANSI SQL, breaking down an IPv4 address often requires using functions like `SUBSTRING` or `SPLIT`. APL offers the `parse_ipv4` function as a straightforward alternative.
```sql SQL example
SELECT SPLIT_PART(ip, '.', 1) AS octet1, SPLIT_PART(ip, '.', 2) AS octet2 FROM logs
```
```kusto APL equivalent
['sample-http-logs']
| extend octets = parse_ipv4('192.168.1.1')
| project octets
```
## Usage
### Syntax
```kusto
parse_ipv4(ipv4_address)
```
### Parameters
| Parameter | Type | Description |
| -------------- | ------ | ---------------------------------------------- |
| `ipv4_address` | string | The IPv4 address to parse into integer octets. |
### Returns
The function returns a signed 64-bit long number that represents the IPv4 address, or `null` if the conversion isn’t successful.
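### Example
A minimal sketch with an assumed literal address:
```kusto
print ip_as_long = parse_ipv4('192.168.1.1')
```
This returns `3232235777`, the numeric representation of `192.168.1.1`.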
## Use case example
You can use the `parse_ipv4` function to analyze web traffic by converting user IP addresses into their numeric representation.
**Query**
```kusto
['sample-http-logs']
| extend ip_octets = parse_ipv4('192.168.1.1')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20ip_octets%20%3D%20parse_ipv4\('192.168.1.1'\)%22%7D)
**Output**
| \_time | uri | method | ip\_octets |
| ------------------- | ----------- | ------ | ------------- |
| 2024-11-14T10:00:00 | /index.html | GET | 3,232,235,777 |
## List of related functions
* [has\_any\_ipv4](/apl/scalar-functions/ip-functions/has-any-ipv4): Matches any IP address in a string column with a list of IP addresses or ranges.
* [has\_ipv4\_prefix](/apl/scalar-functions/ip-functions/has-ipv4-prefix): Checks if an IPv4 address matches a single prefix.
* [has\_ipv4](/apl/scalar-functions/ip-functions/has-ipv4): Checks if a single IP address is present in a string column.
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [ipv4\_is\_private](/apl/scalar-functions/ip-functions/ipv4-is-private): Checks if an IPv4 address is within private IP ranges.
# parse_ipv4_mask
This page explains how to use the parse_ipv4_mask function in APL.
## Introduction
The `parse_ipv4_mask` function in APL converts an IPv4 address and its associated netmask into a signed 64-bit wide, long number representation in big-endian order. Use this function when you need to process or compare IPv4 addresses efficiently as numerical values, such as for IP range filtering, subnet calculations, or network analysis.
This function is particularly useful in scenarios where you need a compact and precise way to represent IP addresses and their masks for further aggregation or filtering.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use functions like `cidrmatch` for subnet operations. In APL, `parse_ipv4_mask` focuses on converting an IP and mask into a numerical representation for low-level processing.
```sql Splunk example
| eval converted_ip = cidrmatch("192.168.1.0/24", ip)
```
```kusto APL equivalent
print converted_ip = parse_ipv4_mask("192.168.1.0", 24)
```
In ANSI SQL, you typically use custom expressions or stored procedures to perform similar IP address transformations. In APL, `parse_ipv4_mask` offers a built-in, optimized function for this task.
```sql SQL example
SELECT inet_aton('192.168.1.0') & (0xFFFFFFFF << (32 - 24)) AS converted_ip
```
```kusto APL equivalent
print converted_ip = parse_ipv4_mask("192.168.1.0", 24)
```
## Usage
### Syntax
```kusto
parse_ipv4_mask(ip, prefix)
```
### Parameters
| Name | Type | Description |
| -------- | ------ | ------------------------------------------------------------------------- |
| `ip` | string | The IPv4 address to convert to a long number. |
| `prefix` | int | An integer from 0 to 32 representing the number of most-significant bits. |
### Returns
* A signed, 64-bit long number in big-endian order if the conversion is successful.
* `null` if the conversion is unsuccessful.
### Example
```kusto
print parse_ipv4_mask("127.0.0.1", 24)
```
## Use case example
Use `parse_ipv4_mask` to analyze logs and filter entries based on IP ranges.
**Query**
```kusto
['sample-http-logs']
| extend masked_ip = parse_ipv4_mask('192.168.0.1', 24)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20masked_ip%20%3D%20parse_ipv4_mask\('192.168.0.1'%2C%2024\)%22%7D)
**Output**
| \_time | uri | method | masked\_ip |
| ------------------- | ----------- | ------ | ------------- |
| 2024-11-14T10:00:00 | /index.html | GET | 3,232,235,520 |
## List of related functions
* [ipv4\_compare](/apl/scalar-functions/ip-functions/ipv4-compare): Compares two IPv4 addresses lexicographically. Use for sorting or range evaluations.
* [ipv4\_is\_in\_range](/apl/scalar-functions/ip-functions/ipv4-is-in-range): Checks if an IP address is within a specified range.
* [parse\_ipv4](/apl/scalar-functions/ip-functions/parse-ipv4): Converts a dotted-decimal IP address into a numeric representation.
# Mathematical functions
Learn how to use and combine different mathematical functions in APL
## Mathematical functions
| **Function Name** | **Description** |
| ----------------------- | -------------------------------------------------------------------------------------------------------------- |
| [abs()](#abs) | Calculates the absolute value of the input. |
| [acos()](#acos) | Returns the angle whose cosine is the specified number (the inverse operation of cos()). |
| [asin()](#asin) | Returns the angle whose sine is the specified number (the inverse operation of sin()). |
| [atan()](#atan) | Returns the angle whose tangent is the specified number (the inverse operation of tan()). |
| [atan2()](#atan2) | Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x). |
| [cos()](#cos) | Returns the cosine function. |
| [degrees()](#degrees) | Converts angle value in radians into value in degrees, using formula degrees = (180 / PI) \* angle-in-radians. |
| [exp()](#exp) | The base-e exponential function of x, which is e raised to the power x: e^x. |
| [exp2()](#exp2) | The base-2 exponential function of x, which is 2 raised to the power x: 2^x. |
| [gamma()](#gamma) | Computes gamma function. |
| [isinf()](#isinf) | Returns whether input is an infinite (positive or negative) value. |
| [isnan()](#isnan) | Returns whether input is Not-a-Number (NaN) value. |
| [log()](#log) | Returns the natural logarithm function. |
| [log10()](#log10) | Returns the common (base-10) logarithm function. |
| [log2()](#log2) | Returns the base-2 logarithm function. |
| [loggamma()](#loggamma) | Computes log of absolute value of the gamma function. |
| [not()](#not) | Reverses the value of its bool argument. |
| [pi()](#pi) | Returns the constant value of Pi (π). |
| [pow()](#pow) | Returns a result of raising to power. |
| [radians()](#radians) | Converts angle value in degrees into value in radians, using formula radians = (PI / 180) \* angle-in-degrees. |
| [round()](#round) | Returns the rounded source to the specified precision. |
| [sign()](#sign) | Sign of a numeric expression. |
| [sin()](#sin) | Returns the sine function. |
| [sqrt()](#sqrt) | Returns the square root function. |
| [tan()](#tan) | Returns the tangent function. |
| [exp10()](#exp10) | The base-10 exponential function of x, which is 10 raised to the power x: 10^x. |
| [isint()](#isint) | Returns whether input is an integer (positive or negative) value |
| [isfinite()](#isfinite) | Returns whether input is a finite value (is neither infinite nor NaN). |
## abs()
Calculates the absolute value of the input.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | --------------------- | ------------------------ | -------------------------- |
| x | int, real or timespan | Required | The value to make absolute |
### Returns
* Absolute value of x.
### Examples
```kusto
abs(x)
```
```kusto
abs(80.5) == 80.5
```
```kusto
['sample-http-logs']
| project absolute_value = abs(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20absolute_value%20%3D%20abs%28req_duration_ms%29%22%7D)
## acos()
Returns the angle whose cosine is the specified number (the inverse operation of cos()).
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| x        | real     | Required                 | A real number in the range \[-1, 1] |
### Returns
* The value of the arc cosine of x
* `null` if `x` \< -1 or `x` > 1
### Examples
```kusto
acos(x)
```
```kusto
acos(-1) == 3.141592653589793
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_angle%20%3D%20acos%28-1%29%22%7D)
## asin()
Returns the angle whose sine is the specified number (the inverse operation of sin()).
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| x        | real     | Required                 | A real number in the range \[-1, 1] |
### Returns
* The value of the arc sine of x
* null if x \< -1 or x > 1
### Examples
```kusto
asin(x)
```
```kusto
['sample-http-logs']
| project inverse_sin_angle = asin(-1)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_sin_angle%20%3D%20asin%28-1%29%22%7D)
## atan()
Returns the angle whose tangent is the specified number (the inverse operation of tan()).
### Arguments
x: A real number.
### Returns
The value of the arc tangent of x
### Examples
```kusto
atan(x)
```
```kusto
atan(-1) == -0.7853981633974483
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_tan_angle%20%3D%20atan%28-1%29%22%7D)
## atan2()
Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x).
### Arguments
x: X coordinate (a real number).
y: Y coordinate (a real number).
### Returns
The angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x).
### Examples
```kusto
atan2(y,x)
```
```kusto
atan2(-1, 1) == -0.7853981633974483
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20angle_in_rads%20%3D%20atan2%28-1%2C%201%29%22%7D)
## cos()
Returns the cosine function.
### Arguments
x: A real number.
### Returns
The result of cos(x)
### Examples
```kusto
cos(x)
```
```kusto
cos(-1) == 0.5403023058681398
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_function%20%3D%20cos%28-1%29%22%7D)
## degrees()
Converts angle value in radians into value in degrees, using the formula `degrees = (180 / PI) * angle_in_radians`.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------- |
| a | real | Required | Angle in radians. |
### Returns
The corresponding angle in degrees for an angle specified in radians.
### Examples
```kusto
degrees(a)
```
```kusto
degrees(3.14) == 179.9087476710785
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20degree_rads%20%3D%20degrees%283.14%29%22%7D)
## exp()
The base-e exponential function of x, which is e raised to the power x: e^x.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | ----------- | ------------------------ | ---------------------- |
| x | real number | Required | Value of the exponent. |
### Returns
* Exponential value of x.
* For natural (base-e) logarithms, see [log()](/apl/scalar-functions/mathematical-functions#log\(\)).
* For exponential functions of base-2 and base-10 logarithms, see [exp2()](/apl/scalar-functions/mathematical-functions#exp2\(\)), [exp10()](/apl/scalar-functions/mathematical-functions#exp10\(\))
### Examples
```kusto
exp(x)
```
```kusto
exp(1) == 2.718281828459045
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20exponential_value%20%3D%20exp%281%29%22%7D)
## exp2()
The base-2 exponential function of x, which is 2 raised to the power x: 2^x.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | ----------- | ------------------------ | ---------------------- |
| x | real number | Required | Value of the exponent. |
### Returns
* Exponential value of x.
* For base-2 logarithms, see [log2()](/apl/scalar-functions/mathematical-functions#log2\(\)).
* For exponential functions of base-e and base-10 logarithms, see [exp()](/apl/scalar-functions/mathematical-functions#exp\(\)), [exp10()](/apl/scalar-functions/mathematical-functions#exp10\(\))
### Examples
```kusto
exp2(x)
```
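For instance, with an assumed literal value:
```kusto
exp2(6) == 64
```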
```kusto
['sample-http-logs']
| project base_2_exponential_value = exp2(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_2_exponential_value%20%3D%20exp2%28req_duration_ms%29%22%7D)
## gamma()
Computes [gamma function](https://en.wikipedia.org/wiki/Gamma_function)
### Arguments
* x: Parameter for the gamma function
### Returns
* Gamma function of x.
* For computing log-gamma function, see loggamma().
### Examples
```kusto
gamma(x)
```
```kusto
gamma(4) == 6
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20gamma%284%29%22%7D)
## isinf()
Returns whether input is an infinite (positive or negative) value.
### Example
```kusto
isinf(x)
```
```kusto
isinf(45.56) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20infinite_value%20%3D%20isinf%2845.56%29%22%7D)
### Arguments
x: A real number.
### Returns
A non-zero value (true) if x is positive or negative infinity; and zero (false) otherwise.
## isnan()
Returns whether input is Not-a-Number (NaN) value.
### Arguments
x: A real number.
### Returns
A non-zero value (true) if x is NaN; and zero (false) otherwise.
### Examples
```kusto
isnan(x)
```
```kusto
isnan(45.56) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20nan%20%3D%20isnan%2845.56%29%22%7D)
## log()
log() returns the natural logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The natural logarithm is the base-e logarithm: the inverse of the natural exponential function (exp).
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log(x)
```
```kusto
log(1) == 0
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20natural_log%20%3D%20log%281%29%22%7D)
## log10()
log10() returns the common (base-10) logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The common logarithm is the base-10 logarithm: the inverse of the exponential function (exp) with base 10.
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log10(x)
```
```kusto
log10(4) == 0.6020599913279624
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base10%20%3D%20log10%284%29%22%7D)
## log2()
log2() returns the base-2 logarithm function.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------ |
| x | real | Required | A real number > 0. |
### Returns
The logarithm is the base-2 logarithm: the inverse of the exponential function (exp) with base 2.
null if the argument is negative or null or can’t be converted to a real value.
### Examples
```kusto
log2(x)
```
```kusto
log2(6) == 2.584962500721156
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base2_log%20%3D%20log2%286%29%22%7D)
## loggamma()
Computes log of absolute value of the [gamma function](https://en.wikipedia.org/wiki/Gamma_function)
### Arguments
x: Parameter for the gamma function
### Returns
* Returns the natural logarithm of the absolute value of the gamma function of x.
### Examples
```kusto
loggamma(x)
```
```kusto
loggamma(16) == 27.89927138384089
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20loggamma%2816%29%22%7D)
## not()
Reverses the value of its bool argument.
### Examples
```kusto
not(expr)
```
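For instance, with an assumed literal value (mirroring the playground query below):
```kusto
not(false) == true
```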
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20reverse%20%3D%20not%28false%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------- |
| Expr | bool | Required | A `bool` expression to be reversed. |
### Returns
Returns the reversed logical value of its bool argument.
## pi()
Returns the constant value of Pi.
### Returns
* The double value of Pi (3.1415926...)
### Examples
```kusto
pi()
```
```kusto
['sample-http-logs']
| project pie = pi()
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20pie%20%3D%20pi%28%29%22%7D)
## pow()
Returns the result of raising a base to the power of an exponent.
### Examples
```kusto
pow(base, exponent)
```
```kusto
pow(2, 6) == 64
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20power%20%3D%20pow%282%2C%206%29%22%7D)
### Arguments
* *base:* Base value.
* *exponent:* Exponent value.
### Returns
Returns base raised to the power exponent: base ^ exponent.
## radians()
Converts angle value in degrees into value in radians, using formula `radians = (PI / 180 ) * angle_in_degrees`
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------------------------- |
| a | real | Required | Angle in degrees (a real number). |
### Returns
The corresponding angle in radians for an angle specified in degrees.
### Examples
```kusto
radians(a)
```
```kusto
radians(60) == 1.0471975511965976
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20radians%20%3D%20radians%2860%29%22%7D)
## round()
Returns the rounded source to the specified precision.
### Arguments
* source: The source scalar to round.
* Precision: The number of digits to round the source to (default: 0).
### Returns
The rounded source to the specified precision.
### Examples
```kusto
round(source [, Precision])
```
```kusto
round(25.563663) == 26
```
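You can also pass the optional precision, sketched here with assumed values:
```kusto
round(25.563663, 2) == 25.56
```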
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%2825.563663%29%22%7D)
## sign()
Sign of a numeric expression
### Examples
```kusto
sign(x)
```
```kusto
sign(25.563663) == 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20numeric_expression%20%3D%20sign%2825.563663%29%22%7D)
### Arguments
* x: A real number.
### Returns
* The positive (+1), zero (0), or negative (-1) sign of the specified expression.
## sin()
Returns the sine function.
### Examples
```kusto
sin(x)
```
```kusto
sin(25.563663) == 0.41770848373492825
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sine_function%20%3D%20sin%2825.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------- |
| x | real | Required | A real number. |
### Returns
The result of sin(x)
## sqrt()
Returns the square root function.
### Examples
```kusto
sqrt(x)
```
```kusto
sqrt(25.563663) == 5.0560521160288685
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20squareroot%20%3D%20sqrt%2825.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------- |
| x | real | Required | A real number >= 0. |
### Returns
* A positive number such that `sqrt(x) * sqrt(x) == x`
* null if the argument is negative or cannot be converted to a real value.
## tan()
Returns the tangent function.
### Examples
```kusto
tan(x)
```
```kusto
tan(25.563663) == 0.4597371460602336
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20tangent_function%20%3D%20tan%2825.563663%29%22%7D)
### Argument
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------- |
| x | real | Required | A real number. |
### Returns
* The result of `tan(x)`
## exp10()
The base-10 exponential function of x, which is 10 raised to the power x: 10^x.
### Examples
```kusto
exp10(x)
```
```kusto
exp10(25.563663) == 36,615,333,994,520,800,000,000,000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_10_exponential%20%3D%20pow%2810%2C%2025.563663%29%22%7D)
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------- |
| x | real | Required | A real number, value of the exponent. |
### Returns
* Exponential value of x.
* For common (base-10) logarithms, see [log10()](/apl/scalar-functions/mathematical-functions#log10\(\)).
* For exponential functions of base-e and base-2 logarithms, see [exp()](/apl/scalar-functions/mathematical-functions#exp\(\)), [exp2()](/apl/scalar-functions/mathematical-functions#exp2\(\))
## isint()
Returns whether input is an integer (positive or negative) value.
### Arguments
* Expr: The expression to evaluate, which can be a real number.
### Returns
A non-zero value (true) if expression is a positive or negative integer; and zero (false) otherwise.
### Examples
```kusto
isint(expression)
```
```kusto
isint(resp_body_size_bytes) == true
```
```kusto
isint(25.563663) == false
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20integer_value%20%3D%20isint%2825.563663%29%22%7D)
## isfinite()
Returns whether input is a finite value (is neither infinite nor NaN).
### Arguments
* number: A real number.
### Returns
A non-zero value (true) if x is finite; and zero (false) otherwise.
### Examples
```kusto
isfinite(number)
```
```kusto
isfinite(25.563663) == true
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20isfinite_value%20%3D%20isfinite%2825.563663%29%22%7D)
# Pair functions
Learn how to use and combine different pair functions in APL
## Pair functions
| **Function Name** | **Description** |
| ---------------------------- | ------------------------------------ |
| [pair()](#pair) | Creates a pair from a key and value. |
| [parse\_pair()](#parse-pair) | Parses a string to form a pair. |
Each argument is denoted as `required` or `optional`:
* If an argument is denoted as `required`, you must pass it for the function to work.
* If an argument is denoted as `optional`, the function works without passing the argument value.
## pair()
Creates a pair from a key and value.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | ----------------------------------------------- |
| Key | string | Required | String for the key in the pair |
| Value | string | Required | String for the value in the pair |
| Separator | string | Optional (Default: ":") | Separator between the key and value in the pair |
### Returns
Returns a pair with the key **Key** and the value **Value**, joined by the separator **Separator**.
### Examples
```kusto
pair("key", "value", ".")
```
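If you omit the separator, the default `:` is used. The following assumed example produces `host:mymachine`:
```kusto
pair("host", "mymachine")
```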
```kusto
['logs']
| where tags contains pair("host", "mymachine")
```
## parse\_pair()
Parses a string to form a pair.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | ----------------------------------------------- |
| Pair | string | Required | String that has a pair of key value to pull out |
| Separator | string | Optional (Default: ":") | Separator between the key and value in the pair |
### Returns
Returns a pair with the key and value extracted from **Pair** using the separator **Separator**. If no separator is found, returns a pair with the value of **Pair** and an empty key.
### Examples
```kusto
parse_pair("key.value", ".")
```
```kusto
['logs']
| where parse_pair(tags[0]).key == "host"
```
# Rounding functions
Learn how to use and combine different rounding functions in APL
## Rounding functions
| **Function Name** | **Description** |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------- |
| [ceiling()](#ceiling) | Calculates the smallest integer greater than, or equal to, the specified numeric expression. |
| [bin()](#bin) | Rounds values down to an integer multiple of a given bin size. |
| [bin\_auto()](#bin-auto) | Rounds values down to a fixed-size "bin", with control over the bin size and starting point provided by a query property. |
| [floor()](#floor) | Calculates the largest integer less than, or equal to, the specified numeric expression. |
## ceiling()
Calculates the smallest integer greater than, or equal to, the specified numeric expression.
### Arguments
* x: A real number.
### Returns
* The smallest integer greater than, or equal to, the specified numeric expression.
### Examples
```kusto
ceiling(x)
```
```kusto
ceiling(25.43) == 26
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20smallest_integer%20%3D%20ceiling%2825.43%29%22%7D)
## bin()
Rounds values down to an integer multiple of a given bin size.
The `bin()` function is often used with the [summarize operator](/apl/tabular-operators/summarize-operator). If your values are scattered, it groups them into a smaller set of discrete bins, as shown in the time-bucket sketch below.
### Arguments
* value: A date, number, or [timespan](/apl/data-types/scalar-data-types#timespan-literals)
* roundTo: The "bin size", a number or timespan that divides value.
### Returns
The nearest multiple of roundTo below value.
### Examples
```kusto
bin(value,roundTo)
```
```kusto
bin(25.73, 4) == 24
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20round_value%20%3D%20bin%2825.73%2C%204%29%22%7D)
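Because `bin()` is typically paired with `summarize`, a common pattern (sketched here with an assumed one-hour bin over the `_time` field) groups events into fixed time buckets:
```kusto
['sample-http-logs']
| summarize count() by bin(_time, 1h)
```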
## bin\_auto()
Rounds values down to a fixed-size "bin". You can only use the `bin_auto()` function with the [summarize operator](/apl/tabular-operators/summarize-operator) in a `by` statement with the `_time` column.
### Arguments
* Expression: A scalar expression of a numeric type indicating the value to round.
### Returns
The nearest multiple of `query_bin_auto_size` below Expression, shifted so that `query_bin_auto_at` is translated into itself.
### Example
```kusto
summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%7D)
## floor()
Calculates the largest integer less than, or equal to, the specified numeric expression.
### Arguments
* number: A real number.
### Returns
* The largest integer less than, or equal to, the specified numeric expression.
### Examples
```kusto
floor(number)
```
```kusto
floor(25.73) == 25
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20largest_integer_number%20%3D%20floor%2825.73%29%22%7D)
# SQL functions
Learn how to use SQL functions in APL
## SQL functions
| **Function Name** | **Description** |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| [parse\_sql()](#parse-sql) | Interprets and analyzes SQL queries, making it easier to extract and understand SQL statements within datasets. |
| [format\_sql()](#format-sql) | Converts the data model produced by `parse_sql()` back into a SQL statement for validation or formatting purposes. |
## parse\_sql()
Analyzes an SQL statement and constructs a data model, enabling insights into the SQL content within a dataset.
### Limitations
* The function mainly supports simple SQL queries. SQL statements like stored procedures, window functions, common table expressions (CTEs), recursive queries, advanced statistical functions, and special joins aren’t supported.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------------- | -------- | ------------------------ | ----------------------------- |
| sql\_statement | string | Required | The SQL statement to analyze. |
### Returns
A dictionary representing the structured data model of the provided SQL statement. This model includes maps or slices that detail the various components of the SQL statement, such as tables, fields, conditions, etc.
### Examples
### Basic data retrieval
The SQL statement **`SELECT * FROM db`** retrieves all columns and rows from the table named **`db`**.
```kusto
hn
| project parse_sql("select * from db")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\('select%20*%20from%20db'\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### WHERE Clause
This example parses a **`SELECT`** statement with a **`WHERE`** clause, filtering **`customers`** by **`subscription_status`**.
```kusto
hn
| project parse_sql("SELECT id, email FROM customers WHERE subscription_status = 'active'")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20id%2C%20email%20FROM%20customers%20WHERE%20subscription_status%20%3D%20'active'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### JOIN operation
This example shows parsing an SQL statement that performs a **`JOIN`** operation between **`orders`** and **`customers`** tables to match orders with customer names.
```kusto
hn
| project parse_sql("SELECT orders.id, customers.name FROM orders JOIN customers ON orders.customer_id = customers.id")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20orders.id%2C%20customers.name%20FROM%20orders%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### GROUP BY Clause
In this example, the **`parse_sql()`** function is used to parse an SQL statement that aggregates order counts by **`product_id`** using the **`GROUP BY`** clause.
```kusto
hn
| project parse_sql("SELECT product_id, COUNT(*) as order_count FROM orders GROUP BY product_id")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20order_count%20FROM%20orders%20GROUP%20BY%20product_id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Nested Queries
This example demonstrates parsing a nested SQL query, where the inner query selects **`user_id`** from **`orders`** based on **`purchase_date`**, and the outer query selects names from **`users`** based on those IDs.
```kusto
hn
| project parse_sql("SELECT name FROM users WHERE id IN (SELECT user_id FROM orders WHERE purchase_date > '2022-01-01')")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%20FROM%20users%20WHERE%20id%20IN%20\(SELECT%20user_id%20FROM%20orders%20WHERE%20purchase_date%20%3E%20'2022-01-01'\)%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### ORDER BY Clause
Here, the example shows how to parse an SQL statement that orders **`users`** by **`registration_date`** in descending order.
```kusto
hn
| project parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Sorting users by registration date
This example demonstrates parsing an SQL statement that retrieves the **`name`** and **`registration_date`** of users from the **`users`** table, and orders the results by **`registration_date`** in descending order, showing how to sort data based on a specific column.
```kusto
hn | extend parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Querying with index hints to use a specific index
This query hints at MySQL to use a specific index named **`index_name`** when executing the SELECT statement on the **`users`** table.
```kusto
hn
| project parse_sql("SELECT * FROM users USE INDEX (index_name) WHERE user_id = 101")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20*%20FROM%20users%20USE%20INDEX%20\(index_name\)%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Inserting data with ON DUPLICATE KEY UPDATE
This example showcases MySQL’s ability to handle duplicate key entries elegantly by updating the existing record if the insert operation encounters a duplicate key.
```kusto
hn
| project parse_sql("INSERT INTO settings (user_id, setting, value) VALUES (1, 'theme', 'dark') ON DUPLICATE KEY UPDATE value='dark'")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22INSERT%20INTO%20settings%20\(user_id%2C%20setting%2C%20value\)%20VALUES%20\(1%2C%20'theme'%2C%20'dark'\)%20ON%20DUPLICATE%20KEY%20UPDATE%20value%3D'dark'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Using JSON functions
This query demonstrates MySQL’s support for JSON data types and functions, extracting the age from a JSON object stored in the **`user_info`** column.
```kusto
hn
| project parse_sql("SELECT JSON_EXTRACT(user_info, '$.age') AS age FROM users WHERE user_id = 101")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20JSON_EXTRACT\(user_info%2C%20%27%24.age%27\)%20AS%20age%20FROM%20users%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
## format\_sql()
Transforms the data model output by `parse_sql()` back into a SQL statement. Useful for testing and ensuring that the parsing accurately retains the original structure and intent of the SQL statement.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ------------------ | ---------- | ------------------------ | -------------------------------------------------- |
| parsed\_sql\_model | dictionary | Required | The structured data model output by `parse_sql()`. |
### Returns
A string that represents the SQL statement reconstructed from the provided data model.
### Examples
### Reformatting a basic SELECT Query
After parsing a SQL statement, you can reformat it back to its original or a standard SQL format.
```kusto
hn
| extend parsed = parse_sql("SELECT * FROM db")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20*%20FROM%20db%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Formatting SQL Queries
This example first parses a SQL statement to analyze its structure and then formats the parsed structure back into a SQL string using `format_sql`.
```kusto
hn
| extend parsed = parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC")
| project format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Formatting a simple SELECT Statement
This example demonstrates parsing a straightforward `SELECT` statement that retrieves user IDs and usernames from an `user_accounts` table where the `active` status is `1`. After parsing, it uses `format_sql` to convert the parsed data back into a SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT user_id, username FROM user_accounts WHERE active = 1")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20user_id%2C%20username%20FROM%20user_accounts%20WHERE%20active%20%3D%201%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Reformatting a complex query with JOINS
In this example, a more complex SQL statement involving an `INNER JOIN` between `orders` and `customers` tables is parsed. The query selects orders and customer names for orders placed after January 1, 2023. `format_sql` is then used to reformat the parsed structure into a SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT orders.order_id, customers.name FROM orders INNER JOIN customers ON orders.customer_id = customers.id WHERE orders.order_date > '2023-01-01'")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20orders.order_id%2C%20customers.name%20FROM%20orders%20INNER%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%20WHERE%20orders.order_date%20%3E%20'2023-01-01'%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
### Using format\_sql with aggregation functions
This example focuses on parsing an SQL statement that performs aggregation. It selects product IDs and counts of total sales from a `sales` table, grouping by `product_id` and having a condition on the count. After parsing, `format_sql` reformats the output into an SQL string.
```kusto
hn
| extend parsed = parse_sql("SELECT product_id, COUNT(*) as total_sales FROM sales GROUP BY product_id HAVING COUNT(*) > 100")
| project formatted_sql = format_sql(parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20total_sales%20FROM%20sales%20GROUP%20BY%20product_id%20HAVING%20COUNT\(*\)%20%3E%20100%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
# String functions
Learn how to use and combine different string functions in APL
## String functions
| **Function Name** | **Description** |
| ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
| [base64\_encode\_tostring()](#base64-encode-tostring) | Encodes a string as base64 string. |
| [base64\_decode\_tostring()](#base64-decode-tostring) | Decodes a base64 string to a UTF-8 string. |
| [countof()](#countof) | Counts occurrences of a substring in a string. |
| [countof\_regex()](#countof-regex) | Counts occurrences of a regular expression match in a string. Regex matches don’t overlap. |
| [coalesce()](#coalesce) | Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression. |
| [extract()](#extract) | Get a match for a regular expression from a text string. |
| [extract\_all()](#extract-all) | Get all matches for a regular expression from a text string. |
| [format\_bytes()](#format-bytes) | Formats a number of bytes as a string including bytes units |
| [format\_url()](#format-url) | Formats an input string into a valid URL by adding the necessary protocol if it’s missing and escaping illegal URL characters. |
| [indexof()](#indexof) | Function reports the zero-based index of the first occurrence of a specified string within input string. |
| [isempty()](#isempty) | Returns true if the argument is an empty string or is null. |
| [isnotempty()](#isnotempty) | Returns true if the argument isn’t an empty string or a null. |
| [isnotnull()](#isnotnull) | Returns true if the argument is not null. |
| [isnull()](#isnull) | Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value. |
| [parse\_bytes()](#parse-bytes) | Parses a string including byte size units and returns the number of bytes |
| [parse\_json()](#parse-json) | Interprets a string as a JSON value and returns the value as dynamic. |
| [parse\_url()](#parse-url) | Parses an absolute URL string and returns a dynamic object that contains all parts of the URL. |
| [parse\_urlquery()](#parse-urlquery) | Parses a URL query string and returns a dynamic object that contains the query parameters. |
| [replace()](#replace) | Replace all regex matches with another string. |
| [replace\_regex()](#replace-regex) | Replaces all regex matches with another string. |
| [replace\_string()](#replace-string) | Replaces all string matches with another string. |
| [reverse()](#reverse) | Reverses the input string. |
| [split()](#split) | Splits a given string according to a given delimiter and returns a string array with the contained substrings. |
| [strcat()](#strcat) | Concatenates between 1 and 64 arguments. |
| [strcat\_delim()](#strcat-delim) | Concatenates between 2 and 64 arguments, with delimiter, provided as first argument. |
| [strcmp()](#strcmp) | Compares two strings. |
| [strlen()](#strlen) | Returns the length, in characters, of the input string. |
| [strrep()](#strrep) | Repeats given string provided number of times (default = 1). |
| [substring()](#substring) | Extracts a substring from a source string starting from some index to the end of the string. |
| [toupper()](#toupper) | Converts a string to upper case. |
| [tolower()](#tolower) | Converts a string to lower case. |
| [trim()](#trim) | Removes all leading and trailing matches of the specified cutset. |
| [trim\_regex()](#trim-regex) | Removes all leading and trailing matches of the specified regular expression. |
| [trim\_end()](#trim-end) | Removes trailing match of the specified cutset. |
| [trim\_end\_regex()](#trim-end-regex) | Removes trailing match of the specified regular expression. |
| [trim\_start()](#trim-start) | Removes leading match of the specified cutset. |
| [trim\_start\_regex()](#trim-start-regex) | Removes leading match of the specified regular expression. |
| [url\_decode()](#url-decode) | The function converts an encoded URL into a regular URL representation. |
| [url\_encode()](#url-encode) | The function converts characters of the input URL into a format that can be transmitted over the Internet. |
| [gettype()](#gettype) | Returns the runtime type of its single argument. |
| [parse\_csv()](#parse-csv) | Splits a given string representing a single record of comma-separated values and returns a string array with these values. |
Each argument is denoted as `required` or `optional`:
* If an argument is denoted as `required`, you must pass it for the function to work.
* If an argument is denoted as `optional`, the function works without passing the argument value.
## base64\_encode\_tostring()
Encodes a string as base64 string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------------ |
| String | string | Required | Input string or string field to be encoded as base64 string. |
### Returns
Returns the string encoded as base64 string.
* To decode base64 strings to UTF-8 strings, see [base64\_decode\_tostring()](#base64-decode-tostring)
### Examples
```kusto
base64_encode_tostring(string)
```
```kusto
['sample-http-logs']
| project encoded_base64_string = base64_encode_tostring(content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20encoded_base64_string%20%3D%20base64_encode_tostring\(content_type\)%22%7D)
## base64\_decode\_tostring()
Decodes a base64 string to a UTF-8 string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------------------------ |
| String   | string   | Required                 | Input string or string field to be decoded from base64 to a UTF-8 string. |
### Returns
Returns UTF-8 string decoded from base64 string.
* To encode strings to base64 string, see [base64\_encode\_tostring()](#base64-encode-tostring)
### Examples
```kusto
base64_decode_tostring(string)
```
```kusto
['sample-http-logs']
| project decoded_base64_string = base64_decode_tostring("VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20decoded_base64_string%20%3D%20base64_decode_tostring\(%5C%22VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu%5C%22\)%22%7D)
## countof()
Counts occurrences of a substring in a string.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ----------- | ---------- | ---------------------------------------- | ------------------------ |
| text source | **string** | The source string to count occurrences in. | Required |
| search | **string** | The plain string to match inside source. | Required |
### Returns
The number of times that the search string can be matched.
### Examples
```kusto
countof(search, text)
```
```kusto
['sample-http-logs']
| project count = countof("con", "content_type")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof\(%5C%22con%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D)
## countof\_regex()
Counts occurrences of a regular expression in a string. Regex matches don’t overlap.
### Arguments
* text source: A string.
* regex search: regular expression to match inside your text source.
### Returns
The number of times that the regular expression can be matched in the source string. Regex matches don’t overlap.
### Examples
```kusto
countof_regex(regex, text)
```
```kusto
['sample-http-logs']
| project count = countof_regex("c.n", "content_type")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof_regex\(%5C%22c.n%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D)
## coalesce()
Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| --------- | ---------- | ---------------------------------------- | ------------------------ |
| arguments | **scalar** | The expression or field to be evaluated. | Required |
### Returns
The value of the first argument that isn’t null (or non-empty for string expressions).
### Examples
```kusto
['sample-http-logs']
| project coalesced = coalesce(content_type, ['geo.city'], method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20coalesced%20%3D%20coalesce\(content_type%2C%20%5B%27geo.city%27%5D%2C%20method\)%22%7D)
```kusto
['http-logs']
| project req_duration_ms, server_datacenter, predicate = coalesce(content_type, method, status)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20req_duration_ms%2C%20server_datacenter%2C%20predicate%20%3D%20coalesce\(content_type%2C%20method%2C%20status\)%22%7D)
## extract()
Retrieve the first substring matching a regular expression from a source string.
### Arguments
| **name** | **type** | **description** |
| ------------ | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| regex | **expression** | A regular expression. |
| captureGroup | **int** | A positive `int` constant indicating the capture group to extract. 0 stands for the entire match, 1 for the value matched by the first '('parenthesis')' in the regular expression, 2 or more for subsequent parentheses. |
| source | **string** | A string to search |
### Returns
If regex finds a match in source: the substring matched against the indicated capture group captureGroup, optionally converted to typeLiteral.
If there’s no match, or the type conversion fails: `-1` or `string error`
### Examples
```kusto
extract(regex, captureGroup, source)
```
```kusto
['sample-http-logs']
| project extract_sub = extract("^.{2,2}(.{4,4})", 1, content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22%5E.%7B2%2C2%7D\(.%7B4%2C4%7D\)%5C%22%2C%201%2C%20content_type\)%22%7D)
```kusto
extract("x=([0-9.]+)", 1, "axiom x=65.6|po") == "65.6"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22x%3D\(%5B0-9.%5D%2B\)%5C%22%2C%201%2C%20%5C%22axiom%20x%3D65.6%7Cpo%5C%22\)%20%3D%3D%20%5C%2265.6%5C%22%22%7D)
## extract\_all()
Retrieve all substrings matching a regular expression from a source string. Optionally, retrieve only a subset of the matching groups.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| regex | **expression** | A regular expression containing between one and 16 capture groups. Examples of a valid regex: @"(\d+)". Examples of an invalid regex: @"\d+" | Required |
| captureGroups | **array** | A dynamic array constant that indicates the capture group to extract. Valid values are from 1 to the number of capturing groups in the regular expression. | optional |
| source | **string** | A string to search | Required |
### Returns
* If regex finds a match in source: returns a dynamic array including all matches against the indicated capture groups captureGroups (or all capture groups in the regex if captureGroups is omitted).
* If the number of captureGroups is 1: the returned array has a single dimension of matched values.
* If the number of captureGroups is more than 1: the returned array is a two-dimensional collection of multi-value matches per captureGroups selection.
* If there’s no match: `-1`
### Examples
```kusto
extract_all(regex, [captureGroups,] source)
```
```kusto
['sample-http-logs']
| project extract_match = extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_match%20%3D%20extract_all%28%40%5C%22%28%5C%5Cw%29%28%5C%5Cw%2B%29%28%5C%5Cw%29%5C%22%2C%20dynamic%28%5B1%2C3%5D%29%2C%20content_type%29%22%2C%20%22queryOptions%22%3A%20%7B%22quickRange%22%3A%20%2290d%22%7D%7D)
```kusto
extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type) == [["t", "t"],["c","v"]]
```
## format\_bytes()
Formats a number as a string representing data size in bytes.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| --------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| value | **number** | a number to be formatted as data size in bytes | Required |
| precision | **number** | Number of digits the value will be rounded to. (default value is zero) | Optional |
| units | **string** | Units of the target data size the string formatting will use (base 2 suffixes: `Bytes`, `KiB`, `KB`, `MiB`, `MB`, `GiB`, `GB`, `TiB`, `TB`, `PiB`, `EiB`, `ZiB`, `YiB`; base 10 suffixes: `kB` `MB` `GB` `TB` `PB` `EB` `ZB` `YB`). If the parameter is empty the units will be auto-selected based on input value. | Optional |
| base | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type. (default value is 2) | Optional |
### Returns
* A human-readable string of the formatted data size
### Examples
```kusto
format_bytes(value [, precision [, units [, base]]])
```
```kusto
format_bytes(1024) == "1 KB"
format_bytes(8000000, 2, "MB", 10) == "8.00 MB"
format_bytes( 4000, number, "['id']", num_comments ) == "3.9062500000000 KB"
```
```kusto
['github-issues-event']
| project formated_bytes = format_bytes( 4783549035, number, "['id']", num_comments )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20formated_bytes%20%3D%20format_bytes\(4783549035%2C%20number%2C%20%5C%22%5B%27id%27%5D%5C%22%2C%20num_comments\)%22%7D)
## format\_url()
Formats an input into a valid URL and returns a string that is a properly formatted URL.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| -------- | ----------- | ------------------------------------------ | ------------------------ |
| url | **dynamic** | The input you want to format into a URL, provided as a dynamic object of URL parts | Required |
### Returns
* A string that represents a properly formatted URL.
### Examples
```kusto
['sample-http-logs']
| project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom"}))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%7D%29%29%22%7D)
```kusto
['sample-http-logs']
| project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom", "port": 443, "fragment": "axiom","user": "axiom", "password": "apl"}))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%2C%20%5C%22port%5C%22%3A%20443%2C%20%5C%22fragment%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22user%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22password%5C%22%3A%20%5C%22apl%5C%22%7D%29%29%22%7D)
* These are all the supported keys when using the `format_url` function: scheme, host, port, path, fragment, user, password, query.
## indexof()
Reports the zero-based index of the first occurrence of a specified string within the input string.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ------------ | -------------- | ------------------------------------------------------------------------------- | ------------------------ |
| source | **string** | Input string | Required |
| lookup | **string** | String to look up | Required |
| start\_index | **number** | Search start position. | Optional |
| length | **number** | Number of character positions to examine. A value of -1 means unlimited length. | Optional |
| occurrence | **number** | The number of the occurrence. Default 1. | Optional |
### Returns
* Zero-based index position of lookup.
* Returns -1 if the string isn’t found in the input.
### Examples
```kusto
indexof(source, lookup [, start_index [, length [, occurrence]]])
```
```kusto
indexof( body, ['id'], 2, 1, number ) == "-1"
```
```kusto
['github-issues-event']
| project occurrence = indexof( body, ['id'], 23, 5, number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20occurrence%20%3D%20indexof%28%20body%2C%20%5B%27id%27%5D%2C%2023%2C%205%2C%20number%20%29%22%7D)
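The field-based example above uses values from the `github-issues-event` dataset, so its output depends on your data. As a minimal sketch with literal strings (the inputs below are made up for illustration and aren’t dataset fields), the optional arguments behave like this based on the rules above:
```kusto
['github-issues-event']
| project
    first_match = indexof("axiom observability", "observability"),  // 6: zero-based position of the match
    not_found = indexof("axiom observability", "axiom", 1),         // -1: the search starts after the only match
    second_match = indexof("o1-o2-o3", "o", 0, -1, 2)               // 3: index of the second occurrence of "o"
```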
## isempty()
Returns `true` if the argument is an empty string or is null.
### Returns
Indicates whether the argument is an empty string or is null.
### Examples
```kusto
isempty("") == true
```
```kusto
isempty([value])
```
```kusto
['github-issues-event']
| project empty = isempty(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20empty%20%3D%20isempty%28num_comments%29%22%7D)
## isnotempty()
Returns `true` if the argument isn’t an empty string, and it isn’t null.
### Examples
```kusto
isnotempty("") == false
```
```kusto
isnotempty([value])
notempty([value]) // alias of isnotempty
```
```kusto
['github-issues-event']
| project not_empty = isnotempty(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_empty%20%3D%20isnotempty%28num_comments%29%22%7D)
## isnotnull()
Returns `true` if the argument is not null.
### Examples
```kusto
isnotnull( num_comments ) == true
```
```kusto
isnotnull([value])
notnull([value]) // alias of isnotnull
```
```kusto
['github-issues-event']
| project not_null = isnotnull(num_comments)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_null%20%3D%20isnotnull%28num_comments%29%22%7D)
## isnull()
Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value.
### Returns
True or false, depending on whether or not the value is null.
### Examples
```kusto
isnull(Expr)
```
```kusto
['github-issues-event']
| project is_null = isnull(creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20is_null%20%3D%20isnull%28creator%29%22%7D)
## parse\_bytes()
Parses a string that includes byte size units and returns the number of bytes.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| ------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------ |
| bytes\_string | **string** | A formatted string defining the number of bytes | Required |
| base | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type. (default value is 2) | Optional |
### Returns
* The number of bytes or zero if unable to parse
### Examples
```kusto
parse_bytes(bytes_string [, base])
parse_bytes("1 KB") == 1024
parse_bytes("1 KB", 10) == 1000
parse_bytes("128 Bytes") == 128
parse_bytes("bad data") == 0
```
```kusto
['github-issues-event']
| extend parsed_bytes = parse_bytes("300 KB", 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D)
```kusto
['github-issues-event']
| project parsed_bytes = parse_bytes("300 KB", 10)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D)
## parse\_json()
Interprets a string as a JSON value and returns the value as dynamic.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| --------- | -------- | ------------------------ | -------------------------------------------------------------------- |
| Json Expr | string | Required | The expression to be parsed, representing a JSON-formatted value |
### Returns
An object of type json that is determined by the value of json:
* If json is of type string, and is a properly formatted JSON string, then the string is parsed, and the value produced is returned.
* If json is of type string, but it isn’t a properly formatted JSON string, then the returned value is an object of type dynamic that holds the original string value.
### Examples
```kusto
parse_json(json)
```
```kusto
['vercel']
| extend parsed = parse_json('{"name":"vercel", "statuscode":200, "region": { "route": "usage streams", "number": 9 }}')
```
```kusto
['github-issues-event']
| extend parsed = parse_json(creator)
| where isnotnull( parsed)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed%20%3D%20parse_json%28creator%29%5Cn%7C%20where%20isnotnull%28parsed%29%22%7D)
## parse\_url()
Parses an absolute URL `string` and returns a dynamic object that contains the URL parts.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------------------------------- |
| URL | string | Required | A string that represents a URL or the query part of a URL. |
### Returns
An object of type dynamic that includes the URL components: Scheme, Host, Port, Path, Username, Password, Query Parameters, Fragment.
### Examples
```kusto
parse_url(url)
```
```kusto
['sample-http-logs']
| extend ParsedURL = parse_url("https://www.example.com/path/to/page?query=example")
| project
Scheme = ParsedURL["scheme"],
Host = ParsedURL["host"],
Path = ParsedURL["path"],
Query = ParsedURL["query"]
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURL%20%3D%20parse_url%28%5C%22https%3A%2F%2Fwww.example.com%2Fpath%2Fto%2Fpage%3Fquery%3Dexample%5C%22%29%5Cn%7C%20project%20%5Cn%20%20Scheme%20%3D%20ParsedURL%5B%5C%22scheme%5C%22%5D%2C%5Cn%20%20Host%20%3D%20ParsedURL%5B%5C%22host%5C%22%5D%2C%5Cn%20%20Path%20%3D%20ParsedURL%5B%5C%22path%5C%22%5D%2C%5Cn%20%20Query%20%3D%20ParsedURL%5B%5C%22query%5C%22%5D%22%7D)
* Result
```json
{
"Host": "www.example.com",
"Path": "/path/to/page",
"Query": {
"query": "example"
},
"Scheme": "https"
}
```
## parse\_urlquery()
Returns a `dynamic` object that contains the query parameters.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | -------------------------------- |
| Query | string | Required | A string that represents a URL query. |
### Returns
An object of type dynamic that includes the query parameters.
### Examples
```kusto
parse_urlquery("a1=b1&a2=b2&a3=b3")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURLQUERY%20%3D%20parse_urlquery%28%5C%22a1%3Db1%26a2%3Db2%26a3%3Db3%5C%22%29%22%7D)
* Result
```json
{
"Result": {
"a3": "b3",
"a2": "b2",
"a1": "b1"
}
}
```
```kusto
parse_urlquery(query)
```
```kusto
['github-issues-event']
| project parsed = parse_urlquery("https://play.axiom.co/axiom-play-qf1k/explorer?qid=fUKgiQgLjKE-rd7wjy")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed%20%3D%20parse_urlquery%28%5C%22https%3A%2F%2Fplay.axiom.co%2Faxiom-play-qf1k%2Fexplorer%3Fqid%3DfUKgiQgLjKE-rd7wjy%5C%22%29%22%7D)
## replace()
Replaces all regex matches with another string.
### Arguments
* regex: The regular expression to search source. It can contain capture groups in '('parentheses')'.
* rewrite: The replacement for any match made by regex. Use $0 to refer to the whole match, $1 for the first capture group, $2 and so on for subsequent capture groups.
* source: A string.
### Returns
* source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap.
### Examples
```kusto
replace(regex, rewrite, source)
```
```kusto
['sample-http-logs']
| project content_type, Comment = replace("[html]", "[censored]", method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20content_type%2C%20Comment%20%3D%20replace%28%5C%22%5Bhtml%5D%5C%22%2C%20%5C%22%5Bcensored%5D%5C%22%2C%20method%29%22%7D)
## replace\_regex()
Replaces all regex matches with another string.
### Arguments
* regex: The regular expression to search text.
* rewrite: The replacement for any match made by *regex*.
* text: A string.
### Returns
text after replacing all matches of regex with evaluations of rewrite. Matches don’t overlap.
### Examples
```kusto
replace_regex(@'^logging', 'axiom', 'logging-data')
```
* Result
```json
{
"replaced": "axiom-data"
}
```
```kusto
replace_regex(regex, rewrite, text)
```
```kusto
['github-issues-event']
| extend replaced = replace_regex(@'^logging', 'axiom', 'logging-data')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20replaced_regex%20%3D%20replace_regex%28%40'%5Elogging'%2C%20'axiom'%2C%20'logging-data'%29%22%7D)
### Backreferences
Backreferences match the same text as previously matched by a capturing group. With backreferences, you can identify a repeated character or substring within a string.
* Backreferences in APL are implemented using the `$` sign.
#### Examples
```kusto
['github-issues-event']
| project backreferences = replace_regex(@'observability=(.+)', 'axiom=$1', creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20project%20backreferences%20%3D%20replace_regex\(%40'observability%3D\(.%2B\)'%2C%20'axiom%3D%241'%2C%20creator\)%22%7D)
## replace\_string()
Replaces all string matches with another string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------------------------------------------- |
| lookup | string | Required | A string which Axiom matches in `text` and replaces with `rewrite`. |
| rewrite | string | Required | A string with which Axiom replaces parts of `text` that match `lookup`. |
| text | string | Required | A string where Axiom replaces parts matching `lookup` with `rewrite`. |
### Returns
`text` after replacing all matches of `lookup` with evaluations of `rewrite`. Matches don’t overlap.
### Examples
```kusto
replace_string("github", "axiom", "The project is hosted on github")
```
* Result
```json
{
"replaced_string": "axiom"
}
```
```kusto
replace_string(lookup, rewrite, text)
```
```kusto
['sample-http-logs']
| extend replaced_string = replace_string("github", "axiom", "The project is hosted on github")
| project replaced_string
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%27github%27%2C%20%27axiom%27%2C%20%27The%20project%20is%20hosted%20on%20github%27%29%5Cn%7C%20project%20replaced_string%22%7D)
## reverse()
Reverses the order of characters in the input value.
### Arguments
| **name** | **type** | **description** | **Required or Optional** |
| -------- | -------- | ----------------- | ------------------------ |
| Field | `string` | Field input value | Required |
### Returns
The input value with the order of its characters reversed.
### Examples
```kusto
reverse(value)
```
```kusto
project reversed = reverse("axiom")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20reversed_value%20%3D%20reverse%28'axiom'%29%22%7D)
* Result
```json
moixa
```
## split()
Splits a given string according to a given delimiter and returns a string array with the contained substrings.
Optionally, a specific substring can be returned if it exists.
### Arguments
* source: The source string that will be split according to the given delimiter.
* delimiter: The delimiter (Field) that will be used to split the source string.
### Returns
* A string array that contains the substrings of the given source string that are delimited by the given delimiter.
### Examples
```kusto
split(source, delimiter)
```
```kusto
project split_str = split("axiom_observability_monitoring", "_")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22axiom_observability_monitoring%5C%22%2C%20%5C%22_%5C%22%29%22%7D)
* Result
```json
{
"split_str": ["axiom", "observability", "monitoring"]
}
```
## strcat()
Concatenates between 1 and 64 arguments.
If the arguments aren’t of string type, they'll be forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ------------------------------- |
| Expr | string | Required | Expressions to be concatenated. |
### Returns
Arguments, concatenated to a single string.
### Examples
```kusto
strcat(argument1, argument2[, argumentN])
```
```kusto
['github-issues-event']
| project stract_con = strcat( ['milestone.creator'], number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20%5B'milestone.creator'%5D%2C%20number%20%29%22%7D)
```kusto
['github-issues-event']
| project stract_con = strcat( 'axiom', number )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20'axiom'%2C%20number%20%29%22%7D)
* Result
```json
{
"stract_con": "axiom3249"
}
```
## strcat\_delim()
Concatenates between 2 and 64 arguments, with delimiter, provided as first argument.
* If arguments aren’t of string type, they'll be forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ------------ | -------- | ------------------------ | --------------------------------------------------- |
| delimiter | string | Required | string expression, which will be used as separator. |
| argument1 .. | string | Required | Expressions to be concatenated. |
### Returns
Arguments, concatenated to a single string with delimiter.
### Examples
```kusto
strcat_delim(delimiter, argument1, argument2[ , argumentN])
```
```kusto
['github-issues-event']
| project strcat = strcat_delim(":", actor, creator)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28'%3A'%2C%20actor%2C%20creator%29%22%7D)
```kusto
project strcat = strcat_delim(":", "axiom", "monitoring")
```
* Result
```json
{
"strcat": "axiom:monitoring"
}
```
## strcmp()
Compares two strings.
The function starts comparing the first character of each string. If they are equal to each other, it continues with the following pairs until the characters differ or until the end of the shorter string is reached.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ----------------------------------- |
| string1 | string | Required | first input string for comparison. |
| string2 | string | Required | second input string for comparison. |
### Returns
Returns an integral value indicating the relationship between the strings:
* When the result is 0: The contents of both strings are equal.
* When the result is -1: the first character that does not match has a lower value in string1 than in string2.
* When the result is 1: the first character that does not match has a higher value in string1 than in string2.
### Examples
```kusto
strcmp(string1, string2)
```
```kusto
['github-issues-event']
| extend cmp = strcmp( body, repo )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20cmp%20%3D%20strcmp%28%20body%2C%20repo%20%29%22%7D)
```kusto
project cmp = strcmp( "axiom", "observability")
```
* Result
```json
{
"input_string": -1
}
```
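Putting the three possible outcomes side by side, here is a minimal sketch with literal strings; the expected results in the comments follow from the rules above:
```kusto
['github-issues-event']
| project
    equal = strcmp("axiom", "axiom"),            // 0: the strings are identical
    lower = strcmp("axiom", "observability"),    // -1: 'a' sorts before 'o'
    higher = strcmp("observability", "axiom")    // 1: 'o' sorts after 'a'
```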
## strlen()
Returns the length, in characters, of the input string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | ---------------------------------------------------------- |
| source | string | Required | The source string that will be measured for string length. |
### Returns
Returns the length, in characters, of the input string.
### Examples
```kusto
strlen(source)
```
```kusto
project str_len = strlen("axiom")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20str_len%20%3D%20strlen\(%5C%22axiom%5C%22\)%22%7D)
* Result
```json
{
"str_len": 5
}
```
## strrep()
Repeats the given string the provided number of times.
* If the first or third argument isn’t of string type, it’s forcibly converted to string.
### Arguments
| **Name** | **Type** | **Required or Optional** | **Description** |
| ---------- | -------- | ------------------------ | ----------------------------------------------------- |
| value | Expr | Required | Input expression |
| multiplier | integer | Required | positive integer value (from 1 to 1024) |
| delimiter | string | Optional | An optional string expression (default: empty string) |
### Returns
* Value repeated for a specified number of times, concatenated with delimiter.
* If the multiplier is greater than the maximum allowed value (1024), the input string is repeated 1024 times.
### Examples
```kusto
strrep(value,multiplier,[delimiter])
```
```kusto
['github-issues-event']
| extend repeat_string = strrep( repo, 5, "::" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20repeat_string%20%3D%20strrep\(%20repo%2C%205%2C%20%5C%22%3A%3A%5C%22%20\)%22%7D)
```kusto
project repeat_string = strrep( "axiom", 3, "::" )
```
* Result
```json
{
"repeat_string": "axiom::axiom::axiom"
}
```
## substring()
Extracts a substring from a source string starting from some index to the end of the string.
### Arguments
* source: The source string that the substring will be taken from.
* startingIndex: The zero-based starting character position of the requested substring.
* length: An optional parameter specifying the requested number of characters in the substring.
### Returns
A substring from the given string. The substring starts at startingIndex (zero-based) character position and continues to the end of the string or length characters if specified.
### Examples
```kusto
substring(source, startingIndex [, length])
```
```kusto
['github-issues-event']
| extend extract_string = substring( repo, 4, 5 )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20extract_string%20%3D%20substring\(%20repo%2C%204%2C%205%20\)%22%7D)
```kusto
project extract_string = substring( "axiom", 4, 5 )
```
```json
{
"extract_string": "m"
}
```
## toupper()
Converts a string to upper case.
```kusto
toupper("axiom") == "AXIOM"
```
```kusto
['github-issues-event']
| project upper = toupper( body )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20upper%20%3D%20toupper\(%20body%20\)%22%7D)
## tolower()
Converts a string to lower case.
```kusto
tolower("AXIOM") == "axiom"
```
```kusto
['github-issues-event']
| project low = tolower( body )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20low%20%3D%20tolower%28body%29%22%7D)
## trim()
Removes all leading and trailing matches of the specified cutset.
### Arguments
* source: A string.
* cutset: A string containing the characters to be removed.
### Returns
source after trimming matches of the cutset found in the beginning and/or the end of source.
### Examples
```kusto
trim(cutset, source)
```
```kusto
['github-issues-event']
| extend remove_leading_matches = trim( "locked", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_leading_matches%20%3D%20trim\(%5C%22locked%5C%22%2C%20repo\)%22%7D)
```kusto
project remove_leading_matches = trim( "axiom", "observability")
```
* Result
```json
{
"remove_leading_matches": "bservability"
}
```
## trim\_regex()
Removes all leading and trailing matches of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the beginning and/or the end of source.
* source: A string.
### Returns
source after trimming matches of regex found in the beginning and/or the end of source.
### Examples
```kusto
trim_regex(regex, source)
```
```kusto
['github-issues-event']
| extend remove_trailing_match_regex = trim_regex( "^github", action )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_trailing_match_regex%20%3D%20trim_regex\(%5C%22%5Egithub%5C%22%2C%20action\)%22%7D)
* Result
```json
{
"remove_trailing_match_regex": "closed"
}
```
## trim\_end()
Removes trailing match of the specified cutset.
### Arguments
* source: A string.
* cutset: A string containing the characters to be removed.
### Returns
source after trimming matches of the cutset found in the end of source.
### Examples
```kusto
trim_end(cutset, source)
```
```kusto
['github-issues-event']
| extend remove_cutset = trim_end(@"[^\w]+", body)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20remove_cutset%20%3D%20trim_end%28%40%5C%22%5B%5E%5C%5Cw%5D%2B%5C%22%2C%20body%29%22%7D)
* Result
```json
{
"remove_cutset": "In [`9128d50`](https://7aa98788e07\n), **down**:\n- HTTP code: 0\n- Response time: 0 ms\n"
}
```
## trim\_end\_regex()
Removes trailing match of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the end of source.
* source: A string.
### Returns
source after trimming matches of regex found in the end of source.
### Examples
```kusto
trim_end_regex(regex, source)
```
```kusto
['github-issues-event']
| project remove_cutset_regex = trim_end_regex( "^github", creator )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset_regex%20%3D%20trim_end_regex\(%20%5C%22%5Egithub%5C%22%2C%20creator%20\)%22%7D)
* Result
```json
{
"remove_cutset_regex": "axiomhq"
}
```
## trim\_start()
Removes leading match of the specified cutset.
### Arguments
* cutset: A string containing the characters to be removed.
* source: A string.
### Returns
* source after trimming match of the specified cutset found in the beginning of source.
### Examples
```kusto
trim_start(cutset, source)
```
```kusto
['github-issues-event']
| project remove_cutset = trim_start( "github", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start\(%20%5C%22github%5C%22%2C%20repo\)%22%7D)
* Result
```json
{
"remove_cutset": "axiomhq/next-axiom"
}
```
## trim\_start\_regex()
Removes leading match of the specified regular expression.
### Arguments
* regex: String or regular expression to be trimmed from the beginning of source.
* source: A string.
### Returns
source after trimming match of regex found in the beginning of source.
### Examples
```kusto
trim_start_regex(regex, source)
```
```kusto
['github-issues-event']
| project remove_cutset = trim_start_regex( "github", repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex\(%20%5C%22github%5C%22%2C%20repo\)%22%7D)
* Result
```json
{
"remove_cutset": "axiomhq/next-axiom"
}
```
## url\_decode()
The function converts an encoded URL into a regular URL representation.
### Arguments
* `encoded url:` encoded URL (string).
### Returns
URL (string) in a regular representation.
### Examples
```kusto
url_decode(encoded url)
```
```kusto
['github-issues-event']
| project decoded_link = url_decode( "https://www.axiom.co/" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20decoded_link%20%3D%20url_decode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D)
* Result
```json
{
"decoded_link": "https://www.axiom.co/"
}
```
## url\_encode()
The function converts characters of the input URL into a format that can be transmitted over the Internet.
### Arguments
* url: input URL (string).
### Returns
URL (string) converted into a format that can be transmitted over the Internet.
### Examples
```kusto
url_encode(url)
```
```kusto
['github-issues-event']
| project encoded_url = url_encode( "https://www.axiom.co/" )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20encoded_url%20%3D%20url_encode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D)
* Result
```json
{
"encoded_link": "https%3A%2F%2Fwww.axiom.co%2F"
}
```
## gettype()
Returns the runtime type of its single argument.
### Arguments
* Expression: The expression for which to return the runtime type.
### Returns
A string representing the runtime type of its single argument.
### Examples
| **Expression** | **Returns** |
| ----------------------------------------- | -------------- |
| gettype("lima") | **string** |
| gettype(2222) | **int** |
| gettype(5==5) | **bool** |
| gettype(now()) | **datetime** |
| gettype(parse\_json('67')) | **int** |
| gettype(parse\_json(' "polish" ')) | **string** |
| gettype(parse\_json(' \{"axiom":1234} ')) | **dictionary** |
| gettype(parse\_json(' \[6, 7, 8] ')) | **array** |
| gettype(456.98) | **real** |
| gettype(parse\_json('')) | **null** |
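As a rough sketch of how you might inspect types at query time (the field names below come from the `sample-http-logs` dataset used elsewhere on this page, and the resulting labels follow the table above):
```kusto
['sample-http-logs']
| project
    type_of_status = gettype(status),              // a string field
    type_of_duration = gettype(req_duration_ms),   // a numeric field
    type_of_expression = gettype(5 == 5)           // bool
| take 1
```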
## parse\_csv()
Splits a given string representing a single record of comma-separated values and returns a string array with these values.
### Arguments
* csv\_text: A string representing a single record of comma-separated values.
### Returns
A string array that contains the split values.
### Examples
```kusto
parse_csv("axiom,logging,observability") == [ "axiom", "logging", "observability" ]
```
```kusto
parse_csv("axiom, processing, language") == [ "axiom", "processing", "language" ]
```
```kusto
['github-issues-event']
| project parse_csv("github, body, repo")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parse_csv\(%5C%22github%2C%20body%2C%20repo%5C%22\)%22%7D)
# Logical operators
Learn how to use and combine different logical operators in APL.
## Logical (binary) operators
The following logical operators are supported between two values of the `bool` type:
**These logical operators are sometimes referred to as Boolean operators, and sometimes as binary operators. The names are all synonyms.**
| **Operator name** | **Syntax** | **Meaning** |
| ----------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------ |
| Equality | **==** | Yields `true` if both operands are non-null and equal to each other. Otherwise, `false`. |
| Inequality | **!=** | Yields `true` if either one (or both) of the operands are null, or they are not equal to each other. Otherwise, `false`. |
| Logical and | **and** | Yields `true` if both operands are `true`. |
| Logical or | **or** | Yields `true` if one of the operands is `true`, regardless of the other operand. |
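For example, you can combine these operators in a `where` clause. A minimal sketch against the `sample-http-logs` dataset used elsewhere in this documentation:
```kusto
['sample-http-logs']
// 'and' requires both conditions to hold; 'or' requires at least one
| where (status == '404' or status == '500') and req_duration_ms > 100
| count
```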
# Numerical operators
Learn how to use and combine numerical operators in APL.
## Numerical operators
The types `int`, `long`, and `real` represent numerical types. The following operators can be used between pairs of these types:
| **Operator** | **Description** | **Example** |
| ------------ | --------------------------------- | ------------------------------------------------ |
| `+` | Add | `3.19 + 3.19`, `ago(10m) + 10m` |
| `-` | Subtract | `0.26 - 0.23` |
| `*` | Multiply | `1s * 5`, `5 * 5` |
| `/` | Divide | `10m / 1s`, `4 / 2` |
| `%` | Modulo | `10 % 3`, `5 % 2` |
| `<` | Less | `1 < 2`, `1 <= 1` |
| `>` | Greater | `0.23 > 0.22`, `10min > 1sec`, `now() > ago(1d)` |
| `==` | Equals | `3 == 3` |
| `!=` | Not equals | `2 != 1` |
| `<=` | Less or Equal | `5 <= 6` |
| `>=` | Greater or Equal | `7 >= 6` |
| `in` | Equals to one of the elements | `"abc" in ("123", "345", "abc")` |
| `!in` | Not equals to any of the elements | `"bca" !in ("123", "345", "abc")` |
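As an illustrative sketch (assuming the `req_duration_ms` field of the `sample-http-logs` dataset), several of these operators can be combined in a single query:
```kusto
['sample-http-logs']
| extend duration_sec = req_duration_ms / 1000   // divide
| extend is_slow = duration_sec >= 1             // greater or equal
| where req_duration_ms % 2 == 0                 // modulo and equality
| project req_duration_ms, duration_sec, is_slow
```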
# String operators
Learn how to use and combine different query operators for searching string data types.
## String operators
Axiom Processing Language provides different query operators for searching string data types.
Below is the list of string operators supported in APL.
**Note:**
The following abbreviations are used in the table below:
* RHS = right hand side of the expression.
* LHS = left hand side of the expression.
Operators with an `_cs` suffix are case sensitive.
When two operators do the same task, use the case-sensitive one for better performance.
For example:
* instead of `=~`, use `==`
* instead of `in~`, use `in`
* instead of `contains`, use `contains_cs`
The table below shows the list of string operators supported by Axiom processing language:
| **Operator** | **Description** | **Case-Sensitive** | **Example** |
| ------------------- | --------------------------------------- | ------------------ | --------------------------------------- |
| **==** | Equals | Yes | `"aBc" == "aBc"` |
| **!=** | Not equals | Yes | `"abc" != "ABC"` |
| **=\~** | Equals | No | `"abc" =~ "ABC"` |
| **!\~** | Not equals | No | `"aBc" !~ "xyz"` |
| **contains** | RHS occurs as a subsequence of LHS | No | `parentSpanId` contains `Span` |
| **!contains** | RHS doesn’t occur in LHS | No | `parentSpanId` !contains `abc` |
| **contains\_cs** | RHS occurs as a subsequence of LHS | Yes | `parentSpanId` contains\_cs "Id" |
| **!contains\_cs** | RHS doesn’t occur in LHS | Yes | `parentSpanId` !contains\_cs "Id" |
| **startswith** | RHS is an initial subsequence of LHS | No | `parentSpanId` startswith `parent` |
| **!startswith** | RHS isn’t an initial subsequence of LHS | No | `parentSpanId` !startswith "Id" |
| **startswith\_cs** | RHS is an initial subsequence of LHS | Yes | `parentSpanId` startswith\_cs "parent" |
| **!startswith\_cs** | RHS isn’t an initial subsequence of LHS | Yes | `parentSpanId` !startswith\_cs "parent" |
| **endswith** | RHS is a closing subsequence of LHS | No | `parentSpanId` endswith "Id" |
| **!endswith** | RHS isn’t a closing subsequence of LHS | No | `parentSpanId` !endswith `Span` |
| **endswith\_cs** | RHS is a closing subsequence of LHS | Yes | `parentSpanId` endswith\_cs `Id` |
| **!endswith\_cs** | RHS isn’t a closing subsequence of LHS | Yes | `parentSpanId` !endswith\_cs `Span` |
| **in** | Equals to one of the elements | Yes | `abc` in ("123", "345", "abc") |
| **!in** | Not equals to any of the elements | Yes | "bca" !in ("123", "345", "abc") |
| **in\~** | Equals to one of the elements | No | "abc" in\~ ("123", "345", "ABC") |
| **!in\~** | Not equals to any of the elements | No | "bca" !in\~ ("123", "345", "ABC") |
| **!matches regex** | LHS doesn’t contain a match for RHS | Yes | `parentSpanId` !matches regex `g.*r` |
| **matches regex** | LHS contains a match for RHS | Yes | `parentSpanId` matches regex `g.*r` |
| **has** | RHS is a whole term in LHS | No | `Content Type` has `text` |
| **has\_cs** | RHS is a whole term in LHS | Yes | `Content Type` has\_cs `Text` |
## Use string operators efficiently
String operators are fundamental in comparing, searching, or matching strings. Understanding the performance implications of different operators can significantly optimize your queries. Below are performance tips and query examples.
## Equality and Inequality Operators
* Operators: `==`, `!=`, `=~`, `!~`, `in`, `!in`, `in~`, `!in~`
Query Examples:
```kusto
"get" == "get"
"get" != "GET"
"get" =~ "GET"
"get" !~ "put"
"get" in ("get", "put", "delete")
```
* Use `==` or `!=` for exact match comparisons when case sensitivity is important, as they are faster.
* Use `=~` or `!~` for case-insensitive comparisons, or when the exact case is unknown.
* Use `in` or `!in` for checking membership within a set of values, which can be efficient for a small set of values.
## Subsequence Matching Operators
* Operators: `contains`, `!contains`, `contains_cs`, `!contains_cs`, `startswith`, `!startswith`, `startswith_cs`, `!startswith_cs`, `endswith`, `!endswith`, `endswith_cs`, `!endswith_cs`.
Query Examples:
```kusto
"parentSpanId" contains "Span" // True
"parentSpanId" !contains "xyz" // True
"parentSpanId" startswith "parent" // True
"parentSpanId" endswith "Id" // True
"parentSpanId" contains_cs "Span" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID"
"parentSpanId" startswith_cs "parent" // True if parentSpanId is "parentSpanId", False if parentSpanId is "ParentSpanId" or "PARENTSPANID"
"parentSpanId" endswith_cs "Id" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID"
```
* Use case-sensitive operators (`contains_cs`, `startswith_cs`, `endswith_cs`) when the case is known, as they are faster.
## Regular Expression Matching Operators
* Operators: `matches regex`, `!matches regex`
```kusto
"parentSpanId" matches regex "p.*Id" // True
"parentSpanId" !matches regex "x.*z" // True
```
* Avoid complex regular expressions or use string operators for simple substring, prefix, or suffix matching.
## Term Matching Operators
* Operators: `has`, `has_cs`
Query Examples:
```kusto
"content type" has "type" // True
"content type" has_cs "Type" // False
```
* Use `has` or `has_cs` for term matching which can be more efficient than regular expression matching for simple term searches.
* Use `has_cs` when the case is known, as it is faster due to case-sensitive matching.
## Best Practices
* Always use case-sensitive operators when the case is known, as they are faster.
* Avoid complex regular expressions for simple matching tasks; use simpler string operators instead.
* When matching against a set of values, ensure the set is as small as possible to improve performance.
* For substring matching, prefer prefix or suffix matching over general substring matching for better performance, as shown in the sketch below.
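For instance, the sketch below expresses the same intent two ways against the `sample-http-logs` dataset; the commented-out regular expression line is the costlier alternative to the case-sensitive prefix operator:
```kusto
['sample-http-logs']
| where content_type startswith_cs 'text/'       // cheap, case-sensitive prefix match
// | where content_type matches regex '^text/'   // same intent, but regex matching costs more
| count
```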
## has operator
The `has` operator in APL filters rows based on whether a given term or phrase appears within a string field.
## Importance of the `has` operator:
* **Precision Filtering:** Unlike the `contains` operator, which matches any substring, the `has` operator looks for exact terms, ensuring more precise results.
* **Simplicity:** Provides an easy and readable way to find exact terms in a string without resorting to regex or other more complex methods.
The following table compares the `has` operators using the abbreviations provided:
* RHS = right-hand side of the expression
* LHS = left-hand side of the expression
| Operator | Description | Case-Sensitive | Example |
| ------------- | ------------------------------------------------------------- | -------------- | -------------------------------------- |
| has | Right-hand-side (RHS) is a whole term in left-hand-side (LHS) | No | "North America" has "america" |
| has\_cs | RHS is a whole term in LHS | Yes | "North America" has\_cs "America" |
| hassuffix | LHS string ends with the RHS string | No | "documentation.docx" hassuffix ".docx" |
| hasprefix | LHS string starts with the RHS string | No | "Admin\_User" hasprefix "Admin" |
| hassuffix\_cs | LHS string ends with the RHS string | Yes | "Document.HTML" hassuffix\_cs ".HTML" |
| hasprefix\_cs | LHS string starts with the RHS string | Yes | "DOCS\_file" hasprefix\_cs "DOCS" |
## Syntax
```kusto
['Dataset']
| where Field has (Expression)
```
## Parameters
| Name | Type | Required | Description |
| ---------- | ----------------- | -------- | -------------------------------------------------------------------------------------------------------------- |
| Field | string | ✓ | The field by which to filter the events. |
| Expression | scalar or tabular | ✓ | An expression for which to search. The first field is used if the value of the expression has multiple fields. |
## Returns
The `has` operator returns rows from the dataset where the specified term is found in the given field. If the term is present, the row is included in the result set; otherwise, it is filtered out.
## Example
```kusto
['sample-http-logs']
| summarize event_count = count() by content_type
| where content_type has "text"
| where event_count > 10
| project event_count, content_type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20event_count%20%3D%20count%28%29%20by%20content_type%5Cn%7C%20where%20content_type%20has%20%5C%22text%5C%22%5Cn%7C%20where%20event_count%20%3E%2010%5Cn%7C%20project%20event_count%2C%20content_type%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
## Output
| event\_count | content\_type |
| ------------ | ------------------------ |
| 132,765 | text/html |
| 132,621 | text/plain-charset=utf-8 |
| 89,085 | text/csv |
| 88,436 | text/css |
# count
This page explains how to use the count operator function in APL.
The `count` operator in Axiom Processing Language (APL) is a simple yet powerful aggregation function that returns the total number of records in a dataset. You can use it to calculate the number of rows in a table or the results of a query. The `count` operator is useful in scenarios such as log analysis, telemetry data processing, and security monitoring, where you need to know how many events, transactions, or data entries match certain criteria.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk’s SPL, the `stats count` function is used to count the number of events in a dataset. In APL, the equivalent operation is simply `count`. You can use `count` in APL without the need for additional function wrapping.
```splunk Splunk example
index=web_logs
| stats count
```
```kusto APL equivalent
['sample-http-logs']
| count
```
In ANSI SQL, you typically use `COUNT(*)` or `COUNT(field)` to count the number of rows in a table. In APL, the `count` operator achieves the same functionality, but it doesn’t require a field name or `*`.
```sql SQL example
SELECT COUNT(*) FROM web_logs;
```
```kusto APL equivalent
['sample-http-logs']
| count
```
## Usage
### Syntax
```kusto
| count
```
### Parameters
The `count` operator does not take any parameters. It simply returns the number of records in the dataset or query result.
### Returns
`count` returns an integer representing the total number of records in the dataset.
## Use case examples
In this example, you count the total number of HTTP requests in the `['sample-http-logs']` dataset.
**Query**
```kusto
['sample-http-logs']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 15000 |
This query returns the total number of HTTP requests recorded in the logs.
In this example, you count the number of traces in the `['otel-demo-traces']` dataset.
**Query**
```kusto
['otel-demo-traces']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 5000 |
This query returns the total number of OpenTelemetry traces in the dataset.
In this example, you count the number of security events in the `['sample-http-logs']` dataset where the status code indicates an error (status codes 4xx or 5xx).
**Query**
```kusto
['sample-http-logs']
| where status startswith '4' or status startswith '5'
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20startswith%20'4'%20or%20status%20startswith%20'5'%20%7C%20count%22%7D)
**Output**
| count |
| ----- |
| 1200 |
This query returns the number of HTTP requests that resulted in an error (HTTP status code 4xx or 5xx).
## List of related operators
* [summarize](/apl/tabular-operators/summarize-operator): The `summarize` operator is used to aggregate data based on one or more fields, allowing you to calculate sums, averages, and other statistics, including counts. Use `summarize` when you need to group data before counting.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator adds calculated fields to a dataset. You can use `extend` alongside `count` if you want to add additional calculated data to your query results.
* [project](/apl/tabular-operators/project-operator): The `project` operator selects specific fields from a dataset. While `count` returns the total number of records, `project` can limit or change which fields you see.
* [where](/apl/tabular-operators/where-operator): The `where` operator filters rows based on a condition. Use `where` with `count` to only count records that meet certain criteria.
* [take](/apl/tabular-operators/take-operator): The `take` operator returns a specified number of records. You can use `take` to limit results before applying `count` if you're interested in counting a sample of records.
# distinct
This page explains how to use the distinct operator function in APL.
The `distinct` operator in APL (Axiom Processing Language) returns a unique set of values from a specified field or set of fields. This operator is useful when you need to filter out duplicate entries and focus only on distinct values, such as unique user IDs, event types, or error codes within your datasets. Use the `distinct` operator in scenarios where eliminating duplicates helps you gain clearer insights from your data, like when analyzing logs, monitoring system traces, or reviewing security incidents.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk’s SPL, the `dedup` command is often used to retrieve distinct values. In APL, the equivalent is the `distinct` operator, which behaves similarly by returning unique values but without necessarily ordering them.
```splunk Splunk example
index=web_logs
| dedup user_id
```
```kusto APL equivalent
['sample-http-logs']
| distinct id
```
In ANSI SQL, you use `SELECT DISTINCT` to return unique rows from a table. In APL, the `distinct` operator serves a similar function but is placed after the table reference rather than in the `SELECT` clause.
```sql SQL example
SELECT DISTINCT user_id FROM web_logs;
```
```kusto APL equivalent
['sample-http-logs']
| distinct id
```
## Usage
### Syntax
```kusto
| distinct FieldName1 [, FieldName2, ...]
```
### Parameters
* `FieldName1, FieldName2, ...`: The fields to include in the distinct operation. If you specify multiple fields, the result will include rows where the combination of values across these fields is unique.
### Returns
The `distinct` operator returns a dataset with unique values from the specified fields, removing any duplicate entries.
## Use case examples
In this use case, the `distinct` operator helps identify unique users who made HTTP requests in a system.
**Query**
```kusto
['sample-http-logs']
| distinct id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20id%22%7D)
**Output**
| id |
| --------- |
| user\_123 |
| user\_456 |
| user\_789 |
This query returns a list of unique user IDs that have made HTTP requests, filtering out duplicate user activity.
Here, the `distinct` operator is used to identify all unique services involved in traces.
**Query**
```kusto
['otel-demo-traces']
| distinct ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20distinct%20%5B'service.name'%5D%22%7D)
**Output**
| service.name |
| --------------------- |
| frontend |
| checkoutservice |
| productcatalogservice |
This query returns a distinct list of services involved in traces.
In this example, you use the `distinct` operator to find unique HTTP status codes from security logs.
**Query**
```kusto
['sample-http-logs']
| distinct status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20status%22%7D)
**Output**
| status |
| ------ |
| 200 |
| 404 |
| 500 |
This query provides a distinct list of HTTP status codes that occurred in the logs.
## List of related operators
* [count](/apl/tabular-operators/count-operator): Returns the total number of rows. Use it to count occurrences of data rather than filtering for distinct values.
* [summarize](/apl/tabular-operators/summarize-operator): Aggregates data and performs calculations like sums, averages, or counts while grouping by distinct values, as shown in the sketch after this list.
* [project](/apl/tabular-operators/project-operator): Selects specific fields from the dataset. Use it when you want to control which fields are returned before applying `distinct`.
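For example, if you also want to know how often each distinct value occurs, a minimal sketch using `summarize` with `count()` instead of `distinct`:

```kusto
['sample-http-logs']
// One row per status, plus the number of requests with that status
| summarize count() by status
```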
# extend
This page explains how to use the extend operator in APL.
The `extend` operator in APL allows you to create new calculated fields in your result set based on existing data. You can define expressions or functions to compute new values for each row, making `extend` particularly useful when you need to enrich your data without altering the original dataset. You typically use `extend` when you want to add additional fields to analyze trends, compare metrics, or generate new insights from your data.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `eval` command is used to create new fields or modify existing ones. In APL, you can achieve this using the `extend` operator.
```sql Splunk example
index=myindex
| eval newField = duration * 1000
```
```kusto APL equivalent
['sample-http-logs']
| extend newField = req_duration_ms * 1000
```
In ANSI SQL, you typically use the `SELECT` clause with expressions to create new fields. In APL, `extend` is used instead to define these new computed fields.
```sql SQL example
SELECT id, req_duration_ms, req_duration_ms * 1000 AS newField FROM logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend newField = req_duration_ms * 1000
```
## Usage
### Syntax
```kusto
| extend NewField = Expression
```
### Parameters
* `NewField`: The name of the new field to be created.
* `Expression`: The expression used to compute values for the new field. This can include mathematical operations, string manipulations, or functions.
### Returns
The operator returns a copy of the original dataset with the following changes:
* Fields named in `extend` that already exist in the input are replaced: the original field is removed and a field with the same name, containing the newly calculated values, is appended (see the sketch below).
* Fields named in `extend` that don’t exist in the input are appended with the newly calculated values.
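A minimal sketch of both behaviors, using only fields from the sample dataset on this page: the first `extend` overwrites the existing `status` field, and the second appends a new `is_slow` field.

```kusto
['sample-http-logs']
// Replaces the existing status field with a prefixed string value
| extend status = strcat('HTTP ', status)
// Appends a new boolean field that does not exist in the input
| extend is_slow = req_duration_ms > 1000
```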
## Use case examples
In log analysis, you can use `extend` to compute the duration of each request in seconds from a millisecond value.
**Query**
```kusto
['sample-http-logs']
| extend duration_sec = req_duration_ms / 1000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20duration_sec%20%3D%20req_duration_ms%20%2F%201000%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | duration\_sec |
| ------------------- | ----------------- | ---- | ------ | ----- | ------ | -------- | ----------- | ------------- |
| 2024-10-17 09:00:01 | 300 | 1234 | 200 | /home | GET | London | UK | 0.3 |
This query calculates the duration of HTTP requests in seconds by dividing the `req_duration_ms` field by 1000.
You can use `extend` to create a new field that categorizes the service type based on the service’s name.
**Query**
```kusto
['otel-demo-traces']
| extend service_type = iff(['service.name'] in ('frontend', 'frontendproxy'), 'Web', 'Backend')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20extend%20service_type%20%3D%20iff%28%5B%27service.name%27%5D%20in%20%28%27frontend%27%2C%20%27frontendproxy%27%29%2C%20%27Web%27%2C%20%27Backend%27%29%22%7D)
**Output**
| \_time | span\_id | trace\_id | service.name | kind | status\_code | service\_type |
| ------------------- | -------- | --------- | --------------- | ------ | ------------ | ------------- |
| 2024-10-17 09:00:01 | abc123 | xyz789 | frontend | client | 200 | Web |
| 2024-10-17 09:00:01 | def456 | uvw123 | checkoutservice | server | 500 | Backend |
This query adds a new field `service_type` that categorizes the service into either Web or Backend based on the `service.name` field.
For security logs, you can use `extend` to categorize HTTP statuses as success or failure.
**Query**
```kusto
['sample-http-logs']
| extend status_category = iff(status == '200', 'Success', 'Failure')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status_category%20%3D%20iff%28status%20%3D%3D%20%27200%27%2C%20%27Success%27%2C%20%27Failure%27%29%22%7D)
**Output**
| \_time | id | status | uri | status\_category |
| ------------------- | ---- | ------ | ----- | ---------------- |
| 2024-10-17 09:00:01 | 1234 | 200 | /home | Success |
This query creates a new field `status_category` that labels each HTTP request as either a Success or Failure based on the status code.
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use `project` to select specific fields or rename them. Unlike `extend`, it does not add new fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data, which differs from `extend` that only adds new calculated fields without aggregation.
# extend-valid
This page explains how to use the extend-valid operator in APL.
The `extend-valid` operator in Axiom Processing Language (APL) allows you to extend a set of fields with new calculated values, where these calculations are based on conditions of validity for each row. It’s particularly useful when working with datasets that contain missing or invalid data, as it enables you to calculate and assign values only when certain conditions are met. This operator helps you keep your data clean by applying calculations to valid data points, and leaving invalid or missing values untouched.
This is a shorthand operator that creates a field while also performing basic checks on the validity of the expression. In many cases, additional checks are required. In those cases, use a combination of the [extend](/apl/tabular-operators/extend-operator) and [where](/apl/tabular-operators/where-operator) operators instead, as shown in the sketch below. The basic checks that Axiom performs depend on the type of the expression:
* **Dictionary:** Check if the dictionary is not null and has at least one entry.
* **Array:** Check if the array is not null and has at least one value.
* **String:** Check if the string is not empty and has at least one character.
* **Other types:** The same logic as `tobool` and a check for true.
You can use `extend-valid` to perform conditional transformations on large datasets, especially in scenarios where data quality varies or when dealing with complex log or telemetry data.
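As a sketch of the recommendation above, the following query performs an explicit validity check with `where` and then uses a plain `extend`, instead of relying on the built-in checks of `extend-valid`:

```kusto
['sample-http-logs']
// Keep only rows where method passes an explicit string validity check
| where isnotnull(method) and method != ''
// Then compute the new field unconditionally
| extend upper_method = toupper(method)
```

Unlike `extend-valid`, this approach drops the rows that fail the check instead of keeping them with a null value.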
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, similar functionality is achieved using the `eval` function, but with the `if` command to handle conditional logic for valid or invalid data. In APL, `extend-valid` is more specialized for handling valid data points directly, allowing you to extend fields based on conditions.
```sql Splunk example
| eval new_field = if(isnotnull(field), field + 1, null())
```
```kusto APL equivalent
['sample-http-logs']
| extend-valid new_field = req_duration_ms + 100
```
In ANSI SQL, similar functionality is often achieved using the `CASE WHEN` expression within a `SELECT` statement to handle conditional logic for fields. In APL, `extend-valid` directly extends a field conditionally, based on the validity of the data.
```sql SQL example
SELECT CASE WHEN req_duration_ms IS NOT NULL THEN req_duration_ms + 100 ELSE NULL END AS new_field FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| extend-valid new_field = req_duration_ms + 100
```
## Usage
### Syntax
```kusto
| extend-valid FieldName1 = Expression1, FieldName2 = Expression2, FieldName3 = ...
```
### Parameters
* `FieldName`: The name of the field to create with the computed value.
* `Expression`: The expression to evaluate and apply for valid rows.
### Returns
The operator returns a table where the specified fields contain the values computed by the expression for rows that pass the validity checks. For rows that fail the checks, the new field is null and the rest of the row remains unchanged.
## Use case examples
In this use case, you normalize the HTTP request methods by converting them to uppercase for valid entries.
**Query**
```kusto
['sample-http-logs']
| extend-valid upper_method = toupper(method)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20upper_method%20%3D%20toupper\(method\)%22%7D)
**Output**
| \_time | method | upper\_method |
| ------------------- | ------ | ------------- |
| 2023-10-01 12:00:00 | get | GET |
| 2023-10-01 12:01:00 | POST | POST |
| 2023-10-01 12:02:00 | NULL | NULL |
In this query, the `toupper` function converts the `method` field to uppercase, but only for valid entries. If the `method` field is null, the result remains null.
In this use case, you extract the first part of the service namespace (before the hyphen) from valid namespaces in the OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| extend-valid namespace_prefix = extract('^(.*?)-', 1, ['service.namespace'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend-valid%20namespace_prefix%20%3D%20extract\('%5E\(.*%3F\)-'%2C%201%2C%20%5B'service.namespace'%5D\)%22%7D)
**Output**
| \_time | service.namespace | namespace\_prefix |
| ------------------- | ------------------ | ----------------- |
| 2023-10-01 12:00:00 | opentelemetry-demo | opentelemetry |
| 2023-10-01 12:01:00 | opentelemetry-prod | opentelemetry |
| 2023-10-01 12:02:00 | NULL | NULL |
In this query, the `extract` function pulls the first part of the service namespace. It only applies to valid `service.namespace` values, leaving nulls unchanged.
In this use case, you extract the first letter of the city names from the `geo.city` field for valid log entries.
**Query**
```kusto
['sample-http-logs']
| extend-valid city_first_letter = extract('^([A-Za-z])', 1, ['geo.city'])
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20city_first_letter%20%3D%20extract\('%5E\(%5BA-Za-z%5D\)'%2C%201%2C%20%5B'geo.city'%5D\)%22%7D)
**Output**
| \_time | geo.city | city\_first\_letter |
| ------------------- | -------- | ------------------- |
| 2023-10-01 12:00:00 | New York | N |
| 2023-10-01 12:01:00 | NULL | NULL |
| 2023-10-01 12:02:00 | London | L |
| 2023-10-01 12:03:00 | 1Paris | NULL |
In this query, the `extract` function retrieves the first letter of the city names from the `geo.city` field for valid entries. If the `geo.city` field is null or starts with a non-alphabetical character, no city name is extracted, and the result remains null.
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add calculated fields unconditionally, without validating data.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and rename fields, without performing conditional extensions.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` for aggregation, often used before extending fields with further calculations.
# limit
This page explains how to use the limit operator in APL.
The `limit` operator in Axiom Processing Language (APL) allows you to restrict the number of rows returned from a query. It is particularly useful when you want to see only a subset of results from large datasets, such as when debugging or previewing query outputs. The `limit` operator can help optimize performance and focus analysis by reducing the amount of data processed.
Use the `limit` operator when you want to return only the top rows from a dataset, especially in cases where the full result set is not necessary.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the equivalent to APL’s `limit` is the `head` command, which also returns the top rows of a dataset. The main difference is in the syntax.
```sql Splunk example
| head 10
```
```kusto APL equivalent
['sample-http-logs']
| limit 10
```
In ANSI SQL, the `LIMIT` clause is equivalent to the `limit` operator in APL. The SQL `LIMIT` statement is placed at the end of a query, whereas in APL, the `limit` operator comes after the dataset reference.
```sql SQL example
SELECT * FROM sample_http_logs LIMIT 10;
```
```kusto APL equivalent
['sample-http-logs']
| limit 10
```
## Usage
### Syntax
```kusto
| limit [N]
```
### Parameters
* `N`: The maximum number of rows to return. This must be a non-negative integer.
### Returns
The `limit` operator returns the top **`N`** rows from the input dataset. If fewer than **`N`** rows are available, all rows are returned.
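If you want to make sure the rows you keep are the most recent ones, you can sort explicitly before limiting — a minimal sketch that combines `order` with `limit`:

```kusto
['sample-http-logs']
// Sort newest first, then keep only the first five rows
| order by _time desc
| limit 5
```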
## Use case examples
In log analysis, you often want to view only the most recent entries, and `limit` can help narrow the focus on those rows.
**Query**
```kusto
['sample-http-logs']
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20limit%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | -------------- | ------ | -------- | ----------- |
| 2024-10-17T12:00:00 | 200 | 123 | 200 | /index.html | GET | New York | USA |
| 2024-10-17T11:59:59 | 300 | 124 | 404 | /notfound.html | GET | London | UK |
This query limits the output to the first 5 rows from the `['sample-http-logs']` dataset, returning recent HTTP log entries.
When analyzing OpenTelemetry traces, you may want to focus on the most recent traces.
**Query**
```kusto
['otel-demo-traces']
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20limit%205%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ |
| 2024-10-17T12:00:00 | 500ms | 1abc | 123xyz | frontend | server | OK |
| 2024-10-17T11:59:59 | 200ms | 2def | 124xyz | cartservice | client | OK |
This query retrieves the first 5 rows from the `['otel-demo-traces']` dataset, helping you analyze the latest traces.
For security log analysis, you might want to review the most recent login attempts to ensure no anomalies exist.
**Query**
```kusto
['sample-http-logs']
| where status == '401'
| limit 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'401'%20%7C%20limit%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | ----------- | ------ | -------- | ----------- |
| 2024-10-17T12:00:00 | 300 | 567 | 401 | /login.html | POST | Berlin | Germany |
| 2024-10-17T11:59:59 | 250 | 568 | 401 | /login.html | POST | Sydney | Australia |
This query limits the output to 5 unauthorized access attempts (`401` status code) from the `['sample-http-logs']` dataset.
## List of related operators
* [take](/apl/tabular-operators/take-operator): Returns a specified number of rows from the dataset, functionally similar to `limit`.
* [top](/apl/tabular-operators/top-operator): Retrieves the top **N** rows sorted by a specific field, as shown in the sketch after this list.
* [sample](/apl/tabular-operators/sample-operator): Randomly samples **N** rows from the dataset.
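For comparison, a minimal sketch that uses `top` to return the five slowest requests rather than the first five rows. This assumes the Kusto-style `top N by Field [asc | desc]` syntax described on the `top` operator page:

```kusto
['sample-http-logs']
// Five rows with the highest request duration
| top 5 by req_duration_ms desc
```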
# order
This page explains how to use the order operator in APL.
The `order` operator in Axiom Processing Language (APL) allows you to sort the rows of a result set by one or more specified fields. You can use this operator to organize data for easier interpretation, prioritize specific values, or prepare data for subsequent analysis steps. The `order` operator is particularly useful when working with logs, telemetry data, or any dataset where ranking or sorting by values (such as time, status, or user ID) is necessary.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent operator to `order` is `sort`. SPL uses a similar syntax to APL but with some differences. In SPL, `sort` marks descending fields with a leading minus sign, while in APL you append the `asc` or `desc` keyword after each field name.
```splunk Splunk example
| sort - _time
```
```kusto APL equivalent
['sample-http-logs']
| order by _time desc
```
In ANSI SQL, the equivalent of `order` is `ORDER BY`. SQL uses `ASC` for ascending and `DESC` for descending order. In APL, sorting works similarly: you add the `asc` or `desc` keyword after each field name to specify the order.
```sql SQL example
SELECT * FROM logs ORDER BY _time DESC;
```
```kusto APL equivalent
['sample-http-logs']
| order by _time desc
```
## Usage
### Syntax
```kusto
| order by FieldName [asc | desc], FieldName [asc | desc]
```
### Parameters
* `FieldName`: The name of the field by which to sort.
* `asc`: Sorts the field in ascending order.
* `desc`: Sorts the field in descending order.
### Returns
The `order` operator returns the input dataset, sorted according to the specified fields and order (ascending or descending). If multiple fields are specified, sorting is done based on the first field, then by the second if values in the first field are equal, and so on.
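For example, a minimal sketch of multi-field sorting: rows are ordered by status in ascending order and, where statuses are equal, by request duration in descending order.

```kusto
['sample-http-logs']
// Primary sort on status, secondary sort on request duration
| order by status asc, req_duration_ms desc
```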
## Use case examples
In this example, you sort HTTP logs by request duration in descending order to prioritize the longest requests.
**Query**
```kusto
['sample-http-logs']
| order by req_duration_ms desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20req_duration_ms%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ------ | ------ | -------------------- | ------ | -------- | ----------- |
| 2024-10-17 10:10:01 | 1500 | user12 | 200 | /api/v1/get-orders | GET | Seattle | US |
| 2024-10-17 10:09:47 | 1350 | user23 | 404 | /api/v1/get-products | GET | New York | US |
| 2024-10-17 10:08:21 | 1200 | user45 | 500 | /api/v1/post-order | POST | London | UK |
This query sorts the logs by request duration, helping you identify which requests are taking the most time to complete.
In this example, you sort OpenTelemetry trace data by span duration in descending order, which helps you identify the longest-running spans across your services.
**Query**
```kusto
['otel-demo-traces']
| order by duration desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20order%20by%20duration%20desc%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------------- | ------ | ------------ |
| 2024-10-17 10:10:01 | 15.3s | span4567 | trace123 | frontend | server | 200 |
| 2024-10-17 10:09:47 | 12.4s | span8910 | trace789 | checkoutservice | client | 200 |
| 2024-10-17 10:08:21 | 10.7s | span1112 | trace456 | productcatalogservice | server | 500 |
This query helps you detect performance bottlenecks by sorting spans based on their duration.
In this example, you analyze security logs by sorting them by time to view the most recent logs.
**Query**
```kusto
['sample-http-logs']
| order by _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20_time%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ------ | ------ | ---------------------- | ------ | -------- | ----------- |
| 2024-10-17 10:10:01 | 300 | user34 | 200 | /api/v1/login | POST | Berlin | DE |
| 2024-10-17 10:09:47 | 150 | user78 | 401 | /api/v1/get-profile | GET | Paris | FR |
| 2024-10-17 10:08:21 | 200 | user56 | 500 | /api/v1/update-profile | PUT | Madrid | ES |
This query sorts the security logs by time to display the most recent log entries first, helping you quickly review recent security events.
## List of related operators
* [top](/apl/tabular-operators/top-operator): The `top` operator returns the top N records based on a specific sorting criteria, which is similar to `order` but only retrieves a fixed number of results.
* [summarize](/apl/tabular-operators/summarize-operator): The `summarize` operator groups data and often works in combination with `order` to rank summarized values.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator can be used to create calculated fields, which can then be used as sorting criteria in the `order` operator.
# Tabular operators
This section explains how to use and combine tabular operators in APL.
The table summarizes the tabular operators available in APL.
| Function | Description |
| ------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [count](/apl/tabular-operators/count-operator) | Returns an integer representing the total number of records in the dataset. |
| [distinct](/apl/tabular-operators/distinct-operator) | Returns a dataset with unique values from the specified fields, removing any duplicate entries. |
| [extend](/apl/tabular-operators/extend-operator) | Returns the original dataset with one or more new fields appended, based on the defined expressions. |
| [extend-valid](/apl/tabular-operators/extend-valid-operator) | Returns a table where the specified fields are extended with new values based on the given expression for valid rows. |
| [limit](/apl/tabular-operators/limit-operator) | Returns the top N rows from the input dataset. |
| [order](/apl/tabular-operators/order-operator) | Returns the input dataset, sorted according to the specified fields and order. |
| [parse](/apl/tabular-operators/parse-operator) | Returns the input dataset with new fields added based on the specified parsing pattern. |
| [project](/apl/tabular-operators/project-operator) | Returns a dataset containing only the specified fields. |
| [project-away](/apl/tabular-operators/project-away-operator) | Returns the input dataset excluding the specified fields. |
| [project-keep](/apl/tabular-operators/project-keep-operator) | Returns a dataset with only the specified fields. |
| [project-reorder](/apl/tabular-operators/project-reorder-operator) | Returns a table with the specified fields reordered as requested followed by any unspecified fields in their original order. |
| [sample](/apl/tabular-operators/sample-operator) | Returns a table containing the specified number of rows, selected randomly from the input dataset. |
| [search](/apl/tabular-operators/search-operator) | Returns all rows where the specified keyword appears in any field. |
| [sort](/apl/tabular-operators/sort-operator) | Returns a table with rows ordered based on the specified fields. |
| [summarize](/apl/tabular-operators/summarize-operator) | Returns a table where each row represents a unique combination of values from the by fields, with the aggregated results calculated for the other fields. |
| [take](/apl/tabular-operators/take-operator) | Returns the specified number of rows from the dataset. |
| [top](/apl/tabular-operators/top-operator) | Returns the top N rows from the dataset based on the specified sorting criteria. |
| [union](/apl/tabular-operators/union-operator) | Returns all rows from the specified tables or queries. |
| [where](/apl/tabular-operators/where-operator) | Returns a filtered dataset containing only the rows where the condition evaluates to true. |
# parse
This page explains how to use the parse operator in APL.
The `parse` operator in APL enables you to extract and structure information from unstructured or semi-structured text data, such as log files or strings. You can use the operator to specify a pattern for parsing the data and define the fields to extract. This is useful when analyzing logs, tracing information from text fields, or extracting key-value pairs from message formats.
You can find the `parse` operator helpful when you need to process raw text fields and convert them into a structured format for further analysis. It’s particularly effective when working with data that doesn't conform to a fixed schema, such as log entries or custom messages.
## Importance of the parse operator
* **Data extraction:** It allows you to extract structured data from unstructured or semi-structured string fields, enabling you to transform raw data into a more usable format.
* **Flexibility:** The parse operator supports different parsing modes (simple, relaxed, regex) and provides various options to define parsing patterns, making it adaptable to different data formats and requirements.
* **Performance:** By extracting only the necessary information from string fields, the parse operator helps optimize query performance by reducing the amount of data processed and enabling more efficient filtering and aggregation.
* **Readability:** The parse operator provides a clear and concise way to define parsing patterns, making the query code more readable and maintainable.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk, the `rex` command is often used to extract fields from raw events or text. In APL, the `parse` operator performs a similar function. You define the text pattern to match and extract fields, allowing you to extract structured data from unstructured strings.
```splunk Splunk example
index=web_logs | rex field=_raw "duration=(?<duration>\d+)"
```
```kusto APL equivalent
['sample-http-logs']
| parse uri with * "duration=" req_duration_ms:int
```
In ANSI SQL, there isn’t a direct equivalent to the `parse` operator. Typically, you use string functions such as `SUBSTRING` or `REGEXP` to extract parts of a text field. However, APL’s `parse` operator simplifies this process by allowing you to define a text pattern and extract multiple fields in a single statement.
```sql SQL example
SELECT SUBSTRING(uri, CHARINDEX('duration=', uri) + 9, 3) AS req_duration_ms
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| parse uri with * "duration=" req_duration_ms:int
```
## Usage
### Syntax
```kusto
| parse [kind=simple|regex|relaxed] Expression with [*] StringConstant FieldName [: FieldType] [*] ...
```
### Parameters
* `kind`: Optional parameter to specify the parsing mode. Its value can be `simple` for exact matches, `regex` for regular expressions, or `relaxed` for relaxed parsing. The default is `simple`.
* `Expression`: The string expression to parse.
* `StringConstant`: A string literal or regular expression pattern to match against.
* `FieldName`: The name of the field to assign the extracted value.
* `FieldType`: Optional parameter to specify the data type of the extracted field. The default is `string`.
* `*`: Wildcard to match any characters before or after the `StringConstant`.
* `...`: You can specify additional `StringConstant` and `FieldName` pairs to extract multiple values.
### Returns
The parse operator returns the input dataset with new fields added based on the specified parsing pattern. The new fields contain the extracted values from the parsed string expression. If the parsing fails for a particular row, the corresponding fields have null values.
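Because failed matches produce null values, you can follow `parse` with a null check to keep only the rows that parsed successfully. A minimal sketch (the field name `parsed_duration` is illustrative and not part of the dataset):

```kusto
['sample-http-logs']
// Extract the value after 'duration=' as an integer; rows without it get null
| parse uri with * 'duration=' parsed_duration:int
// Keep only rows where the parse succeeded
| where isnotnull(parsed_duration)
| project _time, uri, parsed_duration
```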
## Use case examples
For log analysis, you can extract the HTTP request duration from the `uri` field using the `parse` operator.
**Query**
```kusto
['sample-http-logs']
| parse uri with * 'duration=' req_duration_ms:int
| project _time, req_duration_ms, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20uri%20with%20%2A%20'duration%3D'%20req_duration_ms%3Aint%20%7C%20project%20_time%2C%20req_duration_ms%2C%20uri%22%7D)
**Output**
| \_time | req\_duration\_ms | uri |
| ------------------- | ----------------- | ----------------------------- |
| 2024-10-18T12:00:00 | 200 | /api/v1/resource?duration=200 |
| 2024-10-18T12:00:05 | 300 | /api/v1/resource?duration=300 |
This query extracts the `req_duration_ms` from the `uri` field and projects the time and duration for each HTTP request.
In OpenTelemetry traces, the `parse` operator is useful for extracting components of trace data, such as the service name or status code.
**Query**
```kusto
['otel-demo-traces']
| parse trace_id with * '-' ['service.name']
| project _time, ['service.name'], trace_id
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20parse%20trace_id%20with%20%2A%20'-'%20%5B'service.name'%5D%20%7C%20project%20_time%2C%20%5B'service.name'%5D%2C%20trace_id%22%7D)
**Output**
| \_time | service.name | trace\_id |
| ------------------- | ------------ | -------------------- |
| 2024-10-18T12:00:00 | frontend | a1b2c3d4-frontend |
| 2024-10-18T12:01:00 | cartservice | e5f6g7h8-cartservice |
This query extracts the `service.name` from the `trace_id` and projects the time and service name for each trace.
For security logs, you can use the `parse` operator to extract status codes and the method of HTTP requests.
**Query**
```kusto
['sample-http-logs']
| parse method with * '/' status
| project _time, method, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20method%20with%20%2A%20'%2F'%20status%20%7C%20project%20_time%2C%20method%2C%20status%22%7D)
**Output**
| \_time | method | status |
| ------------------- | ------ | ------ |
| 2024-10-18T12:00:00 | GET | 200 |
| 2024-10-18T12:00:05 | POST | 404 |
This query extracts the HTTP method and status from the `method` field and shows them along with the timestamp.
## Other examples
### Parse content type
This example parses the `content_type` field to extract the `datatype` and `format` values separated by a `/`. The extracted values are projected as separate fields.
**Original string**
```bash
application/charset=utf-8
```
**Query**
```kusto
['sample-http-logs']
| parse content_type with datatype '/' format
| project datatype, format
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20content_type%20with%20datatype%20'%2F'%20format%20%7C%20project%20datatype%2C%20format%22%7D)
**Output**
```json
{
"datatype": "application",
"format": "charset=utf-8"
}
```
### Parse user agent
This example parses the `user_agent` field to extract the operating system name (`os_name`) and version (`os_version`) enclosed within parentheses. The extracted values are projected as separate fields.
**Original string**
```bash
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
```
**Query**
```kusto
['sample-http-logs']
| parse user_agent with * '(' os_name ' ' os_version ';' * ')' *
| project os_name, os_version
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20user_agent%20with%20*%20'\('%20os_name%20'%20'%20os_version%20'%3B'%20*%20'\)'%20*%20%7C%20project%20os_name%2C%20os_version%22%7D)
**Output**
```json
{
"os_name": "Windows NT 10.0; Win64; x64",
"os_version": "10.0"
}
```
### Parse URI endpoint
This example parses the `uri` field to extract the `endpoint` value that appears after `/api/v1/`. The extracted value is projected as a new field.
**Original string**
```bash
/api/v1/ping/user/textdata
```
**Query**
```kusto
['sample-http-logs']
| parse uri with '/api/v1/' endpoint
| project endpoint
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20uri%20with%20'%2Fapi%2Fv1%2F'%20endpoint%20%7C%20project%20endpoint%22%7D)
**Output**
```json
{
"endpoint": "ping/user/textdata"
}
```
### Parse ID into region, tenant, and user ID
This example demonstrates how to parse the `id` field into three parts: `region`, `tenant`, and `userId`. The `id` field is structured with these parts separated by hyphens (`-`). The extracted parts are projected as separate fields.
**Original string**
```bash
usa-acmeinc-3iou24
```
**Query**
```kusto
['sample-http-logs']
| parse id with region '-' tenant '-' userId
| project region, tenant, userId
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20id%20with%20region%20'-'%20tenant%20'-'%20userId%20%7C%20project%20region%2C%20tenant%2C%20userId%22%7D)
**Output**
```json
{
"region": "usa",
"tenant": "acmeinc",
"userId": "3iou24"
}
```
### Parse in relaxed mode
The parse operator supports a relaxed mode that allows for more flexible parsing. In relaxed mode, Axiom treats the parsing pattern as a regular string and tolerates partial matches: if some parts of the pattern are missing or don’t match the expected type, Axiom assigns null values to the corresponding fields.
This example parses the `log` field into four separate parts (`method`, `url`, `status`, and `responseTime`) based on a structured format. The extracted parts are projected as separate fields.
**Original string**
```bash
GET /home 200 123ms
POST /login 500 nonValidResponseTime
PUT /api/data 201 456ms
DELETE /user/123 404 nonValidResponseTime
```
**Query**
```kusto
['HttpRequestLogs']
| parse kind=relaxed log with method " " url " " status:int " " responseTime
| project method, url, status, responseTime
```
**Output**
```json
[
{
"method": "GET",
"url": "/home",
"status": 200,
"responseTime": "123ms"
},
{
"method": "POST",
"url": "/login",
"status": 500,
"responseTime": null
},
{
"method": "PUT",
"url": "/api/data",
"status": 201,
"responseTime": "456ms"
},
{
"method": "DELETE",
"url": "/user/123",
"status": 404,
"responseTime": null
}
]
```
### Parse in regex mode
The parse operator supports a regex mode that allows you to parse using regular expressions. In regex mode, Axiom treats the parsing pattern as a regular expression and matches results based on the specified regex pattern.
This example demonstrates how to parse Kubernetes pod log entries using regex mode to extract various fields such as `podName`, `namespace`, `phase`, `startTime`, `nodeName`, `hostIP`, and `podIP`. The parsing pattern is treated as a regular expression, and the extracted values are assigned to the respective fields.
**Original string**
```bash
Log: PodStatusUpdate (podName=nginx-pod, namespace=default, phase=Running, startTime=2023-05-14 08:30:00, nodeName=node-1, hostIP=192.168.1.1, podIP=10.1.1.1)
```
**Query**
```kusto
['PodLogs']
| parse kind=regex AppName with @"Log: PodStatusUpdate \(podName=" podName: string @", namespace=" namespace: string @", phase=" phase: string @", startTime=" startTime: datetime @", nodeName=" nodeName: string @", hostIP=" hostIP: string @", podIP=" podIP: string @"\)"
| project podName, namespace, phase, startTime, nodeName, hostIP, podIP
```
**Output**
```json
{
"podName": "nginx-pod",
"namespace": "default",
"phase": "Running",
"startTime": "2023-05-14 08:30:00",
"nodeName": "node-1",
"hostIP": "192.168.1.1",
"podIP": "10.1.1.1"
}
```
## Best practices
When using the parse operator, consider the following best practices:
* Use appropriate parsing modes: Choose the parsing mode (simple, relaxed, regex) based on the complexity and variability of the data being parsed. Simple mode is suitable for fixed patterns, while relaxed and regex modes offer more flexibility.
* Handle missing or invalid data: Consider how to handle scenarios where the parsing pattern does not match or the extracted values do not conform to the expected types. Use the relaxed mode or provide default values to handle such cases.
* Project only necessary fields: After parsing, use the project operator to select only the fields that are relevant for further querying. This helps reduce the amount of data transferred and improves query performance.
* Use parse in combination with other operators: Combine parse with other APL operators like where, extend, and summarize to filter, transform, and aggregate the parsed data effectively.
By following these best practices and understanding the capabilities of the parse operator, you can effectively extract and transform data from string fields in APL, enabling powerful querying and insights.
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use the `extend` operator when you want to add calculated fields without parsing text.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and rename fields after parsing text.
* [extract](/apl/scalar-functions/string-functions#extract): Use `extract` to retrieve the first substring matching a regular expression from a source string.
* [extract\_all](/apl/scalar-functions/string-functions#extract-all): Use `extract_all` to retrieve all substrings matching a regular expression from a source string.
# project-away
This page explains how to use the project-away operator in APL.
The `project-away` operator in APL is used to exclude specific fields from the output of a query. This operator is useful when you want to return a subset of fields from a dataset, without needing to manually specify every field you want to keep. Instead, you specify the fields you want to remove, and the operator returns all remaining fields.
You can use `project-away` in scenarios where your dataset contains irrelevant or sensitive fields that you do not want in the results. It simplifies queries, especially when dealing with wide datasets, by allowing you to filter out fields without having to explicitly list every field to include.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use the `fields` command to remove fields from your results. In APL, the `project-away` operator provides a similar functionality, removing specified fields while returning the remaining ones.
```splunk Splunk example
... | fields - status, uri, method
```
```kusto APL equivalent
['sample-http-logs']
| project-away status, uri, method
```
In SQL, you typically use the `SELECT` statement to explicitly include fields. In contrast, APL’s `project-away` operator allows you to exclude fields, offering a more concise approach when you want to keep many fields but remove a few.
```sql SQL example
SELECT _time, req_duration_ms, id, geo.city, geo.country
FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| project-away status, uri, method
```
## Usage
### Syntax
```kusto
| project-away FieldName1, FieldName2, ...
```
### Parameters
* `FieldName`: The field you want to exclude from the result set.
### Returns
The `project-away` operator returns the input dataset excluding the specified fields. The result contains the same number of rows as the input table.
## Use case examples
In log analysis, you might want to exclude unnecessary fields to focus on the relevant fields, such as timestamp, request duration, and user information.
**Query**
```kusto
['sample-http-logs']
| project-away status, uri, method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20status%2C%20uri%2C%20method%22%7D)
**Output**
| \_time | req\_duration\_ms | id | geo.city | geo.country |
| ------------------- | ----------------- | -- | -------- | ----------- |
| 2023-10-17 10:23:00 | 120 | u1 | Seattle | USA |
| 2023-10-17 10:24:00 | 135 | u2 | Berlin | Germany |
The query removes the `status`, `uri`, and `method` fields from the output, keeping the focus on the key fields.
When analyzing OpenTelemetry traces, you can remove fields that aren't necessary for specific trace evaluations, such as span IDs and statuses.
**Query**
```kusto
['otel-demo-traces']
| project-away span_id, status_code
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-away%20span_id%2C%20status_code%22%7D)
**Output**
| \_time | duration | trace\_id | service.name | kind |
| ------------------- | -------- | --------- | --------------- | ------ |
| 2023-10-17 11:01:00 | 00:00:03 | t1 | frontend | server |
| 2023-10-17 11:02:00 | 00:00:02 | t2 | checkoutservice | client |
The query removes the `span_id` and `status_code` fields, focusing on key service information.
In security log analysis, excluding unnecessary fields such as the HTTP method or URI can help focus on user behavior patterns and request durations.
**Query**
```kusto
['sample-http-logs']
| project-away method, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20method%2C%20uri%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | geo.city | geo.country |
| ------------------- | ----------------- | -- | ------ | -------- | ----------- |
| 2023-10-17 10:25:00 | 95 | u3 | 200 | London | UK |
| 2023-10-17 10:26:00 | 180 | u4 | 404 | Paris | France |
The query excludes the `method` and `uri` fields, keeping information like status and geographical details.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
The wildcard syntax is either `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-away`:
```kusto
['sample-http-logs']
| project-away status*, user*, is*, ['geo.']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-away%20status%2A%2C%20user%2A%2C%20is%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| project-away push*, repo*, ['commits']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20push%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## List of related operators
* [project](/apl/tabular-operators/project-operator): The `project` operator lets you select specific fields to include, rather than excluding them.
* [extend](/apl/tabular-operators/extend-operator): The `extend` operator is used to add new fields, whereas `project-away` is for removing fields.
* [summarize](/apl/tabular-operators/summarize-operator): While `project-away` removes fields, `summarize` is useful for aggregating data across multiple fields.
# project-keep
This page explains how to use the project-keep operator in APL.
The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator's parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
You can use `project-keep` when you need to focus on particular data points, such as in log analysis, security event monitoring, or extracting key fields from traces.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `table` command performs a similar task to APL’s `project-keep`. It selects only the fields you specify and excludes any others.
```splunk Splunk example
index=main | table _time, status, uri
```
```kusto APL equivalent
['sample-http-logs']
| project-keep _time, status, uri
```
In ANSI SQL, the `SELECT` statement combined with field names performs a task similar to `project-keep` in APL. Both allow you to specify which fields to retrieve from the dataset.
```sql SQL example
SELECT _time, status, uri FROM sample_http_logs
```
```kusto APL equivalent
['sample-http-logs']
| project-keep _time, status, uri
```
## Usage
### Syntax
```kusto
| project-keep FieldName1, FieldName2, ...
```
### Parameters
* `FieldName`: The field you want to keep in the result set.
### Returns
`project-keep` returns a dataset with only the specified fields. All other fields are removed from the output. The result contains the same number of rows as the input table.
## Use case examples
For log analysis, you might want to keep only the fields that are relevant to investigating HTTP requests.
**Query**
```kusto
['sample-http-logs']
| project-keep _time, status, uri, method, req_duration_ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20status%2C%20uri%2C%20method%2C%20req_duration_ms%22%7D)
**Output**
| \_time | status | uri | method | req\_duration\_ms |
| ------------------- | ------ | ------------------ | ------ | ----------------- |
| 2024-10-17 10:00:00 | 200 | /index.html | GET | 120 |
| 2024-10-17 10:01:00 | 404 | /non-existent.html | GET | 50 |
| 2024-10-17 10:02:00 | 500 | /server-error | POST | 300 |
This query filters the dataset to show only the request timestamp, status, URI, method, and duration, which can help you analyze server performance or errors.
For OpenTelemetry trace analysis, you may want to focus on key tracing details such as service names and trace IDs.
**Query**
```kusto
['otel-demo-traces']
| project-keep _time, trace_id, span_id, ['service.name'], duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-keep%20_time%2C%20trace_id%2C%20span_id%2C%20%5B%27service.name%27%5D%2C%20duration%22%7D)
**Output**
| \_time | trace\_id | span\_id | service.name | duration |
| ------------------- | --------- | -------- | --------------- | -------- |
| 2024-10-17 10:03:00 | abc123 | xyz789 | frontend | 500ms |
| 2024-10-17 10:04:00 | def456 | mno345 | checkoutservice | 250ms |
This query extracts specific tracing information, such as trace and span IDs, the name of the service, and the span’s duration.
In security log analysis, focusing on essential fields like user ID and HTTP status can help track suspicious activity.
**Query**
```kusto
['sample-http-logs']
| project-keep _time, id, status, uri, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20id%2C%20status%2C%20uri%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | id | status | uri | geo.city | geo.country |
| ------------------- | ------- | ------ | ------ | ------------- | ----------- |
| 2024-10-17 10:05:00 | user123 | 403 | /admin | New York | USA |
| 2024-10-17 10:06:00 | user456 | 200 | /login | San Francisco | USA |
This query narrows down the data to track HTTP status codes by users, helping identify potential unauthorized access attempts.
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use `project` to explicitly specify the fields you want in your result, while also allowing transformations or calculations on those fields.
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or modify existing ones without dropping any fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` when you need to perform aggregation operations on your dataset, grouping data as necessary.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
The wildcard syntax is either `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-keep`:
```kusto
['sample-http-logs']
| project-keep resp*, content*, ['geo.']*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-keep%20resp%2A%2C%20content%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| project-keep size*, repo*, ['commits']*, id*
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-keep%20size%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%2C%20id%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
# project
This page explains how to use the project operator in APL.
The `project` operator in Axiom Processing Language (APL) is used to select specific fields from a dataset, potentially renaming them or applying calculations on the fly. With `project`, you can control which fields are returned by the query, allowing you to focus on only the data you need.
This operator is useful when you want to refine your query results by reducing the number of fields, renaming them, or deriving new fields based on existing data. It’s a powerful tool for filtering out unnecessary fields and performing light transformations on your dataset.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of the `project` operator is typically the `table` or `fields` command. While SPL’s `table` focuses on selecting fields, `fields` controls both selection and exclusion, similar to `project` in APL.
```sql Splunk example
| table _time, status, uri
```
```kusto APL equivalent
['sample-http-logs']
| project _time, status, uri
```
In ANSI SQL, the `SELECT` statement serves a similar role to the `project` operator in APL. SQL users will recognize that `project` behaves like selecting fields from a table, with the ability to rename or transform fields inline.
```sql SQL example
SELECT _time, status, uri FROM sample_http_logs;
```
```kusto APL equivalent
['sample-http-logs']
| project _time, status, uri
```
## Usage
### Syntax
```kusto
| project FieldName [= Expression] [, ...]
```
Or
```kusto
| project FieldName, FieldName, FieldName, ...
```
Or
```kusto
| project FieldName = Expression [, ...]
```
### Parameters
* `FieldName`: The names of the fields in the order you want them to appear in the result set. If there is no Expression, then FieldName is compulsory and a field of that name must appear in the input.
* `Expression`: Optional scalar expression referencing the input fields. Use it to rename a field or compute a new value inline, as shown in the sketch below.
### Returns
The `project` operator returns a dataset containing only the specified fields.
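A minimal sketch that renames a field and computes a new value inline; the names `duration_sec` and `city` are illustrative:

```kusto
['sample-http-logs']
// Keep the timestamp, derive a duration in seconds, and rename geo.city to city
| project _time, duration_sec = req_duration_ms / 1000, city = ['geo.city']
```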
## Use case examples
In this example, you’ll extract the timestamp, HTTP status code, and request URI from the sample HTTP logs.
**Query**
```kusto
['sample-http-logs']
| project _time, status, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20status%2C%20uri%22%7D)
**Output**
| \_time | status | uri |
| ------------------- | ------ | --------------- |
| 2024-10-17 12:00:00 | 200 | /api/v1/getData |
| 2024-10-17 12:01:00 | 404 | /api/v1/getUser |
The query returns only the timestamp, HTTP status code, and request URI, reducing unnecessary fields from the dataset.
In this example, you’ll extract trace information such as the service name, span ID, and duration from OpenTelemetry traces.
**Query**
```kusto
['otel-demo-traces']
| project ['service.name'], span_id, duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20project%20%5B'service.name'%5D%2C%20span_id%2C%20duration%22%7D)
**Output**
| service.name | span\_id | duration |
| ------------ | ------------- | -------- |
| frontend | span-1234abcd | 00:00:02 |
| cartservice | span-5678efgh | 00:00:05 |
The query isolates relevant tracing data, such as the service name, span ID, and duration of spans.
In this example, you’ll focus on security log entries by projecting only the timestamp, user ID, and HTTP status from the sample HTTP logs.
**Query**
```kusto
['sample-http-logs']
| project _time, id, status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20id%2C%20status%22%7D)
**Output**
| \_time | id | status |
| ------------------- | ----- | ------ |
| 2024-10-17 12:00:00 | user1 | 200 |
| 2024-10-17 12:01:00 | user2 | 403 |
The query extracts only the timestamp, user ID, and HTTP status for analysis of access control in security logs.
## List of related operators
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or calculate values without removing any existing fields.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data across groups of rows, which is useful when you’re calculating totals or averages.
* [where](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions, often paired with `project` to refine your dataset further.
# project-reorder
This page explains how to use the project-reorder operator in APL.
The `project-reorder` operator in APL allows you to rearrange the fields of a dataset without modifying the underlying data. This operator is useful when you need to control the display order of fields in query results, making your data easier to read and analyze. It can be especially helpful when working with large datasets where field ordering impacts the clarity of the output.
Use `project-reorder` when you want to emphasize specific fields by adjusting their order in the result set without changing their values or structure.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use the `table` command to reorder fields, which works similarly to how `project-reorder` functions in APL.
```splunk Splunk example
| table FieldA, FieldB, FieldC
```
```kusto APL equivalent
['dataset.name']
| project-reorder FieldA, FieldB, FieldC
```
In ANSI SQL, the order of fields in a `SELECT` statement determines their arrangement in the output. In APL, `project-reorder` provides more explicit control over the field order without requiring a full `SELECT` clause.
```sql SQL example
SELECT FieldA, FieldB, FieldC FROM dataset;
```
```kusto APL equivalent
| project-reorder FieldA, FieldB, FieldC
```
## Usage
### Syntax
```kusto
| project-reorder Field1 [asc | desc | granny-asc | granny-desc], Field2 [asc | desc | granny-asc | granny-desc], ...
```
### Parameters
* `Field1, Field2, ...`: The names of the fields in the order you want them to appear in the result set.
* `[asc | desc | granny-asc | granny-desc]`: Optional: Specifies the sort order for the reordered fields. `asc` and `desc` order fields by field name in ascending or descending alphabetical order. `granny-asc` and `granny-desc` order fields in ascending or descending order while sorting any trailing numbers numerically. For example, with `asc` the field `b50` comes before `b9`, while with `granny-asc` the field `b9` comes before `b50`.
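For example, to order all fields alphabetically while sorting trailing numbers numerically, you can combine a wildcard with `granny-asc`. This is a minimal sketch against the sample HTTP logs dataset:

```kusto
['sample-http-logs']
| project-reorder * granny-asc
```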
### Returns
A table with the specified fields reordered as requested, followed by any unspecified fields in their original order. `project-reorder` doesn’t rename or remove fields from the dataset. All fields that existed in the dataset appear in the results table.
## Use case examples
In this example, you reorder HTTP log fields to prioritize the most relevant ones for log analysis.
**Query**
```kusto
['sample-http-logs']
| project-reorder _time, method, status, uri, req_duration_ms, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20method%2C%20status%2C%20uri%2C%20req_duration_ms%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | method | status | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ------ | ------ | ---------------- | ----------------- | -------- | ----------- |
| 2024-10-17 12:34:56 | GET | 200 | /home | 120 | New York | USA |
| 2024-10-17 12:35:01 | POST | 404 | /api/v1/resource | 250 | Berlin | Germany |
This query rearranges the fields for clarity, placing the most crucial fields (`_time`, `method`, `status`) at the front for easier analysis.
Here’s an example where OpenTelemetry trace fields are reordered to prioritize service and status information.
**Query**
```kusto
['otel-demo-traces']
| project-reorder _time, ['service.name'], kind, status_code, trace_id, span_id, duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-reorder%20_time%2C%20%5B%27service.name%27%5D%2C%20kind%2C%20status_code%2C%20trace_id%2C%20span_id%2C%20duration%22%7D)
**Output**
| \_time | service.name | kind | status\_code | trace\_id | span\_id | duration |
| ------------------- | --------------------- | ------ | ------------ | --------- | -------- | -------- |
| 2024-10-17 12:34:56 | frontend | client | 200 | abc123 | span456 | 00:00:01 |
| 2024-10-17 12:35:01 | productcatalogservice | server | 500 | xyz789 | span012 | 00:00:05 |
This query emphasizes service-related fields like `service.name` and `status_code` at the start of the output.
In this example, fields in a security log are reordered to prioritize key fields for investigating HTTP request anomalies.
**Query**
```kusto
['sample-http-logs']
| project-reorder _time, status, method, uri, id, ['geo.city'], ['geo.country']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20status%2C%20method%2C%20uri%2C%20id%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D)
**Output**
| \_time | status | method | uri | id | geo.city | geo.country |
| ------------------- | ------ | ------ | ---------------- | ------ | -------- | ----------- |
| 2024-10-17 12:34:56 | 200 | GET | /home | user01 | New York | USA |
| 2024-10-17 12:35:01 | 404 | POST | /api/v1/resource | user02 | Berlin | Germany |
This query reorders the fields to focus on the HTTP status, request method, and URI, which are critical for security-related analyses.
## Wildcard
Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches.
Wildcard syntax is either `data*` or `['data.fo']*`.
Here’s how you can use wildcards in `project-reorder`:
Reorder all fields in ascending order:
```kusto
['sample-http-logs']
| project-reorder * asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20%2A%20asc%22%7D)
Reorder specific fields to the beginning:
```kusto
['sample-http-logs']
| project-reorder method, status, uri
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20method%2C%20status%2C%20uri%22%7D)
Reorder fields using wildcards and sort in descending order:
```kusto
['github-push-event']
| project-reorder repo*, num_commits, push_id, ref, size, ['id'], size_large desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20project-reorder%20repo%2A%2C%20num_commits%2C%20push_id%2C%20ref%2C%20size%2C%20%5B%27id%27%5D%2C%20size_large%20desc%22%7D)
Reorder specific fields and keep others in original order:
```kusto
['otel-demo-traces']
| project-reorder trace_id, *, span_id // orders the trace_id then everything else, then span_id fields
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27otel-demo-traces%27%5D%5Cn%7C%20project-reorder%20trace_id%2C%20%2A%2C%20span_id%22%7D)
## List of related operators
* [project](/apl/tabular-operators/project-operator): Use the `project` operator to select and rename fields without changing their order.
* [extend](/apl/tabular-operators/extend-operator): `extend` adds new calculated fields while keeping the original ones in place.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to perform aggregations on fields, which can then be reordered using `project-reorder`.
* [sort](/apl/tabular-operators/sort-operator): Sorts rows based on field values, and the results can then be reordered with `project-reorder`.
# sample
This page explains how to use the sample operator function in APL.
The `sample` operator in APL pseudo-randomly selects rows from the input dataset at a rate specified by a parameter. This operator is useful when you want to analyze a subset of data, reduce the dataset size for testing, or quickly explore patterns without processing the entire dataset. The sampling algorithm is not statistically rigorous but provides a way to explore and understand a dataset. For statistically rigorous analysis, use `summarize` instead.
You can find the `sample` operator useful when working with large datasets, where processing the entire dataset is resource-intensive or unnecessary. It’s ideal for scenarios like log analysis, performance monitoring, or sampling for data quality checks.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `sample` command works similarly, returning a random subset of rows. However, the APL `sample` operator uses a simpler syntax and doesn’t take additional arguments for biasing the randomness.
```sql Splunk example
| sample 10
```
```kusto APL equivalent
['sample-http-logs']
| sample 0.1
```
In ANSI SQL, there is no direct equivalent to the `sample` operator, but you can achieve similar results using the `TABLESAMPLE` clause. In APL, `sample` operates independently and is more flexible, as it’s not tied to a table scan.
```sql SQL example
SELECT * FROM table TABLESAMPLE (10 ROWS);
```
```kusto APL equivalent
['sample-http-logs']
| sample 0.1
```
## Usage
### Syntax
```kusto
| sample ProportionOfRows
```
### Parameters
* `ProportionOfRows`: A float greater than 0 and less than 1 which specifies the proportion of rows to return from the dataset. The rows are selected randomly.
### Returns
The operator returns a table containing the specified number of rows, selected randomly from the input dataset.
## Use case examples
In this use case, you sample a small number of rows from your HTTP logs to quickly analyze trends without working through the entire dataset.
**Query**
```kusto
['sample-http-logs']
| sample 0.05
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sample%200.05%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ----- | ------ | --------- | ------ | -------- | ----------- |
| 2023-10-16 12:45:00 | 234 | user1 | 200 | /index | GET | New York | US |
| 2023-10-16 12:47:00 | 120 | user2 | 404 | /login | POST | Paris | FR |
| 2023-10-16 12:48:00 | 543 | user3 | 500 | /checkout | POST | Tokyo | JP |
This query returns a random subset of 5% of all rows from the HTTP logs, helping you quickly identify potential issues or patterns without analyzing the entire dataset.
In this use case, you sample traces to investigate performance metrics for a particular service across different spans.
**Query**
```kusto
['otel-demo-traces']
| where ['service.name'] == 'checkoutservice'
| sample 0.05
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20sample%200.05%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ |
| 2023-10-16 14:05:00 | 1.34s | span5678 | trace123 | checkoutservice | client | 200 |
| 2023-10-16 14:06:00 | 0.89s | span3456 | trace456 | checkoutservice | server | 500 |
This query returns a random 5% sample of all traces for the `checkoutservice` to identify potential performance bottlenecks.
In this use case, you sample security log data to spot irregular activity in requests, such as 500-level HTTP responses.
**Query**
```kusto
['sample-http-logs']
| where status == '500'
| sample 0.03
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20sample%200.03%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ----- | ------ | -------- | ------ | -------- | ----------- |
| 2023-10-16 14:30:00 | 543 | user4 | 500 | /payment | POST | Berlin | DE |
| 2023-10-16 14:32:00 | 876 | user5 | 500 | /order | POST | London | GB |
This query helps you quickly spot failed requests (HTTP 500 responses) and investigate any potential causes of these errors.
## List of related operators
* [take](/apl/tabular-operators/take-operator): Use `take` when you want to return the first N rows in the dataset rather than a random subset.
* [where](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions rather than sampling randomly.
* [top](/apl/tabular-operators/top-operator): Use `top` to return the highest N rows based on a sorting criterion.
# search
This page explains how to use the search operator in APL.
The `search` operator in APL is used to perform a full-text search across multiple fields in a dataset. This operator allows you to locate specific keywords, phrases, or patterns, helping you filter data quickly and efficiently. You can use `search` to query logs, traces, and other data sources without the need to specify individual fields, making it particularly useful when you’re unsure where the relevant data resides.
Use `search` when you want to search multiple fields in a dataset, especially for ad-hoc analysis or quick lookups across logs or traces. It’s commonly applied in log analysis, security monitoring, and trace analysis, where multiple fields may contain the desired data.
## Importance of the search operator
* **Versatility:** Lets you find specific text or terms across all fields of the datasets you select, without having to specify each field.
* **Efficiency:** Saves time when you aren’t sure which field or dataset contains the information you’re looking for.
* **User-friendliness:** Particularly useful if you’re unfamiliar with the schema details of a given dataset.
## Usage
### Syntax
```kusto
search [kind=CaseSensitivity] SearchPredicate
```
### Parameters
| Name | Type | Required | Description |
| ------------------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **CaseSensitivity** | string | | A flag that controls the behavior of all `string` scalar operators, such as `has`, with respect to case sensitivity. Valid values are `default`, `case_insensitive`, `case_sensitive`. The options `default` and `case_insensitive` are synonymous, since the default behavior is case insensitive. |
| **SearchPredicate** | string | ✓ | A Boolean expression to be evaluated for every event in the input. If it evaluates to `true`, the record is included in the output. |
## Returns
Returns all rows where the specified keyword appears in any field.
## Search predicate syntax
The SearchPredicate allows you to search for specific terms in all fields of a dataset. The operator that will be applied to a search term depends on the presence and placement of a wildcard asterisk (\*) in the term, as shown in the following table.
| Literal | Operator |
| ---------- | --------------- |
| `axiomk` | `has` |
| `*axiomk` | `hassuffix` |
| `axiomk*` | `hasprefix` |
| `*axiomk*` | `contains` |
| `ax*ig` | `matches regex` |
You can also restrict the search to a specific field, look for an exact match instead of a term match, or search by regular expression. The syntax for each of these cases is shown in the following table.
| Syntax | Explanation |
| ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| **FieldName**`:`**StringLiteral** | This syntax can be used to restrict the search to a specific field. The default behavior is to search all fields. |
| **FieldName**`==`**StringLiteral** | This syntax can be used to search for exact matches of a field against a string value. The default behavior is to look for a term-match. |
| **Field** `matches regex` **StringLiteral** | This syntax indicates regular expression matching, in which *StringLiteral* is the regex pattern. |
Use boolean expressions to combine conditions and create more complex searches. For example, `"axiom" and b==789` would result in a search for events that have the term axiom in any field and the value 789 in the b field.
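For example, the following sketch applies this pattern to the sample HTTP logs. It searches for the term `GET` in any field and for the value 789 in the `req_duration_ms` field. The choice of fields is illustrative:

```kusto
['sample-http-logs']
| search "GET" and req_duration_ms == 789
```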
### Search predicate syntax examples
| # | Syntax | Meaning (equivalent `where`) | Comments |
| -- | ---------------------------------------- | --------------------------------------------------------- | ----------------------------------------- |
| 1 | `search "axiom"` | `where * has "axiom"` | |
| 2 | `search field:"axiom"` | `where field has "axiom"` | |
| 3 | `search field=="axiom"` | `where field=="axiom"` | |
| 4 | `search "axiom*"` | `where * hasprefix "axiom"` | |
| 5 | `search "*axiom"` | `where * hassuffix "axiom"` | |
| 6 | `search "*axiom*"` | `where * contains "axiom"` | |
| 7 | `search "Pad*FG"` | `where * matches regex @"\bPad.*FG\b"` | |
| 8 | `search *` | `where 0==0` | |
| 9 | `search field matches regex "..."` | `where field matches regex "..."` | |
| 10 | `search kind=case_sensitive` | | All string comparisons are case-sensitive |
| 11 | `search "axiom" and ("log" or "metric")` | `where * has "axiom" and (* has "log" or * has "metric")` | |
| 12 | `search "axiom" or (A>a and A<b)` | `where * has "axiom" or (A>a and A<b)` | |

## Use case examples

### Search for a term and filter by time

Search for logs that contain the term "get" and occurred after 16 September 2022:

```kusto
['sample-http-logs']
| search "get" and _time > datetime('2022-09-16')
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22get%5C%22%20and%20_time%20%3E%20datetime%28%272022-09-16%27%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=default
By default, the search is case-insensitive and uses the simple search.
```kusto
['sample-http-logs']
| search kind=default "INDIA"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Ddefault%20%5C%22INDIA%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=case\_sensitive
Search for logs that contain the term "text" with case sensitivity.
```kusto
['sample-http-logs']
| search kind=case_sensitive "text"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_sensitive%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use kind=case\_insensitive
Explicitly search for logs that contain the term "CSS" without case sensitivity.
```kusto
['sample-http-logs']
| search kind=case_insensitive "CSS"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_insensitive%20%5C%22CSS%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use search \*
Search all logs. This would essentially return all rows in the dataset.
```kusto
['sample-http-logs']
| search *
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%2A%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Contain any substring
Search for logs that contain any substring of "brazil".
```kusto
['sample-http-logs']
| search "*brazil*"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22%2Abrazil%2A%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Search for multiple independent terms
Search the logs for entries that contain either the term "GET" or "covina", irrespective of their context or the fields they appear in.
```kusto
['sample-http-logs']
| search "GET" or "covina"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22GET%5C%22%20or%20%5C%22covina%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
## Use the search operator efficiently
Using non-field-specific filters such as the `search` operator has an impact on performance, especially when used over a high volume of events in a wide time range. To use the `search` operator efficiently, follow these guidelines:
* Use field-specific filters when possible. Field-specific filters narrow your query results to events where a field has a given value. They are more efficient than non-field-specific filters, such as the `search` operator, that narrow your query results by searching across all fields for a given value. When you know the target field, replace the `search` operator with `where` clauses that filter for values in a specific field, as shown in the sketch after this list.
* After using the `search` operator in your query, use other operators, such as `project` statements, to limit the number of returned fields.
* Use the `kind` flag when possible. When you know the pattern that string values in your data follow, use the `kind` flag to specify the case-sensitivity of the search.
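The following sketch contrasts the two approaches from the first guideline. Both queries look for `GET` requests, but the second one limits the comparison to the `method` field:

```kusto
// Non-field-specific: scans all fields for the term
['sample-http-logs']
| search "GET"
```

```kusto
// Field-specific: filters only the method field
['sample-http-logs']
| where method == "GET"
```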
# sort
This page explains how to use the sort operator function in APL.
The `sort` operator in APL arranges the rows of a result set based on one or more fields in ascending or descending order. You can use it to organize your data logically or optimize subsequent operations that depend on ordered data. This operator is useful when analyzing logs, traces, or any dataset where the order of results matters, such as when you’re interested in top or bottom performers, chronological sequences, or sorting by status codes.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the equivalent of `sort` is the `sort` command, which orders search results based on one or more fields. However, in APL, you must explicitly specify the sorting direction for each field, and sorting by multiple fields requires chaining them with commas.
```splunk Splunk example
| sort - _time, status
```
```kusto APL equivalent
['sample-http-logs']
| sort by _time desc, status asc
```
In SQL, sorting is done using the `ORDER BY` clause. The APL `sort` operator behaves similarly but uses the `by` keyword instead of `ORDER BY`. Additionally, APL requires specifying the order direction (`asc` or `desc`) explicitly for each field.
```sql SQL example
SELECT * FROM sample_http_logs
ORDER BY _time DESC, status ASC
```
```kusto APL equivalent
['sample-http-logs']
| sort by _time desc, status asc
```
## Usage
### Syntax
```kusto
| sort by Field1 [asc | desc], Field2 [asc | desc], ...
```
### Parameters
* `Field1`, `Field2`, ...: The fields to sort by.
* \[asc | desc]: Specify the sorting direction for each field as either `asc` for ascending order or `desc` for descending order.
### Returns
A table with rows ordered based on the specified fields.
## Use sort and project together
When you use `project` and `sort` in the same query, ensure you project the fields that you want to sort on. Similarly, when you use `project-away` and `sort` in the same query, ensure you don’t remove the fields that you want to sort on.
The same applies to time fields. For example, to project the field `status` and sort on the field `_time`, project both fields as shown in the following query:
```apl
['sample-http-logs']
| project status, _time
| sort by _time desc
```
## Use case examples
Sorting HTTP logs by request duration and then by status code is useful to identify slow requests and their corresponding statuses.
**Query**
```kusto
['sample-http-logs']
| sort by req_duration_ms desc, status asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20req_duration_ms%20desc%2C%20status%20asc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2024-10-18 12:34:56 | 5000 | abc1 | 500 | /api/data | GET | New York | US |
| 2024-10-18 12:35:56 | 4500 | abc2 | 200 | /api/users | POST | London | UK |
The query sorts the HTTP logs by the duration of each request in descending order, showing the longest-running requests at the top. If two requests have the same duration, they are sorted by status code in ascending order.
Sorting OpenTelemetry traces by span duration helps identify the longest-running spans within a specific service.
**Query**
```kusto
['otel-demo-traces']
| sort by duration desc, ['service.name'] asc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20sort%20by%20duration%20desc%2C%20%5B%27service.name%27%5D%20asc%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ |
| 2024-10-18 12:36:56 | 00:00:15 | span1 | trace1 | frontend | server | 200 |
| 2024-10-18 12:37:56 | 00:00:14 | span2 | trace2 | cartservice | client | 500 |
This query sorts spans by their duration in descending order, with the longest spans at the top, followed by the service name in ascending order.
Sorting security logs by status code and then by timestamp can help in investigating recent failed requests.
**Query**
```kusto
['sample-http-logs']
| sort by status asc, _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20status%20asc%2C%20_time%20desc%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2024-10-18 12:40:56 | 3000 | abc3 | 400 | /api/login | POST | Toronto | CA |
| 2024-10-18 12:39:56 | 2000 | abc4 | 400 | /api/auth | GET | Berlin | DE |
This query sorts security logs by status code first (in ascending order) and then by the most recent events.
## List of related operators
* [top](/apl/tabular-operators/top-operator): Use `top` to return a specified number of rows with the highest or lowest values, but unlike `sort`, `top` limits the result set.
* [project](/apl/tabular-operators/project-operator): Use `project` to select and reorder fields without changing the order of rows.
* [extend](/apl/tabular-operators/extend-operator): Use `extend` to create calculated fields that can then be used in conjunction with `sort` to refine your results.
* [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to group and aggregate data before applying `sort` for detailed analysis.
# summarize
This page explains how to use the summarize operator function in APL.
## Introduction
The `summarize` operator in APL enables you to perform data aggregation and create summary tables from large datasets. You can use it to group data by specified fields and apply aggregation functions such as `count()`, `sum()`, `avg()`, `min()`, `max()`, and many others. This is particularly useful when analyzing logs, tracing OpenTelemetry data, or reviewing security events. The `summarize` operator is helpful when you want to reduce the granularity of a dataset to extract insights or trends.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `stats` command performs a similar function to APL’s `summarize` operator. Both operators are used to group data and apply aggregation functions. In APL, `summarize` is more explicit about the fields to group by and the aggregation functions to apply.
```sql Splunk example
index="sample-http-logs" | stats count by method
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by method
```
The `summarize` operator in APL is conceptually similar to SQL’s `GROUP BY` clause with aggregation functions. In APL, you explicitly specify the aggregation function (like `count()`, `sum()`) and the fields to group by.
```sql SQL example
SELECT method, COUNT(*)
FROM sample_http_logs
GROUP BY method
```
```kusto APL equivalent
['sample-http-logs']
| summarize count() by method
```
## Usage
### Syntax
```kusto
| summarize [[Field1 =] AggregationFunction [, ...]] [by [Field2 =] GroupExpression [, ...]]
```
### Parameters
* `Field1`: A field name.
* `AggregationFunction`: The aggregation function to apply. Examples include `count()`, `sum()`, `avg()`, `min()`, and `max()`.
* `GroupExpression`: A scalar expression that can reference the dataset.
### Returns
The `summarize` operator returns a table where:
* The input rows are arranged into groups having the same values of the `by` expressions.
* The specified aggregation functions are computed over each group, producing a row for each group.
* The result contains the `by` fields and also at least one field for each computed aggregate. Some aggregation functions return multiple fields.
## Use case examples
In log analysis, you can use `summarize` to count the number of HTTP requests grouped by method, or to compute the average request duration.
**Query**
```kusto
['sample-http-logs']
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20method%22%7D)
**Output**
| method | count\_ |
| ------ | ------- |
| GET | 1000 |
| POST | 450 |
This query groups the HTTP requests by the `method` field and counts how many times each method is used.
You can use `summarize` to analyze OpenTelemetry traces by calculating the average span duration for each service.
**Query**
```kusto
['otel-demo-traces']
| summarize avg(duration) by ['service.name']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avg\(duration\)%20by%20%5B%27service.name%27%5D%22%7D)
**Output**
| service.name | avg\_duration |
| ------------ | ------------- |
| frontend | 50ms |
| cartservice | 75ms |
This query calculates the average duration of traces for each service in the dataset.
In security log analysis, `summarize` can help group events by status codes and see the distribution of HTTP responses.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 1200 |
| 404 | 300 |
This query summarizes HTTP status codes, giving insight into the distribution of responses in your logs.
## Other examples
```kusto
['sample-http-logs']
| summarize topk(content_type, 20)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20topk\(content_type%2C%2020\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| summarize topk(repo, 20) by bin(_time, 24h)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%7C%20summarize%20topk\(repo%2C%2020\)%20by%20bin\(_time%2C%2024h\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
The following query returns a histogram of `req_duration_ms` values grouped into 30 bins, which you can visualize as a heatmap.
```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 30)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%2030\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-push-event']
| where _time > ago(7d)
| where repo contains "axiom"
| summarize count(), numCommits=sum(size) by _time=bin(_time, 3h), repo
| take 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20repo%20contains%20%5C%22axiom%5C%22%20%7C%20summarize%20count\(\)%2C%20numCommits%3Dsum\(size\)%20by%20_time%3Dbin\(_time%2C%203h\)%2C%20repo%20%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## List of related operators
* [count](/apl/tabular-operators/count-operator): Use when you only need to count rows without grouping by specific fields.
* [extend](/apl/tabular-operators/extend-operator): Use to add new calculated fields to a dataset.
* [project](/apl/tabular-operators/project-operator): Use to select specific fields or create new calculated fields, often in combination with `summarize`.
# take
This page explains how to use the take operator in APL.
The `take` operator in APL allows you to retrieve a specified number of rows from a dataset. It’s useful when you want to preview data, limit the result set for performance reasons, or fetch a random sample from large datasets. The `take` operator can be particularly effective in scenarios like log analysis, security monitoring, and telemetry where large amounts of data are processed, and only a subset is needed for analysis.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `head` and `tail` commands perform similar operations to the APL `take` operator, where `head` returns the first N results, and `tail` returns the last N. In APL, `take` is a flexible way to fetch any subset of rows in a dataset.
```sql Splunk example
| head 10
```
```kusto APL equivalent
['sample-http-logs']
| take 10
```
In ANSI SQL, the equivalent of the APL `take` operator is `LIMIT`. While SQL requires you to specify a sorting order with `ORDER BY` for deterministic results, APL allows you to use `take` to fetch a specific number of rows without needing explicit sorting.
```sql SQL example
SELECT * FROM sample_http_logs LIMIT 10;
```
```kusto APL equivalent
['sample-http-logs']
| take 10
```
## Usage
### Syntax
```kusto
| take N
```
### Parameters
* `N`: The number of rows to take from the dataset. If `N` is positive, it returns the first `N` rows. If `N` is negative, it returns the last `N` rows.
### Returns
The operator returns the specified number of rows from the dataset.
## Use case examples
The `take` operator is useful in log analysis when you need to view a subset of logs to quickly identify trends or errors without analyzing the entire dataset.
**Query**
```kusto
['sample-http-logs']
| take 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%205%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| -------------------- | ----------------- | ---- | ------ | --------- | ------ | -------- | ----------- |
| 2023-10-18T10:00:00Z | 120 | u123 | 200 | /home | GET | Berlin | Germany |
| 2023-10-18T10:01:00Z | 85 | u124 | 404 | /login | POST | New York | USA |
| 2023-10-18T10:02:00Z | 150 | u125 | 500 | /checkout | POST | Tokyo | Japan |
This query retrieves the first 5 rows from the `sample-http-logs` dataset.
In the context of OpenTelemetry traces, the `take` operator helps extract a small number of traces to analyze span performance or trace behavior across services.
**Query**
```kusto
['otel-demo-traces']
| take 3
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%203%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| -------------------- | -------- | -------- | --------- | --------------- | -------- | ------------ |
| 2023-10-18T10:10:00Z | 250ms | s123 | t456 | frontend | server | OK |
| 2023-10-18T10:11:00Z | 300ms | s124 | t457 | checkoutservice | client | OK |
| 2023-10-18T10:12:00Z | 100ms | s125 | t458 | cartservice | internal | ERROR |
This query retrieves the first 3 spans from the OpenTelemetry traces dataset.
For security logs, `take` allows quick sampling of log entries to detect patterns or anomalies without needing the entire log file.
**Query**
```kusto
['sample-http-logs']
| take 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2010%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| -------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2023-10-18T10:20:00Z | 200 | u223 | 200 | /admin | GET | London | UK |
| 2023-10-18T10:21:00Z | 190 | u224 | 403 | /dashboard | GET | Berlin | Germany |
This query retrieves the first 10 security log entries, useful for quick investigations.
## List of related operators
* [limit](/apl/tabular-operators/limit-operator): Similar to `take`, but explicitly limits the result set and often used for pagination or performance optimization.
* [sort](/apl/tabular-operators/sort-operator): Used in combination with `take` when you want to fetch a subset of sorted data.
* [where](/apl/tabular-operators/where-operator): Filters rows based on a condition before using `take` for sampling specific subsets.
# top
This page explains how to use the top operator function in APL.
The `top` operator in Axiom Processing Language (APL) allows you to retrieve the top N rows from a dataset based on specified criteria. It is particularly useful when you need to analyze the highest values in large datasets or want to quickly identify trends, such as the highest request durations in logs or top error occurrences in traces. You can apply it in scenarios like log analysis, security investigations, or tracing system performance.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
The `top` operator in APL is similar to `top` in Splunk SPL but allows greater flexibility in specifying multiple sorting criteria.
```sql Splunk example
index="sample_http_logs" | top limit=5 req_duration_ms
```
```kusto APL equivalent
['sample-http-logs']
| top 5 by req_duration_ms
```
In ANSI SQL, the `TOP` operator is used with an `ORDER BY` clause to limit the number of rows. In APL, the syntax is similar but uses `top` in a pipeline and specifies the ordering criteria directly.
```sql SQL example
SELECT TOP 5 req_duration_ms FROM sample_http_logs ORDER BY req_duration_ms DESC
```
```kusto APL equivalent
['sample-http-logs']
| top 5 by req_duration_ms
```
## Usage
### Syntax
```kusto
| top N by Expression [asc | desc]
```
### Parameters
* `N`: The number of rows to return.
* `Expression`: A scalar expression used for sorting. The type of the values must be numeric, date, time, or string.
* `[asc | desc]`: Optional. Use to sort in ascending or descending order. The default is descending.
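For example, to return the five requests with the shortest durations instead of the longest, pass `asc`. This is a minimal sketch against the sample HTTP logs dataset:

```kusto
['sample-http-logs']
| top 5 by req_duration_ms asc
```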
### Returns
The `top` operator returns the top N rows from the dataset based on the specified sorting criteria.
## Use case examples
The `top` operator helps you find the HTTP requests with the longest durations.
**Query**
```kusto
['sample-http-logs']
| top 5 by req_duration_ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20top%205%20by%20req_duration_ms%22%7D)
**Output**
| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | --- | ------ | ---------------- | ------ | -------- | ----------- |
| 2024-10-01 10:12:34 | 5000 | 123 | 200 | /api/get-data | GET | New York | US |
| 2024-10-01 11:14:20 | 4900 | 124 | 200 | /api/post-data | POST | Chicago | US |
| 2024-10-01 12:15:45 | 4800 | 125 | 200 | /api/update-item | PUT | London | UK |
This query returns the top 5 HTTP requests that took the longest time to process.
The `top` operator is useful for identifying the spans with the longest duration in distributed tracing systems.
**Query**
```kusto
['otel-demo-traces']
| top 5 by duration
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20top%205%20by%20duration%22%7D)
**Output**
| \_time | duration | span\_id | trace\_id | service.name | kind | status\_code |
| ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ |
| 2024-10-01 10:12:34 | 300ms | span123 | trace456 | frontend | server | 200 |
| 2024-10-01 10:13:20 | 290ms | span124 | trace457 | cartservice | client | 200 |
| 2024-10-01 10:15:45 | 280ms | span125 | trace458 | checkoutservice | server | 500 |
This query returns the top 5 spans with the longest durations from the OpenTelemetry traces.
The `top` operator is useful for identifying the most frequent HTTP status codes in security logs.
**Query**
```kusto
['sample-http-logs']
| summarize count() by status
| top 3 by count_
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%20%7C%20top%203%20by%20count_%22%7D)
**Output**
| status | count\_ |
| ------ | ------- |
| 200 | 500 |
| 404 | 50 |
| 500 | 20 |
This query shows the top 3 most common HTTP status codes in security logs.
## List of related operators
* [order](/apl/tabular-operators/order-operator): Use when you need full control over row ordering without limiting the number of results.
* [summarize](/apl/tabular-operators/summarize-operator): Useful when aggregating data over fields and obtaining summarized results.
* [take](/apl/tabular-operators/take-operator): Returns the first N rows without sorting. Use when ordering is not necessary.
# union
This page explains how to use the union operator in APL.
The `union` operator in APL allows you to combine the results of two or more queries into a single output. The operator is useful when you need to analyze or compare data from different datasets or tables in a unified manner. By using `union`, you can merge multiple sets of records, keeping all data from the source tables without applying any aggregation or filtering.
The `union` operator is particularly helpful in scenarios like log analysis, tracing OpenTelemetry events, or correlating security logs across multiple sources. You can use it to perform comprehensive investigations by bringing together information from different datasets into one query.
## Union of two datasets
To understand how the `union` operator works, consider these datasets:
**Server requests**
| \_time | status | method | trace\_id |
| ------ | ------ | ------ | --------- |
| 12:10 | 200 | GET | 1 |
| 12:15 | 200 | POST | 2 |
| 12:20 | 503 | POST | 3 |
| 12:25 | 200 | POST | 4 |
**App logs**
| \_time | trace\_id | message |
| ------ | --------- | ------- |
| 12:12 | 1 | foo |
| 12:21 | 3 | bar |
| 13:35 | 27 | baz |
Performing a union on `Server requests` and `App logs` results in a new dataset with all the rows from both datasets.
A union of **requests** and **logs** would produce the following result set:
| \_time | status | method | trace\_id | message |
| ------ | ------ | ------ | --------- | ------- |
| 12:10 | 200 | GET | 1 | |
| 12:12 | | | 1 | foo |
| 12:15 | 200 | POST | 2 | |
| 12:20 | 503 | POST | 3 | |
| 12:21 | | | 3 | bar |
| 12:25 | 200 | POST | 4 | |
| 13:35 | | | 27 | baz |
This result combines the rows and merges types for overlapping fields.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `append` command works similarly to the `union` operator in APL. Both operators are used to combine multiple datasets. However, while `append` in Splunk typically adds one dataset to the end of another, APL’s `union` merges datasets while preserving all records.
```splunk Splunk example
index=web OR index=security
```
```kusto APL equivalent
['sample-http-logs']
| union ['security-logs']
```
In ANSI SQL, the `UNION` operator performs a similar function to the APL `union` operator. Both are used to combine the results of two or more queries. However, SQL’s `UNION` removes duplicates by default, whereas APL’s `union` keeps all rows unless you use `union with=kind=unique`.
```sql SQL example
SELECT * FROM web_logs
UNION
SELECT * FROM security_logs;
```
```kusto APL equivalent
['sample-http-logs']
| union ['security-logs']
```
## Usage
### Syntax
```kusto
T1 | union [T2], [T3], ...
```
### Parameters
* `T1, T2, T3, ...`: Tables or query results you want to combine into a single output.
### Returns
The `union` operator returns all rows from the specified tables or queries. If fields overlap, they are merged. Non-overlapping fields are retained in their original form.
## Use case examples
In log analysis, you can use the `union` operator to combine HTTP logs from different sources, such as web servers and security systems, to analyze trends or detect anomalies.
**Query**
```kusto
['sample-http-logs']
| union ['security-logs']
| where status == '500'
```
**Output**
| \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms |
| ------------------- | ------- | ------ | ------------------- | ------ | -------- | ----------- | ----------------- |
| 2024-10-17 12:34:56 | user123 | 500 | /api/login | GET | London | UK | 345 |
| 2024-10-17 12:35:10 | user456 | 500 | /api/update-profile | POST | Berlin | Germany | 123 |
This query combines two datasets (HTTP logs and security logs) and filters the combined data to show only those entries where the HTTP status code is 500.
When working with OpenTelemetry traces, you can use the `union` operator to combine tracing information from different services for a unified view of system performance.
**Query**
```kusto
['otel-demo-traces']
| union ['otel-backend-traces']
| where ['service.name'] == 'frontend' and status_code == 'error'
```
**Output**
| \_time | trace\_id | span\_id | \['service.name'] | kind | status\_code |
| ------------------- | ---------- | -------- | ----------------- | ------ | ------------ |
| 2024-10-17 12:36:10 | trace-1234 | span-567 | frontend | server | error |
| 2024-10-17 12:38:20 | trace-7890 | span-345 | frontend | client | error |
This query combines traces from two different datasets and filters them to show only errors occurring in the `frontend` service.
For security logs, the `union` operator is useful to combine logs from different sources, such as intrusion detection systems (IDS) and firewall logs.
**Query**
```kusto
['sample-http-logs']
| union ['security-logs']
| where ['geo.country'] == 'Germany'
```
**Output**
| \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms |
| ------------------- | ------- | ------ | ---------------- | ------ | -------- | ----------- | ----------------- |
| 2024-10-17 12:34:56 | user789 | 200 | /api/login | GET | Berlin | Germany | 245 |
| 2024-10-17 12:40:22 | user456 | 404 | /api/nonexistent | GET | Munich | Germany | 532 |
This query combines web and security logs, then filters the results to show only those records where the request originated from Germany.
## Other examples
### Basic union
This example combines all rows from `github-push-event` and `github-pull-request-event` without any transformation or filtering.
```kusto
['github-push-event']
| union ['github-pull-request-event']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%22%7D)
### Filter after union
This example combines the datasets, and then filters the data to only include rows where the `method` is `GET`.
```kusto
['sample-http-logs']
| union ['github-issues-event']
| where method == "GET"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-issues-event%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%22%7D)
### Aggregate after union
This example combines the datasets and summarizes the data, counting the occurrences of each combination of `content_type` and `actor`.
```kusto
['sample-http-logs']
| union ['github-pull-request-event']
| summarize Count = count() by content_type, actor
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20content_type%2C%20actor%22%7D)
### Filter and project specific data from combined log sources
This query combines GitHub pull request event logs and GitHub push events, filters for actions made by `github-actions[bot]`, and displays key event details such as `_time`, `repo`, `['id']`, `commits`, and `head`.
```kusto
['github-pull-request-event']
| union ['github-push-event']
| where actor == "github-actions[bot]"
| project _time, repo, ['id'], commits, head
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-pull-request-event%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20where%20actor%20%3D%3D%20%5C%22github-actions%5Bbot%5D%5C%22%5Cn%7C%20project%20_time%2C%20repo%2C%20%5B%27id%27%5D%2C%20commits%2C%20head%22%7D)
### Union with field removing
This example removes the `content_type` and `commits` field in the datasets `sample-http-logs` and `github-push-event` before combining the datasets.
```kusto
['sample-http-logs']
| union ['github-push-event']
| project-away content_type, commits
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20content_type%2C%20commits%22%7D)
### Union with order by
After the union, the result is ordered by the `type` field.
```kusto
['sample-http-logs']
| union hn
| order by type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20hn%5Cn%7C%20order%20by%20type%22%7D)
### Union with joint conditions
This example performs a union and then filters the resulting dataset for rows where `content_type` contains the letter `a` and `geo.city` is `Seattle`.
```kusto
['sample-http-logs']
| union ['github-pull-request-event']
| where content_type contains "a" and ['geo.city'] == "Seattle"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20where%20content_type%20contains%20%5C%22a%5C%22%20and%20%5B%27geo.city%27%5D%20%20%3D%3D%20%5C%22Seattle%5C%22%22%7D)
### Union and count unique values
After the union, the query calculates the number of unique `geo.city` and `repo` entries in the combined dataset.
```kusto
['sample-http-logs']
| union ['github-push-event']
| summarize UniqueNames = dcount(['geo.city']), UniqueData = dcount(repo)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20UniqueNames%20%3D%20dcount%28%5B%27geo.city%27%5D%29%2C%20UniqueData%20%3D%20dcount%28repo%29%22%7D)
## Best practices for the union operator
To maximize the effectiveness of the union operator in APL, here are some best practices to consider:
* Before using the `union` operator, ensure that the fields being merged have compatible data types.
* Use `project` or `project-away` to include or exclude specific fields. This can improve performance and the clarity of your results, especially when you only need a subset of the available data.
# where
This page explains how to use the where operator in APL.
The `where` operator in APL is used to filter rows based on specified conditions. You can use the `where` operator to return only the records that meet the criteria you define. It’s a foundational operator in querying datasets, helping you focus on specific data by applying conditions to filter out unwanted rows. This is useful when working with large datasets, logs, traces, or security events, allowing you to extract meaningful information quickly.
## For users of other query languages
If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `where` operator filters events based on boolean expressions. APL’s `where` operator functions similarly, allowing you to filter rows that satisfy a condition.
```sql Splunk example
index=main | where status="200"
```
```kusto APL equivalent
['sample-http-logs']
| where status == '200'
```
In ANSI SQL, the `WHERE` clause filters rows in a `SELECT` query based on a condition. APL’s `where` operator behaves similarly, but the syntax reflects APL’s specific dataset structures.
```sql SQL example
SELECT * FROM sample_http_logs WHERE status = '200'
```
```kusto APL equivalent
['sample-http-logs']
| where status == '200'
```
## Usage
### Syntax
```kusto
| where condition
```
### Parameters
* `condition`: A Boolean expression that specifies the filtering condition. The `where` operator returns only the rows that satisfy this condition.
### Returns
The `where` operator returns a filtered dataset containing only the rows where the condition evaluates to true.
## Use case examples
In this use case, you filter HTTP logs to focus on records where the HTTP status is 404 (Not Found).
**Query**
```kusto
['sample-http-logs']
| where status == '404'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'404'%22%7D)
**Output**
| \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ----- | ------ | ------ | -------------- | ----------------- | -------- | ----------- |
| 2024-10-17 10:20:00 | 12345 | 404 | GET | /notfound.html | 120 | Seattle | US |
This query filters out all HTTP requests except those that resulted in a 404 error, making it easy to investigate pages that were not found.
Here, you filter OpenTelemetry traces to retrieve spans where the `duration` exceeded 500 milliseconds.
**Query**
```kusto
['otel-demo-traces']
| where duration > 500ms
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20duration%20%3E%20500ms%22%7D)
**Output**
| \_time | span\_id | trace\_id | duration | service.name | kind | status\_code |
| ------------------- | -------- | --------- | -------- | ------------ | ------ | ------------ |
| 2024-10-17 11:15:00 | abc123 | xyz789 | 520ms | frontend | server | OK |
This query helps identify spans with durations longer than 500 milliseconds, which might indicate performance issues.
In this security use case, you filter logs to find requests from users in a specific country, such as Germany.
**Query**
```kusto
['sample-http-logs']
| where ['geo.country'] == 'Germany'
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20'Germany'%22%7D)
**Output**
| \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country |
| ------------------- | ----- | ------ | ------ | ------ | ----------------- | -------- | ----------- |
| 2024-10-17 09:45:00 | 54321 | 200 | POST | /login | 100 | Berlin | Germany |
This query helps filter logs to investigate activity originating from a specific country, useful for security and compliance.
## where \* has
The `* has` pattern in APL is a dynamic and powerful tool within the `where` operator. It offers you the flexibility to search for specific substrings across all fields in a dataset without the need to specify each field name individually. This becomes especially advantageous when dealing with datasets that have numerous or dynamically named fields.
`where * has` is an expensive operation because it searches all fields. For a more efficient query, explicitly list the fields in which you want to search. For example: `where firstName has "miguel" or lastName has "miguel"`.
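For example, instead of searching every field for a substring, you can limit the search to the fields where you expect it to appear. This sketch assumes the substring occurs in the `uri` or `content_type` fields:

```kusto
['sample-http-logs']
| where uri has "css" or content_type has "css"
```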
### Basic where \* has usage
Find events where any field contains a specific substring.
```kusto
['sample-http-logs']
| where * has "GET"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Combine multiple substrings
Find events where any field contains one of multiple substrings.
```kusto
['sample-http-logs']
| where * has "GET" or * has "text"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20or%20%2A%20has%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use \* has with other operators
Find events where any field contains a substring, and another specific field equals a certain value.
```kusto
['sample-http-logs']
| where * has "css" and req_duration_ms == 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22css%5C%22%20and%20req_duration_ms%20%3D%3D%201%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Advanced chaining
Filter data based on several conditions, including fields containing certain substrings, then summarize by another specific criterion.
```kusto
['sample-http-logs']
| where * has "GET" and * has "css"
| summarize Count=count() by method, content_type, server_datacenter
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20and%20%2A%20has%20%5C%22css%5C%22%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20method%2C%20content_type%2C%20server_datacenter%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Use with aggregations
Find the average of a specific field for events where any field contains a certain substring.
```kusto
['sample-http-logs']
| where * has "Japan"
| summarize avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22Japan%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
### Case sensitivity
The `has` operator is case insensitive. Use `has` if you’re unsure about the case of the substring in the dataset. For the case-sensitive operator, use `has_cs`.
```kusto
['sample-http-logs']
| where * has "mexico"
| summarize avg(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22mexico%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)
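The following sketch shows the case-sensitive variant. It assumes the dataset stores the country name with an initial capital, so a lowercase search term wouldn’t match:
```kusto
['sample-http-logs']
// has_cs only matches the exact casing of the search term
| where ['geo.country'] has_cs "Mexico"
| summarize avg(req_duration_ms)
```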
## List of related operators
* [count](/apl/tabular-operators/count-operator): Use `count` to return the number of records that match specific criteria.
* [distinct](/apl/tabular-operators/distinct-operator): Use `distinct` to return unique values in a dataset, complementing filtering.
* [take](/apl/tabular-operators/take-operator): Use `take` to return a specific number of records, typically in combination with `where` for pagination.
# Sample queries
Explore how to use APL in Axiom’s Query tab to run queries using Tabular Operators, Scalar Functions, and Aggregation Functions.
In this tutorial, you’ll explore how to use APL in Axiom’s Query tab to run queries using Tabular Operators, Scalar Functions, and Aggregation Functions.
## Prerequisites
* Sign up and log in to your [Axiom account](https://app.axiom.co/).
* Ingest data into your dataset, or run queries on the [Play Sandbox](https://axiom.co/play).
## Overview of APL
Every query starts with a dataset name embedded in **square brackets**, followed by a tabular operator statement. The query’s tabular expression statements produce the results of the query.
The pipe (`|`) delimiter separates the statements in a query, so the data flows from one operator or function to the next.
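For example, this minimal query names the dataset in square brackets and then pipes the events through a filter and an aggregation:
```kusto
['sample-http-logs']
// keep only GET requests
| where method == "GET"
// count the matching events per automatic time bin
| summarize count() by bin_auto(_time)
```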
## Commonly used Operators
To run queries on each function or operator in this tutorial, click the **Run in Playground** button.
[summarize](/apl/tabular-operators/summarize-operator): Produces a table that aggregates the content of the dataset.
The following query returns the count of events by **time**:
```kusto
['github-push-event']
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
You can use the [aggregation functions](/apl/aggregation-function/statistical-functions) with the **summarize operator** to produce different columns.
## Top 10 GitHub push events by maximum push id
```kusto
['github-push-event']
| summarize max_if = maxif(push_id, true) by size
| top 10 by max_if desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20max_if%20%3D%20maxif%28push_id%2C%20true%29%20by%20size%5Cn%7C%20top%2010%20by%20max_if%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Distinct City count by server datacenter
```kusto
['sample-http-logs']
| summarize cities = dcount(['geo.city']) by server_datacenter
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20cities%20%3D%20dcount%28%5B%27geo.city%27%5D%29%20by%20server_datacenter%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
The result of a summarize operation has:
* A row for every combination of by values
* Each column named in by
* A column for each expression
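For example, the following query groups by two fields and computes one aggregation expression, so the result contains the `method` and `geo.country` columns plus a `Count` column:
```kusto
['sample-http-logs']
// one row per method and country combination, plus a Count column
| summarize Count = count() by method, ['geo.country']
```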
[where](/apl/tabular-operators/where-operator): Filters the content of the dataset that meets a **condition** when executed.
The following query filters the data by **method** and **content\_type**:
```kusto
['sample-http-logs']
| where method == "GET" and content_type == "application/octet-stream"
| project method , content_type
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%20and%20content_type%20%3D%3D%20%5C%22application%2Foctet-stream%5C%22%5Cn%7C%20project%20method%20%2C%20content_type%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
[count](/apl/tabular-operators/count-operator): Returns the number of events from the input dataset.
```kusto
['sample-http-logs']
| count
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20count%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
[Summarize](/apl/tabular-operators/summarize-operator) count by time bins in sample HTTP logs
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
[project](/apl/tabular-operators/project-operator): Selects a subset of columns.
```kusto
['sample-http-logs']
| project content_type, ['geo.country'], method, resp_body_size_bytes, resp_header_size_bytes
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20content_type%2C%20%5B%27geo.country%27%5D%2C%20method%2C%20resp_body_size_bytes%2C%20resp_header_size_bytes%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
[take](/apl/tabular-operators/take-operator): Returns up to the specified number of rows.
```kusto
['sample-http-logs']
| take 100
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
The **limit** operator is an alias to the **take** operator.
```kusto
['sample-http-logs']
| limit 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20limit%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Scalar Functions
#### [parse\_json()](/apl/scalar-functions/string-functions#parse-json)
The following query extracts the JSON elements from an array:
```kusto
['sample-http-logs']
| project parsed_json = parse_json( "config_jsonified_metrics")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20parsed_json%20%3D%20parse_json%28%20%5C%22config_jsonified_metrics%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [replace\_string()](/apl/scalar-functions/string-functions#replace-string): Replaces all string matches with another string.
```kusto
['sample-http-logs']
| extend replaced_string = replace_string( "creator", "method", "machala" )
| project replaced_string
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%20%5C%22creator%5C%22%2C%20%5C%22method%5C%22%2C%20%5C%22machala%5C%22%20%29%5Cn%7C%20project%20replaced_string%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [split()](/apl/scalar-functions/string-functions#split): Splits a given string according to a given delimiter and returns a string array.
```kusto
['sample-http-logs']
| project split_str = split("method_content_metrics", "_")
| take 20
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22method_content_metrics%5C%22%2C%20%5C%22_%5C%22%29%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [strcat\_delim()](/apl/scalar-functions/string-functions#strcat-delim): Concatenates a string array into a string with a given delimiter.
```kusto
['sample-http-logs']
| project strcat = strcat_delim(":", ['geo.city'], resp_body_size_bytes)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28%5C%22%3A%5C%22%2C%20%5B%27geo.city%27%5D%2C%20resp_body_size_bytes%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [indexof()](/apl/scalar-functions/string-functions#indexof): Reports the zero-based index of the first occurrence of a specified string within the input string.
```kusto
['sample-http-logs']
| extend based_index = indexof( ['geo.country'], content_type, 45, 60, resp_body_size_bytes ), specified_time = bin(resp_header_size_bytes, 30)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20based_index%20%3D%20%20indexof%28%20%5B%27geo.country%27%5D%2C%20content_type%2C%2045%2C%2060%2C%20resp_body_size_bytes%20%29%2C%20specified_time%20%3D%20bin%28resp_header_size_bytes%2C%2030%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Regex Examples
```kusto
['sample-http-logs']
| project remove_cutset = trim_start_regex("[^a-zA-Z]", content_type )
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex%28%5C%22%5B%5Ea-zA-Z%5D%5C%22%2C%20content_type%20%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Finding logs from a specific City
```kusto
['sample-http-logs']
| where tostring(['geo.city']) matches regex "^Camaquã$"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28%5B%27geo.city%27%5D%29%20matches%20regex%20%5C%22%5ECamaqu%C3%A3%24%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Identifying logs from a specific user agent
```kusto
['sample-http-logs']
| where tostring(user_agent) matches regex "Mozilla/5.0"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28user_agent%29%20matches%20regex%20%5C%22Mozilla%2F5.0%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Finding logs with response body size in a certain range
```kusto
['sample-http-logs']
| where toint(resp_body_size_bytes) >= 4000 and toint(resp_body_size_bytes) <= 5000
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_body_size_bytes%29%20%3E%3D%204000%20and%20toint%28resp_body_size_bytes%29%20%3C%3D%205000%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Finding logs with user agents containing Windows NT
```kusto
['sample-http-logs']
| where tostring(user_agent) matches regex @"Windows NT [\d\.]+"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?qid=m8yNkSVVjGq-s0z19c)
## Finding logs with specific response header size
```kusto
['sample-http-logs']
| where toint(resp_header_size_bytes) == 31
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_header_size_bytes%29%20%3D%3D%2031%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Finding logs with specific request duration
```kusto
['sample-http-logs']
| where toreal(req_duration_ms) < 1
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toreal%28req_duration_ms%29%20%3C%201%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Finding logs where TLS is enabled and method is POST
```kusto
['sample-http-logs']
| where tostring(is_tls) == "true" and tostring(method) == "POST"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28is_tls%29%20%3D%3D%20%5C%22true%5C%22%20and%20tostring%28method%29%20%3D%3D%20%5C%22POST%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Array functions
#### [array\_concat()](/apl/scalar-functions/array-functions#array_concat): Concatenates a number of dynamic arrays to a single array.
```kusto
['sample-http-logs']
| extend concatenate = array_concat( dynamic([5,4,3,87,45,2,3,45]))
| project concatenate
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20concatenate%20%3D%20array_concat%28%20dynamic%28%5B5%2C4%2C3%2C87%2C45%2C2%2C3%2C45%5D%29%29%5Cn%7C%20project%20concatenate%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [array\_sum()](/apl/scalar-functions/array-functions#array-sum): Calculates the sum of elements in a dynamic array.
```kusto
['sample-http-logs']
| extend summary_array=dynamic([1,2,3,4])
| project summary_array=array_sum(summary_array)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20summary_array%3Ddynamic%28%5B1%2C2%2C3%2C4%5D%29%5Cn%7C%20project%20summary_array%3Darray_sum%28summary_array%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Conversion functions
#### [todatetime()](/apl/scalar-functions/conversion-functions#todatetime): Converts input to datetime scalar.
```kusto
['sample-http-logs']
| extend dated_time = todatetime("2026-08-16")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dated_time%20%3D%20todatetime%28%5C%222026-08-16%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
#### [dynamic\_to\_json()](/apl/scalar-functions/conversion-functions#dynamic-to-json): Converts a scalar value of type dynamic to a canonical string representation.
```kusto
['sample-http-logs']
| extend dynamic_string = dynamic_to_json(dynamic([10,20,30,40 ]))
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dynamic_string%20%3D%20dynamic_to_json%28dynamic%28%5B10%2C20%2C30%2C40%20%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## String Operators
APL supports various [string](/apl/scalar-operators/string-operators), [logical](/apl/scalar-operators/logical-operators), and [numerical operators](/apl/scalar-operators/numerical-operators).
In the query below, we use the **contains** operator to find the strings that contain **-bot** and **\[bot]**:
```kusto
['github-issue-comment-event']
| extend bot = actor contains "-bot" or actor contains "[bot]"
| where bot == true
| summarize count() by bin_auto(_time), actor
| take 20
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issue-comment-event%27%5D%5Cn%7C%20extend%20bot%20%3D%20actor%20contains%20%5C%22-bot%5C%22%20or%20actor%20contains%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20bot%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20actor%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['sample-http-logs']
| extend user_status = status contains "200" , agent_flow = user_agent contains "(Windows NT 6.4; AppleWebKit/537.36 Chrome/41.0.2225.0 Safari/537.36"
| where user_status == true
| summarize count() by bin_auto(_time), status
| take 15
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20user_status%20%3D%20status%20contains%20%5C%22200%5C%22%20%2C%20agent_flow%20%3D%20user_agent%20contains%20%5C%22%28Windows%20NT%206.4%3B%20AppleWebKit%2F537.36%20Chrome%2F41.0.2225.0%20Safari%2F537.36%5C%22%5Cn%7C%20where%20user_status%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20status%5Cn%7C%20take%2015%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Hash Functions
* [hash\_md5()](/apl/scalar-functions/hash-functions#hash-md5): Returns an MD5 hash value for the input value.
* [hash\_sha256()](/apl/scalar-functions/hash-functions#hash-sha256): Returns a sha256 hash value for the input value.
* [hash\_sha1()](/apl/scalar-functions/hash-functions#hash-sha1): Returns a sha1 hash value for the input value.
```kusto
['sample-http-logs']
| extend sha_256 = hash_md5( "resp_header_size_bytes" ), sha_1 = hash_sha1( content_type), md5 = hash_md5( method), sha512 = hash_sha512( "resp_header_size_bytes" )
| project sha_256, sha_1, md5, sha512
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20sha_256%20%3D%20hash_md5%28%20%5C%22resp_header_size_bytes%5C%22%20%29%2C%20sha_1%20%3D%20hash_sha1%28%20content_type%29%2C%20md5%20%3D%20hash_md5%28%20method%29%2C%20sha512%20%3D%20hash_sha512%28%20%5C%22resp_header_size_bytes%5C%22%20%29%5Cn%7C%20project%20sha_256%2C%20sha_1%2C%20md5%2C%20sha512%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## List all unique groups
```kusto
['sample-http-logs']
| distinct ['id'], is_tls
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20distinct%20%5B'id'%5D%2C%20is_tls%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Count of all events per service
```kusto
['sample-http-logs']
| summarize Count = count() by server_datacenter
| order by Count desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20server_datacenter%5Cn%7C%20order%20by%20Count%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Change the time clause
```kusto
['github-issues-event']
| where _time == ago(1m)
| summarize count(), sum(['milestone.number']) by _time=bin(_time, 1m)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20_time%20%3D%3D%20ago%281m%29%5Cn%7C%20summarize%20count%28%29%2C%20sum%28%5B%27milestone.number%27%5D%29%20by%20_time%3Dbin%28_time%2C%201m%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Rounding functions
* [floor()](/apl/scalar-functions/rounding-functions#floor): Calculates the largest integer less than, or equal to, the specified numeric expression.
* [ceiling()](/apl/scalar-functions/rounding-functions#ceiling): Calculates the smallest integer greater than, or equal to, the specified numeric expression.
* [bin()](/apl/scalar-functions/rounding-functions#bin): Rounds values down to an integer multiple of a given bin size.
```kusto
['sample-http-logs']
| extend largest_integer_less = floor( resp_header_size_bytes ), smallest_integer_greater = ceiling( req_duration_ms ), integer_multiple = bin( resp_body_size_bytes, 5 )
| project largest_integer_less, smallest_integer_greater, integer_multiple
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20largest_integer_less%20%3D%20floor%28%20resp_header_size_bytes%20%29%2C%20smallest_integer_greater%20%3D%20ceiling%28%20req_duration_ms%20%29%2C%20integer_multiple%20%3D%20bin%28%20resp_body_size_bytes%2C%205%20%29%5Cn%7C%20project%20largest_integer_less%2C%20smallest_integer_greater%2C%20integer_multiple%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Truncate decimals using round function
```kusto
['sample-http-logs']
| project rounded_value = round(req_duration_ms, 2)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%28req_duration_ms%2C%202%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Truncate decimals using floor function
```kusto
['sample-http-logs']
| project floor_value = floor(resp_body_size_bytes), ceiling_value = ceiling(req_duration_ms)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20floor_value%20%3D%20floor%28resp_body_size_bytes%29%2C%20ceiling_value%20%3D%20ceiling%28req_duration_ms%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## HTTP 5xx responses (day wise) for the last 7 days - one bar per day
```kusto
['sample-http-logs']
| where _time > ago(7d)
| where req_duration_ms >= 5 and req_duration_ms < 6
| summarize count(), histogram(resp_header_size_bytes, 20) by bin(_time, 1d)
| order by _time desc
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20req_duration_ms%20%3E%3D%205%20and%20req_duration_ms%20%3C%206%20%7C%20summarize%20count\(\)%2C%20histogram\(resp_header_size_bytes%2C%2020\)%20by%20bin\(_time%2C%201d\)%20%7C%20order%20by%20_time%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%227d%22%7D%7D)
## Implement a remapper on remote address logs
```kusto
['sample-http-logs']
| extend RemappedStatus = case(req_duration_ms >= 0.57, "new data", resp_body_size_bytes >= 1000, "size bytes", resp_header_size_bytes == 40, "header values", "doesntmatch")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20RemappedStatus%20%3D%20case%28req_duration_ms%20%3E%3D%200.57%2C%20%5C%22new%20data%5C%22%2C%20resp_body_size_bytes%20%3E%3D%201000%2C%20%5C%22size%20bytes%5C%22%2C%20resp_header_size_bytes%20%3D%3D%2040%2C%20%5C%22header%20values%5C%22%2C%20%5C%22doesntmatch%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Advanced aggregations
In this section, you will learn how to run queries using different functions and operators.
```kusto
['sample-http-logs']
| extend prospect = ['geo.city'] contains "Okayama" or uri contains "/api/v1/messages/back"
| extend possibility = server_datacenter contains "GRU" or status contains "301"
| summarize count(), topk( user_agent, 6 ) by bin(_time, 10d), ['geo.country']
| take 4
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20prospect%20%3D%20%5B%27geo.city%27%5D%20contains%20%5C%22Okayama%5C%22%20or%20uri%20contains%20%5C%22%2Fapi%2Fv1%2Fmessages%2Fback%5C%22%5Cn%7C%20extend%20possibility%20%3D%20server_datacenter%20contains%20%5C%22GRU%5C%22%20or%20status%20contains%20%5C%22301%5C%22%5Cn%7C%20summarize%20count%28%29%2C%20topk%28%20user_agent%2C%206%20%29%20by%20bin%28_time%2C%2010d%29%2C%20%5B%27geo.country%27%5D%5Cn%7C%20take%204%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Searching map fields
```kusto
['otel-demo-traces']
| where isnotnull( ['attributes.custom'])
| extend extra = tostring(['attributes.custom'])
| search extra:"0PUK6V6EV0"
| project _time, trace_id, name, ['attributes.custom']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20where%20isnotnull%28%20%5B'attributes.custom'%5D%29%5Cn%7C%20extend%20extra%20%3D%20tostring%28%5B'attributes.custom'%5D%29%5Cn%7C%20search%20extra%3A%5C%220PUK6V6EV0%5C%22%5Cn%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Configure Processing rules
```kusto
['sample-http-logs']
| where _sysTime > ago(1d)
| summarize count() by method
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20_sysTime%20%3E%20ago%281d%29%5Cn%7C%20summarize%20count%28%29%20by%20method%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%221d%22%7D%7D)
## Return different values based on the evaluation of a condition
```kusto
['sample-http-logs']
| extend MemoryUsageStatus = iff(req_duration_ms > 10000, "Highest", "Normal")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20MemoryUsageStatus%20%3D%20iff%28req_duration_ms%20%3E%2010000%2C%20%27Highest%27%2C%20%27Normal%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Working with different operators
```kusto
['hn']
| extend superman = text contains "superman" or title contains "superman"
| extend batman = text contains "batman" or title contains "batman"
| extend hero = case(
superman and batman, "both",
superman, "superman ", // spaces change the color
batman, "batman ",
"none")
| where (superman or batman) and not (batman and superman)
| summarize count(), topk(type, 3) by bin(_time, 30d), hero
| take 10
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20extend%20superman%20%3D%20text%20contains%20%5C%22superman%5C%22%20or%20title%20contains%20%5C%22superman%5C%22%5Cn%7C%20extend%20batman%20%3D%20text%20contains%20%5C%22batman%5C%22%20or%20title%20contains%20%5C%22batman%5C%22%5Cn%7C%20extend%20hero%20%3D%20case%28%5Cn%20%20%20%20superman%20and%20batman%2C%20%5C%22both%5C%22%2C%5Cn%20%20%20%20superman%2C%20%5C%22superman%20%20%20%5C%22%2C%20%2F%2F%20spaces%20change%20the%20color%5Cn%20%20%20%20batman%2C%20%5C%22batman%20%20%20%20%20%20%20%5C%22%2C%5Cn%20%20%20%20%5C%22none%5C%22%29%5Cn%7C%20where%20%28superman%20or%20batman%29%20and%20not%20%28batman%20and%20superman%29%5Cn%7C%20summarize%20count%28%29%2C%20topk%28type%2C%203%29%20by%20bin%28_time%2C%2030d%29%2C%20hero%5Cn%7C%20take%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['sample-http-logs']
| summarize flow = dcount( content_type) by ['geo.country']
| take 50
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20flow%20%3D%20dcount%28%20content_type%29%20by%20%5B%27geo.country%27%5D%5Cn%7C%20take%2050%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Get the JSON into a property bag using parse_json
```kusto
example
| where isnotnull(log)
| extend parsed_log = parse_json(log)
| project service, parsed_log.level, parsed_log.message
```
## Get average response using the project-keep function
```kusto
['sample-http-logs']
| where ['geo.country'] == "United States" or ['id'] == 'b2b1f597-0385-4fed-a911-140facb757ef'
| extend systematic_view = ceiling( resp_header_size_bytes )
| extend resp_avg = cos( resp_body_size_bytes )
| project-away systematic_view
| project-keep resp_avg
| take 5
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20%5C%22United%20States%5C%22%20or%20%5B'id'%5D%20%3D%3D%20%5C%22b2b1f597-0385-4fed-a911-140facb757ef%5C%22%5Cn%7C%20extend%20systematic_view%20%3D%20ceiling%28%20resp_header_size_bytes%20%29%5Cn%7C%20extend%20resp_avg%20%3D%20cos%28%20resp_body_size_bytes%20%29%5Cn%7C%20project-away%20systematic_view%5Cn%7C%20project-keep%20resp_avg%5Cn%7C%20take%205%22%7D)
## Combine multiple percentiles into a single chart in APL
```kusto
['sample-http-logs']
| summarize percentiles_array(req_duration_ms, 50, 75, 90) by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2050%2C%2075%2C%2090\)%20by%20bin_auto\(_time\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Combine mathematical functions
```kusto
['sample-http-logs']
| extend tangent = tan( req_duration_ms ), cosine = cos( resp_header_size_bytes ), absolute_input = abs( req_duration_ms ), sine = sin( resp_header_size_bytes ), power_factor = pow( req_duration_ms, 4)
| extend angle_pi = degrees( resp_body_size_bytes ), pie = pi()
| project tangent, cosine, absolute_input, angle_pi, pie, sine, power_factor
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20tangent%20%3D%20tan%28%20req_duration_ms%20%29%2C%20cosine%20%3D%20cos%28%20resp_header_size_bytes%20%29%2C%20absolute_input%20%3D%20abs%28%20req_duration_ms%20%29%2C%20sine%20%3D%20sin%28%20resp_header_size_bytes%20%29%2C%20power_factor%20%3D%20pow%28%20req_duration_ms%2C%204%29%5Cn%7C%20extend%20angle_pi%20%3D%20degrees%28%20resp_body_size_bytes%20%29%2C%20pie%20%3D%20pi%28%29%5Cn%7C%20project%20tangent%2C%20cosine%2C%20absolute_input%2C%20angle_pi%2C%20pie%2C%20sine%2C%20power_factor%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
```kusto
['github-issues-event']
| where actor !endswith "[bot]"
| where repo startswith "kubernetes/"
| where action == "opened"
| summarize count() by bin_auto(_time)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20actor%20%21endswith%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20repo%20startswith%20%5C%22kubernetes%2F%5C%22%5Cn%7C%20where%20action%20%3D%3D%20%5C%22opened%5C%22%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Change global configuration attributes
```kusto
['sample-http-logs']
| extend status = coalesce(status, "info")
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status%20%3D%20coalesce\(status%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Set default value on event field
```kusto
['sample-http-logs']
| project status = case(
  isnotnull(status) and status != "", content_type, // use content_type if status isn’t null or empty
"info" // default value
)
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project%20status%20%3D%20case\(isnotnull\(status\)%20and%20status%20!%3D%20%5C%22%5C%22%2C%20content_type%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
## Extract nested payment amount from custom attributes map field
```kusto
['otel-demo-traces']
| extend amount = ['attributes.custom']['app.payment.amount']
| where isnotnull( amount)
| project _time, trace_id, name, amount, ['attributes.custom']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20amount%20%3D%20%5B'attributes.custom'%5D%5B'app.payment.amount'%5D%20%7C%20where%20isnotnull\(%20amount\)%20%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20amount%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
## Filtering GitHub issues by label identifier
```kusto
['github-issues-event']
| extend data = tostring(labels)
| where labels contains "d73a4a"
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20extend%20data%20%3D%20tostring\(labels\)%20%7C%20where%20labels%20contains%20'd73a4a'%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
## Aggregate trace counts by HTTP method attribute in custom map
```kusto
['otel-demo-traces']
| extend httpFlavor = tostring(['attributes.custom'])
| summarize Count=count() by ['attributes.http.method']
```
[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20httpFlavor%20%3D%20tostring\(%5B'attributes.custom'%5D\)%20%7C%20summarize%20Count%3Dcount\(\)%20by%20%5B'attributes.http.method'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
# Connect Axiom with Cloudflare Logpush
Axiom gives you an all-at-once view of key Cloudflare Logpush metrics and logs, out of the box, with our dynamic Cloudflare Logpush dashboard.
Cloudflare Logpush is a feature that allows you to push HTTP request logs and other Cloudflare-generated logs directly to your desired storage, analytics, and monitoring solutions like Axiom. The integration with Axiom aims to provide real-time insights into web traffic, and operational issues, thereby helping to monitor and troubleshoot effectively.
## What’s Cloudflare Logpush?
Cloudflare Logpush enables Cloudflare users to automatically export their logs in JSON format to a variety of endpoints. This feature is incredibly useful for analytics, auditing, debugging, and monitoring the performance and security of websites. Types of logs you can export include HTTP request logs, firewall events, and more.
## Installing Cloudflare Logpush app
### Prerequisites
* An active Cloudflare Enterprise account
* API token or global API key
You can create a token that has access to a single zone, a single account, or a mix of these, depending on your needs. For account access, the token must have these permissions:
* Logs: Edit
* Account settings: Read
For zone access, only the Logs: Edit permission is required.
## Steps
* Log in to Cloudflare, go to your Cloudflare dashboard, and then select the Enterprise zone (domain) you want to enable Logpush for.
* Optionally, set filters and fields. You can filter logs by field (like Client IP, User Agent, etc.) and set the type of logs you want (for example, HTTP requests, firewall events).
* In Axiom, click **Settings**, select **Apps**, and install the Cloudflare Logpush app with the token you created from the profile settings in Cloudflare.
* You see your available accounts and zones. Select the Cloudflare datasets you want to subscribe to.
* The installation uses the Cloudflare API to create Logpush jobs for each selected dataset.
* After the installation completes, you can find the installed Logpush jobs at Cloudflare.
* In Axiom, you can see your Cloudflare Logpush dashboard.
Using Axiom with Cloudflare Logpush offers a powerful solution for real-time monitoring, observability, and analytics. Axiom can help you gain deep insights into your app’s performance, errors, and app bottlenecks.
### Benefits of using the Axiom Cloudflare Logpush Dashboard
* Real-time visibility into web performance: One of the most crucial features is the ability to see how your website or app is performing in real-time. The dashboard can show everything from page load times to error rates, giving you immediate insights that can help in timely decision-making.
* Actionable insights for troubleshooting: The dashboard doesn’t just provide raw data; it provides insights. Whether it’s an error that needs immediate fixing or a performance metric that points to a problem in your app, having this information readily available makes it easier to identify problems and resolve them swiftly.
* DNS metrics: Understanding the DNS requests, DNS queries, and DNS cache hit from your app is vital to track if there’s a request spike or get the total number of queries in your system.
* Centralized logging and error tracing: With logs coming in from various parts of your app stack, centralizing them within Axiom makes it easier to correlate events across different layers of your infrastructure. This is crucial for troubleshooting complex issues that may span multiple services or components.
## Supported Cloudflare Logpush Datasets
Axiom supports the following zone-scoped and account-scoped Cloudflare Logpush datasets.
**Zone-scoped**
* DNS logs
* Firewall events
* HTTP requests
* NEL reports
* Spectrum events
**Account-scoped**
* Access requests
* Audit logs
* CASB Findings
* Device posture results
* DNS Firewall Logs
* Gateway DNS
* Gateway HTTP
* Gateway Network
* Magic IDS Detections
* Network Analytics Logs
* Workers Trace Events
* Zero Trust Network Session Logs
# Connect Axiom with Cloudflare Workers
Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with our dynamic Cloudflare Workers dashboard.
Axiom’s Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status.
The data obtained with the Axiom dashboard gives you better insights into the state of your Cloudflare workers so you can easily monitor bad requests, popular URLs, cumulative execution time, successful requests, and more. The app is part of Axiom’s unified logging and observability platform, so you can easily track Cloudflare Workers edge requests alongside a comprehensive view of other resources in your Cloudflare worker environments.
## What is Cloudflare Workers
Cloudflare Workers is a serverless computing platform developed by Cloudflare. The Workers platform allows developers to deploy and run JavaScript code directly at the network edge in more than 200 data centers worldwide. This serverless architecture enables high performance, low latency, and efficient scaling for web apps and APIs.
## Sending Cloudflare Worker logs to Axiom
The Axiom Cloudflare worker repository plugin is available on [GitHub](https://github.com/axiomhq/axiom-cloudflare-workers).
1. Copy the contents of [src/worker.js](https://github.com/axiomhq/axiom-cloudflare-workers/blob/main/src/worker.js) into a new worker on Cloudflare.
2. Update the authentication variables with your Axiom dataset and API token:
```js
const axiomDataset = "my-dataset" // Your Axiom dataset
const axiomToken = "xapt-xxx" // Your Axiom API token
```
* The dataset is where your Cloudflare Worker logs are stored. Create a dataset from the settings page in the Axiom UI.
* The Axiom token is your API token with ingest and query permissions.
3. Add triggers for the worker, for example, a route trigger:
* Navigate to the worker and click on the Triggers tab.
* Scroll down to Routes and click Add Route.
* Enter a route, for example, \*.example.com, choose the related zone, then click Save.
## View Cloudflare Workers Logs
When requests are made to the routes you set up, the worker is triggered and the logs are delivered to your Axiom dataset.
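Once events arrive, you can query them in APL. A minimal sketch, assuming your dataset is named `my-dataset` as in the variables above, charts the volume of incoming worker logs over time:
```kusto
// replace 'my-dataset' with the Axiom dataset you configured in the worker
['my-dataset']
| summarize count() by bin_auto(_time)
```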
# Connect Axiom with Grafana
Learn how to extend the functionality of Grafana by installing the Axiom data source plugin.
## What is a Grafana data source plugin?
Grafana is an open-source tool for time-series analytics, visualization, and alerting. It’s frequently used in DevOps and IT Operations roles to provide real-time information on system health and performance.
Data sources in Grafana are the actual databases or services where the data is stored. Grafana has a variety of data source plugins that connect Grafana to different types of databases or services. This enables Grafana to query those sources and display that data on its dashboards. The data sources can be anything from traditional SQL databases to time-series databases, or metrics and logs from Axiom.
A Grafana data source plugin extends the functionality of Grafana by allowing it to interact with a specific type of data source. These plugins enable users to extract data from a variety of different sources, not just those that come supported by default in Grafana.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Install the Axiom Grafana data source plugin on Grafana Cloud
* In Grafana, click Administration > Plugins in the side navigation menu to view installed plugins.
* In the filter bar, search for the Axiom plugin.
* Click on the plugin logo.
* Click Install.
When the installation is complete, a confirmation message is displayed, indicating that the installation was successful.
* The Axiom Grafana Plugin is also installable from the [Grafana Plugins page](https://grafana.com/grafana/plugins/axiomhq-axiom-datasource/)
## Install the Axiom Grafana data source plugin on local Grafana
The Axiom data source plugin for Grafana is [open source on GitHub](https://github.com/axiomhq/axiom-grafana). It can be installed via the Grafana CLI, or via Docker.
### Install the Axiom Grafana Plugin using Grafana CLI
```bash
grafana-cli plugins install axiomhq-axiom-datasource
```
### Install Via Docker
* Add the plugin to your `docker-compose.yml` or `Dockerfile`
* Set the environment variable `GF_INSTALL_PLUGINS` to include the plugin
Example:
`GF_INSTALL_PLUGINS="axiomhq-axiom-datasource"`
## Configuration
* Add a new data source in Grafana
* Select the Axiom data source type.
* Enter the previously generated API token.
* Save and test the data source.
## Build Queries with Query Editor
The Axiom data source Plugin provides a custom query editor to build and visualize your Axiom event data. After configuring the Axiom data source, start building visualizations from metrics and logs stored in Axiom.
* Create a new panel in Grafana by clicking on Add visualization
* Select the Axiom data source.
* Use the query editor to choose the desired metrics, dimensions, and filters.
## Benefits of the Axiom Grafana data source plugin
The Axiom Grafana data source plugin allows users to display and interact with their Axiom data directly from within Grafana. By doing so, it provides several advantages:
1. **Unified visualization:** The Axiom Grafana data source plugin allows users to utilize Grafana’s powerful visualization tools with Axiom’s data. This enables users to create, explore, and share dashboards which visually represent their Axiom logs and metrics.
2. **Rich Querying Capability:** Grafana has a powerful and flexible interface for building data queries. With the Axiom plugin, you can leverage this capability to build complex queries against your Axiom data.
3. **Customizable Alerting:** Grafana’s alerting feature allows you to set alerts based on your queries' results, and set up custom alerts based on specific conditions in your Axiom log data.
4. **Sharing and Collaboration:** Grafana’s features for sharing and collaboration can help teams work together more effectively. Share Axiom data visualizations with others, collaborate on dashboards, and discuss insights directly in Grafana.
# Apps
Enrich your Axiom organization with dedicated apps.
This section walks you through a catalog of dedicated apps that enrich your Axiom organization.
To use standard APIs and other data shippers like the Elasticsearch Bulk API, FluentBit log processor or Fluentd log collector, go to [Send data](/send-data/ingest) instead.
# Enrich Axiom experience with AWS Lambda
This page explains how to enrich your Axiom experience with AWS Lambda.
Use the Axiom Lambda Extension to enrich your Axiom organization with quick filters and a dashboard.
For information on how to send logs and platform events of your Lambda function to Axiom, see [Send data from AWS Lambda](/send-data/aws-lambda).
## What’s the Axiom Lambda Extension
AWS Lambda is a compute service that allows you to build applications and run your code at scale without provisioning or maintaining any servers.
Use the Axiom Lambda Extension to collect Lambda logs, performance metrics, platform events, and memory usage from your Lambda functions. With the Axiom Lambda Extension, you can monitor Lambda performance, aggregate system-level metrics for your serverless applications, and optimize Lambda functions through easy-to-use automatic dashboards.
With the Axiom Lambda extension, you can:
* Monitor your Lambda functions and invocations.
* Get full visibility into your AWS Lambda events in minutes.
* Collect metrics and logs from your Lambda-based Serverless Applications.
* Track and view enhanced memory usage by versions, durations, and cold start.
* Detect and get alerts on Lambda event errors, Lambda request timeout, and low execution time.
## Comprehensive AWS Lambda dashboards
The Axiom AWS Lambda integration comes with a pre-built dashboard where you can see and group your functions with the versions and AWS resource that triggers them, making this the ideal starting point for getting an advanced view of the performance and health of your AWS Lambda serverless services and Lambda function events. The AWS Lambda dashboards automatically show up in Axiom through schema detection after installing the Axiom Lambda Extension.
These new zero-config dashboards help you spot and troubleshoot Lambda function errors. For example, if there’s high memory usage on your functions, you can spot the unusual delay from the max execution dashboard and filter your errors by functions, durations, invocations, and versions. With your Lambda version name, you can gain and expand your views on what’s happening in your Lambda event source mapping and invocation type.
## Monitor Lambda functions and usage in Axiom
Having real-time visibility into your function logs is important because any delay between sending your Lambda request and its execution adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum and minimum execution time, and all invocations by function.
The Axiom Lambda Extension gives you full visibility into the most important metrics and logs coming from your Lambda function out of the box without any further configuration required.
## Track cold start on your Lambda function
A cold start occurs when there’s a delay between your invocation and the creation of the runtime during the initialization process. During this period, there’s no available function instance to respond to an invocation. With the built-in Axiom Serverless AWS Lambda dashboard, you can track and see the effect of cold starts on each of your Lambda functions. This data lets you know when to take actionable steps, such as using provisioned concurrency or reducing function dependencies.
## Optimize slow-performing Lambda queries
Grouping logs with Lambda invocations and execution time by function provides insights into your events request and response pattern. You can extend your query to view when an invocation request is rejected and configure alerts to be notified on Serverless log patterns and Lambda function payloads. With the invocation request dashboard, you can monitor request function logs and see how your Lambda serverless functions process your events and Lambda queues over time.
## Detect timeout on your Lambda function
Axiom Lambda function monitors let you identify the different points of invocation failures, cold-start delays, and AWS Lambda errors on your Lambda functions. With standard function logs like invocations by function and Lambda cold starts, monitoring your execution time can alert you to significant spikes whenever an error occurs in your Lambda function.
## Smart filters
Axiom Lambda Serverless Smart Filters lets you easily filter down to specific AWS Lambda functions or Serverless projects and use saved queries to get deep insights on how functions are performing with a single click.
# Connect Axiom with Netlify
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This app gives you a better understanding of how your Jamstack apps are performing.
Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This integration will give you a better understanding of how your Jamstack apps are performing.
You can easily monitor logs and metrics related to your website traffic, serverless functions, and app requests. The integration is easy to set up, and you don’t need to configure anything to get started.
With Axiom’s Zero-Config Observability app, you can see all your metrics in real-time, without sampling. That means you can get a complete view of your app’s performance without any gaps in data.
Axiom’s Netlify app is complete with a pre-built dashboard that gives you control over your Jamstack projects. You can use this dashboard to track key metrics and make informed decisions about your app’s performance.
Overall, the Axiom Netlify app makes it easy to monitor and optimize your Jamstack apps. However, note that this integration is only available for Netlify customers on enterprise-level plans where [Log Drains are supported](https://docs.netlify.com/monitor-sites/log-drains/).
## What is Netlify
Netlify is a platform for building highly-performant and dynamic websites, e-commerce stores, and web apps. Netlify automatically builds your site and deploys it across its global edge network.
The Netlify platform provides teams everything they need to take modern web projects from the first preview to full production.
## Sending logs to Axiom
The log events sent to Axiom give you better insight into the state of your Netlify sites so that you can easily monitor traffic volume, website configurations, function logs, resource usage, and more.
1. Log in to your [Axiom account](https://app.axiom.co/), click **Apps** from the **Settings** menu, select the **Netlify app**, and click **Install now**.
* It’ll redirect you to Netlify to authorize Axiom.
* Click **Authorize**, and then copy the integration token.
2. Log into your **Netlify Team Account**, click on your site settings and select **Log Drains**.
* In your log drain service, select **Axiom**, paste the integration token from Step 1, and then click **Connect**.
## App overview
### Traffic and function Logs
With Axiom, you can instrument and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use our pre-built Netlify dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs are produced when users collaborate and interact with your sites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset.
You can also drill down to your site source with our advanced query language and fork our dashboard to start building your own site monitors.
* Back in your Axiom datasets console, you see all your traffic and function logs in your `netlify` dataset.
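For example, a quick sketch that charts how many Netlify events arrive over time (field names vary by log type, so this only counts events in the `netlify` dataset):
```kusto
// the Netlify app ingests traffic and function logs into the 'netlify' dataset
['netlify']
| summarize count() by bin_auto(_time)
```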
### Live stream logs
Stream your sites and app logs live, and filter them to see important information.
### Zero-config dashboard for your Netlify sites
Use our pre-built Netlify dashboard to get an overview of all the important metrics. When ready, you can fork our dashboard and start building your own!
## Start logging Netlify Sites today
The Axiom Netlify integration allows you to monitor and log all of your sites and apps in one place. With the Axiom app, you can quickly detect site errors and get high-level insights into your Netlify projects.
* We welcome ideas, feedback, and collaboration. Join us in our [Discord Community](http://axiom.co/discord) to share them with us.
# Connect Axiom with Tailscale
This page explains how to integrate Axiom with Tailscale.
Tailscale is a secure networking solution that allows you to create and manage a private network (tailnet), securely connecting all your devices.
Integrating Axiom with Tailscale allows you to stream your audit and network flow logs directly to Axiom seamlessly, unlocking powerful insights and analysis. Whether you’re conducting a security audit, optimizing performance, or ensuring compliance, Axiom’s Tailscale dashboard equips you with the tools to maintain a secure and efficient network, respond quickly to potential issues, and make informed decisions about your network configuration and usage.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
{/* list separator */}
* [Create a Tailscale account](https://login.tailscale.com/start).
## Setup
1. In Tailscale, go to the [configuration logs page](https://login.tailscale.com/admin/logs) of the admin console.
2. Add Axiom as a configuration log streaming destination in Tailscale. For more information, see the [Tailscale documentation](https://tailscale.com/kb/1255/log-streaming?q=stream#add-a-configuration-log-streaming-destination).
## Tailscale dashboard
Axiom displays the data it receives in a pre-built Tailscale dashboard that delivers immediate, actionable insights into your tailnet’s activity and health.
This comprehensive overview includes:
* **Log type distribution**: Understand the balance between configuration audit logs and network flow logs over time.
* **Top actions and hosts**: Identify the most common network actions and most active devices.
* **Traffic visualization**: View physical, virtual, and exit traffic patterns for both sources and destinations.
* **User activity tracking**: Monitor actions by user display name, email, and ID for security audits and compliance.
* **Configuration log stream**: Access a detailed audit trail of all configuration changes.
With these insights, you can:
* Quickly identify unusual network activity or traffic patterns.
* Track configuration changes and user actions.
* Monitor overall network health and performance.
* Investigate specific events or users as needed.
* Understand traffic distribution across your tailnet.
# Connect Axiom with Terraform
Provision and manage Axiom resources such as datasets and monitors with Terraform.
Axiom Terraform Provider lets you provision and manage Axiom resources (datasets, notifiers, monitors, and users) with Terraform. This means that you can programmatically create resources, access existing ones, and perform further infrastructure automation tasks.
Install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest). To see the provider in action, check out the [example](https://github.com/axiomhq/terraform-provider-axiom/blob/main/example/main.tf).
This guide explains how to install the provider and perform some common procedures such as creating new resources and accessing existing ones. For the full API reference, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/docs).
## Prerequisites
* [Sign up for a free Axiom account](https://app.axiom.co/register). All you need is an email address.
* [Create an advanced API token in Axiom](/reference/tokens#create-advanced-api-token) with the permissions to perform the actions you want to use Terraform for. For example, to use Terraform to create and update datasets, create the advanced API token with these permissions.
* [Create a Terraform account](https://app.terraform.io/signup/account).
* [Install the Terraform CLI](https://developer.hashicorp.com/terraform/cli).
## Install the provider
To install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest), follow these steps:
1. Add the following code to your Terraform configuration file. Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
```hcl
terraform {
  required_providers {
    axiom = {
      source = "axiomhq/axiom"
    }
  }
}

provider "axiom" {
  api_token = "API_TOKEN"
}
```
2. In your terminal, go to the folder of your main Terraform configuration file, and then run the command `terraform init`.
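If you prefer not to hard-code the token, one option is to pass it in through a Terraform variable. The following is a minimal sketch using standard Terraform mechanisms; the variable name `axiom_api_token` is arbitrary:

```hcl
# The variable name is arbitrary; mark it sensitive so Terraform redacts it in output.
variable "axiom_api_token" {
  type      = string
  sensitive = true
}

provider "axiom" {
  api_token = var.axiom_api_token
}
```

You can then supply the value at run time, for example with `export TF_VAR_axiom_api_token=API_TOKEN`.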
## Create new resources
### Create dataset
To create a dataset in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name` and `description` fields.
```hcl
resource "axiom_dataset" "test_dataset" {
  name        = "test_dataset"
  description = "This is a test dataset created by Terraform."
}
```
### Create notifier
To create a Slack notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `SLACK_URL` with the webhook URL from your Slack instance. For more information on obtaining this URL, see the [Slack documentation](https://api.slack.com/messaging/webhooks).
```hcl
resource "axiom_notifier" "test_slack_notifier" {
  name = "test_slack_notifier"
  properties = {
    slack = {
      slack_url = "SLACK_URL"
    }
  }
}
```
To create a Discord notifier in Axiom using the provider, add the following code to your Terraform configuration file.
* Replace `DISCORD_CHANNEL` with the webhook URL from your Discord instance. For more information on obtaining this URL, see the [Discord documentation](https://discord.com/developers/resources/webhook).
* Replace `DISCORD_TOKEN` with your Discord API token. For more information on obtaining this token, see the [Discord documentation](https://discord.com/developers/topics/oauth2).
```hcl
resource "axiom_notifier" "test_discord_notifier" {
  name = "test_discord_notifier"
  properties = {
    discord = {
      discord_channel = "DISCORD_CHANNEL"
      discord_token   = "DISCORD_TOKEN"
    }
  }
}
```
To create an email notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `EMAIL1` and `EMAIL2` with the email addresses you want to notify.
```hcl
resource "axiom_notifier" "test_email_notifier" {
  name = "test_email_notifier"
  properties = {
    email = {
      emails = ["EMAIL1", "EMAIL2"]
    }
  }
}
```
For more information on the types of notifier you can create, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/notifier).
### Create monitor
To create a monitor in Axiom using the provider, add the following code to your Terraform configuration file and customize it:
```hcl
resource "axiom_monitor" "test_monitor" {
  depends_on = [axiom_dataset.test_dataset, axiom_notifier.test_slack_notifier]

  name             = "test_monitor"
  description      = "This is a test monitor created by Terraform."
  apl_query        = "['test_dataset'] | summarize count() by bin_auto(_time)"
  interval_minutes = 5
  operator         = "Above"
  range_minutes    = 5
  threshold        = 1
  notifier_ids = [
    axiom_notifier.test_slack_notifier.id
  ]
  alert_on_no_data = false
  notify_by_group  = false
}
```
This example creates a monitor using the dataset `test_dataset` and the notifier `test_slack_notifier`. These are resources you have created and accessed in the sections above.
* Customize the `name` and the `description` fields.
* In the `apl_query` field, specify the APL query for the monitor.
For more information on these fields, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/monitor).
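The `notifier_ids` field accepts a list, so you can route alerts to several notifiers at once. For example, reusing the Slack and email notifiers created above, a fragment to place inside the monitor resource might look like this:

```hcl
  notifier_ids = [
    axiom_notifier.test_slack_notifier.id,
    axiom_notifier.test_email_notifier.id,
  ]
```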
### Create user
To create a user in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name`, `email`, and `role` fields.
```hcl
resource "axiom_user" "test_user" {
  name  = "test_user"
  email = "test@abc.com"
  role  = "user"
}
```
## Access existing resources
### Access existing dataset
To access an existing dataset, follow these steps:
1. Determine the ID of the Axiom dataset by sending a GET request to the [`datasets` endpoint of the Axiom API](/restapi/endpoints/getDatasets).
2. Add the following code to your Terraform configuration file. Replace `DATASET_ID` with the ID of the Axiom dataset.
```hcl
data "axiom_dataset" "test_dataset" {
  id = "DATASET_ID"
}
```
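After declaring the data source, you can reference its attributes from other resources in the usual Terraform way. The following sketch points a monitor at the existing dataset; it assumes the data source exposes a `name` attribute and reuses the notifier from the earlier example:

```hcl
resource "axiom_monitor" "existing_dataset_monitor" {
  name        = "existing_dataset_monitor"
  description = "Monitor that queries the existing dataset."
  # Interpolate the existing dataset's name into the APL query (assumes a `name` attribute).
  apl_query   = "['${data.axiom_dataset.test_dataset.name}'] | summarize count() by bin_auto(_time)"

  interval_minutes = 5
  operator         = "Above"
  range_minutes    = 5
  threshold        = 1
  notifier_ids     = [axiom_notifier.test_slack_notifier.id]
}
```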
### Access existing notifier
To access an existing notifier, follow these steps:
1. Determine the ID of the Axiom notifier by sending a GET request to the `notifiers` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `NOTIFIER_ID` with the ID of the Axiom notifier.
```hcl
data "axiom_dataset" "test_slack_notifier" {
id = "NOTIFIER_ID"
}
```
### Access existing monitor
To access an existing monitor, follow these steps:
1. Determine the ID of the Axiom monitor by sending a GET request to the `monitors` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `MONITOR_ID` with the ID of the Axiom monitor.
```hcl
data "axiom_monitor" "test_monitor" {
  id = "MONITOR_ID"
}
```
### Access existing user
To access an existing user, follow these steps:
1. Determine the ID of the Axiom user by sending a GET request to the `users` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `USER_ID` with the ID of the Axiom user.
```hcl
data "axiom_user" "test_user" {
  id = "USER_ID"
}
```
# Connect Axiom with Vercel
Easily monitor data from requests, functions, and web vitals in one place to get the deepest observability experience for your Vercel projects.
Connect Axiom with Vercel to get the deepest observability experience for your Vercel projects.
Easily monitor data from requests, functions, and web vitals in one place. 100% live and 100% of your data, no sampling.
Axiom’s Vercel app ships with a pre-built dashboard and pre-installed monitors so you can be in complete control of your projects with minimal effort.
If you use Axiom Vercel integration, [annotations](/query-data/annotate-charts) are automatically created for deployments.
## What is Vercel?
Vercel is a platform for frontend frameworks and static sites, built to integrate with your headless content, commerce, or database.
Vercel provides a frictionless developer experience to take care of the hard things: deploying instantly, scaling automatically, and serving personalized content around the globe.
Vercel makes it easy for frontend teams to develop, preview, and ship delightful user experiences, where performance is the default.
## Send logs to Axiom
Install the [Axiom Vercel app](https://vercel.com/integrations/axiom) and start streaming logs and web vitals within minutes.
## App Overview
### Request and function logs
For both requests and serverless functions, Axiom automatically installs a [log drain](https://vercel.com/blog/log-drains) in your Vercel account to capture data live.
As users interact with your website, various logs are produced. Axiom captures all these logs and ingests them into the `vercel` dataset. You can stream and analyze these logs live, or use our pre-built Vercel Dashboard to get an overview of all the important metrics. When you’re ready, you can fork our dashboard and start building your own!
For function logs, if you call `console.log`, `console.warn` or `console.error` in your function, the output will also be captured and made available as part of the log. You can use our extended query language, APL, to easily search these logs.
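For example, the following APL sketch streams only function logs whose message mentions an error. The `message` field is the one described later on this page:

```kusto
['vercel']
| where message contains 'error'
| project _time, message
```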
## Web vitals
Axiom supports capturing and analyzing Web Vital data directly from your user’s browser without any sampling and with more data than is available elsewhere. It is perfect to pair with Vercel’s in-built analytics when you want to get really deep into a specific problem or debug issues with a specific audience (user-agent, location, region, etc).
Web Vitals are only currently supported for Next.js websites. Expanded support is coming soon.
### Installation
Perform the following steps to install Web Vitals:
1. In your Vercel project, run `npm install --save next-axiom`.
2. In `next.config.js`, wrap your NextJS config in `withAxiom` as follows:
```js
const { withAxiom } = require('next-axiom');

module.exports = withAxiom({
  // ... your existing config
});
```
This will proxy the Axiom ingest call to improve deliverability.
3. For Web Vitals, navigate to `app/layout.tsx` and add the `AxiomWebVitals` component:
```js
import { AxiomWebVitals } from 'next-axiom';

export default function RootLayout() {
  return (
    ...
    <AxiomWebVitals />
    ...
  );
}
```
WebVitals are sent only from production deployments.
4. Deploy your site and watch data coming into your Axiom dashboard.
* To send logs from different parts of your app, make use of the provided logging functions. For example:
```js
log.info('Payment completed', { userID: '123', amount: '25USD' });
```
### Client Components
For Client Components, replace the `log` prop usage with the `useLogger` hook:
```js
'use client';
import { useLogger } from 'next-axiom';

export default function ClientComponent() {
  const log = useLogger();
  log.debug('User logged in', { userId: 42 });
  return <p>Logged in</p>;
}
```
### Server Components
For Server Components, create a logger and make sure to call flush before returning:
```js
import { Logger } from 'next-axiom';

export default async function ServerComponent() {
  const log = new Logger();
  log.info('User logged in', { userId: 42 });
  // ... other operations ...
  await log.flush();
  return <p>Logged in</p>;
}
```
### Route Handlers
For Route Handlers, wrapping your Route Handlers in `withAxiom` will add a logger to your request and automatically log exceptions:
```js
import { NextResponse } from 'next/server';
import { withAxiom, AxiomRequest } from 'next-axiom';

export const GET = withAxiom((req: AxiomRequest) => {
  req.log.info('Login function called');

  // You can create intermediate loggers
  const log = req.log.with({ scope: 'user' });
  log.info('User logged in', { userId: 42 });

  return NextResponse.json({ hello: 'world' });
});
```
## Use Next.js 12 for Web Vitals
If you’re using Next.js version 12, follow the instructions below to integrate Axiom for logging and capturing Web Vitals data.
In your `pages/_app.js` or `pages/_app.ts`, add the following line:
```js
export { reportWebVitals } from 'next-axiom';
```
## Upgrade to Next.js 13 from Next.js 12
If you plan on upgrading to Next.js 13, you'll need to make specific changes to ensure compatibility:
* Upgrade the next-axiom package to version `1.0.0` or higher.
* Make sure any exported variables have the `NEXT_PUBLIC_` prefix, for example, `NEXT_PUBLIC_AXIOM_TOKEN`.
* In client components, use the `useLogger` hook instead of the `log` prop.
* For server-side components, you need to create an instance of the `Logger` and flush the logs before the component returns.
* For Web Vitals tracking, you'll replace the previous method of capturing data. Remove the `reportWebVitals()` line and instead integrate the `AxiomWebVitals` component into your layout.
## Vercel Function logs 4KB limit
The Vercel 4KB log limit refers to a restriction Vercel places on the size of log output generated by serverless functions running on its platform. It means that each log entry produced by your function can be at most 4 kilobytes in size.
If your log output is larger than 4KB, you might experience truncation or missing logs. To log above this limit, you can send your function logs using [next-axiom](https://github.com/axiomhq/next-axiom).
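As a minimal sketch of that approach (the function and field names are illustrative, reusing the `Logger` shown earlier on this page):

```js
import { Logger } from 'next-axiom';

export async function generateLargeReport() {
  const log = new Logger();
  // Structured fields are ingested by next-axiom directly,
  // so large payloads don't pass through the 4KB log drain limit.
  log.info('report generated', { rowCount: 120000, source: 'cron' });
  await log.flush();
}
```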
## Parse JSON on the message field
If you use a logging library in your Vercel project that prints JSON, your **message** field will contain a stringified and therefore escaped JSON object.
* If your Vercel logs are encoded as JSON, they will look like this:
```json
{
  "level": "error",
  "message": "{ \"message\": \"user signed in\", \"metadata\": { \"userId\": 2234, \"signInType\": \"sso-google\" }}",
  "request": {
    "host": "www.axiom.co",
    "id": "iad1:iad1::sgh2r-1655985890301-f7025aa764a9",
    "ip": "199.16.157.13",
    "method": "GET",
    "path": "/sign-in/google",
    "scheme": "https",
    "statusCode": 500,
    "teamName": "AxiomHQ"
  },
  "vercel": {
    "deploymentId": "dpl_7UcdgdgNsdgbcPY3Lg6RoXPfA6xbo8",
    "deploymentURL": "axiom-bdsgvweie6au-axiomhq.vercel.app",
    "projectId": "prj_TxvF2SOZdgdgwJ2OBLnZH2QVw7f1Ih7",
    "projectName": "axiom-co",
    "region": "iad1",
    "route": "/signin/[id]",
    "source": "lambda-log"
  }
}
```
* The **JSON** data in your **message** would be:
```json
{
  "message": "user signed in",
  "metadata": {
    "userId": 2234,
    "signInType": "sso-google"
  }
}
```
You can **parse** the JSON using the [parse\_json function](/apl/scalar-functions/string-functions#parse-json\(\)) and run queries against the **values** in the **message** field.
### Example
```kusto
['vercel']
| extend parsed = parse_json(message)
```
* You can select the field to **insert** into new columns using the [project operator](/apl/tabular-operators/project-operator):
```kusto
['vercel']
| extend parsed = parse_json('{"message":"user signed in", "metadata": { "userId": 2234, "signInType": "sso-google" }}')
| project parsed["message"]
```
### More Examples
* If you have **null values** in your data, you can use the **isnotnull()** function:
```kusto
['vercel']
| extend parsed = parse_json(message)
| where isnotnull(parsed)
| summarize count() by parsed["message"], parsed["metadata"]["userId"]
```
* Check out our [APL Documentation on how to use more functions](/apl/scalar-functions/string-functions) and run your own queries against your Vercel logs.
## Migrate from Vercel app to next-axiom
In May 2024, Vercel [introduced higher costs](https://axiom.co/blog/changes-to-vercel-log-drains) for using Vercel Log Drains. Because the Axiom Vercel app depends on Log Drains, using the next-axiom library can be the cheaper option to analyze telemetry data for higher volume projects.
To migrate from the Axiom Vercel app to the next-axiom library, follow these steps:
1. Delete the existing log drain from your Vercel project.
2. Delete `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT` from the environment variables of your Vercel project. For more information, see the [Vercel documentation](https://vercel.com/projects/environment-variables).
3. [Create a new dataset in Axiom](/reference/datasets), and [create a new advanced API token](/reference/tokens) with ingest permissions for that dataset.
4. Add the following environment variables to your Vercel project:
* `NEXT_PUBLIC_AXIOM_DATASET` is the name of the Axiom dataset where you want to send data.
* `NEXT_PUBLIC_AXIOM_TOKEN` is the Axiom API token you have generated.
5. In your terminal, go to the root folder of your Next.js app, and then run `npm install --save next-axiom` to install the latest version of next-axiom.
6. In the `next.config.ts` file, wrap your Next.js configuration in `withAxiom`:
```js
const { withAxiom } = require('next-axiom');
module.exports = withAxiom({
  // Your existing configuration
});
```
For more configuration options, see the [documentation in the next-axiom GitHub repository](https://github.com/axiomhq/next-axiom).
## Send logs from Vercel preview deployments
To send logs from Vercel preview deployments to Axiom, enable preview deployments for the environment variable `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT`. For more information, see the [Vercel documentation](https://vercel.com/docs/projects/environment-variables/managing-environment-variables).
# Configure dashboard elements
This section explains how to configure dashboard elements.
When you create a chart, you can configure the following options.
## Values
Specify how to treat missing or undefined values:
* **Auto:** This option automatically decides the best way to represent missing or undefined values in the data series based on the chart type and the rest of the data.
* **Ignore:** This option ignores any missing or undefined values in the data series. This means that the chart only displays the known, defined values.
* **Join adjacent values:** This option connects adjacent data points in the data series, effectively filling in any gaps caused by missing values. The benefit of joining adjacent values is that it can provide a smoother, more continuous visualization of your data.
* **Fill with zeros:** This option replaces any missing or undefined values in the data series with zero. This can be useful if you want to emphasize that the data is missing or undefined, as it causes a drop to zero in your chart.
## Variant
Specify the chart type.
**Area:** An area chart displays the area between the data line and the axes, often filled with a color or pattern. Stacked charts provide the capability to design and implement intricate query dashboards while integrating advanced visualizations, enriching your logging experience over time.
**Bars:** A bar chart represents data in rectangular bars. The length of each bar is proportional to the value it represents. Bar charts can be used to compare discrete quantities, or when you have categorical data.
**Line:** A line chart connects individual data points into a continuous line, which is useful for showing logs over time. Line charts are often used for time series data.
## Y-Axis
Specify the scale of the vertical axis.
**Linear:** A linear scale maintains a consistent scale where equal distances represent equal changes in value. This is the most common scale type and is useful for most types of data.
**Log:** A logarithmic (or log) scale represents values in terms of their order of magnitude. Each unit of distance on a log scale represents a tenfold increase in value. Log scales make it easier to spot outliers such as backend errors and to compare values across a wide range.
## Annotations
Specify the types of annotations to display in the chart:
* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display
# Create dashboard elements
This section explains how to create dashboard elements.
To create new dashboard elements:
1. [Create a dashboard](/dashboards/create) or open an existing dashboard.
2. Click **Add element** in the top right corner.
3. Choose the dashboard element from the list.
4. For charts, select one of the following:
* Click **Simple Query Builder** to create your chart using a [visual query builder](#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure chart options](/dashboard-elements/configure).
6. Optional: Set a custom time range that is different from the dashboard’s time range.
7. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Create chart using visual query builder
Use the query builder to create or edit queries for the selected dataset:
This component is a visual query builder that eases the process of building visualizations and segments of your data.
This guide walks you through the individual sections of the query builder.
### Time range
Every query has a start and end time and the time range component allows quick selection of common time ranges as well as the ability to input specific start and end timestamps:
* Use the **Quick Range** items to quickly select popular ranges
* Use the **Custom Start/End Date** inputs to select specific times
* Use the **Resolution** items to choose between various time bucket resolutions
### Against
When a time series visualization is selected, such as `count`, the **Against** menu is enabled and it’s possible to select a historical time against which to compare the results of your time range.
For example, to compare the last hour’s average response time to the same time yesterday, select `1 hr` in the time range menu, and then select `-1D` from the **Against** menu:
The results look like this:
The dotted line represents results from the base date, and the totals table includes the comparative totals.
When you add a field to the `group by` clause, the **time range against** values are attached to each event.
### Visualizations
Axiom provides powerful visualizations that display the output of running aggregate functions across your dataset. The Visualization menu allows you to add these visualizations and, where required, input their arguments:
You can select a visualization to add it to the query. If a visualization requires an argument (such as the field and/or other parameters), the menu allows you to select eligible fields and input those arguments. Press `Enter` to complete the addition:
Click Visualization in the query builder to edit it at any time.
[Learn about supported visualizations](/query-data/visualizations)
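The same aggregate functions are also available when you build charts in APL. For example, the following sketch (using the `sample-http-logs` dataset referenced elsewhere in these docs) produces a time series of request counts, average durations, and 95th-percentile durations:

```kusto
['sample-http-logs']
| summarize count(), avg(req_duration_ms), percentile(req_duration_ms, 95) by bin_auto(_time)
```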
### Filters
Use the filter menu to attach filter clauses to your search.
Axiom supports AND/OR operators at the top-level as well as one level deep. This means you can create filters that would read as `status == 200 AND (method == get OR method == head) AND (user-agent contains Mozilla or user-agent contains Webkit)`.
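Expressed in APL, that filter might look like the following sketch. The `status` and `method` fields appear in the sample dataset used elsewhere in these docs; the `user_agent` field name and value types are assumptions and depend on your data:

```kusto
['sample-http-logs']
| where status == '200'
    and (method == 'GET' or method == 'HEAD')
    and (user_agent contains 'Mozilla' or user_agent contains 'Webkit')
```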
Filters are divided up by the field type they operate on, but some may apply to more than one field type.
#### List of filters
*String Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `starts-with`
* `not-starts-with`
* `ends-with`
* `not-ends-with`
* `contains`
* `not-contains`
* `regexp`
* `not-regexp`
*Number Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
* `>`
* `>=`
* `<`
* `<=`
*Boolean Fields*
* `==`
* `!=`
* `exists`
* `not-exists`
*Array Fields*
* `contains`
* `not-contains`
* `exists`
* `not-exists`
#### Special fields
Axiom creates the following two fields automatically for a new dataset:
* `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
* `_sysTime` is the time when you ingested the data.
In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.
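For example, the following sketch compares the two fields to surface ingest delay. It assumes datetime subtraction yields a timespan, as in Kusto:

```kusto
['sample-http-logs']
| extend ingest_delay = _sysTime - _time
| summarize avg(ingest_delay) by bin_auto(_time)
```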
### Group by (segmentation)
When visualizing data, it can be useful to segment data into specific groups to more clearly understand how the data behaves.
The Group By component enables you to add one or more fields to group events by:
### Other options
#### Order
By default, Axiom automatically chooses the best ordering for results. However, you can manually set the desired order through this menu.
#### Limit
By default, Axiom chooses a reasonable limit for the query that has been passed in. However, you can control that limit manually through this component.
## Change element’s position
To change an element’s position on the dashboard, drag the title bar of the chart.
## Change element size
To change the size of the element, drag the bottom-right corner.
## Set custom time range
You can set a custom time range for individual dashboard elements that is different from the dashboard’s time range. For example, the dashboard displays data about the last 30 minutes but individual dashboard elements display data for different time ranges. This can be useful for visualizing the same chart or statistic for different time periods, among others.
To set a custom time range for a dashboard element:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the top right above the chart, click the time range.
3. Click **Custom**.
4. Choose one of the following options:
* Use the **Quick range** items to quickly select popular time ranges.
* Use the **Custom start/end date** fields to select specific times.
5. Click **Save**.
Axiom displays the new time range in the top left of the dashboard element.
### Set custom time range in APL
To set a custom time range for dashboard elements created with APL, you can use the [procedure above](#set-custom-time-range) or define the time range in the APL query:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the APL query, specify the custom time range using the [where](/apl/tabular-operators/where-operator) operator. For example:
```kusto
| where _time > now(-6h)
```
3. Click **Run query** to preview the result.
4. Click **Save**.
Axiom displays an indicator in the top left of the dashboard element to show that its time range is defined in the APL query and might be different from the dashboard’s time range.
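The `now(-6h)` example above uses a relative range. To pin the element to an absolute window instead, you can filter on explicit datetimes. This is a sketch; the dataset and dates are placeholders:

```kusto
['sample-http-logs']
| where _time >= datetime(2024-01-01) and _time < datetime(2024-01-08)
| summarize count() by bin_auto(_time)
```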
## Set custom comparison period
You can set a custom comparison time period for individual dashboard elements that is different from the dashboard’s. For example, the dashboard compares against data from yesterday but individual dashboard elements display data for different comparison periods.
To set a custom comparison period for a dashboard element:
1. In the top right of the dashboard element, click **More >** **Edit**.
2. In the top right above the chart, click **Compare period**.
3. Click **Custom**.
4. Choose one of the following options:
* Use the **Quick range** items to quickly select popular comparison periods.
* Use the **Custom time** field to select specific comparison periods.
5. Click **Save**.
Axiom displays the new comparison period in the top left of the dashboard element.
# Heatmap
This section explains how to create heatmap dashboard elements and add them to your dashboard.
export const elementName_0 = "heatmap"
export const elementButtonLabel_0 = "Heatmap"
Heatmaps represent the distribution of numerical data by grouping values into ranges or buckets. Each bucket reflects a frequency count of data points that fall within its range. Instead of showing individual events or measurements, heatmaps give a clear view of the overall distribution patterns. This allows you to identify performance bottlenecks, outliers, or shifts in behavior. For instance, you can use heatmaps to track response times, latency, or error rates.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/ingest) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click **Add element** in the top right corner.
3. Click **{elementButtonLabel_0}** from the list.
4. Choose one of the following:
* Click **Simple Query Builder** to create your chart using a visual query builder. For more information, see [Create chart using visual query builder](/dashboard-elements/create#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure the dashboard element](/dashboard-elements/configure).
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['http-logs']
| summarize histogram(req_duration_ms, 15) by bin_auto(_time)
```
# Log stream
This section explains how to create log stream dashboard elements and add them to your dashboard.
The log stream dashboard element displays your logs as they come in real-time. Each log appears as a separate line with various details. The benefit of a log stream is that it provides immediate visibility into your system’s operations. When you’re debugging an issue or trying to understand an ongoing event, the log stream allows you to see exactly what’s happening as it occurs.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| project method, status, content_type
```
# Monitor list
This section explains how to create monitor list dashboard elements and add them to your dashboard.
The monitor list dashboard element provides a visual overview of the monitors you specify. It offers a quick glance into important developments about the monitors such as their status and history.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
{/* list separator */}
* [Create a monitor](/monitor-data/monitors).
## Create monitor list
1. Go to the Dashboards tab and open the dashboard to which you want to add the monitor list.
2. Click **Add element** in the top right corner.
3. Click **Monitor list** from the list.
4. In **Columns**, select the type of information you want to display for each monitor:
* **Status** displays if the monitor state is normal, triggered, or disabled.
* **History** provides a visual overview of the recent runs of the monitor. Green squares mean normal operation and red squares mean triggered state.
* **Dataset** is the name of the dataset on which the monitor operates.
* **Type** is the type of the monitor.
* **Notifiers** displays the notifiers connected to the monitor.
5. From the list, select the monitors you want to display on the dashboard.
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
# Note
This section explains how to create note dashboard elements and add them to your dashboard.
The note dashboard element adds a textbox to your dashboard that you can customise to your needs. For example, you can provide context in a note about the other dashboard elements.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create an empty dashboard](/dashboards/create).
## Create note
1. Go to the Dashboards tab and open the dashboard to which you want to add the note.
2. Click **Add element** in the top right corner.
3. Click **Note** from the list.
4. Enter your text on the left in [GitHub Flavored Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) format. You see the preview of the note dashboard element on the right.
5. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
# Dashboard elements
This section explains how to create different dashboard elements and add them to your dashboard.
Dashboard elements are the different visual elements that you can include in your dashboard to display your data and other information. For example, you can track key metrics, logs, and traces, and monitor real-time data flow.
Choose one of the following to learn more about a dashboard element:
# Pie chart
This section explains how to create pie chart dashboard elements and add them to your dashboard.
export const elementName_0 = "pie chart"
export const elementButtonLabel_0 = "Pie"
Pie charts can illustrate the distribution of different types of event data. Each slice represents the proportion of a specific value relative to the total. For example, a pie chart can show the breakdown of status codes in HTTP logs. This helps quickly identify the dominant types of status responses and assess the system’s health at a glance.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Send data](/send-data/ingest) to your Axiom dataset.
* [Create an empty dashboard](/dashboards/create).
## Create {elementName_0}
1. Go to the Dashboards tab and open the dashboard to which you want to add the {elementName_0}.
2. Click **Add element** in the top right corner.
3. Click **{elementButtonLabel_0}** from the list.
4. Choose one of the following:
* Click **Simple Query Builder** to create your chart using a visual query builder. For more information, see [Create chart using visual query builder](/dashboard-elements/create#create-chart-using-visual-query-builder).
* Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL). Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure the dashboard element](/dashboard-elements/configure).
6. Click **Save**.
The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['http-logs']
| summarize count() by status
```
# Scatter plot
This section explains how to create scatter plot dashboard elements and add them to your dashboard.
Scatter plots are used to visualize the correlation or distribution between two distinct metrics or logs. Each point in the scatter plot could represent a log entry, with the X and Y axes showing different log attributes (like request time and response size). The scatter plot chart can be created using the simple query builder or advanced query builder.
For example, plot response size against response time for an API to see if larger responses are correlated with slower response times.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize avg(req_duration_ms), avg(resp_header_size_bytes) by resp_body_size_bytes
```
# Statistic
This section explains how to create statistic dashboard elements and add them to your dashboard.
Statistics dashboard elements display a summary of the selected metrics over a given time period. For example, you can use a statistic dashboard element to show the average, sum, min, max, and count of response times or error counts.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize avg(resp_body_size_bytes)
```
# Table
This section explains how to create table dashboard elements and add them to your dashboard.
The table dashboard element displays a summary of any attributes from your metrics, logs, or traces in a sortable table format. Each row in the table could represent a different service, host, or other entity, with columns showing various attributes or metrics for that entity.
## Example with Simple Query Builder
## Example with Advanced Query Language
With this option, the table chart can also display a non-aggregated view of events.
```kusto
['sample-http-logs']
| summarize avg(resp_body_size_bytes) by bin_auto(_time)
```
# Time series
This section explains how to create time series dashboard elements and add them to your dashboard.
Time series charts show the change in your data over time which can help identify infrastructure issues, spikes, or dips in the data. This can be a simple line chart, an area chart, or a bar chart. A time series chart might be used to show the change in the volume of log events, error rates, latency, or other time-sensitive data.
## Example with Simple Query Builder
## Example with Advanced Query Language
```kusto
['sample-http-logs']
| summarize count() by bin_auto(_time)
```
# Configure dashboards
This page explains how to configure your dashboards.
## Select time range
When you select the time range, you specify the time interval for which you want to display data in the dashboard. Changing the time range affects the data displayed in all dashboard elements.
To select the time range:
1. In the top right, click **Time range**.
2. Choose one of the following options:
* Use the **Quick range** items to quickly select popular time ranges.
* Use the **Custom start/end date** fields to select specific times.
## Share dashboards
To specify who can access a dashboard:
1. In the top right, click **Share**.
2. Select one of the following:
* Select **Just Me** to make the dashboard private. Only you can access the dashboard.
* Select a group in your Axiom organization. Only members of the selected group can access the dashboard. For more information about groups, see [Access](/reference/settings#access-overview).
* Select **Everyone** to make the dashboard accessible to all users in your Axiom organization.
3. At the bottom, click **Save** to save your changes to the dashboard.
The data that individual users see in the dashboard is determined by the datasets the users have access to. If a user has access to a dashboard but only to some of the datasets referenced in the dashboard’s charts, the user only sees data from the datasets they have access to.
## Control display of annotations
To specify the types of annotations to display in all dashboard elements:
1. In the top right, click **Annotations**.
2. Select one of the following:
* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display
3. At the bottom, click **Save** to save your changes to the dashboard.
## Set dashboard as homepage
To set a dashboard as the homepage of your browser, click **Set as homepage** in the top right.
## Enter full screen
Full-screen mode is useful for displaying the dashboard on a TV or shared monitor.
To enter full-screen mode, click **Full screen** in the top right.
# Create dashboards
This section explains how to create and delete dashboards.
To create a dashboard, choose one of the following:
* [Create an empty dashboard](#create-empty-dashboards).
* [Fork an existing dashboard](#fork-dashboards). This is how you make a copy of prebuilt integration dashboards that you cannot directly edit.
* [Duplicate an existing dashboard](#duplicate-dashboards). This is how you make a copy of dashboards other than prebuilt integration dashboards.
After creating a dashboard:
* [Add dashboard elements](/dashboard-elements/create). For example, add a table or a time series chart.
* [Configure the dashboard](/dashboards/configure). For example, control who can access the dashboard and change the time range.
## Create empty dashboards
1. Click the Dashboards tab.
2. In the top right corner, click **New dashboard**.
3. Add a name and a description.
4. Click **Create**.
## Fork dashboards
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Fork dashboard**.
Alternatively:
1. Open the dashboard.
2. Click **Fork dashboard** in the top right corner.
## Duplicate dashboards
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Duplicate dashboard**.
## Delete dashboard
1. Click the Dashboards tab.
2. Find the dashboard in the list.
3. Click **More**.
4. Click **Delete dashboard**.
5. Click **Delete**.
# Dashboards
This section introduces the Dashboards tab and explains how to create your first dashboard.
Dashboards provide a single view into your data.
Axiom provides a mature dashboards experience that allows you to visualize collections of queries across multiple datasets in one place.
Dashboards are easy to share, benefit from collaboration, and bring separate datasets together in a single view.
## Dashboards tab
The Dashboards tab lists the dashboards you have access to.
* The **Integrations** section lists prebuilt dashboards. Axiom automatically built these dashboards as part of the [apps that enrich your Axiom experience](/apps/introduction). The integration dashboards are read-only and you cannot edit them. To create a copy of an integration dashboard that you can edit, [fork the original dashboard](/dashboards/configure#fork-dashboards).
* The sections below list the private and shared dashboards you can access.
To open a dashboard, click a dashboard in the list.
## Work with dashboards
# Send data from Honeycomb to Axiom
Integrate Axiom in your existing Honeycomb stack with minimal effort and without breaking any of your existing Honeycomb workflows.
export const endpointName_0 = "Honeycomb"
This page explains how to send data from Honeycomb to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Honeycomb
In Honeycomb, specify the following environment variables:
* `APIKey` or `WriteKey` is your Honeycomb API token. For information, see the [Honeycomb documentation](https://docs.honeycomb.io/get-started/configure/environments/manage-api-keys/).
* `APIHost` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
* `Dataset` is the name of the Axiom dataset where you want to send data.
## Examples
### Send logs from Honeycomb using JavaScript
```js
const Libhoney = require('libhoney');

const hny = new Libhoney({
  writeKey: '',
  dataset: '',
  apiHost: '',
});

hny.sendNow({ message: 'Welcome to Axiom Endpoints!' });
```
### Send logs from Honeycomb using Python
```py
import libhoney
libhoney.init(writekey="", dataset="", api_host="")
event = libhoney.new_event()
event.add_field("foo", "bar")
event.add({"message": "Welcome, to Axiom Endpoints!"})
event.send()
```
### Send logs from Honeycomb using Golang
```go
package main

import (
	"github.com/honeycombio/libhoney-go"
)

func main() {
	libhoney.Init(libhoney.Config{
		WriteKey: "",
		Dataset:  "",
		APIHost:  "",
	})
	defer libhoney.Close() // Flush any pending calls to Honeycomb

	ev := libhoney.NewEvent()
	ev.Add(map[string]interface{}{
		"duration_ms":    155.67,
		"method":         "post",
		"hostname":       "endpoints",
		"payload_length": 43,
	})
	ev.Send()
}
```
# Send data from Loki to Axiom
Integrate Axiom in your existing Loki stack with minimal effort and without breaking any of your existing Loki workflows.
export const endpointName_0 = "Loki"
This page explains how to send data from Loki to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Loki
In Loki, specify the following environment variables:
* `host` or `url` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
* Optional: Use `labels` or `tags` to specify labels or tags for your app.
## Examples
### Send logs from Loki using JavaScript
```js
const { createLogger, transports, format } = require('winston');
const LokiTransport = require('winston-loki');

let logger;

const initializeLogger = () => {
  if (logger) {
    return;
  }

  logger = createLogger({
    transports: [
      new LokiTransport({
        host: '$LOKI_ENDPOINT_URL',
        labels: { app: 'axiom-loki-endpoint' },
        json: true,
        format: format.json(),
        replaceTimestamp: true,
        onConnectionError: (err) => console.error(err),
      }),
      new transports.Console({
        format: format.combine(format.simple(), format.colorize()),
      }),
    ],
  });
};

initializeLogger();
logger.info('Starting app...');
```
### Send logs from Loki using Python
```py
import logging
import logging_loki

# Create a handler
handler = logging_loki.LokiHandler(
    url='$LOKI_ENDPOINT_URL',
    tags={'app': 'axiom-loki-py-endpoint'},
    version='1',
)

# Create a logger and set its level so info messages are emitted
logger = logging.getLogger('loki')
logger.setLevel(logging.INFO)

# Add the handler to the logger
logger.addHandler(handler)

# Log some messages
logger.info('Hello, world from Python!')
logger.warning('This is a warning')
logger.error('This is an error')
```
# Send data from Splunk to Axiom
Integrate Axiom in your existing Splunk app with minimal effort and without breaking any of your existing Splunk stack.
export const endpointName_0 = "Splunk"
This page explains how to send data from Splunk to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Configure {endpointName_0} endpoint in Axiom
1. Click **Settings > Endpoints**.
2. Click **New endpoint**.
3. Click **{endpointName_0}**.
4. Name the endpoint.
5. Select the dataset where you want to send data.
6. Copy the URL displayed for the newly created endpoint. This is the target URL where you send the data.
## Configure Splunk
In Splunk, specify the following environment variables:
* `token` is your Splunk API token. For information, see the [Splunk documentation](https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/api-access-tokens.html).
* `url` or `host` is the target URL for the endpoint you have generated in Axiom by following the procedure above. For example, `https://opbizplsf8klnw.ingress.axiom.co`.
## Examples
### Send logs from Splunk using JavaScript
```js
var SplunkLogger = require('splunk-logging').Logger;

var config = {
  token: '$SPLUNK_TOKEN',
  url: '$AXIOM_ENDPOINT_URL',
};

var Logger = new SplunkLogger({
  token: config.token,
  url: config.url,
  host: '$AXIOM_ENDPOINT_URL',
});

var payload = {
  // Message can be anything; doesn’t have to be an object
  message: {
    temperature: '70F',
    chickenCount: 500,
  },
};

console.log('Sending payload', payload);

Logger.send(payload, function (err, resp, body) {
  // If successful, body will be { text: 'Success', code: 0 }
  console.log('Response from Splunk', body);
});
```
### Send logs from Splunk using Python
* Your Splunk deployment `port` and `index` values are required in your Python code.
```py
import logging
from splunk_handler import SplunkHandler

splunk = SplunkHandler(
    host='$AXIOM_SPLUNK_ENDPOINT_URL',
    port='8088',
    token='',
    index='main'
)

logging.getLogger('').addHandler(splunk)
logging.warning('Axiom endpoints!')
```
### Send logs from Splunk using Golang
```go
package main

import (
	"log"
	"time"

	"github.com/docker/docker/daemon/logger/splunk"
)

func main() {
	// Create new Splunk client
	client := splunk.NewClient(
		nil,
		"https://{$AXIOM_SPLUNK_ENDPOINT}:8088/services/collector",
		"{your-token}",
		"{your-source}",
		"{your-sourcetype}",
		"{your-index}",
	)

	err := client.Log(
		map[string]interface{}{"msg": "axiom endpoints", "msg2": "endpoints"},
	)
	if err != nil {
		log.Fatal(err)
	}

	err = client.LogWithTime(
		time.Now(),
		map[string]interface{}{"msg": "axiom endpoints", "msg2": "endpoints"},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```
# Frequently asked questions
Learn more about Axiom.
This page aims to offer a deeper understanding of Axiom. If you can’t find an answer to your questions, please feel free to [contact our team](https://axiom.co/contact).
## What is Axiom?
Axiom is a log management and analytics solution that reduces the cost and management overhead of logging as much data as you want.
With Axiom, organizations no longer need to choose between their data and their costs. Axiom has been built from the ground-up to allow for highly efficient data ingestion and storage, and then a zero-to-infinite query scaling that allows you to query all your data, all the time.
Organizations use Axiom for continuous monitoring and observability, as well as an event store for running analytics and deriving insights from all their event data.
Axiom consists of a datastore and a user-experience that work in tandem to provide a completely unique log-management and analytics experience.
## Can I run Axiom in my own cloud/infrastructure?
Axiom enables you to store data in your own storage with the Bring Your Own Bucket (BYOB) feature. You provide your own S3-compatible object storage, and the Axiom control plane handles ingest, query execution, and all other background tasks. This is not an on-premises solution, but it enables you to maintain control over your data at rest.
Using Axiom as a cloud SaaS product without the BYOB option is safe, affordable, and the best choice for most use cases. Billed month-to-month on our [Team plan](https://axiom.co/pricing) for ingest workloads up to 50TB/mo, and with no upper limit on our annual [Enterprise plan](https://axiom.co/pricing), Axiom supports tens of thousands of organizations today. However, if you are a large enterprise customer and your organization requires data sovereignty for compliance reasons or secondary workloads, using Axiom with the BYOB premium option is the answer.
Axiom BYOB is available exclusively on our annual [Enterprise plan](https://axiom.co/pricing).
## How is Axiom different than other logging solutions?
At Axiom, our goal is that no organization has to ignore or delete a single piece of data no matter its source: logs, events, frontend, backends, audits, etc.
We found that existing solutions would place restrictions on how much data can be collected either on purpose or as a side-effect of their architectures.
For example, state of the art in logging is running stateful clusters that need shared knowledge of ingestion and will use a mixture of local SSD-based storage and remote object storage.
### Side-effects of legacy vendors
1. There is a non-trivial cost in increasing your data ingest as clusters need to be scaled and more SSD storage and IOPs need to be provided
2. The choice needs to be made between hot and cold data, and also what is archived. Now your data is in 2-3 different places and queries can be fast or slow depending on where the data is
The end result is needing to carefully consider all data that is ingested, and putting limits and/or sampling to control the DevOps and cost burden.
### The ways Axiom is different
1. Decoupled ingest and querying pipelines
2. Stateless ingest pipeline that requires minimal compute/memory to store as much as 1.5TB/day per vCPU
3. Ingests all data into object storage, enabling the cheapest storage possible for all ingested data
4. Enables querying scale-out with cloud functions, requiring no constantly-running servers waiting for a query to be processed. Instead, enjoy zero-to-infinity querying instantly
### The benefits of Axiom’s approach
1. The most efficient ingestion pipeline for massive amounts of data
2. Store more data for less by exclusively using inexpensive object storage for all data
3. Query data that’s 10 milliseconds or 10 years old at any time
4. Reduce the total cost of ownership of your log management and analytics pipelines with simple scale and maintenance that Axiom provides
5. Free your organization to do more with its data
## How long can I retain data for with Axiom?
Axiom’s free forever [Personal plan](https://axiom.co/pricing) provides a generous 30 days of retention.
Axiom’s [Team plan](https://axiom.co/pricing) provides 95 days of retention, ensuring a complete picture of your data for over 3 months.
Retention on Axiom’s [Enterprise plan](https://axiom.co/pricing) can be customised to your needs, with the option for unlimited retention so your organization has access to all its data, all the time.
## Can I try Axiom for free?
Yes. Axiom’s [Personal plan](https://axiom.co/pricing) is free forever with a generous allowance, and is available to all customers.
With unlimited users included, Axiom’s [Team plan](https://axiom.co/pricing) starting at \$25/mo is a great choice for growing companies, and for Enterprise organizations who want to run a proof-of-concept.
## How is Axiom licensed?
Axiom’s [Team plan](https://axiom.co/pricing) is billed on a monthly basis.
Axiom’s [Enterprise plan](https://axiom.co/pricing) is billed on an annual basis, with license details tailored to your organization’s needs.
# Event data
This page explains the fundamentals of timestamped event data in Axiom.
Axiom’s mission is to operationalize every bit of event data in your organization.
Timestamped event data records every digital interaction between human, sensor, and machine, making it the atomic unit of activity for organizations. For this reason, every function in any business with digital activity can benefit from leveraging event data.
Each event is simply a structured record—composed of key-value pairs—that captures meaningful interactions or changes in state within a system. While these can appear in various forms, they usually contain the following:
* **Timestamp**: When the event occurred.
* **Attributes**: A set of key-value pairs offering details about the event context.
* **Metadata**: Contextual labels and IDs that connect related events.
## Uses of event data
Event data, understood as the atomic unit of digital activity, is the lifeblood of modern businesses. Leveraging the power of event data is essential in the following areas, among others:
* [Observability](/getting-started-guide/observability)
* Security
* Product analytics
* Business intelligence
* AI and machine learning
# Get started
This guide introduces you to the concepts behind working with Axiom and gives a short introduction to each of the high-level features.
## 1. Send your data to Axiom
You can send data to Axiom in a variety of ways. Each individual piece of data is an event.
Events can be emitted from internal or third-party services, cloud functions, containers, virtual machines (VMs), or even scripts. Events follow the [JSON specification](https://www.json.org/json-en.html) for which field types are supported. An event could look like this:
```json
{
"service": "api-http",
"severity": "error",
"duration": 231,
"customer_id": "ghj34g32poiu4",
"tags": ["aws-east-1", "zone-b"],
"metadata": {
"version": "3.1.2"
}
}
```
An event must belong to a dataset, which is a collection of similar events. You can have multiple datasets that help segment your events to make them easier to query and visualize, and also aid in access control.
Axiom stores every event you send and makes it available to you for querying either by streaming logs in real-time, or by analyzing events to produce visualizations.
The underlying data store of Axiom is a time series database. This means every event is indexed with a timestamp specified at ingress or set automatically.
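For example, an event can carry its own timestamp in a `_time` field; if it’s omitted, Axiom assigns the ingest time. A minimal sketch:

```json
{
  "_time": "2024-05-01T12:00:00Z",
  "service": "api-http",
  "severity": "info",
  "duration": 231
}
```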
Axiom doesn’t sample your data on ingest or querying, unless you’ve expressly instructed it to.
## 2. Stream your data
Axiom makes it really easy to view your data as it’s being ingested live. This is also referred to as "Live Stream" or "Live Tail," and the result is having a terminal-like feel of being able to view all your events in real-time:
From the Stream tab, you can easily add filters to narrow down the results, as well as save popular searches and share them with your organization members. You can also hide or show specific fields.
Another useful feature of the Stream tab is to only show events in a particular time window. This could be the last N minutes or a more specific time range you specify manually. This feature is extremely useful when you need to closely inspect your data, allowing you to get a chronological view of every event in that time window.
## 3. Analyze your data
In Axiom, an individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data. The Datasets tab allows you to analyze fields within your datasets. For example:
* Determine field data types and names.
* Edit field properties.
* Gain insights about the underlying data using quick charts.
* Add virtual fields.
## 4. Explore your data
While viewing individual events can be very useful, at scale and for general monitoring and observability, it’s important to be able to quickly aggregate, filter, and segment your data.
The Query tab gives you various tools to extract insights from your data:
* Visualize aggregations with count, min, max, average, percentiles, heatmaps, and more.
* Filter events.
* Segment data with `group-by`.
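For example, an APL query along the following lines combines all three: it filters events, aggregates them, and groups the result. This is an illustrative sketch only; the dataset name is a placeholder, and the `severity` and `service` fields come from the example event shown earlier.

```kusto
['DATASET_NAME']
| where severity == 'error'
| summarize count() by service, bin_auto(_time)
```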
## 5. Monitor for problems
Get alerted when there are problems with your data. For example:
* A queue size is larger than acceptable limits.
* Web containers take too long to respond.
* A specific customer starts using a new feature.
## 6. Integrate with data shippers
You can install and configure integrations with third-party data shippers to quickly get insights from your logs and services. These integrations set up a background task that continuously synchronizes events into Axiom.
## 7. Customize your organization
As your use of Axiom widens, customize it for your organization’s needs. For example:
* Add users.
* Set up third-party authentication providers.
* Set up role-based access control.
* Create and manage API tokens.
# Glossary of key Axiom terms
The glossary explains the key concepts in Axiom.
[A](#a) [B](#b) [C](#c) [D](#d) [E](#e) [F](#f) G H I K [L](#l) [M](#m) [N](#n) [O](#o) [P](#p) [Q](#q) [R](#r) S [T](#t) W X Y Z
## A
### Anomaly monitor
Anomaly monitors allow you to aggregate your event data and compare the results of this aggregation to what can be considered normal for the query. When the results are too far above or below the value that Axiom expects based on the event history, the monitor enters the alert state. The monitor remains in the alert state until the results no longer deviate from the expected value. This can happen without the results returning to their previous level if they stabilize around a new value. An anomaly monitor sends you a notification each time it enters or exits the alert state.
For more information, see [Anomaly monitors](/monitor-data/anomaly-monitors).
### API
The Axiom API allows you to ingest structured data, run queries, and manage your deployments.
For more information, see [Introduction to Axiom API](/restapi/introduction).
### API token
See [Tokens](#token).
### App
Axiom’s dedicated apps enrich your Axiom organization by integrating into popular external services and providing out-of-the-box features such as prebuilt dashboards.
For more information, see [Introduction to apps](/apps/introduction).
### Axiom
Axiom represents the next generation of business intelligence. Designed and built for the cloud, Axiom is an event platform for logs, traces, and all technical data.
Axiom efficiently ingests, stores, and queries vast amounts of event data from any source at a fraction of the cost. The Axiom platform is built for unmatched efficiency, scalability, and performance.
### Axiom Processing Language (APL)
The Axiom Processing Language (APL) is a query language that is perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
For more information, see [Introduction to APL](/apl/introduction).
## B
### Bring Your Own Bucket (BYOB)
Axiom enables you to store data in your own storage with the Bring Your Own Bucket (BYOB) feature. You provide your own S3-compatible object storage, and the Axiom control plane handles ingest, query execution, and all other background tasks. This is not an on-premises solution, but it enables you to maintain control over your data at rest.
## C
### CLI
Axiom’s command line interface (CLI) is an Axiom tool that lets you test, manage, and build your Axiom organizations by typing commands on the command-line. You can use the command line to ingest data, manage authentication state, and configure multiple organizations.
For more information, see [Introduction to CLI](/reference/cli).
## D
### Dashboard
Dashboards allow you to visualize collections of queries across multiple datasets in one place. Dashboards are easy to share, benefit from collaboration, and bring separate datasets together in a single view.
For more information, see [Introduction to dashboards](/dashboards/overview).
### Dashboard element
Dashboard elements are the different visual elements that you can include in your dashboard to display your data and other information. For example, you can track key metrics, logs, and traces, and monitor real-time data flow.
For more information, see [Introduction to dashboard elements](/dashboard-elements/overview).
### Dataset
Axiom’s datastore is tuned for the efficient collection, storage, and analysis of timestamped event data. An individual piece of data is an event, and a dataset is a collection of related events. Datasets contain incoming event data.
For more information, see [Datasets](/reference/datasets).
### Destination
To transform and route data from an Axiom dataset to a destination, you need to set up a destination. This is where data is routed. Once you set up a destination, it can be used in any flow.
For more information, see [Manage destinations](/process-data/destinations/manage-destinations).
## E
### Event
An event is a granular record capturing a specific action or interaction within a system, often represented as key-value pairs. It’s the smallest unit of information detailing what occurred, who or what was involved, and potentially when and where it took place. In Axiom’s context, events are timestamped records, originating from human, machine, or sensor interactions, providing a foundational data point that informs a broader view of activities across different business units, from product, through security, to marketing, and more.
For more information, see [Event data](/getting-started-guide/event-data).
## F
### Flow
Flow provides onward event processing, including filtering, shaping, and routing. Flow works after persisting data in Axiom’s highly efficient queryable store, and uses APL to define processing.
A flow consists of three elements:
* **Source:** This is the Axiom dataset used as the flow origin.
* **Transformation:** This is the APL query used to filter, shape, and enrich the events.
* **Destination:** This is where events are routed.
For more information, see [Introduction to Flow](/process-data/introduction).
## L
### Log
A log is a structured or semi-structured data record typically used to document actions or system states over time, primarily for monitoring, debugging, and auditing. Traditionally formatted as text entries with timestamps and message content, logs have evolved to include standardized key-value structures, making them easier to search, interpret, and correlate across distributed systems. In Axiom, logs represent historical records designed for consistent capture, storage, and collaborative analysis, allowing for real-time visibility and troubleshooting across services.
For more information, see [Axiom for observability](/getting-started-guide/observability).
## M
### Match monitor
Match monitors allow you to continuously filter your log data and send you matching events. Axiom sends a notification for each matching event. By default, the notification message contains the entire matching event in JSON format. When you define your match monitor using APL, you can control which event attributes to include in the notification message.
For more information, see [Match monitors](/monitor-data/match-monitors).
### Metric
A metric is a quantitative measurement collected at specific time intervals, reflecting the state or performance of a system or component. Metrics focus on numeric values, such as CPU usage or memory consumption, enabling aggregation, trend analysis, and alerting based on thresholds. Within Axiom, metrics are data points associated with timestamps, labels, and values, designed to monitor resource utilization or performance. Metrics enable predictive insights by identifying patterns over time, offering foresight into system health and potential issues before they escalate.
For more information, see [Axiom for observability](/getting-started-guide/observability).
### Monitor
A monitor is a background task that periodically runs a query that you define. For example, it counts the number of error messages in your logs over the previous 5 minutes. A notifier defines how Axiom notifies you about the monitor output. For example, Axiom can send you an email.
You can use the following types of monitor:
* [Anomaly monitors](#anomaly-monitor) aggregate event data over time and look for values that are unexpected based on the event history. When the results of the aggregation are too high or low compared to the expected value, Axiom sends you an alert.
* [Match monitors](#match-monitor) filter for key events and send them to you.
* [Threshold monitors](#threshold-monitor) aggregate event data over time. When the results of the aggregation cross a threshold, Axiom sends you an alert.
For more information, see [Introduction to monitors](/monitor-data/monitors).
## N
### Notifier
A monitor is a background task that periodically runs a query that you define. For example, it counts the number of error messages in your logs over the previous 5 minutes. A notifier defines how Axiom notifies you about the monitor output. For example, Axiom can send you an email.
For more information, see [Introduction to notifiers](/monitor-data/notifiers-overview).
## O
### Observability
Observability is a principle in software engineering and systems monitoring that focuses on the ability to understand and diagnose the internal state of a system by examining the data it generates, such as logs, metrics, and traces. It goes beyond traditional monitoring by giving teams the power to pinpoint and resolve issues, optimize performance, and understand user behaviors across complex, interconnected services. Observability leverages various types of [event data](#event) to provide granular insights that span everything from simple log messages to multi-service transactions (traces) and performance metrics.
Traditionally, observability has been associated with three pillars:
* Logs capture individual events or errors.
* Metrics provide quantitative data over time, like CPU usage.
* Traces represent workflows across microservices.
However, modern observability expands on this by aggregating diverse data types from engineering, product, marketing, and security functions, all of which contribute to understanding the deeper “why” behind user interactions and system behaviors. This holistic view, in turn, enables real-time diagnostics, predictive analyses, and proactive issue resolution.
In essence, observability transforms raw event data into actionable insights, helping organizations not only to answer “what happened?” but also to delve into “why it happened” and “what might happen next.”
For more information, see [Axiom for observability](/getting-started-guide/observability).
## P
### Personal access token (PAT)
See [Tokens](#token).
### Playground
The Axiom Playground is an interactive sandbox environment where you can quickly try out Axiom’s capabilities.
To try out Axiom, go to the [Axiom Playground](https://play.axiom.co/).
## Q
### Query
In Axiom, a query is a specific, structured request used to get deeper insights into your data. It typically involves looking for information based on defined parameters like keywords, date ranges, or specific fields. The intent of a query is precision: to locate, analyze, or manipulate specific subsets of data within vast data structures, enhancing insights into various operational aspects or user behaviors.
Querying enables you to filter, manipulate, extend, and summarize your data.
{/*
As opposed to [searching](#search) which relies on sampling, querying allows you to explore all your event data. For this reason, querying is the modern way of making sense of your event data.
*/}
### Query-hours
When you run queries, your usage of the Axiom platform is measured in query-hours. The unit of this measurement is GB-hours, which reflects the time (measured in milliseconds) for which serverless functions run to execute your query, multiplied by the amount of memory (GB) allocated to the execution. This metric is important for monitoring and managing your usage against the monthly allowance included in your plan.
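For example (an illustrative calculation): if the serverless functions that execute a query run for a total of 36 seconds (36,000 milliseconds) with 4 GB of memory allocated, the query consumes 0.01 hours × 4 GB = 0.04 GB-hours.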
For more information, see [Query costs](/reference/query-hours).
## R
### Role-based access control (RBAC)
Role-based access control (RBAC) allows you to manage and restrict access to your data and resources efficiently.
For more information, see [Access](/reference/settings#access-overview).
{/*
## S
### Search
Most observability solutions rely on search to seek information within event data. In contrast, Axiom’s approach is [query](#query). Unlike search that only gives you approximate results because it relies on sampling, a query is precise because it explores all your data. For this reason, querying is the modern way of making sense of your event data.
*/}
## T
### Threshold monitor
Threshold monitors allow you to periodically aggregate your event data and compare the results of this aggregation to a threshold that you define. When the results cross the threshold, the monitor enters the alert state. The monitor remains in the alert state until the results no longer cross the threshold. A threshold monitor sends you a notification each time it enters or exits the alert state.
For more information, see [Threshold monitors](/monitor-data/threshold-monitors).
### Token
You can use the Axiom API and CLI to programmatically ingest and query data, and manage settings and resources. For example, you can create new API tokens and change existing datasets with API requests. To prove that these requests come from you, you must include forms of authentication called tokens in your API requests. Axiom offers two types of tokens:
* API tokens let you control the actions that can be performed with the token. For example, you can specify that requests authenticated with a certain API token can only query data from a particular dataset.
* Personal access tokens (PATs) provide full control over your Axiom account. Requests authenticated with a PAT can perform every action you can perform in Axiom.
For more information, see [Tokens](/reference/tokens).
### Trace
A trace is a sequence of events that captures the path and flow of a single request as it navigates through multiple services or components within a distributed system. Utilizing trace IDs to group-related spans (individual actions or operations within a request), traces enable visibility into the lifecycle of a request, illustrating how it progresses, where delays or errors may occur, and how components interact. By connecting each event in the request journey, traces provide insights into system performance, pinpointing bottlenecks and latency.
# Axiom for observability
This page explains how Axiom helps you leverage timestamped event data for observability purposes.
Axiom helps you leverage the power of timestamped event data. A common use case of event data is observability (o11y) in the field of software engineering. Observability is the ability to explain what is happening inside a software system by observing it from the outside. It allows you to understand the behavior of systems based on their outputs such as telemetry data, which is a type of event data.
Software engineers most often work with timestamped event data in the form of logs or metrics. However, Axiom believes that event data reflects a much broader range of interactions, crossing boundaries from engineering to product management, security, and beyond. For a more general explanation of event data in Axiom, see [Events](/getting-started-guide/event-data).
## Types of event data in observability
Traditionally, observability has been associated with three pillars, each effectively a specialized view of event data:
* **Logs**: Logs record discrete events, such as error messages or access requests, typically associated with engineering or security.
* **Traces**: Traces track the path of requests through a system, capturing each step’s duration. By linking related spans within a trace, developers can identify bottlenecks and dependencies.
* **Metrics**: Metrics quantify state over time, recording data like CPU usage or user count at intervals. Product or engineering teams can then monitor and aggregate these values for performance insights.
In Axiom, these observability elements are stored as event data, allowing for fine-grained, efficient tracking across all three pillars.
## Logs and traces support
Axiom excels at collecting, storing, and analyzing timestamped event data.
For logs and traces, Axiom offers unparalleled efficiency and query performance. You can send logs and traces to Axiom from a wide range of popular sources. For information, see [Send data to Axiom](/send-data/ingest).
## Metrics support
For metrics data, Axiom is well-suited for event-level metrics that behave like logs, with each data point representing a discrete event.
For example, you have the following timestamped data in Axiom:
```json
{
"job_id": "train_123",
"user_name": "acme",
"timestamp": "2024-10-08T15:30:00Z",
"node_host": "worker-01",
"metric_name": "gpu_utilization",
"metric_value": 87.5,
"training_type": "image_classification"
}
```
You can easily query and analyze this type of metrics data in Axiom. The query below computes the average GPU utilization across nodes:
```kusto
dataset
| summarize avg(metric_value) by node_host, bin_auto(_time)
```
Axiom’s support for metrics data currently comes with the following limitations:
* Axiom doesn’t support pre-aggregated metrics such as scrape samples.
* Axiom isn’t optimized for high-dimensional metric time series with a very large number of metric/label combinations.
Support for these types of metrics data is coming soon in the first half of 2025.
# Axiom Go Adapter for apex/log
Adapter to ship logs generated by apex/log to Axiom.
# Send data from Go app to Axiom
This page explains how to send data from a Go app to Axiom.
To send data from a Go app to Axiom, use the Axiom Go SDK.
The Axiom Go SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-go).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
go get github.com/axiomhq/axiom-go/axiom
```
Import the package:
```go
import "github.com/axiomhq/axiom-go/axiom"
```
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
Alternatively, configure the client using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) passed to the `axiom.NewClient` function:
```go
client, err := axiom.NewClient(
axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
)
```
## Use client
Create and use a client in the following way:
```go
package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/axiomhq/axiom-go/axiom"
    "github.com/axiomhq/axiom-go/axiom/axiom/ingest"
    "github.com/axiomhq/axiom-go/axiom/query"
)

func main() {
    ctx := context.Background()

    // Create a client. Configuration such as AXIOM_TOKEN is read from the environment.
    client, err := axiom.NewClient()
    if err != nil {
        log.Fatal(err)
    }

    // Ingest two events into the dataset.
    if _, err = client.IngestEvents(ctx, "my-dataset", []axiom.Event{
        {ingest.TimestampField: time.Now(), "foo": "bar"},
        {ingest.TimestampField: time.Now(), "bar": "foo"},
    }); err != nil {
        log.Fatal(err)
    }

    // Query the dataset with APL.
    res, err := client.Query(ctx, "['my-dataset'] | where foo == 'bar' | limit 100")
    if err != nil {
        log.Fatal(err)
    } else if res.Status.RowsMatched == 0 {
        log.Fatal("No matches found")
    }

    // Print every matching row.
    rows := res.Tables[0].Rows()
    if err := rows.Range(ctx, func(_ context.Context, row query.Row) error {
        _, err := fmt.Println(row)
        return err
    }); err != nil {
        log.Fatal(err)
    }
}
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-go/tree/main/examples).
## Adapters
To use a logging package, see the [adapters in GitHub](https://github.com/axiomhq/axiom-go/tree/main/adapters).
# Send data from JavaScript app to Axiom
This page explains how to send data from a JavaScript app to Axiom.
To send data from a JavaScript app to Axiom, use the Axiom JavaScript SDK.
The Axiom JavaScript SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-js).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
npm install @axiomhq/js
```
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
You can also configure the client using options passed to the constructor of the client:
```ts
import { Axiom } from '@axiomhq/js';
const axiom = new Axiom({
  token: process.env.AXIOM_TOKEN,
});
```
## Send data
The following example sends data to Axiom:
```ts
axiom.ingest('DATASET_NAME', [{ foo: 'bar' }]);
await axiom.flush();
```
The client automatically batches events in the background. In most cases, you only want to call `flush()` before your application exits.
## Query data
The following example queries data from Axiom:
```ts
const res = await axiom.query(`['DATASET_NAME'] | where foo == 'bar' | limit 100`);
console.log(res);
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples).
## Capture errors
To capture errors, pass a method `onError` to the client:
```ts
let client = new Axiom({
  token: '',
  ...,
  onError: (err) => {
    console.error('ERROR:', err);
  },
});
```
By default, `onError` is set to `console.error`.
## Create annotations
The following example creates an annotation:
```ts
import { annotations } from '@axiomhq/js';
const client = new annotations.Service({ token: process.env.AXIOM_TOKEN });
await client.create({
  type: 'deployment',
  datasets: ['DATASET_NAME'],
  title: 'New deployment',
  description: 'Deployed version 1.0.0',
});
```
## Log from Node.js
While the Axiom JavaScript client works both on the backend and in the browser, Axiom provides transports for some popular loggers:
* [Pino](/guides/pino)
* [Winston](/guides/winston)
# Axiom Go Adapter for sirupsen/logrus
Adapter to ship logs generated by sirupsen/logrus to Axiom.
# OpenTelemetry using Cloudflare Workers
This guide explains how to configure a Cloudflare Workers app to send telemetry data to Axiom.
This guide demonstrates how to configure OpenTelemetry in Cloudflare Workers to send telemetry data to Axiom using the [OTel CF Worker package](https://github.com/evanderkoogh/otel-cf-workers).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/settings#data) where you will send your data.
* [Create an API token in Axiom with permissions to query and ingest data](/reference/settings#access-overview).
* Create a Cloudflare account.
* [Install Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), the CLI tool for Cloudflare.
## Setting up your Cloudflare Workers environment
Create a new directory for your project and navigate into it:
```bash
mkdir my-axiom-worker && cd my-axiom-worker
```
Initialize a new Wrangler project using this command:
```bash
wrangler init --type="javascript"
```
## Cloudflare Workers Script Configuration (index.ts)
Configure and implement your Workers script by integrating OpenTelemetry with the `@microlabs/otel-cf-workers` package to send telemetry data to Axiom, as illustrated in the example `index.ts` below:
```js
// index.ts
import { trace } from '@opentelemetry/api';
import { instrument, ResolveConfigFn } from '@microlabs/otel-cf-workers';
export interface Env {
AXIOM_API_TOKEN: string,
AXIOM_DATASET: string
}
const handler = {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
await fetch('https://cloudflare.com');
const greeting = "Welcome to Axiom Cloudflare instrumentation";
trace.getActiveSpan()?.setAttribute('greeting', greeting);
ctx.waitUntil(fetch('https://workers.dev'));
return new Response(`${greeting}!`);
},
};
const config: ResolveConfigFn = (env: Env, _trigger) => {
return {
exporter: {
url: 'https://api.axiom.co/v1/traces',
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
};
};
export default instrument(handler, config);
```
## Wrangler Configuration (`wrangler.toml`)
Configure **`wrangler.toml`** with your Cloudflare account details and set environment variables for the Axiom API token and dataset.
```toml
name = "my-axiom-worker"
type = "javascript"
account_id = "$YOUR_CLOUDFLARE_ACCOUNT_ID" # Replace with your actual Cloudflare account ID
workers_dev = true
compatibility_date = "2023-03-27"
compatibility_flags = ["nodejs_compat"]
main = "index.ts"
# Define environment variables here
[vars]
AXIOM_API_TOKEN = "$API_TOKEN" # Replace $API_TOKEN with your actual Axiom API token
AXIOM_DATASET = "$DATASET" # Replace $DATASET with your actual Axiom dataset name
```
## Install Dependencies
Navigate to the root directory of your project and add `@microlabs/otel-cf-workers` and other OTel packages to the `package.json` file.
```json
{
"name": "my-axiom-worker",
"version": "1.0.0",
"description": "A template for kick-starting a Cloudflare Workers project",
"main": "index.ts",
"scripts": {
"start": "wrangler dev",
"deploy": "wrangler publish"
},
"dependencies": {
"@microlabs/otel-cf-workers": "^1.0.0-rc.20",
"@opentelemetry/api": "^1.6.0",
"@opentelemetry/core": "^1.17.1",
"@opentelemetry/exporter-trace-otlp-http": "^0.43.0",
"@opentelemetry/otlp-exporter-base": "^0.43.0",
"@opentelemetry/otlp-transformer": "^0.43.0",
"@opentelemetry/resources": "^1.17.1",
"@opentelemetry/sdk-trace-base": "^1.17.1",
"@opentelemetry/semantic-conventions": "^1.17.1",
"deepmerge": "^4.3.1",
"husky": "^8.0.3",
"lint-staged": "^15.0.2",
"ts-checked-fsm": "^1.1.0"
},
"devDependencies": {
"@changesets/cli": "^2.26.2",
"@cloudflare/workers-types": "^4.20231016.0",
"prettier": "^3.0.3",
"rimraf": "^4.4.1",
"typescript": "^5.2.2",
"wrangler": "2.13.0"
},
"private": true
}
```
Run `npm install` to install all the necessary packages listed in your `package.json` file.
## Running the instrumented app
To run your Cloudflare Workers app with OpenTelemetry instrumentation, ensure your API token and dataset are correctly set in your `wrangler.toml` file. As outlined in our `package.json` file, you have two primary scripts to manage your app’s lifecycle.
### In development mode
For local development and testing, you can start a local development server by running:
```bash
npm run start
```
This command runs `wrangler dev`, allowing you to preview and test your app locally.
### Deploying to production
Deploy your app to the Cloudflare Workers environment by running:
```bash
npm run deploy
```
This command runs **`wrangler publish`**, deploying your project to Cloudflare Workers.
### Alternative: Use Wrangler directly
If you prefer not to use **`npm`** commands or want more direct control over the deployment process, you can use Wrangler commands directly in your terminal.
For local development:
```bash
wrangler dev
```
For deploying to Cloudflare Workers:
```bash
wrangler deploy
```
## View your app in Cloudflare Workers
Once you've deployed your app using Wrangler, view and manage it through the Cloudflare dashboard. To see your Cloudflare Workers app, follow these steps:
* In your [Cloudflare dashboard](https://dash.cloudflare.com/), click **Workers & Pages** to access the Workers section. You see a list of your deployed apps.
* Locate your app by its name. For this tutorial, look for `my-axiom-worker`.
* Click your app’s name to view its details. Within the app’s page, select the triggers tab to review the triggers associated with your app.
* Under the routes section of the triggers tab, you will find the URL route assigned to your Worker. This is where your Cloudflare Worker responds to incoming requests. Visit the [Cloudflare Workers documentation](https://developers.cloudflare.com/workers/get-started/guide/) to learn how to configure routes.
## Observe the telemetry data in Axiom
As you interact with your app, traces are collected and exported to Axiom, allowing you to monitor, analyze, and gain insights into your app’s performance and behavior.
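For example, you can inspect the incoming spans with an APL query along these lines. This is an illustrative sketch: replace the dataset name with the dataset you set in `wrangler.toml`; the `service.name` value matches the one configured in the exporter above.

```kusto
['DATASET_NAME']
| where ['service.name'] == 'axiom-cloudflare-workers'
| summarize count() by name, bin_auto(_time)
```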
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, offering a deeper understanding of your app’s performance and behavior.
**Working with Cloudflare Pages Functions:** Integration with OpenTelemetry is similar to Workers, but you configure it through the Cloudflare dashboard instead of **`wrangler.toml`**.
## Manual Instrumentation
Manual instrumentation requires adding code into your Worker’s script to create and manage spans around the code blocks you want to trace.
1. Initialize Tracer:
Use the OpenTelemetry API to create a tracer instance at the beginning of your script using the **`@microlabs/otel-cf-workers`** package.
```js
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('your-service-name');
```
2. Start and end spans:
Manually start spans before the operations or events you want to trace and ensure you end them afterward to complete the tracing lifecycle.
```js
const span = tracer.startSpan('operationName');
try {
// Your operation code here
} finally {
span.end();
}
```
3. Annotate Spans:
Add important metadata to spans to provide additional context. This can include setting attributes or adding events within the span.
```js
span.setAttribute('key', 'value');
span.addEvent('eventName', { 'eventAttribute': 'value' });
```
## Automatic Instrumentation
Automatic instrumentation uses the **`@microlabs/otel-cf-workers`** package to automatically trace incoming requests and outbound fetch calls without manual span management.
1. Instrument your Worker:
Wrap your Cloudflare Workers script with the `instrument` function from the **`@microlabs/otel-cf-workers`** package. This automatically instruments incoming requests and outbound fetch calls.
```js
import { instrument } from '@microlabs/otel-cf-workers';
export default instrument(yourHandler, yourConfig);
```
2. Configuration: Provide configuration details, including how to export telemetry data and service metadata to Axiom as part of the `instrument` function call.
```js
const config = (env) => ({
exporter: {
url: 'https://api.axiom.co/v1/traces',
headers: {
'Authorization': `Bearer ${env.AXIOM_API_TOKEN}`,
'X-Axiom-Dataset': `${env.AXIOM_DATASET}`
},
},
service: { name: 'axiom-cloudflare-workers' },
});
```
After instrumenting your Worker script, the `@microlabs/otel-cf-workers` package takes care of tracing automatically.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ---------------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------- |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| | attributes.custom.user\_agent.original | Original user agent string, providing client software and OS. |
| | attributes.custom\["http.accepts"] | Accepted content types for the HTTP request. |
| | attributes.custom\["http.mime\_type"] | MIME type of the HTTP response. |
| | attributes.custom.http.wrote\_bytes | Number of bytes written in the HTTP response. |
| | attributes.http.request.method | HTTP request method used. |
| | attributes.http.response.status\_code | HTTP status code returned in response. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| | attributes.net.sock.peer.addr | Socket peer address, indicating the IP version used. |
| | attributes.net.sock.peer.port | Socket peer port number. |
| | attributes.custom.net.protocol.version | Protocol version used in the network interaction. |
| | attributes.network.protocol.name | Name of the network protocol used. |
| | attributes.network.protocol.version | Version of the network protocol used. |
| | attributes.server.address | Address of the server handling the request. |
| | attributes.url.full | Full URL accessed in the request. |
| | attributes.url.path | Path component of the URL accessed. |
| | attributes.url.query | Query component of the URL accessed. |
| | attributes.url.scheme | Scheme component of the URL accessed. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | scope.name | Name of the scope for the operation. |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| **Resource Attributes** | | |
| | resource.environment | Environment where the trace was captured, for example, production. |
| | resource.cloud.platform | Platform of the cloud provider, for example, cloudflare.workers. |
| | resource.cloud.provider | Name of the cloud provider, for example, cloudflare. |
| | resource.cloud.region | Cloud region where the service is located, for example, earth. |
| | resource.faas.max\_memory | Maximum memory allocated for the function as a service (FaaS). |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, for example, js. |
| | telemetry.sdk.name | Name of the telemetry SDK, for example, @microlabs/otel-workers-sdk. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
| **Custom Attributes** | | |
| | attributes.custom.greeting | Custom greeting message, for example, "Welcome to Axiom Cloudflare instrumentation." |
| | attributes.custom\["http.accepts"] | Specifies acceptable response formats for HTTP request. |
| | attributes.custom\["net.asn"] | Autonomous System Number representing the hosting entity. |
| | attributes.custom\["net.colo"] | Colocation center where the request was processed. |
| | attributes.custom\["net.country"] | Country where the request was processed. |
| | attributes.custom\["net.request\_priority"] | Priority of the request processing. |
| | attributes.custom\["net.tcp\_rtt"] | Round Trip Time of the TCP connection. |
| | attributes.custom\["net.tls\_cipher"] | TLS cipher suite used for the connection. |
| | attributes.custom\["net.tls\_version"] | Version of the TLS protocol used for the connection. |
| | attributes.faas.coldstart | Indicates if the function execution was a cold start. |
| | attributes.faas.invocation\_id | Unique identifier for the function invocation. |
| | attributes.faas.trigger | Trigger that initiated the function execution. |
### List of imported libraries
**`@microlabs/otel-cf-workers`**
This package is designed for integrating OpenTelemetry within Cloudflare Workers. It provides automatic instrumentation capabilities, making it easier to collect telemetry data from your Workers apps without extensive manual instrumentation. This package simplifies tracing HTTP requests and other asynchronous operations within Workers.
**`@opentelemetry/api`**
The core API for OpenTelemetry in JavaScript, providing the necessary interfaces and utilities for tracing, metrics, and context propagation. In the context of Cloudflare Workers, it allows developers to manually instrument custom spans, manipulate context, and access the active span if needed.
**`@opentelemetry/exporter-trace-otlp-http`**
This exporter enables your Cloudflare Workers app to send trace data over HTTP to any backend that supports the OTLP (OpenTelemetry Protocol), such as Axiom. Using OTLP ensures compatibility with a wide range of observability tools and standardizes the data export process.
**`@opentelemetry/otlp-exporter-base`**, **`@opentelemetry/otlp-transformer`**
These packages provide the foundational elements for OTLP exporters, including the transformation of telemetry data into the OTLP format and base classes for implementing OTLP exporters. They are important for ensuring that the data exported from Cloudflare Workers adheres to the OTLP specification.
**`@opentelemetry/resources`**
Defines the Resource, which represents the entity producing telemetry. In Cloudflare Workers, Resources can be used to describe the worker (for example, service name, version) and are attached to all exported telemetry, aiding in identifying data in backend systems.
# Send OpenTelemetry data from a Django app to Axiom
This guide explains how to send OpenTelemetry data from a Django app to Axiom using the Python OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
{/* list separator */}
* [Install Python version 3.7 or higher](https://www.python.org/downloads/).
## Install required dependencies
Install the necessary Python dependencies by running the following command in your terminal:
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
Alternatively, you can add these dependencies to your `requirements.txt` file:
```bash
django
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp-proto-http
opentelemetry-instrumentation-django
```
Then, install them using the command:
```bash
pip install -r requirements.txt
```
## Get started with a Django project
1. Create a new Django project if you don’t have one already:
```bash
django-admin startproject your_project_name
```
2. Go to your project directory:
```bash
cd your_project_name
```
3. Create a Django app:
```bash
python manage.py startapp your_app_name
```
## Set up OpenTelemetry Tracing
### Update `manage.py` to initialize tracing
This code initializes OpenTelemetry instrumentation for Django when the project is run. Adding `DjangoInstrumentor().instrument()` ensures that all incoming HTTP requests are automatically traced, which helps in monitoring the app’s performance and behavior without manually adding trace points in every view.
```py
# manage.py
#!/usr/bin/env python
import os
import sys
from opentelemetry.instrumentation.django import DjangoInstrumentor
def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project_name.settings')

    # Initialize OpenTelemetry instrumentation
    DjangoInstrumentor().instrument()

    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
```
### Create `exporter.py` for tracer configuration
This file configures the OpenTelemetry tracing provider and exporter. By setting up a `TracerProvider` and configuring the `OTLPSpanExporter`, you define how and where the trace data is sent. The `BatchSpanProcessor` is used to batch and send trace spans efficiently. The tracer created at the end is used throughout the app to create new spans.
```py
# exporter.py
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Define the service name resource
resource = Resource(attributes={
SERVICE_NAME: "your-service-name" # Replace with your actual service name
})
# Create a TracerProvider with the defined resource
provider = TracerProvider(resource=resource)
# Configure the OTLP/HTTP Span Exporter with necessary headers and endpoint
otlp_exporter = OTLPSpanExporter(
endpoint="https://api.axiom.co/v1/traces",
headers={
"Authorization": "Bearer YOUR_API_TOKEN", # Replace with your actual API token
"X-Axiom-Dataset": "YOUR_DATASET_NAME" # Replace with your dataset name
}
)
# Create a BatchSpanProcessor with the OTLP exporter
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
# Set the TracerProvider as the global tracer provider
trace.set_tracer_provider(provider)
# Define a tracer for external use
tracer = trace.get_tracer("your-service-name")
```
### Use the tracer in your views
In this step, modify the Django views to use the tracer defined in `exporter.py`. By wrapping the view logic within `tracer.start_as_current_span`, you create spans that capture the execution of these views. This provides detailed insights into the performance of individual request handlers, helping to identify slow operations or errors.
```py
# views.py
from django.http import HttpResponse
from .exporter import tracer # Import the tracer
def roll_dice(request):
    with tracer.start_as_current_span("roll_dice_span"):
        # Your logic here
        return HttpResponse("Dice rolled!")

def home(request):
    with tracer.start_as_current_span("home_span"):
        return HttpResponse("Welcome to the homepage!")
```
### Update `settings.py` for OpenTelemetry instrumentation
In your Django project’s `settings.py`, add the OpenTelemetry Django instrumentation. This setup automatically creates spans for HTTP requests handled by Django:
```py
# settings.py
from pathlib import Path
from opentelemetry.instrumentation.django import DjangoInstrumentor
DjangoInstrumentor().instrument()
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
```
### Update the app’s urls.py to include the views
Include your views in the URL routing by updating `urls.py`. Updating `urls.py` with these entries sets up the URL routing for the Django app. It connects the URL paths to the corresponding view functions. This ensures that when users visit the specified paths, the corresponding views are executed, and their spans are created and sent to Axiom for monitoring.
```python
# urls.py
from django.urls import path
from .views import roll_dice, home
urlpatterns = [
path('', home, name='home'),
path('rolldice/', roll_dice, name='roll_dice'),
]
```
## Run the project
Run the command to start the Django project:
```bash
python3 manage.py runserver
```
In your browser, go to `http://127.0.0.1:8000/rolldice` to interact with your Django app. Each time you load the page, the app displays a message and sends the collected traces to Axiom.
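To confirm the traces arrived, you can run an APL query along these lines in Axiom. This is an illustrative sketch: replace the dataset name with the dataset you created, and use the service name you set in `exporter.py`.

```kusto
['DATASET_NAME']
| where ['service.name'] == 'your-service-name'
| summarize count() by name, bin_auto(_time)
```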
## Send data from an existing Django project
### Manual instrumentation
Manual instrumentation in Python with OpenTelemetry involves adding code to create and manage spans around the blocks of code you want to trace. This approach allows for precise control over the trace data.
1. Install necessary OpenTelemetry packages to enable manual tracing capabilities in your Django app.
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
2. Set up OpenTelemetry in your Django project to manually trace app activities.
```py
# otel_config.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_opentelemetry():
    resource = Resource(attributes={"service.name": "your-django-app"})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    otlp_exporter = OTLPSpanExporter(
        endpoint="https://api.axiom.co/v1/traces",
        headers={"Authorization": "Bearer YOUR_API_TOKEN", "X-Axiom-Dataset": "YOUR_DATASET_NAME"}
    )
    span_processor = BatchSpanProcessor(otlp_exporter)
    trace.get_tracer_provider().add_span_processor(span_processor)
    return trace.get_tracer(__name__)

tracer = configure_opentelemetry()
```
3. Add the OpenTelemetry configuration to your Django settings to capture telemetry data on app startup.
```py
# settings.py
from otel_config import configure_opentelemetry
configure_opentelemetry()
```
4. Manually instrument views to create custom spans that trace specific operations within your Django app.
```py
# views.py
from django.http import HttpResponse
from otel_config import tracer
def home_view(request):
    with tracer.start_as_current_span("home_view") as span:
        span.set_attribute("http.method", request.method)
        span.set_attribute("http.url", request.build_absolute_uri())
        response = HttpResponse("Welcome to the home page!")
        span.set_attribute("http.status_code", response.status_code)
        return response
```
5. Apply manual tracing to database operations by wrapping database cursor executions with OpenTelemetry spans.
```py
# db_tracing.py
from django.db import connections
from otel_config import tracer
class TracingCursorWrapper:
    def __init__(self, cursor):
        self.cursor = cursor

    def execute(self, sql, params=None):
        with tracer.start_as_current_span("database_query") as span:
            span.set_attribute("db.statement", sql)
            span.set_attribute("db.type", "sql")
            return self.cursor.execute(sql, params)

    def __getattr__(self, attr):
        return getattr(self.cursor, attr)

def patch_database():
    for connection in connections.all():
        connection.cursor_wrapper = TracingCursorWrapper

# settings.py
from db_tracing import patch_database
patch_database()
```
### Automatic instrumentation
Automatic instrumentation in Django with OpenTelemetry simplifies the process of adding telemetry data to your app. It uses pre-built libraries that automatically instrument the frameworks and libraries.
1. Install required packages that support automatic instrumentation.
```bash
pip install django opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-django
```
2. Automatically configure OpenTelemetry to trace Django app operations without manual span management.
```py
# otel_config.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_opentelemetry():
    resource = Resource(attributes={"service.name": "your-django-app"})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    otlp_exporter = OTLPSpanExporter(
        endpoint="https://api.axiom.co/v1/traces",
        headers={"Authorization": "Bearer YOUR_API_TOKEN", "X-Axiom-Dataset": "YOUR_DATASET_NAME"}
    )
    span_processor = BatchSpanProcessor(otlp_exporter)
    trace.get_tracer_provider().add_span_processor(span_processor)
    DjangoInstrumentor().instrument()
```
3. Initialize OpenTelemetry in Django to capture telemetry data from all HTTP requests automatically.
```py
# settings.py
from otel_config import configure_opentelemetry
configure_opentelemetry()
```
4. Update `manage.py` to include OpenTelemetry initialization, ensuring that tracing is active before the Django app fully starts.
```py
#!/usr/bin/env python
import os
import sys
def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project.settings')
    from otel_config import configure_opentelemetry
    configure_opentelemetry()
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError("Couldn't import Django.") from exc
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
```
5. (Optional) Combine automatic and custom manual spans in Django views to enhance trace details for specific complex operations.
```py
# views.py
from django.http import HttpResponse
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def complex_view(request):
    with tracer.start_as_current_span("complex_operation"):
        result = perform_complex_operation()  # Replace with your app's own logic
        return HttpResponse(result)
```
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------- | --------------------------------------- | ----------------------------------------------------------------------------------- |
| General Trace Information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| HTTP Attributes | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.url | Full URL accessed during the HTTP request. |
| User Agent | | |
| | attributes.http.user\_agent | User agent string, providing client software and OS. |
| Custom Attributes | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| Network Attributes | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| Operational Details | | |
| | duration | Time taken for the operation, typically in microseconds or milliseconds. |
| | kind | Type of span (for example, server, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| Scope and Instrumentation | | |
| | scope | Instrumentation scope (for example, opentelemetry.instrumentation.django). |
| Service Attributes | | |
| | service.name | Name of the service generating the trace, typically set as the app or service name. |
| Telemetry SDK Attributes | | |
| | telemetry.sdk.language | Programming language of the SDK used for telemetry, typically 'python' for Django. |
| | telemetry.sdk.name | Name of the telemetry SDK, for example, OpenTelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK used in the tracing setup. |
### List of imported libraries
The `exporter.py` file and other relevant parts of the Django OpenTelemetry setup import the following libraries:
### `exporter.py`
This module creates and manages trace data in your app. It creates spans and tracers which track the execution flow and performance of your app.
```py
from opentelemetry import trace
```
TracerProvider acts as a container for the configuration of your app’s tracing behavior. It allows you to define how spans are generated and processed, essentially serving as the central point for managing trace creation and propagation in your app.
```py
from opentelemetry.sdk.trace import TracerProvider
```
BatchSpanProcessor is responsible for batching spans before they’re exported. This is an important aspect of efficient trace data management as it aggregates multiple spans into fewer network requests, reducing the overhead on your app’s performance and the tracing backend.
```py
from opentelemetry.sdk.trace.export import BatchSpanProcessor
```
The Resource class is used to describe your app’s service attributes, such as its name, version, and environment. This contextual information is attached to the traces and helps in identifying and categorizing trace data, making it easier to filter and analyze in your monitoring setup.
```py
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
```
The OTLPSpanExporter is responsible for sending your app’s trace data to a backend that supports the OTLP such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
```py
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
```
### `manage.py`
The DjangoInstrumentor module is used to automatically instrument Django applications. It integrates OpenTelemetry with Django, enabling automatic creation of spans for incoming HTTP requests handled by Django, and simplifying the process of adding telemetry to your app.
```py
from opentelemetry.instrumentation.django import DjangoInstrumentor
```
### `views.py`
This import brings in the tracer instance defined in `exporter.py`, which is used to create spans for tracing the execution of Django views. By wrapping view logic within `tracer.start_as_current_span`, it captures detailed insights into the performance of individual request handlers.
```py
from .exporter import tracer
```
# OpenTelemetry using .NET
This guide explains how to configure a .NET app using the .NET OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry provides a [unified approach to collecting telemetry data](https://opentelemetry.io/docs/languages/net/) from your .NET applications. This guide explains how to configure OpenTelemetry in a .NET application to send telemetry data to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset](/reference/settings#data) where you want to send data.
* [Create an API token in Axiom with permissions to ingest and query data](/reference/tokens).
* Install the .NET 6.0 SDK on your development machine.
* Use your existing .NET application or start with the sample provided in the `program.cs` below.
## Install dependencies
Run the following command in your terminal to install the necessary NuGet packages:
```bash
dotnet add package OpenTelemetry --version 1.7.0
dotnet add package OpenTelemetry.Exporter.Console --version 1.7.0
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol --version 1.7.0
dotnet add package OpenTelemetry.Extensions.Hosting --version 1.7.0
dotnet add package OpenTelemetry.Instrumentation.AspNetCore --version 1.7.1
dotnet add package OpenTelemetry.Instrumentation.Http --version 1.6.0-rc.1
```
Replace the `dotnet.csproj` file in your project with the following:
```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="OpenTelemetry" Version="1.7.0" />
    <PackageReference Include="OpenTelemetry.Exporter.Console" Version="1.7.0" />
    <PackageReference Include="OpenTelemetry.Exporter.OpenTelemetryProtocol" Version="1.7.0" />
    <PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.7.0" />
    <PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.7.1" />
    <PackageReference Include="OpenTelemetry.Instrumentation.Http" Version="1.6.0-rc.1" />
  </ItemGroup>

</Project>
```
The `dotnet.csproj` file is important for defining your project’s settings, including target framework, nullable reference types, and package references. It informs the .NET SDK and build tools about the components and configurations your project requires.
## Core application
`program.cs` is the core of the .NET application. It uses ASP.NET to create a simple web server. The server has an endpoint `/rolldice` that returns a random number, simulating a basic API.
```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System;
using System.Globalization;
// Set up the web application builder
var builder = WebApplication.CreateBuilder(args);
// Configure OpenTelemetry for detailed tracing information
TracingConfiguration.ConfigureOpenTelemetry();
var app = builder.Build();
// Map the GET request for '/rolldice/{player?}' to a handler
app.MapGet("/rolldice/{player?}", (ILogger<Program> logger, string? player) =>
{
// Start a manual tracing activity
using var activity = TracingConfiguration.StartActivity("HandleRollDice");
// Call the RollDice function to get a dice roll result
var result = RollDice();
if (activity != null)
{
// Add detailed information to the tracing activity for debugging and monitoring
activity.SetTag("player.name", player ?? "anonymous"); // Tag the player’s name, default to 'anonymous' if not provided
activity.SetTag("dice.rollResult", result); // Tag the result of the dice roll
activity.SetTag("operation.success", true); // Flag the operation as successful
activity.SetTag("custom.attribute", "Additional detail here"); // Add a custom attribute for potential further detail
}
// Log the dice roll event
LogRollDice(logger, player, result);
// Return the dice roll result as a string
return result.ToString(CultureInfo.InvariantCulture);
});
// Start the web application
app.Run();
// Log function to log the result of a dice roll
void LogRollDice(ILogger logger, string? player, int result)
{
// Log message varies based on whether a player’s name is provided
if (string.IsNullOrEmpty(player))
{
// Log for an anonymous player
logger.LogInformation("Anonymous player is rolling the dice: {result}", result);
}
else
{
// Log for a named player
logger.LogInformation("{player} is rolling the dice: {result}", player, result);
}
}
// Function to roll a dice and return a random number between 1 and 6
int RollDice()
{
// Use the shared instance of Random for thread safety
return Random.Shared.Next(1, 7);
}
```
## Exporter
The `tracing.cs` file sets up the OpenTelemetry instrumentation. It configures the OTLP (OpenTelemetry Protocol) exporters for traces and initializes the ASP.NET SDK with automatic instrumentation capabilities.
```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using System;
using System.Diagnostics;
using System.Reflection;
// Class to configure OpenTelemetry tracing
public static class TracingConfiguration
{
// Declare an ActivitySource for creating tracing activities
private static readonly ActivitySource ActivitySource = new("MyCustomActivitySource");
// Configure OpenTelemetry with custom settings and instrumentation
public static void ConfigureOpenTelemetry()
{
// Retrieve the service name and version from the executing assembly metadata
var serviceName = Assembly.GetExecutingAssembly().GetName().Name ?? "UnknownService";
var serviceVersion = Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "UnknownVersion";
// Set up the tracer provider with various configurations
Sdk.CreateTracerProviderBuilder()
.SetResourceBuilder(
// Set resource attributes including service name and version
ResourceBuilder.CreateDefault().AddService(serviceName, serviceVersion: serviceVersion)
.AddAttributes(new[] { new KeyValuePair<string, object>("environment", "development") }) // Additional attributes
.AddTelemetrySdk() // Add telemetry SDK information to the traces
.AddEnvironmentVariableDetector()) // Detect resource attributes from environment variables
.AddSource(ActivitySource.Name) // Add the ActivitySource defined above
.AddAspNetCoreInstrumentation() // Add automatic instrumentation for ASP.NET Core
.AddHttpClientInstrumentation() // Add automatic instrumentation for HttpClient requests
.AddOtlpExporter(options => // Configure the OTLP exporter
{
options.Endpoint = new Uri("https://api.axiom.co/v1/traces"); // Set the endpoint for the exporter
options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf; // Set the protocol
options.Headers = "Authorization=Bearer API_TOKEN, X-Axiom-Dataset=DATASET"; // Update API token and dataset
})
.Build(); // Build the tracer provider
}
// Method to start a new tracing activity with an optional activity kind
public static Activity? StartActivity(string activityName, ActivityKind kind = ActivityKind.Internal)
{
// Starts and returns a new activity if sampling allows it, otherwise returns null
return ActivitySource.StartActivity(activityName, kind);
}
}
```
In the `tracing.cs` file, make the following changes:
* The `serviceName` variable is derived from the assembly name. To identify and categorize trace data under a different name, particularly in systems with multiple services, replace its value with the name of the service you want to trace.
* Replace `API_TOKEN` with your Axiom API token.
* Replace `DATASET` with the name of the Axiom dataset where you want to send data.
## Run the instrumented application
1. Run in local development mode using the development settings in `appsettings.development.json`. Ensure your Axiom API token and dataset name are correctly set in `tracing.cs`.
2. Before deploying, run in production mode by switching to `appsettings.json` for production settings. Ensure your Axiom API token and dataset name are correctly set in `tracing.cs`. (See the example after these steps for switching environments.)
3. Run your application with `dotnet run`. Your application starts and you can interact with it by sending requests to the `/rolldice` endpoint.
For example, if you are using port `8080`, your application is accessible locally at `http://localhost:8080/rolldice`. This URL will direct your requests to the `/rolldice` endpoint of your server running on your local machine.
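The following is a minimal sketch of switching between the two modes using the standard ASP.NET Core environment variable; adjust the file and variable names to match your project.
```bash
# Development mode: the host layers appsettings.development.json over appsettings.json
ASPNETCORE_ENVIRONMENT=Development dotnet run

# Production mode: only appsettings.json is used
ASPNETCORE_ENVIRONMENT=Production dotnet run
```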
## Observe the telemetry data
As you interact with your application, traces are collected and exported to Axiom where you can monitor and analyze your application’s performance and behavior.
1. Log into your Axiom account and click the **Datasets** or **Stream** tab.
2. Select your dataset from the list.
3. From the list of fields, click **trace\_id** to view your spans. To query the trace data directly, see the APL sketch below.
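If you prefer querying the trace data directly, the following APL sketch returns the spans of a single trace using the fields listed in the Reference section below. `DATASET_NAME` and `TRACE_ID` are placeholders for your dataset name and a trace ID copied from the stream.
```kusto
['DATASET_NAME']
| where trace_id == 'TRACE_ID'
| project _time, trace_id, span_id, parent_span_id, name, duration
```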
## Dynamic OpenTelemetry Traces dashboard
The data can then be further viewed and analyzed in the traces dashboard, providing insights into the performance and behavior of your application.
1. Log into your Axiom account, select **Dashboards**, and click on the traces dashboard named after your dataset.
2. View the dashboard which displays your total traces, incoming spans, average span duration, errors, slowest operations, and top 10 span errors across services.
## Send data from an existing .NET project
### Manual Instrumentation
Manual instrumentation involves adding code to create, configure, and manage telemetry data, such as traces and spans, providing control over what data is collected.
1. Initialize ActivitySource. Define an `ActivitySource` to create activities (spans) for tracing specific operations within your application.
```csharp
private static readonly ActivitySource MyActivitySource = new ActivitySource("MyActivitySourceName");
```
2. Start and stop activities. Manually start activities (spans) at the beginning of the operations you want to trace and stop them when the operations complete. You can add custom attributes to these activities for more detailed tracing.
```csharp
using var activity = MyActivitySource.StartActivity("MyOperationName");
activity?.SetTag("key", "value");
// Perform the operation here
activity?.Stop();
```
3. Add custom attributes. Enhance activities with custom attributes to provide additional context, making it easier to analyze telemetry data.
```csharp
activity?.SetTag("UserId", userId);
activity?.SetTag("OperationDetail", "Detail about the operation");
```
### Automatic Instrumentation
Automatic instrumentation uses the OpenTelemetry SDK and additional libraries to automatically generate telemetry data for certain operations, such as incoming HTTP requests and database queries.
1. Configure OpenTelemetry SDK. Use the OpenTelemetry SDK to configure automatic instrumentation in your application. This typically involves setting up a `TracerProvider` in your `program.cs` or startup configuration, which automatically captures telemetry data from supported libraries.
```csharp
Sdk.CreateTracerProviderBuilder()
.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("https://api.axiom.co/v1/traces");
// Ensure to replace YOUR_API_TOKEN and YOUR_DATASET_NAME with your actual API token and dataset name
options.Headers = $"Authorization=Bearer YOUR_API_TOKEN, X-Axiom-Dataset=YOUR_DATASET_NAME";
})
.Build();
```
2. Install and configure additional OpenTelemetry instrumentation packages as needed, based on the technologies your application uses. For example, to automatically trace SQL database queries, you might add the corresponding database instrumentation package (see the sketch after this list).
3. With automatic instrumentation set up, no further code changes are required for tracing basic operations. The OpenTelemetry SDK and its instrumentation packages handle the creation and management of traces for supported operations.
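As an example of the previous step, the following sketch assumes you installed the SQL client instrumentation package (for example, `dotnet add package OpenTelemetry.Instrumentation.SqlClient --prerelease`) and adds it alongside the exporter configuration shown above. Treat it as a starting point rather than a drop-in configuration.
```csharp
using OpenTelemetry;
using OpenTelemetry.Trace;
using System;

public static class SqlTracingExample
{
    public static TracerProvider Configure()
    {
        return Sdk.CreateTracerProviderBuilder()
            .AddAspNetCoreInstrumentation()    // Incoming HTTP requests (package installed earlier)
            .AddSqlClientInstrumentation()     // Outgoing SQL commands (from the additional package)
            .AddOtlpExporter(options =>
            {
                options.Endpoint = new Uri("https://api.axiom.co/v1/traces");
                options.Headers = "Authorization=Bearer YOUR_API_TOKEN, X-Axiom-Dataset=YOUR_DATASET_NAME";
            })
            .Build();
    }
}
```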
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ----------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------ |
| **General Trace Information** | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| **HTTP Attributes** | | |
| | attributes.http.request.method | HTTP method used for the request. |
| | attributes.http.response.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.url.path | Path component of the URL accessed. |
| | attributes.url.scheme | Scheme component of the URL accessed. |
| | attributes.server.address | Address of the server handling the request. |
| | attributes.server.port | Port number on the server handling the request. |
| **Network Attributes** | | |
| | attributes.network.protocol.version | Version of the network protocol used. |
| **User Agent** | | |
| | attributes.user\_agent.original | Original user agent string, providing client software and OS. |
| **Custom Attributes** | | |
| | attributes.custom\["custom.attribute"] | Custom attribute provided in the trace. |
| | attributes.custom\["dice.rollResult"] | Result of a dice roll operation. |
| | attributes.custom\["operation.success"] | Indicates if the operation was successful. |
| | attributes.custom\["player.name"] | Name of the player in the operation. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (e.g., server, client, internal). |
| | name | Name of the span. |
| **Resource Attributes** | | |
| | resource.custom.environment | Environment where the trace was captured, e.g., development. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, e.g., dotnet. |
| | telemetry.sdk.name | Name of the telemetry SDK, e.g., opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK, e.g., 1.7.0. |
| **Service Attributes** | | |
| | service.instance.id | Unique identifier for the instance of the service. |
| | service.name | Name of the service generating the trace, e.g., dotnet. |
| | service.version | Version of the service generating the trace, e.g., 1.0.0.0. |
| **Scope Attributes** | | |
| | scope.name | Name of the scope for the operation, e.g., OpenTelemetry.Instrumentation.AspNetCore. |
| | scope.version | Version of the scope, e.g., 1.0.0.0. |
### List of imported libraries
### OpenTelemetry
This is the core SDK for OpenTelemetry in .NET. It provides the foundational tools needed to collect and manage telemetry data within your .NET applications. It’s the base upon which all other OpenTelemetry instrumentation and exporter packages build.
### OpenTelemetry.Exporter.Console
This package allows applications to export telemetry data to the console. It is primarily useful for development and testing purposes, offering a simple way to view the telemetry data your application generates in real time.
### OpenTelemetry.Exporter.OpenTelemetryProtocol
This package enables your application to export telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC or HTTP. It’s vital for sending data to observability platforms that support OTLP, ensuring your telemetry data can be easily analyzed and monitored across different systems.
### OpenTelemetry.Extensions.Hosting
Designed for .NET applications, this package integrates OpenTelemetry with the .NET Generic Host. It simplifies the process of configuring and managing the lifecycle of OpenTelemetry resources such as TracerProvider, making it easier to collect telemetry data in applications that use the hosting model.
### OpenTelemetry.Instrumentation.AspNetCore
This package is designed for instrumenting ASP.NET Core applications. It automatically collects telemetry data about incoming requests and responses. This is important for monitoring the performance and reliability of web applications and APIs built with ASP.NET Core.
### OpenTelemetry.Instrumentation.Http
This package provides automatic instrumentation for HTTP clients in .NET applications. It captures telemetry data about outbound HTTP requests, including details such as request and response headers, duration, success status, and more. It’s key for understanding external dependencies and interactions in your application.
# OpenTelemetry using Golang
This guide explains how to configure a Go app using the Go OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry offers a [single set of APIs and libraries](https://opentelemetry.io/docs/languages/go/instrumentation/) that standardize how you collect and transfer telemetry data. This guide focuses on setting up OpenTelemetry in a Go app to send traces to Axiom.
## Prerequisites
* Go 1.19 or higher: Ensure you have Go version 1.19 or higher installed in your environment.
* Go app: Use your own app written in Go or start with the provided `main.go` sample below.
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Installing Dependencies
First, run the following in your terminal to install the necessary Go packages:
```bash
go get go.opentelemetry.io/otel
go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
go get go.opentelemetry.io/otel/sdk/resource
go get go.opentelemetry.io/otel/sdk/trace
go get go.opentelemetry.io/otel/semconv/v1.24.0
go get go.opentelemetry.io/otel/trace
go get go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
go get go.opentelemetry.io/otel/propagation
```
This installs the OpenTelemetry Go SDK, the OTLP (OpenTelemetry Protocol) trace exporter, and other necessary packages for instrumentation and resource definition.
## Initializing a Go module and managing dependencies
Before installing the OpenTelemetry dependencies, ensure your Go project is properly initialized as a module and all dependencies are correctly managed. This step is important for resolving import issues and managing your project’s dependencies effectively.
### Initialize a Go module
If your project is not already initialized as a Go module, run the following command in your project’s root directory. This step creates a `go.mod` file which tracks your project’s dependencies.
```bash
go mod init <module-path>
```
Replace `<module-path>` with your project’s name or the GitHub repository path if you plan to push the code to GitHub. For example, `go mod init github.com/yourusername/yourprojectname`.
### Manage dependencies
After initializing your Go module, tidy up your project’s dependencies. This ensures that your `go.mod` file accurately reflects the packages your project depends on, including the correct versions of the OpenTelemetry libraries you'll be using.
Run the following command in your project’s root directory:
```bash
go mod tidy
```
This command will download the necessary dependencies and update your `go.mod` and `go.sum` files accordingly. It’s a good practice to run `go mod tidy` after adding new imports to your project or periodically to keep dependencies up to date.
## HTTP server configuration (main.go)
`main.go` is the entry point of the app. It invokes `InstallExportPipeline` from `exporter.go` to set up the tracing exporter. It also sets up a basic HTTP server with OpenTelemetry instrumentation to demonstrate how telemetry data can be collected and exported in a simple web app context. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```go
// main.go
package main
import (
"context"
"fmt"
"log"
"math/rand"
"net"
"net/http"
"os"
"os/signal"
"time"
// OpenTelemetry imports for tracing and observability.
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
)
// main function starts the application and handles run function errors.
func main() {
if err := run(); err != nil {
log.Fatalln(err)
}
}
// run sets up signal handling, tracer initialization, and starts an HTTP server.
func run() error {
// Creating a context that listens for the interrupt signal from the OS.
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
// Initializes tracing and returns a function to shut down OpenTelemetry cleanly.
otelShutdown, err := SetupTracer()
if err != nil {
return err
}
defer func() {
if shutdownErr := otelShutdown(ctx); shutdownErr != nil {
log.Printf("failed to shutdown OpenTelemetry: %v", shutdownErr) // Log fatal errors during server shutdown
}
}()
// Configuring the HTTP server settings.
srv := &http.Server{
Addr: ":8080", // Server address
BaseContext: func(_ net.Listener) context.Context { return ctx },
ReadTimeout: 5 * time.Second, // Server read timeout
WriteTimeout: 15 * time.Second, // Server write timeout
Handler: newHTTPHandler(), // HTTP handler
}
// Starting the HTTP server in a new goroutine.
go func() {
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
log.Fatalf("HTTP server ListenAndServe: %v", err)
}
}()
// Wait for interrupt signal to gracefully shut down the server with a timeout context.
<-ctx.Done()
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel() // Ensures cancel function is called on exit
if err := srv.Shutdown(shutdownCtx); err != nil {
log.Fatalf("HTTP server Shutdown: %v", err) // Log fatal errors during server shutdown
}
return nil
}
// newHTTPHandler configures the HTTP routes and integrates OpenTelemetry.
func newHTTPHandler() http.Handler {
mux := http.NewServeMux() // HTTP request multiplexer
// Wrapping the handler function with OpenTelemetry instrumentation.
handleFunc := func(pattern string, handlerFunc func(http.ResponseWriter, *http.Request)) {
handler := otelhttp.WithRouteTag(pattern, http.HandlerFunc(handlerFunc))
mux.Handle(pattern, handler) // Associate pattern with handler
}
// Registering route handlers with OpenTelemetry instrumentation
handleFunc("/rolldice", rolldice)
handleFunc("/roll_with_link", rollWithLink)
handler := otelhttp.NewHandler(mux, "/")
return handler
}
// rolldice handles the /rolldice route by generating a random dice roll.
func rolldice(w http.ResponseWriter, r *http.Request) {
_, span := otel.Tracer("example-tracer").Start(r.Context(), "rolldice")
defer span.End()
// Generating a random dice roll.
randGen := rand.New(rand.NewSource(time.Now().UnixNano()))
roll := 1 + randGen.Intn(6)
// Writing the dice roll to the response.
fmt.Fprintf(w, "Rolled a dice: %d\n", roll)
}
// rollWithLink handles the /roll_with_link route by creating a new span with a link to the parent span.
func rollWithLink(w http.ResponseWriter, r *http.Request) {
ctx, span := otel.Tracer("example-tracer").Start(r.Context(), "roll_with_link")
defer span.End()
/**
* Create a new span for rolldice with a link to the parent span.
* This link helps correlate events that are related but not directly a parent-child relationship.
*/
rollDiceCtx, rollDiceSpan := otel.Tracer("example-tracer").Start(ctx, "rolldice",
trace.WithLinks(trace.Link{
SpanContext: span.SpanContext(),
Attributes: nil,
}),
)
defer rollDiceSpan.End()
// Generating a random dice roll linked to the parent context.
randGen := rand.New(rand.NewSource(time.Now().UnixNano()))
roll := 1 + randGen.Intn(6)
// Writing the linked dice roll to the response.
fmt.Fprintf(w, "Dice roll result (with link): %d\n", roll)
// Use the rollDiceCtx if needed.
_ = rollDiceCtx
}
```
## Exporter configuration (exporter.go)
`exporter.go` is responsible for setting up the OpenTelemetry tracing exporter. It defines the resource attributes, initializes the tracer, and configures the OTLP (OpenTelemetry Protocol) exporter with the appropriate endpoint and headers, allowing your app to send telemetry data to Axiom.
```go
package main
import (
"context" // For managing request-scoped values, cancellation signals, and deadlines.
"crypto/tls" // For configuring TLS options, like certificates.
// OpenTelemetry imports for setting up tracing and exporting telemetry data.
"go.opentelemetry.io/otel" // Core OpenTelemetry APIs for managing tracers.
"go.opentelemetry.io/otel/attribute" // For creating and managing trace attributes.
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" // HTTP trace exporter for OpenTelemetry Protocol (OTLP).
"go.opentelemetry.io/otel/propagation" // For managing context propagation formats.
"go.opentelemetry.io/otel/sdk/resource" // For defining resources that describe an entity producing telemetry.
"go.opentelemetry.io/otel/sdk/trace" // For configuring tracing, like sampling and processors.
semconv "go.opentelemetry.io/otel/semconv/v1.24.0" // Semantic conventions for resource attributes.
)
const (
serviceName = "axiom-go-otel" // Name of the service for tracing.
serviceVersion = "0.1.0" // Version of the service.
otlpEndpoint = "api.axiom.co" // OTLP collector endpoint.
bearerToken = "Bearer $API_TOKEN" // Authorization token.
deploymentEnvironment = "production" // Deployment environment.
)
func SetupTracer() (func(context.Context) error, error) {
ctx := context.Background()
return InstallExportPipeline(ctx) // Setup and return the export pipeline for telemetry data.
}
func Resource() *resource.Resource {
// Defines resource with service name, version, and environment.
return resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
semconv.ServiceVersionKey.String(serviceVersion),
attribute.String("environment", deploymentEnvironment),
)
}
func InstallExportPipeline(ctx context.Context) (func(context.Context) error, error) {
// Sets up OTLP HTTP exporter with endpoint, headers, and TLS config.
exporter, err := otlptracehttp.New(ctx,
otlptracehttp.WithEndpoint(otlpEndpoint),
otlptracehttp.WithHeaders(map[string]string{
"Authorization": bearerToken,
"X-AXIOM-DATASET": "$DATASET_NAME",
}),
otlptracehttp.WithTLSClientConfig(&tls.Config{}),
)
if err != nil {
return nil, err
}
// Configures the tracer provider with the exporter and resource.
tracerProvider := trace.NewTracerProvider(
trace.WithBatcher(exporter),
trace.WithResource(Resource()),
)
otel.SetTracerProvider(tracerProvider)
// Sets global propagator to W3C Trace Context and Baggage.
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
))
return tracerProvider.Shutdown, nil // Returns a function to shut down the tracer provider.
}
```
## Run the app
To run the app, execute both `exporter.go` and `main.go`. Use the command `go run main.go exporter.go` to start the app. Once your app is running, traces collected by your app are exported to Axiom. The server starts on the specified port, and you can interact with it by sending requests to the `/rolldice` endpoint.
For example, if you are using port `8080`, your app will be accessible locally at `http://localhost:8080/rolldice`. This URL will direct your requests to the `/rolldice` endpoint of your server running on your local machine.
## Observe the telemetry data in Axiom
After deploying your app, you can log into your Axiom account to view and analyze the telemetry data. As you interact with your app, traces will be collected and exported to Axiom, where you can monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, providing insights into the performance and behavior of your app.
## Send data from an existing Golang project
### Manual Instrumentation
Manual instrumentation in Go involves managing spans within your code to track operations and events. This method offers precise control over what is instrumented and how spans are configured.
1. Initialize the tracer:
Use the OpenTelemetry API to obtain a tracer instance. This tracer will be used to start and manage spans.
```go
tracer := otel.Tracer("serviceName")
```
2. Create and manage spans:
Manually start spans before the operations you want to trace and ensure they are ended after the operations complete.
```go
ctx, span := tracer.Start(context.Background(), "operationName")
defer span.End()
// Perform the operation here
```
3. Annotate spans:
Enhance spans with additional information using attributes or events to provide more context about the traced operation.
```go
span.SetAttributes(attribute.String("key", "value"))
span.AddEvent("eventName", trace.WithAttributes(attribute.String("key", "value")))
```
### Automatic Instrumentation
Automatic instrumentation in Go uses libraries and integrations that automatically create spans for operations, simplifying the addition of observability to your app.
1. Instrumentation libraries:
Use `OpenTelemetry-contrib` libraries designed for automatic instrumentation of standard Go frameworks and libraries, such as `net/http`.
```go
import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
```
2. Wrap handlers and clients:
Automatically instrument HTTP servers and clients by wrapping them with OpenTelemetry’s instrumentation. For HTTP servers, wrap your handlers with `otelhttp.NewHandler`. For HTTP clients, see the sketch after this list.
```go
http.Handle("/path", otelhttp.NewHandler(handler, "operationName"))
```
3. Minimal code changes:
After setting up automatic instrumentation, no further changes are required for tracing standard operations. The instrumentation takes care of starting, managing, and ending spans.
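For outgoing requests, a minimal client-side sketch using the same `otelhttp` package looks like the following; the URL is a placeholder for whatever service you call.
```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	// Wrap the default transport so every outgoing request is recorded as a client span.
	client := http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}

	resp, err := client.Get("http://localhost:8080/rolldice") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}
```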
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ---------------------------- | --------------------------------------- | ------------------------------------------------------------------- |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| | attributes.custom.user\_agent.original | Original user agent string, providing client software and OS. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| | attributes.net.sock.peer.addr | Socket peer address, indicating the IP version used. |
| | attributes.net.sock.peer.port | Socket peer port number. |
| | attributes.custom.net.protocol.version | Protocol version used in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| **Resource Attributes** | | |
| | resource.environment | Environment where the trace was captured, for example, production. |
| | attributes.custom.http.wrote\_bytes | Number of bytes written in the HTTP response. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK, for example, go. |
| | telemetry.sdk.name | Name of the telemetry SDK, for example, opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
### List of imported libraries
### OpenTelemetry Go SDK
**`go.opentelemetry.io/otel`**
This is the core SDK for OpenTelemetry in Go. It provides the necessary tools to create and manage telemetry data (traces, metrics, and logs).
### OTLP Trace Exporter
**`go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`**
This package allows your app to export telemetry data over HTTP using the OpenTelemetry Protocol (OTLP). It’s important for sending data to Axiom or any other backend that supports OTLP.
### Resource and Trace Packages
**`go.opentelemetry.io/otel/sdk/resource`** and **`go.opentelemetry.io/otel/sdk/trace`**
These packages help define the properties of your telemetry data, such as service name and version, and manage trace data within your app.
### Semantic Conventions
**`go.opentelemetry.io/otel/semconv/v1.24.0`**
This package provides standardized schema URLs and attributes, ensuring consistency across different OpenTelemetry implementations.
### Tracing API
**`go.opentelemetry.io/otel/trace`**
This package offers the API for tracing. It enables you to create spans, record events, and manage context propagation in your app.
### HTTP Instrumentation
**`go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp`**
Used for instrumenting HTTP clients and servers. It automatically records data about HTTP requests and responses, which is essential for web apps.
### Propagators
**`go.opentelemetry.io/otel/propagation`**
This package provides the ability to propagate context and trace information across service boundaries.
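As a small illustration of what the propagators do, the sketch below manually injects the current trace context into an outgoing request’s headers so a downstream service can continue the trace. It assumes the global propagator was set as in `exporter.go`; the URL is a placeholder.
```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

func main() {
	ctx := context.Background()
	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:8080/rolldice", nil) // placeholder URL

	// Write the W3C traceparent and baggage headers for the active context into the request.
	otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))

	// Empty in this sketch because no span is active on ctx; inside an instrumented handler it carries the trace ID.
	fmt.Println(req.Header.Get("Traceparent"))
}
```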
# Send data from Java app using OpenTelemetry
This page explains how to configure a Java app using the Java OpenTelemetry SDK to send telemetry data to Axiom.
OpenTelemetry provides a unified approach to collecting telemetry data from your Java applications. This page demonstrates how to configure OpenTelemetry in a Java app to send telemetry data to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
{/* list separator */}
* [Install JDK 11](https://www.oracle.com/java/technologies/java-se-glance.html) or later
* [Install Maven](https://maven.apache.org/download.cgi)
* Use your own app written in Java or the provided `DiceRollerApp.java` sample.
## Create project
To create a Java project, run the Maven archetype command in the terminal:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=MyProject -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```
This command creates a new project in a directory named `MyProject` with a standard directory structure.
## Create core app
`DiceRollerApp.java` is the core of the sample app. It simulates rolling a dice and demonstrates the usage of OpenTelemetry for tracing. The app includes two methods: one for a simple dice roll and another that demonstrates the usage of span links to establish relationships between spans across different traces.
Create the `DiceRollerApp.java` in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.util.Random;
public class DiceRollerApp {
private static final Tracer tracer;
static {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
tracer = openTelemetry.getTracer(DiceRollerApp.class.getName());
}
public static void main(String[] args) {
rollDice();
rollDiceWithLink();
}
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Rolled a dice: " + roll);
} finally {
span.end();
}
}
private static void rollDiceWithLink() {
Span parentSpan = tracer.spanBuilder("rollWithLink").startSpan();
try (Scope parentScope = parentSpan.makeCurrent()) {
Span childSpan = tracer.spanBuilder("rolldice")
.addLink(parentSpan.getSpanContext())
.startSpan();
try (Scope childScope = childSpan.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Dice roll result (with link): " + roll);
} finally {
childSpan.end();
}
} finally {
parentSpan.end();
}
}
}
```
## Configure OpenTelemetry
`OtelConfiguration.java` sets up the OpenTelemetry SDK and configures the exporter to send data to Axiom. It initializes the tracer provider, sets up the Axiom exporter, and configures the resource attributes.
Create the `OtelConfiguration.java` file in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import java.util.concurrent.TimeUnit;
public class OtelConfiguration {
private static final String SERVICE_NAME = "YOUR_SERVICE_NAME";
private static final String SERVICE_VERSION = "YOUR_SERVICE_VERSION";
private static final String OTLP_ENDPOINT = "https://api.axiom.co/v1/traces";
private static final String BEARER_TOKEN = "Bearer API_TOKEN";
private static final String AXIOM_DATASET = "DATASET_NAME";
public static OpenTelemetry initializeOpenTelemetry() {
Resource resource = Resource.getDefault()
.merge(Resource.create(Attributes.of(
AttributeKey.stringKey("service.name"), SERVICE_NAME,
AttributeKey.stringKey("service.version"), SERVICE_VERSION
)));
OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
.setEndpoint(OTLP_ENDPOINT)
.addHeader("Authorization", BEARER_TOKEN)
.addHeader("X-Axiom-Dataset", AXIOM_DATASET)
.build();
SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(spanExporter)
.setScheduleDelay(100, TimeUnit.MILLISECONDS)
.build())
.setResource(resource)
.build();
OpenTelemetrySdk openTelemetry = OpenTelemetrySdk.builder()
.setTracerProvider(sdkTracerProvider)
.buildAndRegisterGlobal();
Runtime.getRuntime().addShutdownHook(new Thread(sdkTracerProvider::close));
return openTelemetry;
}
}
```
* Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable (see the sketch after this list).
* Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
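A minimal sketch of the environment-variable approach follows. The variable names `AXIOM_API_TOKEN` and `AXIOM_DATASET` are placeholders chosen for this example, not names defined by Axiom.
```java
package com.example;

// Reads the Axiom credentials from environment variables instead of hard-coding them.
public final class AxiomCredentials {
    public static String bearerToken() {
        return "Bearer " + System.getenv("AXIOM_API_TOKEN");
    }

    public static String dataset() {
        return System.getenv("AXIOM_DATASET");
    }
}
```
In `OtelConfiguration.java`, you would then pass `AxiomCredentials.bearerToken()` and `AxiomCredentials.dataset()` to the exporter headers instead of the hard-coded constants.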
## Configure project
The `pom.xml` file defines the project structure and dependencies for Maven. It includes the necessary OpenTelemetry libraries and configures the build process.
Update the `pom.xml` file in the root of your project directory with the following content:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>axiom-otel-java</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <opentelemetry.version>1.18.0</opentelemetry.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-api</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-sdk</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-otlp</artifactId>
      <version>${opentelemetry.version}</version>
    </dependency>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-netty-shaded</artifactId>
      <version>1.42.1</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
          <release>11</release>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.0.0-M5</version>
        <configuration>
          <skipTests>true</skipTests>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.DiceRollerApp</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
## Run the instrumented app
To run your Java app with OpenTelemetry instrumentation, follow these steps:
1. Clean the project and download dependencies:
```bash
mvn clean
```
2. Compile the code:
```bash
mvn compile
```
3. Package the app:
```bash
mvn package
```
4. Run the app:
```bash
java -jar target/axiom-otel-java-1.0-SNAPSHOT.jar
```
The app executes the `rollDice()` and `rollDiceWithLink()` methods, generates telemetry data, and sends the data to Axiom.
## Observe telemetry data in Axiom
As the app runs, it sends traces to Axiom. To view the traces:
1. In Axiom, click the **Stream** tab.
2. Click your dataset.
Axiom provides a dynamic dashboard for visualizing and analyzing your OpenTelemetry traces. This dashboard offers insights into the performance and behavior of your app. To view the dashboard:
1. In Axiom, click the **Dashboards** tab.
2. Look for the OpenTelemetry traces dashboard or create a new one.
3. Customize the dashboard to show the event data and visualizations most relevant to the app.
## Send data from an existing Java project
### Manual instrumentation
Manual instrumentation gives fine-grained control over which parts of the app are traced and what information is included in the traces. It requires adding OpenTelemetry-specific code to the app.
Set up OpenTelemetry. Create a configuration class to initialize OpenTelemetry with necessary settings, exporters, and span processors.
```java
// OtelConfiguration.java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
public class OtelConfiguration {
public static OpenTelemetry initializeOpenTelemetry() {
OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
.setEndpoint("https://api.axiom.co/v1/traces")
.addHeader("Authorization", "Bearer API_TOKEN")
.addHeader("X-Axiom-Dataset", "DATASET_NAME")
.build();
SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(spanExporter).build())
.build();
return OpenTelemetrySdk.builder()
.setTracerProvider(tracerProvider)
.buildAndRegisterGlobal();
}
}
```
Spans represent units of work in the app. They have a start time and duration and can be nested.
```java
// DiceRollerApp.java
package com.example;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.util.Random;
public class DiceRollerApp {
private static final Tracer tracer;
static {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
tracer = openTelemetry.getTracer("com.example.DiceRollerApp");
}
public static void main(String[] args) {
Span mainSpan = tracer.spanBuilder("Main").startSpan();
try (Scope scope = mainSpan.makeCurrent()) {
rollDice();
} finally {
mainSpan.end();
}
}
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
// Simulate dice roll
int result = new Random().nextInt(6) + 1;
System.out.println("Rolled a dice: " + result);
} finally {
span.end();
}
}
}
```
Custom spans are manually managed to provide detailed insights into specific functions or methods within the app.
Spans can be annotated with attributes and events to provide more context about the operation being performed.
```java
private static void rollDice() {
Span span = tracer.spanBuilder("rollDice").startSpan();
try (Scope scope = span.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
span.setAttribute("roll.value", roll);
span.addEvent("Dice rolled");
System.out.println("Rolled a dice: " + roll);
} finally {
span.end();
}
}
```
Span links allow association of spans that aren’t in a parent-child relationship.
```java
private static void rollDiceWithLink() {
Span parentSpan = tracer.spanBuilder("rollWithLink").startSpan();
try (Scope parentScope = parentSpan.makeCurrent()) {
Span childSpan = tracer.spanBuilder("rolldice")
.addLink(parentSpan.getSpanContext())
.startSpan();
try (Scope childScope = childSpan.makeCurrent()) {
int roll = 1 + new Random().nextInt(6);
System.out.println("Dice roll result (with link): " + roll);
} finally {
childSpan.end();
}
} finally {
parentSpan.end();
}
}
```
### Automatic instrumentation
Automatic instrumentation simplifies adding telemetry to a Java app by automatically capturing data from supported libraries and frameworks.
Ensure all necessary OpenTelemetry libraries are included in your Maven `pom.xml`.
```xml
<dependencies>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
    <version>{opentelemetry_version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>{opentelemetry_version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-instrumentation-httpclient</artifactId>
    <version>{instrumentation_version}</version>
  </dependency>
</dependencies>
```
Dependencies include the OpenTelemetry SDK and instrumentation libraries that automatically capture data from common Java libraries.
Implement an initialization class to configure the OpenTelemetry SDK along with auto-instrumentation for frameworks used by the app.
```java
// AutoInstrumentationSetup.java
package com.example;
import io.opentelemetry.instrumentation.httpclient.HttpClientInstrumentation;
import io.opentelemetry.api.OpenTelemetry;
public class AutoInstrumentationSetup {
public static void setup() {
OpenTelemetry openTelemetry = OtelConfiguration.initializeOpenTelemetry();
HttpClientInstrumentation.instrument(openTelemetry);
}
}
```
Auto-instrumentation is initialized early in the app lifecycle to ensure all relevant activities are automatically captured.
```java
// Main.java
package com.example;
public class Main {
public static void main(String[] args) {
AutoInstrumentationSetup.setup(); // Initialize OpenTelemetry auto-instrumentation
DiceRollerApp.main(args); // Start the application logic
}
}
```
## Reference
### List of OpenTelemetry trace fields
| Field category | Field name | Description |
| ------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------- |
| General trace information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| Operational details | | |
| | duration | Time taken for the operation, typically in microseconds or milliseconds. |
| | kind | Type of span. For example, `server`, `internal`. |
| | name | Name of the span, often a high-level title for the operation. |
| Scope and instrumentation | | |
| | scope.name | Instrumentation scope, typically the Java package or app component. For example, `com.example.DiceRollerApp`. |
| Service attributes | | |
| | service.name | Name of the service generating the trace. For example, `axiom-java-otel`. |
| | service.version | Version of the service generating the trace. For example, `0.1.0`. |
| Telemetry SDK attributes | | |
| | telemetry.sdk.language | Programming language of the SDK used for telemetry, typically `java`. |
| | telemetry.sdk.name | Name of the telemetry SDK. For example, `opentelemetry`. |
| | telemetry.sdk.version | Version of the telemetry SDK used in the tracing setup. For example, `1.18.0`. |
### List of imported libraries
The Java implementation of OpenTelemetry uses the following key libraries.
`io.opentelemetry:opentelemetry-api`
This package provides the core OpenTelemetry API for Java. It defines the interfaces and classes that developers use to instrument their apps manually. This includes the `Tracer`, `Span`, and `Context` classes, which are fundamental to creating and managing traces in your app. The API is designed to be stable and consistent, allowing developers to instrument their code without tying it to a specific implementation.
`io.opentelemetry:opentelemetry-sdk`
The opentelemetry-sdk package is the reference implementation of the OpenTelemetry API for Java. It provides the actual capability behind the API interfaces, including span creation, context propagation, and resource management. This SDK is highly configurable and extensible, allowing developers to customize how telemetry data is collected, processed, and exported. It’s the core component that brings OpenTelemetry to life in a Java app.
`io.opentelemetry:opentelemetry-exporter-otlp`
This package provides an exporter that sends telemetry data using the OpenTelemetry Protocol (OTLP). OTLP is the standard protocol for transmitting telemetry data in the OpenTelemetry ecosystem. This exporter allows Java applications to send their collected traces, metrics, and logs to any backend that supports OTLP, such as Axiom. The use of OTLP ensures broad compatibility and a standardized way of transmitting telemetry data across different systems and platforms.
`io.opentelemetry:opentelemetry-sdk-extension-autoconfigure`
This extension package provides auto-configuration capabilities for the OpenTelemetry SDK. It allows developers to configure the SDK using environment variables or system properties, making it easier to set up and deploy OpenTelemetry-instrumented applications in different environments. This is particularly useful for containerized applications or those running in cloud environments where configuration through environment variables is common.
`io.opentelemetry:opentelemetry-sdk-trace`
This package is part of the OpenTelemetry SDK and focuses specifically on tracing capability. It includes important classes like `SdkTracerProvider` and `BatchSpanProcessor`. The `SdkTracerProvider` is responsible for creating and managing tracers, while the `BatchSpanProcessor` efficiently processes and exports spans in batches, similar to its Node.js counterpart. This batching mechanism helps optimize the performance of trace data export in OpenTelemetry-instrumented Java applications.
`io.opentelemetry:opentelemetry-sdk-common`
This package provides common capability used across different parts of the OpenTelemetry SDK. It includes utilities for working with attributes, resources, and other shared concepts in OpenTelemetry. This package helps ensure consistency across the SDK and simplifies the implementation of cross-cutting concerns in telemetry data collection and processing.
# OpenTelemetry using Next.js
This guide demonstrates how to configure OpenTelemetry in a Next.js app to send telemetry data to Axiom.
OpenTelemetry provides a standardized way to collect and export telemetry data from your Next.js apps. This guide walks you through the process of configuring OpenTelemetry in a Next.js app to send traces to Axiom using the OpenTelemetry SDK.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
* [Install Node.js version 14](https://nodejs.org/en/download/package-manager) or newer.
* An existing Next.js app. Alternatively, use the provided example project.
## Initial setup
For initial setup, choose one of the following options:
* Use the `@vercel/otel` package for easier setup.
* Set up your app without the `@vercel/otel` package.
### Initial setup with @vercel/otel
To use the `@vercel/otel` package for easier setup, run the following command to install the dependencies:
```bash
npm install @vercel/otel @opentelemetry/exporter-trace-otlp-http @opentelemetry/sdk-trace-node
```
Create an `instrumentation.ts` file in the root of your project with the following content:
```js
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { registerOTel } from '@vercel/otel';
export function register() {
registerOTel({
serviceName: 'nextjs-app',
spanProcessors: [
new SimpleSpanProcessor(
new OTLPTraceExporter({
url: 'https://api.axiom.co/v1/traces',
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': `${process.env.DATASET_NAME}`,
},
})
),
],
});
}
```
Add the `API_TOKEN` and `DATASET_NAME` environment variables to your `.env` file. For example:
```bash
API_TOKEN=xaat-123
DATASET_NAME=my-dataset
```
### Initial setup without @vercel/otel
To set up your app without the `@vercel/otel` package, run the following command to install the dependencies:
```bash
npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/resources @opentelemetry/semantic-conventions @opentelemetry/sdk-trace-node
```
Create an `instrumentation.ts` file in the root of your project with the following content:
```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
export function register() {
const sdk = new NodeSDK({
resource: new Resource({
[SEMRESATTRS_SERVICE_NAME]: 'nextjs-app',
}),
spanProcessor: new SimpleSpanProcessor(
new OTLPTraceExporter({
url: 'https://api.axiom.co/v1/traces',
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': process.env.DATASET_NAME,
},
})
),
});
sdk.start();
}
```
Add the `API_TOKEN` and `DATASET_NAME` environment variables to your `.env` file. For example:
```bash
API_TOKEN=xaat-123
DATASET_NAME=my-dataset
```
## Set up the Next.js environment
### layout.tsx
In the `src/app/layout.tsx` file, import and call the `register` function from the `instrumentation` module:
```js
import { register } from '../../instrumentation';
register();
export default function RootLayout({ children }: Readonly<{ children: React.ReactNode }>) {
return (
<html>
<body>{children}</body>
</html>
);
}
```
This file sets up the root layout for your Next.js app and initializes the OpenTelemetry instrumentation by calling the `register` function.
### route.ts
Create a `route.ts` file in `src/app/api/rolldice/` to handle HTTP GET requests to the `/rolldice` API endpoint:
```js
// src/app/api/rolldice/route.ts
import { NextResponse } from 'next/server';
function getRandomNumber(min: number, max: number): number {
return Math.floor(Math.random() * (max - min + 1) + min);
}
export async function GET() {
const diceRoll = getRandomNumber(1, 6);
return NextResponse.json(diceRoll.toString());
}
```
This file defines a route handler for the `/rolldice` endpoint, which returns a random number between 1 and 6.
### next.config.js
Configure the `next.config.js` file to enable instrumentation and resolve the `tls` module:
```js
module.exports = {
experimental: {
// Enable the instrumentation hook for collecting telemetry data
instrumentationHook: true,
},
webpack: (config, { isServer }) => {
if (!isServer) {
config.resolve.fallback = {
// Disable the 'tls' module on the client side
tls: false,
};
}
return config;
},
};
```
This configuration enables the instrumentation hook and resolves the `tls` module for the client-side build.
### tsconfig.json
Add the following options to your `tsconfig.json` file to ensure compatibility with OpenTelemetry and Next.js:
```json
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"plugins": [
{
"name": "next"
}
],
"paths": {
"@/*": ["./src/*"]
}
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
"exclude": ["node_modules"]
}
```
This file configures the TypeScript compiler options for your Next.js app.
## Project structure
After completing the steps above, the project structure of your Next.js app is the following:
```bash
my-nextjs-app/
├── src/
│ ├── app/
│ │ ├── api/
│ │ │ └── rolldice/
│ │ │ └── route.ts
│ │ ├── page.tsx
│ │ └── layout.tsx
│ └── ...
├── instrumentation.ts
├── next.config.js
├── tsconfig.json
└── ...
```
## Run the app and observe traces in Axiom
Use the following command to run your Next.js app with OpenTelemetry instrumentation in development mode:
```bash
npm run dev
```
This command starts the Next.js development server, and the OpenTelemetry instrumentation automatically collects traces. As you interact with your app, traces are sent to Axiom where you can monitor and analyze your app’s performance and behavior.
In Axiom, go to the **Stream** tab and click your dataset. This page displays the traces sent to Axiom and lets you monitor and analyze your app’s performance and behavior.
Go to the **Dashboards** tab and click **OpenTelemetry Traces**. This pre-built traces dashboard provides further insights into the performance and behavior of your app.
## Send data from an existing Next.js project
### Manual instrumentation
Manual instrumentation allows you to create, configure, and manage spans and traces, providing detailed control over telemetry data collection at specific points within the app.
1. Set up and retrieve a tracer from the OpenTelemetry API. This tracer starts and manages spans within your app components or API routes.
```js
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('nextjs-app');
```
2. Manually start a span at the beginning of significant operations or transactions within your Next.js app and ensure you end it appropriately. This approach is for tracing specific custom events or operations not automatically captured by instrumentations.
```js
const span = tracer.startSpan('operationName');
try {
// Perform your operation here
} finally {
span.end();
}
```
3. Enhance the span with additional information such as user details or operation outcomes, which can provide deeper insights when analyzing telemetry data.
```js
span.setAttribute('user_id', userId);
span.setAttribute('operation_status', 'success');
```
### Automatic instrumentation
Automatic instrumentation uses the capabilities of OpenTelemetry to automatically capture telemetry data for standard operations such as HTTP requests and responses.
1. Use the OpenTelemetry Node SDK to configure your app to automatically instrument supported libraries and frameworks. Set up `NodeSDK` in an `instrumentation.ts` file in your project.
```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-node';
export function register() {
const sdk = new NodeSDK({
resource: new Resource({ [SEMRESATTRS_SERVICE_NAME]: 'nextjs-app' }),
spanProcessor: new BatchSpanProcessor(
new OTLPTraceExporter({
url: 'https://api.axiom.co/v1/traces',
headers: {
Authorization: `Bearer ${process.env.API_TOKEN}`,
'X-Axiom-Dataset': `${process.env.DATASET_NAME}`,
},
})
),
});
sdk.start();
}
```
2. Include necessary OpenTelemetry instrumentation packages to automatically capture telemetry from Node.js libraries like HTTP and any other middleware used by Next.js. (See the sketch after this list for one way to wire them up.)
3. Call the `register` function from the `instrumentation.ts` within your app startup file or before your app starts handling traffic to initialize the OpenTelemetry instrumentation.
```js
// In pages/_app.js or an equivalent entry point
import { register } from '../instrumentation';
register();
```
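The following is a minimal sketch of step 2, assuming you add the `@opentelemetry/auto-instrumentations-node` package; adjust the set of instrumentations to the libraries your app actually uses.
```bash
npm install @opentelemetry/auto-instrumentations-node
```
```js
// instrumentation.ts (sketch): register auto-instrumentations with the Node SDK
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

export function register() {
  const sdk = new NodeSDK({
    // ...resource and span processor configuration from step 1...
    instrumentations: [getNodeAutoInstrumentations()],
  });
  sdk.start();
}
```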
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| --------------------------- | ------------------------------------- | ------------------------------------------------------------------ |
| General Trace Information | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| HTTP Attributes | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.target | Specific target of the HTTP request. |
| Custom Attributes | | |
| | attributes.custom\["next.route"] | Custom attribute defining the Next.js route. |
| | attributes.custom\["next.rsc"] | Indicates if React Server Components are used. |
| | attributes.custom\["next.span\_name"] | Custom name of the span within Next.js context. |
| | attributes.custom\["next.span\_type"] | Type of the Next.js span, describing the operation context. |
| Resource Process Attributes | | |
| | resource.process.pid | Process ID of the Node.js app. |
| | resource.process.runtime.description | Description of the runtime environment. For example, Node.js. |
| | resource.process.runtime.name | Name of the runtime environment. For example, nodejs. |
| | resource.process.runtime.version | Version of the runtime environment. For example, 18.17.0. |
| | resource.process.executable.name | Executable name running the process. For example, next-server. |
| Resource Host Attributes | | |
| | resource.host.arch | Architecture of the host machine. For example, arm64. |
| | resource.host.name | Name of the host machine. For example, MacBook-Pro.local. |
| Operational Details | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| Scope Attributes | | |
| | scope.name | Name of the scope for the operation. For example, next.js. |
| | scope.version | Version of the scope. For example, 0.0.1. |
| Service Attributes | | |
| | service.name | Name of the service generating the trace. For example, nextjs-app. |
| Telemetry SDK Attributes | | |
| | telemetry.sdk.language | Language of the telemetry SDK. For example, nodejs. |
| | telemetry.sdk.name | Name of the telemetry SDK. For example, opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK. For example, 1.23.0. |
### List of imported libraries
`@opentelemetry/api`
The core API for OpenTelemetry in JavaScript, providing the necessary interfaces and utilities for tracing, metrics, and context propagation. In the context of Next.js, it allows developers to manually instrument custom spans, manipulate context, and access the active span if needed.
`@opentelemetry/exporter-trace-otlp-http`
This exporter enables your Next.js app to send trace data over HTTP to any backend that supports the OTLP (OpenTelemetry Protocol), such as Axiom. Using OTLP ensures compatibility with a wide range of observability tools and standardizes the data export process.
`@opentelemetry/resources`
This defines the Resource which represents the entity producing telemetry. In Next.js, Resources can be used to describe the app (for example, service name, version) and are attached to all exported telemetry, aiding in identifying data in backend systems.
`@opentelemetry/sdk-node`
The OpenTelemetry SDK for Node.js which provides a comprehensive set of tools for instrumenting Node.js apps. It includes automatic instrumentation for popular libraries and frameworks, as well as APIs for manual instrumentation. In the Next.js setup, it’s used to configure and initialize the OpenTelemetry SDK.
`@opentelemetry/semantic-conventions`
A set of standard attributes and conventions for describing resources, spans, and metrics in OpenTelemetry. By adhering to these conventions, your Next.js app’s telemetry data becomes more consistent and interoperable with other OpenTelemetry-compatible tools and systems.
`@vercel/otel`
A package provided by Vercel that simplifies the setup and configuration of OpenTelemetry for Next.js apps deployed on the Vercel platform. It abstracts away some of the boilerplate code and provides a more streamlined integration experience.
# OpenTelemetry using Node.js
This guide demonstrates how to configure OpenTelemetry in a Node.js app to send telemetry data to Axiom.
OpenTelemetry provides a [unified approach to collecting telemetry data](https://opentelemetry.io/docs/languages/js/instrumentation/) from your Node.js and TypeScript apps. This guide demonstrates how to configure OpenTelemetry in a Node.js app to send telemetry data to Axiom using OpenTelemetry SDK.
## Prerequisites
To configure OpenTelemetry in a Node.js app for sending telemetry data to Axiom, certain prerequisites are necessary. These include:
* Node.js: Node.js version 14 or newer.
* Node.js app: Use your own app written in Node.js, or you can start with the provided **`app.ts`** sample.
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.
## Core Application (app.ts)
`app.ts` is the core of the app. It uses Express.js to create a simple web server. The server has an endpoint `/rolldice` that returns a random number, simulating a basic API. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```js
/*app.ts*/
// Importing OpenTelemetry instrumentation for tracing
import './instrumentation';
import { trace, context } from '@opentelemetry/api';
// Importing Express.js: A minimal and flexible Node.js web app framework
import express from 'express';
// Setting up the server port: Use the PORT environment variable or default to 8080
const PORT = parseInt(process.env.PORT || '8080');
const app = express();
// Get the tracer from the global tracer provider
const tracer = trace.getTracer('node-traces');
/**
 * Function to generate a random number between min and max (inclusive).
 * @param min - The minimum number (inclusive).
 * @param max - The maximum number (inclusive).
 * @returns A random number between min and max.
 */
function getRandomNumber(min: number, max: number): number {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
// Defining a route handler for '/rolldice' that returns a random dice roll
app.get('/rolldice', (req, res) => {
const span = trace.getSpan(context.active());
/**
* Spans can be created with zero or more Links to other Spans that are related.
* Links allow creating connections between different traces
*/
const rollDiceSpan = tracer.startSpan('roll_dice_span', {
links: span ? [{ context: span.spanContext() }] : [],
});
// Set the rollDiceSpan as the currently active span
context.with(trace.setSpan(context.active(), rollDiceSpan), () => {
const diceRoll = getRandomNumber(1, 6).toString();
res.send(diceRoll);
rollDiceSpan.end();
});
});
// Defining a route handler for '/roll_with_link' that creates a parent span and calls '/rolldice'
app.get('/roll_with_link', (req, res) => {
/**
* A common scenario is to correlate one or more traces with the current span.
* This can help in tracing and debugging complex interactions across different parts of the app.
*/
const parentSpan = tracer.startSpan('parent_span');
// Set the parentSpan as the currently active span
context.with(trace.setSpan(context.active(), parentSpan), () => {
const diceRoll = getRandomNumber(1, 6).toString();
res.send(`Dice roll result (with link): ${diceRoll}`);
parentSpan.end();
});
});
// Starting the server on the specified PORT and logging the listening message
app.listen(PORT, () => {
console.log(`Listening for requests on http://localhost:${PORT}`);
});
```
## Exporter (instrumentation.ts)
`instrumentation.ts` sets up the OpenTelemetry instrumentation. It configures the OTLP (OpenTelemetry Protocol) exporters for traces and initializes the Node SDK with automatic instrumentation capabilities.
```js
/*instrumentation.ts*/
// Importing necessary OpenTelemetry packages including the core SDK, auto-instrumentations, OTLP trace exporter, and batch span processor
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
// Initialize OTLP trace exporter with the endpoint URL and headers
const traceExporter = new OTLPTraceExporter({
url: 'https://api.axiom.co/v1/traces',
headers: {
'Authorization': 'Bearer $API_TOKEN',
'X-Axiom-Dataset': '$DATASET'
},
});
// Creating a resource to identify your service in traces
const resource = new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'node traces',
});
// Configuring the OpenTelemetry Node SDK
const sdk = new NodeSDK({
// Adding a BatchSpanProcessor to batch and send traces
spanProcessor: new BatchSpanProcessor(traceExporter),
// Registering the resource to the SDK
resource: resource,
// Adding auto-instrumentations to automatically collect trace data
instrumentations: [getNodeAutoInstrumentations()],
});
// Starting the OpenTelemetry SDK to begin collecting telemetry data
sdk.start();
```
## Installing the dependencies
Navigate to the root directory of your project and run the following command to install the required dependencies:
```bash
npm install
```
This command installs all the necessary packages listed in your `package.json` file, shown [below](/guides/opentelemetry-nodejs#setting-up-typescript-development-environment).
## Setting up the TypeScript development environment
To run the TypeScript app, you need to set up a TypeScript development environment. This includes adding a `package.json` file to manage your project’s dependencies and scripts, and a `tsconfig.json` file to manage TypeScript compiler options.
### Add `package.json`
Create a `package.json` file in the root of your project with the following content:
```json
{
"name": "typescript-traces",
"version": "1.0.0",
"description": "",
"main": "app.js",
"scripts": {
"build": "tsc",
"start": "ts-node app.ts",
"dev": "ts-node-dev --respawn app.ts"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@opentelemetry/api": "^1.6.0",
"@opentelemetry/api-logs": "^0.46.0",
"@opentelemetry/auto-instrumentations-node": "^0.39.4",
"@opentelemetry/exporter-metrics-otlp-http": "^0.45.0",
"@opentelemetry/exporter-metrics-otlp-proto": "^0.45.1",
"@opentelemetry/exporter-trace-otlp-http": "^0.45.0",
"@opentelemetry/sdk-logs": "^0.46.0",
"@opentelemetry/sdk-metrics": "^1.20.0",
"@opentelemetry/sdk-node": "^0.45.1",
"express": "^4.18.2"
},
"devDependencies": {
"@types/express": "^4.17.21",
"@types/node": "^16.18.71",
"ts-node": "^10.9.2",
"ts-node-dev": "^2.0.0",
"tsc-watch": "^4.6.2",
"typescript": "^4.9.5"
}
}
```
### Add `tsconfig.json`
Create a `tsconfig.json` file in the root of your project with the following content:
```json
{
"compilerOptions": {
"target": "es2016",
"module": "commonjs",
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"skipLibCheck": true
}
}
```
This configuration file specifies how the TypeScript compiler should transpile TypeScript files into JavaScript.
## Running the instrumented application
To run your Node.js app with OpenTelemetry instrumentation, make sure your API token and dataset name are set in the `instrumentation.ts` file.
### In development mode
For development purposes, especially when you need automatic restarts upon file changes, use:
```bash
npm run dev
```
This command starts your app with OpenTelemetry instrumentation in development mode using `ts-node-dev`. It sets up the exporter for tracing and restarts the server automatically whenever you change a file.
### In production mode
To run the app in production mode, first compile the TypeScript files into JavaScript. Run the following command to build your app:
```bash
npm run build
```
This command compiles the TypeScript files to JavaScript based on the settings specified in `tsconfig.json`. Once the build process is complete, you can start your app in production mode with:
```bash
npm start
```
The server will start on the specified port, and you can interact with it by sending requests to the `/rolldice` endpoint.
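For example, send a few requests from your terminal to generate spans:
```bash
curl http://localhost:8080/rolldice
curl http://localhost:8080/roll_with_link
```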
## Observe the telemetry data in Axiom
As you interact with your app, traces will be collected and exported to Axiom, where you can monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, providing insights into the performance and behaviour of your app.
## Send data from an existing Node project
### Manual instrumentation
Manual instrumentation in Node.js requires adding code to create and manage spans around the code blocks you want to trace.
1. Initialize Tracer:
Import and configure a tracer in your Node.js app. Use the tracer configured in your instrumentation setup (instrumentation.ts).
```js
// Assuming OpenTelemetry SDK is already configured
const { trace } = require('@opentelemetry/api');
const tracer = trace.getTracer('example-tracer');
```
2. Create Spans:
Wrap the code blocks that you want to trace with spans. Start and end these spans within your code.
```js
const span = tracer.startSpan('operation_name');
try {
  // Your code here
} catch (error) {
  span.recordException(error);
} finally {
  span.end();
}
```
3. Annotate Spans:
Add metadata and logs to your spans for the trace data.
```js
span.setAttribute('key', 'value');
span.addEvent('event name', { eventKey: 'eventValue' });
```
### Automatic instrumentation
Automatic instrumentation in Node.js simplifies adding telemetry data to your app. It uses pre-built libraries to automatically instrument common frameworks and libraries.
1. Install Instrumentation Libraries:
Use OpenTelemetry packages that automatically instrument common Node.js frameworks and libraries.
```bash
npm install @opentelemetry/auto-instrumentations-node
```
2. Instrument Application:
Configure your app to use these libraries, which will automatically generate spans for standard operations.
```js
// In your instrumentation setup (instrumentation.ts)
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const sdk = new NodeSDK({
// ... other configurations ...
instrumentations: [getNodeAutoInstrumentations()]
});
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------------- | --------------------------------------- | ------------------------------------------------------------ |
| **Unique Identifiers** | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| **Timestamps** | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| **HTTP Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| **Network Attributes** | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
| **Resource Process Attributes** | | |
| | resource.process.command | Command line string used to start the process. |
| | resource.process.command\_args | List of command line arguments used in starting the process. |
| | resource.process.executable.name | Name of the executable running the process. |
| | resource.process.executable.path | Path to the executable running the process. |
| | resource.process.owner | Owner of the process. |
| | resource.process.pid | Process ID. |
| | resource.process.runtime.description | Description of the runtime environment. |
| | resource.process.runtime.name | Name of the runtime environment. |
| | resource.process.runtime.version | Version of the runtime environment. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK. |
| | telemetry.sdk.name | Name of the telemetry SDK. |
| | telemetry.sdk.version | Version of the telemetry SDK. |
### List of imported libraries
The `instrumentation.ts` file imports the following libraries:
### **`@opentelemetry/sdk-node`**
This package is the core SDK for OpenTelemetry in Node.js. It provides the primary interface for configuring and initializing OpenTelemetry in a Node.js app. It includes functionalities for managing traces and context propagation. The SDK is designed to be extensible, allowing for custom configurations and integration with different telemetry backends like Axiom.
### **`@opentelemetry/auto-instrumentations-node`**
This package offers automatic instrumentation for Node.js apps. It simplifies the process of instrumenting various common Node.js libraries and frameworks. By using this package, developers can automatically collect telemetry data (such as traces) from their apps without needing to manually instrument each library or API call. This is important for apps with complex dependencies, as it ensures comprehensive and consistent telemetry collection across the app.
### **`@opentelemetry/exporter-trace-otlp-proto`**
The **`@opentelemetry/exporter-trace-otlp-proto`** package provides an exporter that sends trace data using the OpenTelemetry Protocol (OTLP). OTLP is the standard protocol for transmitting telemetry data in the OpenTelemetry ecosystem. This exporter allows Node.js apps to send their collected traces to any backend that supports OTLP, such as Axiom. The use of OTLP ensures broad compatibility and a standardized way of transmitting telemetry data.
### **`@opentelemetry/sdk-trace-base`**
Contained within this package is the **`BatchSpanProcessor`**, among other foundational elements for tracing in OpenTelemetry. The **`BatchSpanProcessor`** is a component that collects and processes spans (individual units of trace data). As the name suggests, it batches these spans before sending them to the configured exporter (in this case, the `OTLPTraceExporter`). This batching mechanism is efficient as it reduces the number of outbound requests by aggregating multiple spans into fewer batches. It helps in the performance and scalability of trace data export in an OpenTelemetry-instrumented app.
# Send OpenTelemetry data from a Python app to Axiom
This guide explains how to send OpenTelemetry data from a Python app to Axiom using the Python OpenTelemetry SDK.
This guide explains how to send OpenTelemetry data from a Python app to Axiom using the [Python OpenTelemetry SDK](https://opentelemetry.io/docs/languages/python/instrumentation/).
## Prerequisites
* Install Python version 3.7 or higher.
* Create an Axiom account. To sign up for a free account, go to the [Axiom app](https://app.axiom.co/).
* Create a dataset in Axiom. This is where the Python app sends telemetry data. For more information, see [Data Settings](/reference/datasets).
* Create an API key in Axiom with permissions to query and ingest data. For more information, see [Access Settings](/reference/tokens).
## Install required dependencies
To install the required Python dependencies, run the following code in your terminal:
```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-flask opentelemetry-exporter-otlp Flask
```
### Install dependencies with requirements file
Alternatively, if you use a `requirements.txt` file in your Python project, add these lines:
```txt
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-flask
opentelemetry-exporter-otlp
Flask
```
Then run the following code in your terminal to install dependencies:
```bash
pip install -r requirements.txt
```
## Create an app.py file
Create an `app.py` file with the following content. This file creates a basic HTTP server using Flask. It also demonstrates the usage of span links to establish relationships between spans across different traces.
```python
# app.py
from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry import trace
from random import randint
import exporter

# Creating a Flask app instance
app = Flask(__name__)

# Automatically instruments Flask app to enable tracing
FlaskInstrumentor().instrument_app(app)

# Retrieving a tracer from the custom exporter
tracer = exporter.service1_tracer

@app.route("/rolldice")
def roll_dice(parent_span=None):
    # Starting a new span for the dice roll. If a parent span is provided, link to its span context.
    # Spans can be created with zero or more Links to other Spans that are related.
    # Links allow creating connections between different traces.
    with tracer.start_as_current_span(
        "roll_dice_span",
        links=[trace.Link(parent_span.get_span_context())] if parent_span else None,
    ) as span:
        return str(roll())

@app.route("/roll_with_link")
def roll_with_link():
    # Starting a new 'parent_span' which may later link to other spans.
    # A common scenario is to correlate one or more traces with the current span.
    # This can help in tracing and debugging complex interactions across different parts of the app.
    with tracer.start_as_current_span("parent_span") as parent_span:
        result = roll_dice(parent_span)
        return f"Dice roll result (with link): {result}"

def roll():
    # Function to generate a random number between 1 and 6
    return randint(1, 6)

if __name__ == "__main__":
    # Starting the Flask server on the specified PORT and enabling debug mode
    app.run(port=8080, debug=True)
```
## Create an exporter.py file
Create an `exporter.py` file with the following content. This file establishes an OpenTelemetry configuration and sets up an exporter that sends trace data to Axiom.
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Define the service name resource for the tracer.
resource = Resource(attributes={
    SERVICE_NAME: "NAME_OF_SERVICE"  # Replace `NAME_OF_SERVICE` with the name of the service you want to trace.
})
# Create a TracerProvider with the defined resource for creating tracers.
provider = TracerProvider(resource=resource)
# Configure the OTLP/HTTP Span Exporter with Axiom headers and endpoint. Replace `API_TOKEN` with your Axiom API key, and replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
otlp_exporter = OTLPSpanExporter(
    endpoint="https://api.axiom.co/v1/traces",
    headers={
        "Authorization": "Bearer API_TOKEN",
        "X-Axiom-Dataset": "DATASET_NAME"
    }
)
# Create a BatchSpanProcessor with the OTLP exporter to batch and send trace spans.
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
# Set the TracerProvider as the global tracer provider.
trace.set_tracer_provider(provider)
# Define a tracer for external use in different parts of the app.
service1_tracer = trace.get_tracer("service1")
```
In the `exporter.py` file, make the following changes:
* Replace `NAME_OF_SERVICE` with the name of the service you want to trace. This is important for identifying and categorizing trace data, particularly in systems with multiple services.
* Replace `API_TOKEN` with your Axiom API key.
* Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
For more information on the libraries imported by the `exporter.py` file, see the [Reference](#reference) below.
## Run the app
Run the following code in your terminal to run the Python project:
```bash macOS/Linux
python3 app.py
```
```bash Windows
py -3 app.py
```
In your browser, go to `http://127.0.0.1:8080/rolldice` to interact with your Python app. Each time you load the page, the app displays a random number and sends the collected traces to Axiom.
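Alternatively, generate a few traces from your terminal; the loop below calls both endpoints so you also get span links:
```bash
# Call both endpoints a few times to produce spans and span links
for i in 1 2 3; do
  curl -s http://127.0.0.1:8080/rolldice
  curl -s http://127.0.0.1:8080/roll_with_link
done
```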
## Observe the telemetry data in Axiom
In Axiom, go to the **Stream** tab and click your dataset. This page displays the traces sent to Axiom and enables you to monitor and analyze your app’s performance and behavior.
## Dynamic OpenTelemetry traces dashboard
In Axiom, go to the **Dashboards** tab and click **OpenTelemetry Traces (python)**. This pre-built traces dashboard provides further insights into the performance and behavior of your app.
## Send data from an existing Python project
### Manual instrumentation
Manual instrumentation in Python with OpenTelemetry involves adding code to create and manage spans around the blocks of code you want to trace. This approach allows for precise control over the trace data.
1. Import and configure a tracer at the start of your main Python file. For example, use the tracer from the `exporter.py` configuration.
```python
import exporter
tracer = exporter.service1_tracer
```
2. Enclose the code blocks in your app that you want to trace within spans. Start and end these spans in your code.
```python
with tracer.start_as_current_span("operation_name"):
    # Your code here
    pass
```
3. Add relevant metadata and logs to your spans to enrich the trace data, providing more context for your data.
```python
with tracer.start_as_current_span("operation_name") as span:
    span.set_attribute("key", "value")
```
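Putting these steps together, the following is a minimal sketch of a manually instrumented route. The route, span name, and attributes are illustrative; it assumes the `exporter.py` configuration shown earlier in this guide.
```python
import exporter
from flask import Flask

app = Flask(__name__)
tracer = exporter.service1_tracer

@app.route("/checkout")
def checkout():
    # Wrap the operation in a span and enrich it with attributes and events
    with tracer.start_as_current_span("process_checkout") as span:
        span.set_attribute("cart.items", 3)
        span.add_event("payment_authorized", {"payment.method": "card"})
        return "ok"
```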
### Automatic instrumentation
Automatic instrumentation in Python with OpenTelemetry simplifies the process of adding telemetry data to your app. It uses pre-built libraries that automatically instrument the frameworks and libraries.
1. Install the OpenTelemetry packages designed for specific frameworks like Flask or Django.
```bash
pip install opentelemetry-instrumentation-flask
```
2. Configure your app to use these libraries that automatically generate spans for standard operations.
```python
from opentelemetry.instrumentation.flask import FlaskInstrumentor
# This assumes `app` is your Flask app.
FlaskInstrumentor().instrument_app(app)
```
After you set them up, these libraries automatically trace relevant operations without additional code changes in your app.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------- | --------------------------------------- | ------------------------------------------------------ |
| Unique Identifiers | | |
| | \_rowid | Unique identifier for each row in the trace data. |
| | span\_id | Unique identifier for the span within the trace. |
| | trace\_id | Unique identifier for the entire trace. |
| Timestamps | | |
| | \_systime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| HTTP Attributes | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom\["http.server\_name"] | Server name for the HTTP request. |
| | attributes.http.flavor | HTTP protocol version used. |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.route | Route accessed during the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| | attributes.http.status\_code | HTTP response status code. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.user\_agent | User agent string of the client. |
| Network Attributes | | |
| | attributes.net.host.port | Port number on the host receiving the request. |
| | attributes.net.peer.port | Port number on the peer (client) side. |
| | attributes.custom\["net.peer.ip"] | IP address of the peer in the network interaction. |
| Operational Details | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client). |
| | name | Name of the span. |
| | scope | Instrumentation scope. |
| | service.name | Name of the service generating the trace. |
### List of imported libraries
The `exporter.py` file imports the following libraries:
`from opentelemetry import trace`
This module creates and manages trace data in your app. It creates spans and tracers which track the execution flow and performance of your app.
`from opentelemetry.sdk.trace import TracerProvider`
`TracerProvider` acts as a container for the configuration of your app’s tracing behavior. It allows you to define how spans are generated and processed, essentially serving as the central point for managing trace creation and propagation in your app.
`from opentelemetry.sdk.trace.export import BatchSpanProcessor`
`BatchSpanProcessor` is responsible for batching spans before they are exported. This is an important aspect of efficient trace data management as it aggregates multiple spans into fewer network requests, reducing the overhead on your app’s performance and the tracing backend.
`from opentelemetry.sdk.resources import Resource, SERVICE_NAME`
The `Resource` class is used to describe your app’s service attributes, such as its name, version, and environment. This contextual information is attached to the traces and helps in identifying and categorizing trace data, making it easier to filter and analyze in your monitoring setup.
`from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter`
The `OTLPSpanExporter` is responsible for sending your app’s trace data to a backend that supports the OTLP such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
# Send OpenTelemetry data from a Ruby on Rails app to Axiom
This guide explains how to send OpenTelemetry data from a Ruby on Rails app to Axiom using the Ruby OpenTelemetry SDK.
This guide provides detailed steps on how to configure OpenTelemetry in a Ruby application to send telemetry data to Axiom using the [OpenTelemetry Ruby SDK](https://opentelemetry.io/docs/languages/ruby/).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset](/reference/settings#data) where you want to send data.
* [Create an API token in Axiom with permissions to ingest and query data](/reference/tokens).
* Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
* Install [Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails application
1. Create a new Rails app using the `rails new myapp` command.
2. Go to the app directory with the `cd myapp` command.
3. Open the `Gemfile` and add the following OpenTelemetry packages:
```ruby
gem 'opentelemetry-api'
gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-rails'
gem 'opentelemetry-instrumentation-http'
gem 'opentelemetry-instrumentation-active_record', require: false
gem 'opentelemetry-instrumentation-all'
```
Install the dependencies by running `bundle install`.
## Configure the OpenTelemetry exporter
In the `initializers` folder of your Rails app, create a new file called `opentelemetry.rb`, and then add the following OpenTelemetry exporter configuration:
```ruby
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'
OpenTelemetry::SDK.configure do |c|
c.service_name = 'ruby-traces' # Set your service name
c.use_all # Or specify individual instrumentation you need
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
OpenTelemetry::Exporter::OTLP::Exporter.new(
endpoint: 'https://api.axiom.co/v1/traces',
headers: {
'Authorization' => 'Bearer API_TOKEN',
'X-AXIOM-DATASET' => 'DATASET_NAME'
}
)
)
)
end
```
In the code above, make the following changes:
* Replace `API_TOKEN` with your Axiom API key.
* Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
## Run the instrumented application
Run your Ruby on Rails application with OpenTelemetry instrumentation.
### In development mode
Start the Rails server using the `rails server` command. The server will start on the default port (usually 3000), and you can access your application by visiting `http://localhost:3000` in your web browser.
As you interact with your application, OpenTelemetry automatically collects telemetry data and sends it to Axiom using the configured OTLP exporter.
### In production mode
For production, make sure to precompile assets and run migrations if necessary. Start the server with `RAILS_ENV=production bin/rails server`. This setup ensures your Ruby application is instrumented to send traces to Axiom, using OpenTelemetry for observability.
## Observe the telemetry data in Axiom
As you interact with your application, traces are collected and exported to Axiom, allowing you to monitor, analyze, and gain insights into your application’s performance and behavior.
1. In your Axiom account, click the **Datasets** or **Stream** tab.
2. Select your dataset from the list.
3. From the list of fields, click **trace\_id** to view your spans.
## Dynamic OpenTelemetry Traces dashboard
This data can then be further viewed and analyzed in Axiom’s dashboard, offering a deeper understanding of your application’s performance and behavior.
1. In your Axiom account, select **Dashboards**, and click on the traces dashboard named after your dataset.
2. View the dashboard which displays your total traces, incoming spans, average span duration, errors, slowest operations, and top 10 span errors across services.
## Send data from an existing Ruby app
### Manual instrumentation
Manual instrumentation allows users to define and manage telemetry data collection points within their Ruby applications, providing granular control over what is traced.
1. Initialize Tracer. Use the OpenTelemetry API to obtain a tracer from the global tracer provider. This tracer will be used to start and manage spans.
```ruby
tracer = OpenTelemetry.tracer_provider.tracer('my-tracer')
```
2. Manually start a span at the beginning of the block of code you want to trace and ensure to end it when your operations complete. This is useful for gathering detailed data about specific operations.
```ruby
span = tracer.start_span('operation_name')
begin
# Perform operation
rescue => e
span.record_exception(e)
span.status = OpenTelemetry::Trace::Status.error("Operation failed")
ensure
span.finish
end
```
3. Enhance spans with custom attributes to provide additional context about the traced operations, helping in debugging and monitoring performance.
```ruby
span.set_attribute("user_id", user.id)
span.add_event("query_executed", attributes: { "query" => sql_query })
```
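Putting the steps together, the following is a minimal sketch of a manually instrumented controller action. The controller, span name, and attribute are illustrative.
```ruby
class OrdersController < ApplicationController
  def create
    tracer = OpenTelemetry.tracer_provider.tracer('my-tracer')
    # Wrap the operation in a span and record failures before ending it
    span = tracer.start_span('create_order')
    begin
      # ... create the order ...
      span.set_attribute('order.source', 'web')
    rescue => e
      span.record_exception(e)
      span.status = OpenTelemetry::Trace::Status.error('Order creation failed')
      raise
    ensure
      span.finish
    end
  end
end
```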
### Automatic instrumentation
Automatic instrumentation in Ruby uses OpenTelemetry’s libraries to automatically generate telemetry data for common operations, such as HTTP requests and database queries.
1. Set up the OpenTelemetry SDK with the necessary instrumentation libraries in your Ruby application. This typically involves modifying the Gemfile and an initializer to set up the SDK and auto-instrumentation.
```ruby
# In config/initializers/opentelemetry.rb
OpenTelemetry::SDK.configure do |c|
c.service_name = 'ruby-traces'
c.use_all # Automatically use all available instrumentation
end
```
2. Ensure your Gemfile includes gems for the automatic instrumentation of the frameworks and libraries your application uses.
```ruby
gem 'opentelemetry-instrumentation-rails'
gem 'opentelemetry-instrumentation-http'
gem 'opentelemetry-instrumentation-active_record'
```
After setting up, no additional manual changes are required for basic telemetry data collection. The instrumentation libraries handle the creation and management of telemetry data automatically.
## Reference
### List of OpenTelemetry trace fields
| Field Category | Field Name | Description |
| ------------------------------- | ------------------------------------ | ------------------------------------------------------------- |
| **General Trace Information** | | |
| | \_rowId | Unique identifier for each row in the trace data. |
| | \_sysTime | System timestamp when the trace data was recorded. |
| | \_time | Timestamp when the actual event being traced occurred. |
| | trace\_id | Unique identifier for the entire trace. |
| | span\_id | Unique identifier for the span within the trace. |
| | parent\_span\_id | Unique identifier for the parent span within the trace. |
| **HTTP Attributes** | | |
| | attributes.http.method | HTTP method used for the request. |
| | attributes.http.status\_code | HTTP status code returned in response. |
| | attributes.http.target | Specific target of the HTTP request. |
| | attributes.http.scheme | Protocol scheme (HTTP/HTTPS). |
| **User Agent** | | |
| | attributes.http.user\_agent | User agent string, providing client software and OS. |
| **Custom Attributes** | | |
| | attributes.custom\["http.host"] | Host information where the HTTP request was sent. |
| | attributes.custom.identifier | Path to a file or identifier in the trace context. |
| | attributes.custom.layout | Layout used in the rendering process of a view or template. |
| **Resource Process Attributes** | | |
| | resource.process.command | Command line string used to start the process. |
| | resource.process.pid | Process ID. |
| | resource.process.runtime.description | Description of the runtime environment. |
| | resource.process.runtime.name | Name of the runtime environment. |
| | resource.process.runtime.version | Version of the runtime environment. |
| **Operational Details** | | |
| | duration | Time taken for the operation. |
| | kind | Type of span (for example, server, client, internal). |
| | name | Name of the span, often a high-level title for the operation. |
| **Code Attributes** | | |
| | attributes.code.function | Function or method being executed. |
| | attributes.code.namespace | Namespace or module that includes the function. |
| **Scope Attributes** | | |
| | scope.name | Name of the scope for the operation. |
| | scope.version | Version of the scope. |
| **Service Attributes** | | |
| | service.name | Name of the service generating the trace. |
| | service.version | Version of the service generating the trace. |
| | service.instance.id | Unique identifier for the instance of the service. |
| **Telemetry SDK Attributes** | | |
| | telemetry.sdk.language | Language of the telemetry SDK. For example, ruby. |
| | telemetry.sdk.name | Name of the telemetry SDK. For example, opentelemetry. |
| | telemetry.sdk.version | Version of the telemetry SDK. For example, 1.4.1. |
### List of imported libraries
`gem 'opentelemetry-api'`
The `opentelemetry-api` gem provides the core OpenTelemetry API for Ruby. It defines the basic concepts and interfaces for distributed tracing, such as spans, tracers, and context propagation. This gem is essential for instrumenting your Ruby application with OpenTelemetry.
`gem 'opentelemetry-sdk'`
The `opentelemetry-sdk` gem is the OpenTelemetry SDK for Ruby. It provides the implementation of the OpenTelemetry API, including the tracer provider, span processors, and exporters. This gem is responsible for managing the lifecycle of spans and sending them to the specified backend.
`gem 'opentelemetry-exporter-otlp'`
The `opentelemetry-exporter-otlp` gem is an exporter that sends trace data to a backend that supports the OpenTelemetry Protocol (OTLP), such as Axiom. It formats the trace data according to the OTLP standards and transmits it over HTTP or gRPC, ensuring compatibility and standardization in how telemetry data is sent across different systems and services.
`gem 'opentelemetry-instrumentation-rails'`
The `opentelemetry-instrumentation-rails` gem provides automatic instrumentation for Ruby on Rails applications. It integrates with various aspects of a Rails application, such as controllers, views, and database queries, to capture relevant trace data without requiring manual instrumentation. This gem simplifies the process of adding tracing to your Rails application.
`gem 'opentelemetry-instrumentation-http'`
The `opentelemetry-instrumentation-http` gem provides automatic instrumentation for HTTP requests made using the `Net::HTTP` library. It captures trace data for outgoing HTTP requests, including request headers, response status, and timing information. This gem helps in tracing the external dependencies of your application.
`gem 'opentelemetry-instrumentation-active_record', require: false`
The `opentelemetry-instrumentation-active_record` gem provides automatic instrumentation for ActiveRecord, the Object-Relational Mapping (ORM) library used in Ruby on Rails. It captures trace data for database queries, including the SQL statements executed and their duration. This gem helps in identifying performance bottlenecks related to database interactions.
`gem 'opentelemetry-instrumentation-all'`
The `opentelemetry-instrumentation-all` gem is a meta-gem that includes all the available instrumentation libraries for OpenTelemetry in Ruby. It provides a convenient way to install and configure multiple instrumentation libraries at once, covering various aspects of your application, such as HTTP requests, database queries, and external libraries. This gem simplifies the setup process and ensures comprehensive tracing coverage for your Ruby application.
# Axiom transport for Pino logger
This page explains how to send data from a Node.js app to Axiom through Pino.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
To install the SDK, run the following:
```shell
npm install @axiomhq/pino
```
## Create Pino logger
The example below creates a Pino logger with Axiom configured:
```ts
import pino from 'pino';
const logger = pino(
{ level: 'info' },
pino.transport({
target: '@axiomhq/pino',
options: {
dataset: process.env.AXIOM_DATASET,
token: process.env.AXIOM_TOKEN,
},
}),
);
```
After setting up the Axiom transport for Pino, use the logger as usual:
```js
logger.info('Hello from Pino!');
```
## Examples
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-js/tree/main/examples/pino).
# Send data from Python app to Axiom
This page explains how to send data from a Python app to Axiom.
To send data from a Python app to Axiom, use the Axiom Python SDK.
The Axiom Python SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-py).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
```shell Linux / MacOS
python3 -m pip install axiom-py
```
```shell Windows
py -m pip install axiom-py
```
```shell pip
pip3 install axiom-py
```
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
You can also configure the client using options passed to the client constructor:
```py
import axiom_py
client = axiom_py.Client("API_TOKEN")
```
## Use client
```py
import axiom_py
import rfc3339
from datetime import datetime,timedelta
client = axiom_py.Client()
client.ingest_events(
dataset="DATASET_NAME",
events=[
{"foo": "bar"},
{"bar": "baz"},
])
client.query(r"['DATASET_NAME'] | where foo == 'bar' | limit 100")
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/client_example.py).
## Example with `AxiomHandler`
The example below uses `AxiomHandler` to send logs from the `logging` module to Axiom:
```python
import axiom_py
from axiom_py.logging import AxiomHandler
import logging
def setup_logger():
client = axiom_py.Client()
handler = AxiomHandler(client, "DATASET_NAME")
logging.getLogger().addHandler(handler)
```
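After calling `setup_logger`, records emitted through the standard `logging` module are forwarded to Axiom by the handler. A minimal usage example:
```python
setup_logger()
logging.info("Hello from axiom-py")
```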
For a full example, see [GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/logger_example.py).
## Example with `structlog`
The example below uses [structlog](https://github.com/hynek/structlog) to send logs to Axiom:
```python
import structlog
from axiom_py import Client
from axiom_py.structlog import AxiomProcessor
def setup_logger():
client = Client()
structlog.configure(
processors=[
# ...
structlog.processors.add_log_level,
structlog.processors.TimeStamper(fmt="iso", key="_time"),
AxiomProcessor(client, "DATASET_NAME"),
# ...
]
)
```
For a full example, see [GitHub](https://github.com/axiomhq/axiom-py/tree/main/examples/structlog_example.py).
# Send data from Rust app to Axiom
This page explains how to send data from a Rust app to Axiom.
To send data from a Rust app to Axiom, use the Axiom Rust SDK.
The Axiom Rust SDK is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-rs).
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
## Install SDK
Add the following to your `Cargo.toml`:
```toml
[dependencies]
axiom-rs = "VERSION"
```
Replace `VERSION` with the latest version number specified on the [GitHub Releases](https://github.com/axiomhq/axiom-rs/releases) page. For example, `0.11.0`.
If you use the [Axiom CLI](/reference/cli), run `eval $(axiom config export -f)` to configure your environment variables. Otherwise, [create an API token](/reference/tokens) and export it as `AXIOM_TOKEN`.
## Use client
```rust
use axiom_rs::Client;
use serde_json::json;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build your client by providing a personal token and an org id:
let client = Client::builder()
.with_token("API_TOKEN")
.build()?;
// Alternatively, auto-configure the client from the environment variable AXIOM_TOKEN:
let client = Client::new()?;
client.datasets().create("DATASET_NAME", "").await?;
client
.ingest(
"DATASET_NAME",
vec![json!({
"foo": "bar",
})],
)
.await?;
let res = client
.query(r#"['DATASET_NAME'] | where foo == "bar" | limit 100"#, None)
.await?;
println!("{:?}", res);
client.datasets().delete("DATASET_NAME").await?;
Ok(())
}
```
For more examples, see the [examples in GitHub](https://github.com/axiomhq/axiom-rs/tree/main/examples).
## Optional features
You can use the [Cargo features](https://doc.rust-lang.org/stable/cargo/reference/features.html#the-features-section):
* `default-tls`: Provides TLS support to connect over HTTPS. Enabled by default.
* `native-tls`: Enables TLS functionality provided by `native-tls`.
* `rustls-tls`: Enables TLS functionality provided by `rustls`.
* `tokio`: Enables usage with the `tokio` runtime. Enabled by default.
* `async-std`: Enables usage with the `async-std` runtime.
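For example, a minimal sketch of selecting features in `Cargo.toml`, assuming you want `rustls` instead of the default TLS backend (replace `VERSION` as above):
```toml
[dependencies]
axiom-rs = { version = "VERSION", default-features = false, features = ["rustls-tls", "tokio"] }
```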
# Send logs from Apache Log4j to Axiom
This guide explains how to configure Apache Log4j to send logs to Axiom.
Log4j is a Java logging framework developed by the Apache Software Foundation and widely used in the Java community. This page covers how to get started with Log4j, configure it to forward log messages to Fluentd, and send logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
{/* list separator */}
* [Install JDK 11](https://www.oracle.com/java/technologies/java-se-glance.html) or later
* [Install Maven](https://maven.apache.org/download.cgi)
* [Install Fluentd](https://www.fluentd.org/download)
* [Install Docker](https://docs.docker.com/get-docker/)
## Configure Log4j
Log4j is a flexible and powerful logging framework for Java applications. To use Log4j in your project, add the necessary dependencies to your `pom.xml` file. The dependencies required for Log4j include `log4j-core`, `log4j-api`, and `log4j-slf4j2-impl` for logging capability, and `jackson-databind` for JSON support.
1. Create a new Maven project:
```bash
mvn archetype:generate -DgroupId=com.example -DartifactId=log4j-axiom-test -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
cd log4j-axiom-test
```
2. Open the `pom.xml` file and replace its contents with the following:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>log4j-axiom-test</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>log4j-axiom-test</name>
  <url>http://maven.apache.org</url>
  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <log4j.version>2.19.0</log4j.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-slf4j2-impl</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.13.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.example.App</mainClass>
                </transformer>
              </transformers>
              <createDependencyReducedPom>false</createDependencyReducedPom>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```
This `pom.xml` file includes the necessary Log4j dependencies and configures the Maven Shade plugin to create an executable JAR file.
3. Create a new file named `log4j2.xml` in your root directory and add the following content:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Socket name="Fluentd" host="localhost" port="24224">
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Socket>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="Fluentd"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```
This configuration sets up two appenders:
* A Socket appender that sends logs to Fluentd, running on `localhost:24224`. It uses JSON format for the log messages, which makes it easier to parse and analyze the logs later in Axiom.
* A Console appender that prints logs to the standard output.
## Set log level
Log4j supports various log levels, allowing you to control the verbosity of your logs. The main log levels, in order of increasing severity, are the following:
* `TRACE`: Fine-grained information for debugging.
* `DEBUG`: General debugging information.
* `INFO`: Informational messages.
* `WARN`: Indications of potential problems.
* `ERROR`: Error events that might still allow the app to continue running.
* `FATAL`: Severe error events that might cause the app to abort.
In the configuration above, the root logger level is set to `INFO`, which means it logs messages at `INFO` level and above (`WARN`, `ERROR`, and `FATAL`).
To see these log levels in action, create a new file named `App.java` in the `src/main/java/com/example` directory with the following content:
```java
package com.example;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.Level;
import java.util.Random;
public class App {
// Define loggers for different purposes
private static final Logger logger = LogManager.getLogger(App.class);
private static final Logger securityLogger = LogManager.getLogger("SecurityLogger");
private static final Logger performanceLogger = LogManager.getLogger("PerformanceLogger");
public static void main(String[] args) {
// Configure logging levels programmatically
configureLogging();
Random random = new Random();
// Infinite loop to continuously generate log events
while (true) {
try {
// Simulate various logging scenarios
simulateUserActivity(random);
simulateDatabaseOperations(random);
simulateSecurityEvents(random);
simulatePerformanceMetrics(random);
// Simulate a critical error with 10% probability
if (random.nextInt(10) == 0) {
throw new RuntimeException("Simulated critical error");
}
Thread.sleep(1000); // Sleep for 1 second
} catch (InterruptedException e) {
logger.warn("Sleep interrupted", e);
} catch (Exception e) {
logger.error("Critical error occurred", e);
} finally {
// Clear thread context after each iteration
ThreadContext.clearAll();
}
}
}
private static void configureLogging() {
// Set root logger level to DEBUG
Configurator.setRootLevel(Level.DEBUG);
// Set custom logger levels
Configurator.setLevel("SecurityLogger", Level.INFO);
Configurator.setLevel("PerformanceLogger", Level.TRACE);
}
// Simulate user activities and log them
private static void simulateUserActivity(Random random) {
String[] users = {"Alice", "Bob", "Charlie", "David"};
String[] actions = {"login", "logout", "view_profile", "update_settings"};
String user = users[random.nextInt(users.length)];
String action = actions[random.nextInt(actions.length)];
// Add user and action to thread context
ThreadContext.put("user", user);
ThreadContext.put("action", action);
// Log different user actions with appropriate levels
switch (action) {
case "login":
logger.info("User logged in successfully");
break;
case "logout":
logger.info("User logged out");
break;
case "view_profile":
logger.debug("User viewed their profile");
break;
case "update_settings":
logger.info("User updated their settings");
break;
}
}
// Simulate database operations and log them
private static void simulateDatabaseOperations(Random random) {
String[] operations = {"select", "insert", "update", "delete"};
String operation = operations[random.nextInt(operations.length)];
long duration = random.nextInt(1000);
// Add operation and duration to thread context
ThreadContext.put("operation", operation);
ThreadContext.put("duration", String.valueOf(duration));
// Log slow database operations as warnings
if (duration > 500) {
logger.warn("Slow database operation detected");
} else {
logger.debug("Database operation completed");
}
// Simulate database connection loss with 5% probability
if (random.nextInt(20) == 0) {
logger.error("Database connection lost", new SQLException("Connection timed out"));
}
}
// Simulate security events and log them
private static void simulateSecurityEvents(Random random) {
String[] events = {"failed_login", "password_change", "role_change", "suspicious_activity"};
String event = events[random.nextInt(events.length)];
ThreadContext.put("security_event", event);
// Log different security events with appropriate levels
switch (event) {
case "failed_login":
securityLogger.warn("Failed login attempt");
break;
case "password_change":
securityLogger.info("User changed their password");
break;
case "role_change":
securityLogger.info("User role was modified");
break;
case "suspicious_activity":
securityLogger.error("Suspicious activity detected", new SecurityException("Potential breach attempt"));
break;
}
}
// Simulate performance metrics and log them
private static void simulatePerformanceMetrics(Random random) {
String[] metrics = {"cpu_usage", "memory_usage", "disk_io", "network_latency"};
String metric = metrics[random.nextInt(metrics.length)];
double value = random.nextDouble() * 100;
// Add metric and value to thread context
ThreadContext.put("metric", metric);
ThreadContext.put("value", String.format("%.2f", value));
// Log high resource usage as warnings
if (value > 80) {
performanceLogger.warn("High resource usage detected");
} else {
performanceLogger.trace("Performance metric recorded");
}
}
// Custom exception classes for simulating errors
private static class SQLException extends Exception {
public SQLException(String message) {
super(message);
}
}
private static class SecurityException extends Exception {
public SecurityException(String message) {
super(message);
}
}
}
```
This class demonstrates the use of different log levels and also shows how to add context to your logs using `ThreadContext`.
## Forward log messages to Fluentd
Fluentd is a popular open-source data collector. In this guide, you use it to forward logs from Log4j to Axiom. The Log4j configuration is already set up to send logs to Fluentd using the Socket appender. Fluentd acts as a unified logging layer, allowing you to collect, process, and forward logs from various sources to different destinations.
### Configure the Fluentd.conf file
To configure Fluentd, create a new file named `fluentd.conf` in your project root directory with the following content:
```xml
<source>
  @type forward
  port 24224
</source>

<filter **>
  @type record_transformer
  <record>
    tag java.log4j
  </record>
</filter>

<match **>
  @type http
  endpoint https://api.axiom.co/v1/datasets/DATASET_NAME/ingest
  headers {"Authorization":"Bearer API_TOKEN"}
  json_array true
  <buffer>
    @type memory
    flush_interval 5s
    chunk_limit_size 5m
    total_limit_size 10m
  </buffer>
  <format>
    @type json
  </format>
</match>
```
* Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable (see the sketch after the list below).
* Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
This configuration does the following:
1. Sets up a forward input plugin to receive logs from Log4j.
2. Adds a `java.log4j` tag to all logs.
3. Forwards the logs to Axiom using the HTTP output plugin.
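To keep the token out of `fluentd.conf`, you can rely on Fluentd evaluating embedded Ruby in configuration values. The following sketch assumes an environment variable named `AXIOM_API_TOKEN` is available inside the container (for example, passed with `docker run -e AXIOM_API_TOKEN=...`); depending on your Fluentd version, you may need to wrap the whole `headers` value in double quotes for the interpolation to run.
```xml
<match **>
  @type http
  endpoint https://api.axiom.co/v1/datasets/DATASET_NAME/ingest
  headers {"Authorization":"Bearer #{ENV['AXIOM_API_TOKEN']}"}
  json_array true
</match>
```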
### Create the Dockerfile
To simplify the deployment of the Java app and Fluentd, use Docker. Create a new file named `Dockerfile` in your project root directory with the following content:
```dockerfile
# Build stage
FROM maven:3.8.1-openjdk-11-slim AS build
WORKDIR /usr/src/app
COPY pom.xml .
COPY src ./src
COPY log4j2.xml .
RUN mvn clean package
# Runtime stage
FROM openjdk:11-jre-slim
WORKDIR /usr/src/app
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ruby \
ruby-dev \
build-essential && \
gem install fluentd --no-document && \
fluent-gem install fluent-plugin-multi-format-parser && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY --from=build /usr/src/app/target/log4j-axiom-test-1.0-SNAPSHOT.jar .
COPY fluentd.conf /etc/fluent/fluent.conf
COPY log4j2.xml .
# Create startup script
RUN echo '#!/bin/sh\n\
fluentd -c /etc/fluent/fluent.conf &\n\
sleep 5\n\
java -Dlog4j.configurationFile=log4j2.xml -jar log4j-axiom-test-1.0-SNAPSHOT.jar\n'\
> /usr/src/app/start.sh && chmod +x /usr/src/app/start.sh
EXPOSE 24224
CMD ["/usr/src/app/start.sh"]
```
This Dockerfile does the following:
1. Builds the Java app.
2. Sets up a runtime environment with Java and Fluentd.
3. Copies the necessary files and configurations.
4. Creates a startup script to run both Fluentd and the Java app.
### Build and run the Dockerfile
1. To build the Docker image, run the following command in your project root directory:
```bash
docker build -t log4j-axiom-test .
```
2. Run the container with the following:
```bash
docker run -p 24224:24224 log4j-axiom-test
```
This command starts the container, running both Fluentd and your Java app.
## View logs in Axiom
Now that your app is running and sending logs to Axiom, you can view them in the Axiom dashboard. Log in to your Axiom account and go to the dataset you specified in the Fluentd configuration.
Logs appear in real-time, with various log levels and context information added.
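You can also query the ingested events with APL. For example, the following query counts error-level events over time; it assumes your dataset is named `DATASET_NAME` and that the JSON layout emits a `level` field:
```kusto
['DATASET_NAME']
| where level == 'ERROR'
| summarize count() by bin_auto(_time)
```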
## Logging in Log4j best practices
* Use appropriate log levels: Reserve ERROR and FATAL for serious issues, use WARN for potential problems, and INFO for general app flow.
* Include context: Add relevant information to your logs using ThreadContext or by including important variables in your log messages.
* Use structured logging: Log in JSON format to make it easier to parse, and later, analyze the logs using [APL](https://axiom.co/docs/apl/introduction).
* Log actionable information: Include enough detail in your logs to understand and potentially reproduce issues.
* Use parameterized logging: Instead of string concatenation, use Log4j’s support for parameterized messages to improve performance (see the example after this list).
* Configure appenders appropriately: Use asynchronous appenders for better performance in high-throughput scenarios.
* Regularly review and maintain your logs: Periodically check your logging configuration and the logs themselves to ensure they’re providing value.
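As an example of the parameterized logging recommendation above, compare the two calls below; the second defers message formatting until Log4j knows the level is enabled (a minimal sketch, assuming a Log4j 2 `Logger` named `logger` like the one in the class shown earlier):
```java
// String concatenation builds the message even when DEBUG is disabled
logger.debug("User " + user + " performed " + action);

// Parameterized logging formats the message only if DEBUG is enabled
logger.debug("User {} performed {}", user, action);
```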
# Send logs from a .NET app
This guide explains how to set up and configure logging in a .NET application, and how to send logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.
{/* list separator */}
* [Install the .NET SDK](https://dotnet.microsoft.com/download).
## Option 1: Using HTTP Client
### Create a new .NET project
Create a new .NET project. In your terminal, go to the directory where you want to create your project. Run the following command to create a new console app named `AxiomLogs`.
```bash
dotnet new console -n AxiomLogs
```
### Install packages
Install the packages for your project. Use the `Microsoft.AspNet.WebApi.Client` package to make HTTP requests to the Axiom API. Run the following command to install the package:
```bash
dotnet add package Microsoft.AspNet.WebApi.Client
```
### Configure the Axiom logger
Create a class to handle logging to Axiom. Add a new file named `AxiomLogger.cs` in your project directory with the following content:
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
public static class AxiomLogger
{
public static async Task LogToAxiom(string message, string logLevel)
{
// Create an instance of HttpClient to make HTTP requests
var client = new HttpClient();
// Specify the Axiom dataset name and construct the API endpoint URL
var datasetName = "YOUR-DATASET-NAME"; // Replace with your actual dataset name
var axiomUri = $"https://api.axiom.co/v1/datasets/{datasetName}/ingest";
// Replace with your Axiom API token
var apiToken = "YOUR-API-TOKEN"; // Ensure your API token is correct
// Create an array of log entries, including the timestamp, message, and log level
var logEntries = new[]
{
new
{
timestamp = DateTime.UtcNow.ToString("o"),
message = message,
level = logLevel
}
};
// Serialize the log entries to JSON format using System.Text.Json.JsonSerializer
var content = new StringContent(System.Text.Json.JsonSerializer.Serialize(logEntries), Encoding.UTF8, "application/json");
// Set the authorization header with the Axiom API token
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", apiToken);
// Make a POST request to the Axiom API endpoint with the serialized log entries
var response = await client.PostAsync(axiomUri, content);
// Check the response status code
if (!response.IsSuccessStatusCode)
{
// If the response is not successful, print the error details
var responseBody = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Failed to send log: {response.StatusCode}\n{responseBody}");
}
else
{
// If the response is successful, print "Log sent successfully."
Console.WriteLine("Log sent successfully.");
}
}
}
```
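The class above creates a new `HttpClient` for every log entry, which is fine for a short demo but can exhaust sockets under sustained load. A possible variant (an assumption layered on top of the example above, not required by Axiom) reuses a single static client and attaches the token per request:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class AxiomLogger
{
    // One HttpClient for the lifetime of the process avoids socket exhaustion under load
    private static readonly HttpClient Client = new HttpClient();

    public static async Task LogToAxiom(string message, string logLevel)
    {
        var datasetName = "YOUR-DATASET-NAME"; // Replace with your actual dataset name
        var axiomUri = $"https://api.axiom.co/v1/datasets/{datasetName}/ingest";
        var apiToken = "YOUR-API-TOKEN"; // Replace with your Axiom API token

        var logEntries = new[]
        {
            new { timestamp = DateTime.UtcNow.ToString("o"), message, level = logLevel }
        };

        using var request = new HttpRequestMessage(HttpMethod.Post, axiomUri)
        {
            Content = new StringContent(System.Text.Json.JsonSerializer.Serialize(logEntries), Encoding.UTF8, "application/json")
        };
        // Attach the token per request instead of mutating shared default headers
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiToken);

        var response = await Client.SendAsync(request);
        if (!response.IsSuccessStatusCode)
        {
            Console.WriteLine($"Failed to send log: {response.StatusCode}");
        }
    }
}
```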
### Configure the main program
Now that the Axiom logger is in place, update the main program to use it. Open the `Program.cs` file and replace its contents with the following code:
```csharp
using System;
using System.Threading.Tasks;
class Program
{
static async Task Main(string[] args)
{
// Log the application startup event with an "INFO" log level
await AxiomLogger.LogToAxiom("Application started", "INFO");
// Call the SimulateOperations method to simulate various application operations
await SimulateOperations();
// Log the .NET runtime version information with an "INFO" log level
await AxiomLogger.LogToAxiom($"CLR version: {Environment.Version}", "INFO");
// Log the application shutdown event with an "INFO" log level
await AxiomLogger.LogToAxiom("Application shutting down", "INFO");
}
static async Task SimulateOperations()
{
// Log the start of operations with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Starting operations", "DEBUG");
// Log the database connection event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Connecting to database", "DEBUG");
await Task.Delay(500); // Simulated delay
// Log the successful database connection with an "INFO" log level
await AxiomLogger.LogToAxiom("Connected to database successfully", "INFO");
// Log the user data retrieval event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Retrieving user data", "DEBUG");
await Task.Delay(1000);
// Log the number of retrieved user records with an "INFO" log level
await AxiomLogger.LogToAxiom("Retrieved 100 user records", "INFO");
// Log the user preference update event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Updating user preferences", "DEBUG");
await Task.Delay(800);
// Log the successful user preference update with an "INFO" log level
await AxiomLogger.LogToAxiom("Updated user preferences successfully", "INFO");
try
{
// Log the payment processing event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Processing payments", "DEBUG");
await Task.Delay(1500);
// Intentionally throw an exception to demonstrate error logging
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
// Log the payment processing failure with an "ERROR" log level
await AxiomLogger.LogToAxiom($"Payment processing failed: {ex.Message}", "ERROR");
}
// Log the email notification sending event with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Sending email notifications", "DEBUG");
await Task.Delay(1200);
// Log the number of sent email notifications with an "INFO" log level
await AxiomLogger.LogToAxiom("Sent 50 email notifications", "INFO");
// Log the high memory usage detection with a "WARN" log level
await AxiomLogger.LogToAxiom("Detected high memory usage", "WARN");
await Task.Delay(500);
// Log the memory usage normalization with an "INFO" log level
await AxiomLogger.LogToAxiom("Memory usage normalized", "INFO");
// Log the completion of operations with a "DEBUG" log level
await AxiomLogger.LogToAxiom("Operations completed", "DEBUG");
}
}
```
This code simulates various app operations and logs messages at different levels (DEBUG, INFO, WARN, ERROR) to Axiom.
### Project file configuration
Ensure your `AxiomLogs.csproj` file is configured with the package reference. It should look similar to the following; the `ItemGroup` entry and its version are added automatically by the `dotnet add package` command above:
```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <!-- PackageReference entry for Microsoft.AspNet.WebApi.Client added by dotnet add package -->
  </ItemGroup>
</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You see the log messages being sent to Axiom, and the console displays `Log sent successfully.` for each log entry.
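After a run, you can check the events with a quick APL query. The `level` and `message` fields below come from the payload built in `AxiomLogger.cs`; replace the dataset name with your own:
```kusto
['YOUR-DATASET-NAME']
| where level == 'ERROR'
| project _time, message, level
```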
## Option 2: Using Serilog
### Install Serilog packages
Add Serilog and the necessary extensions to your project. You need the `Serilog`, `Serilog.Sinks.Http`, and `Serilog.Formatting.Json` packages.
```bash
dotnet add package Serilog
dotnet add package Serilog.Sinks.Http
dotnet add package Serilog.Formatting.Json
```
### Configure Serilog
In your `Program.cs` or a startup configuration file, set up Serilog to use the HTTP sink. Configure the sink to point to the Axiom ingestion API endpoint.
```csharp
using Serilog;
class Program
{
static async Task Main(string[] args)
{
// Configure Serilog inside Main so the app keeps a single entry point.
// Note: depending on your Serilog.Sinks.Http version, the httpClient parameter
// may expect the sink's own IHttpClient abstraction rather than HttpClient;
// adjust the call to match the version you installed.
Log.Logger = new LoggerConfiguration()
.WriteTo.Http(
requestUri: "https://api.axiom.co/v1/datasets/YOUR-DATASET-NAME/ingest",
textFormatter: new Serilog.Formatting.Json.JsonFormatter(),
httpClient: new HttpClient
{
DefaultRequestHeaders =
{
{ "Authorization", "Bearer YOUR-API-TOKEN" }
}
})
.CreateLogger();
Log.Information("Application started");
await SimulateOperations();
Log.Information($"CLR version: {Environment.Version}");
Log.Information("Application shutting down");
}
static async Task SimulateOperations()
{
Log.Debug("Starting operations");
Log.Debug("Connecting to database");
await Task.Delay(500); // Simulated delay
Log.Information("Connected to database successfully");
Log.Debug("Retrieving user data");
await Task.Delay(1000);
Log.Information("Retrieved 100 user records");
Log.Debug("Updating user preferences");
await Task.Delay(800);
Log.Information("Updated user preferences successfully");
try
{
Log.Debug("Processing payments");
await Task.Delay(1500);
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
Log.Error($"Payment processing failed: {ex.Message}");
}
Log.Debug("Sending email notifications");
await Task.Delay(1200);
Log.Information("Sent 50 email notifications");
Log.Warning("Detected high memory usage");
await Task.Delay(500);
Log.Information("Memory usage normalized");
Log.Debug("Operations completed");
}
}
```
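The HTTP sink batches events in the background, so a short-lived console app can exit before everything is posted. It’s usually worth flushing on shutdown, for example as the last line of `Main` (not shown in the example above):
```csharp
Log.CloseAndFlush();
```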
### Project file configuration
Ensure your `AxiomLogs.csproj` file is configured with the package references. It should look similar to the following; the `ItemGroup` entries and their versions are added automatically by the `dotnet add package` commands above:
```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <!-- PackageReference entries for the Serilog packages added by dotnet add package -->
  </ItemGroup>
</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You see the log messages being sent to Axiom.
## Option 3: Using NLog
### Install NLog packages
You need the core `NLog` package and, because the example below uses `NLogBuilder` from the `NLog.Web` namespace, the `NLog.Web.AspNetCore` package. Depending on how you forward logs over HTTP, you may also need an extension package that provides an HTTP target.
```bash
dotnet add package NLog
dotnet add package NLog.Web.AspNetCore
```
### Configure NLog
Set up NLog by creating an `NLog.config` file in your project directory or configuring it programmatically. The sketch below uses NLog’s built-in `WebService` target to POST each event to the Axiom ingest endpoint as JSON. Treat it as a starting point rather than a drop-in configuration: Axiom’s ingest API expects a JSON array of events, so depending on your NLog version you may need an HTTP-target extension package or an adjusted payload format.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Posts each log event to the Axiom ingest endpoint as JSON -->
    <target name="axiom" xsi:type="WebService"
            url="https://api.axiom.co/v1/datasets/YOUR-DATASET-NAME/ingest"
            protocol="JsonPost">
      <header name="Authorization" layout="Bearer YOUR-API-TOKEN" />
      <parameter name="timestamp" layout="${date:format=o}" />
      <parameter name="level" layout="${level:uppercase=true}" />
      <parameter name="message" layout="${message}" />
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="axiom" />
  </rules>
</nlog>
```
### Configure the main program
Update the main program to use `NLog`. In your `Program.cs` file:
```csharp
using NLog;
using NLog.Web;
class Program
{
// Load the NLog configuration once and create a logger shared by the class
private static readonly Logger logger = NLogBuilder.ConfigureNLog("NLog.config").GetCurrentClassLogger();
static async Task Main(string[] args)
{
logger.Info("Application started");
await SimulateOperations();
logger.Info($"CLR version: {Environment.Version}");
logger.Info("Application shutting down");
}
static async Task SimulateOperations()
{
logger.Debug("Starting operations");
logger.Debug("Connecting to database");
await Task.Delay(500); // Simulated delay
logger.Info("Connected to database successfully");
logger.Debug("Retrieving user data");
await Task.Delay(1000);
logger.Info("Retrieved 100 user records");
logger.Debug("Updating user preferences");
await Task.Delay(800);
logger.Info("Updated user preferences successfully");
try
{
logger.Debug("Processing payments");
await Task.Delay(1500);
throw new Exception("Payment gateway unavailable");
}
catch (Exception ex)
{
logger.Error($"Payment processing failed: {ex.Message}");
}
logger.Debug("Sending email notifications");
await Task.Delay(1200);
logger.Info("Sent 50 email notifications");
logger.Warn("Detected high memory usage");
await Task.Delay(500);
logger.Info("Memory usage normalized");
logger.Debug("Operations completed");
}
}
```
### Project file configuration
Ensure your `AxiomLogs.csproj` file is configured with the package references. It should look similar to the following; the `ItemGroup` entries and their versions are added automatically by the `dotnet add package` commands above:
```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <!-- PackageReference entries for NLog and NLog.Web.AspNetCore added by dotnet add package -->
  </ItemGroup>
</Project>
```
### Build and run the app
To build and run the app, go to the project directory in your terminal and run the following commands:
```bash
dotnet build
dotnet run
```
These commands build the project and run the app. You see the log messages being sent to Axiom.
## Best practices for logging
To make your logging more effective, consider the following best practices:
* Include relevant information such as user IDs, request details, and system state in your log messages to provide context when investigating issues.
* Use different log levels (DEBUG, INFO, WARN, ERROR) to categorize the severity and importance of log messages. This allows you to filter and analyze logs more effectively.
* Use structured logging formats like JSON to make it easier to parse and analyze log data (see the sketch after this list).
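As a sketch of how the first and third practices combine, you can build each event as a dictionary so that arbitrary context fields become queryable JSON columns in Axiom. The field names below (`userId`, `requestPath`) are illustrative, not required by Axiom:
```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Build a structured log event with context fields alongside the message
var logEvent = new Dictionary<string, object>
{
    ["timestamp"] = DateTime.UtcNow.ToString("o"),
    ["level"] = "ERROR",
    ["message"] = "Payment processing failed",
    ["userId"] = 42,
    ["requestPath"] = "/api/checkout"
};

// The Axiom ingest endpoint accepts an array of JSON events
Console.WriteLine(JsonSerializer.Serialize(new[] { logEvent }));
```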
## Conclusion
This guide covers the steps to send logs from a C# .NET app to Axiom. By following these instructions and adhering to logging best practices, you can effectively monitor your app, diagnose issues, and gain valuable insights into its behavior.
# Send logs from Laravel to Axiom
This guide demonstrates how to configure logging in a Laravel app to send logs to Axiom.
This guide explains how to integrate Axiom as a logging solution in a Laravel app. Using Axiom’s capabilities with a custom log channel, you can efficiently send your app’s logs to Axiom for storage, analysis, and monitoring. This integration uses Monolog, Laravel’s underlying logging library, to create a custom logging handler that forwards logs to Axiom.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/settings#data) where you will send your data.
* [Create an API token in Axiom with permissions to query and ingest data](/reference/settings#access-overview).
* A PHP development [environment](https://www.php.net/manual/en/install.php).
* [Composer](https://laravel.com/docs/11.x/installation) installed on your system.
* A Laravel app. The next section shows how to create one.
## Installation
### Create a Laravel project
Create a new Laravel project:
```bash
composer create-project --prefer-dist laravel/laravel laravel-axiom-logger
```
## Exploring the logging config file
In your Laravel project, the `config` directory contains configuration files that control how different parts of your app work, such as how it connects to the database, manages sessions, and handles caching. Among these files, **`logging.php`** defines how your app logs activities and errors. It lets you specify where your logs go: a file, a cloud service, or another destination. The configuration file below includes the Axiom logging setup.
```bash
code config/logging.php
```
```php
<?php

use Monolog\Handler\NullHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Processor\PsrLogMessageProcessor;

return [
'default' => env('LOG_CHANNEL', 'stack'),
'deprecations' => [
'channel' => env('LOG_DEPRECATIONS_CHANNEL', 'null'),
'trace' => false,
],
'channels' => [
'stack' => [
'driver' => 'stack',
'channels' => ['single'],
'ignore_exceptions' => false,
],
'single' => [
'driver' => 'single',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'axiom' => [
'driver' => 'monolog',
'handler' => App\Logging\AxiomHandler::class,
'level' => env('LOG_LEVEL', 'debug'),
'with' => [
'apiToken' => env('AXIOM_API_TOKEN'),
'dataset' => env('AXIOM_DATASET'),
],
],
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'days' => 14,
'replace_placeholders' => true,
],
'stderr' => [
'driver' => 'monolog',
'level' => env('LOG_LEVEL', 'debug'),
'handler' => StreamHandler::class,
'formatter' => env('LOG_STDERR_FORMATTER'),
'with' => [
'stream' => 'php://stderr',
],
'processors' => [PsrLogMessageProcessor::class],
],
'syslog' => [
'driver' => 'syslog',
'level' => env('LOG_LEVEL', 'debug'),
'facility' => LOG_USER,
'replace_placeholders' => true,
],
'errorlog' => [
'driver' => 'errorlog',
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'null' => [
'driver' => 'monolog',
'handler' => NullHandler::class,
],
'emergency' => [
'path' => storage_path('logs/laravel.log'),
],
],
];
```
At the start of the `logging.php` file in your Laravel project, you’ll find Monolog handlers like `NullHandler` and `StreamHandler` being imported. This shows that Laravel uses Monolog under the hood, which gives you access to its wide range of logging handlers and formatters.
### Default log channel
The `default` configuration specifies the primary channel Laravel uses for logging. In this setup, you set it through the **`.env`** file with the **`LOG_CHANNEL`** variable, which is set to **`axiom`**. This means that, by default, log messages are sent to the Axiom channel, using the custom handler defined later in this guide to send logs to your dataset.
```bash
LOG_CHANNEL=axiom
AXIOM_API_TOKEN=$API_TOKEN
AXIOM_DATASET=$DATASET
LOG_LEVEL=debug
LOG_DEPRECATIONS_CHANNEL=null
```
### Deprecations log channel
The `deprecations` channel is configured to handle logs about deprecated features in PHP and libraries, helping you prepare for updates. By default, it’s set to ignore these warnings, but you can adjust this to direct deprecation logs to a specific channel if needed.
```php
'deprecations' => [
'channel' => env('LOG_DEPRECATIONS_CHANNEL', 'null'),
'trace' => false,
],
```
### Configuration log channel
The heart of the `logging.php` file lies within the **`channels`** array where you define all available logging channels. The configuration highlights channels like **`single`**, **`axiom`**, and **`daily`**, each serving different logging purposes:
```php
'single' => [
'driver' => 'single',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'axiom' => [
'driver' => 'monolog',
'handler' => App\Logging\AxiomHandler::class,
'level' => env('LOG_LEVEL', 'debug'),
'with' => [
'apiToken' => env('AXIOM_API_TOKEN'),
'dataset' => env('AXIOM_DATASET'),
],
],
```
* **Single**: Designed for simplicity, the **`single`** channel writes logs to a single file. It’s a straightforward solution for tracking logs without needing complex log management strategies.
* **Axiom**: The custom **`axiom`** channel sends logs to your specified Axiom dataset, providing advanced log management capabilities. This integration enables powerful log analysis and monitoring, supporting better insights into your app’s performance and issues.
* **Daily**: This channel rotates logs daily, keeping your log files manageable and making it easier to navigate log entries over time.
Each channel can be customized further, such as adjusting the log level to control the verbosity of logs captured. The **`LOG_LEVEL`** environment variable sets this, defaulting to **`debug`** for capturing detailed log information.
## Getting started with log levels in Laravel
Laravel lets you choose from eight log levels for your messages, ranging from the most severe to purely informational. Here’s what each level means, starting with the most severe:
* **EMERGENCY**: The app is broken and needs immediate attention.
* **ALERT**: Similar to `EMERGENCY`, but less severe.
* **CRITICAL**: Critical errors within the main parts of your app.
* **ERROR**: Error conditions in your app.
* **WARNING**: Something unusual happened that may need to be addressed later.
* **NOTICE**: Important information, but not a warning or error.
* **INFO**: General updates about what your app is doing.
* **DEBUG**: Detailed messages used for debugging.
Not every situation fits into one of these levels. For example, in an online store, you might use **INFO** to log when someone buys something and **ERROR** if a payment doesn’t go through because of a problem.
Here’s a simple way to log messages at each level in Laravel:
```php
use Illuminate\Support\Facades\Log;
Log::debug("Checking details.");
Log::info("User logged in.");
Log::notice("User tried a feature.");
Log::warning("Feature might not work as expected.");
Log::error("Feature failed to load.");
Log::critical("Major issue with the app.");
Log::alert("Immediate action needed.");
Log::emergency("The app is down.");
```
Output:
```php
[2023-09-01 00:00:00] local.DEBUG: Checking details.
[2023-09-01 00:00:00] local.INFO: User logged in.
[2023-09-01 00:00:00] local.NOTICE: User tried a feature.
[2023-09-01 00:00:00] local.WARNING: Feature might not work as expected.
[2023-09-01 00:00:00] local.ERROR: Feature failed to load.
[2023-09-01 00:00:00] local.CRITICAL: Major issue with the app.
[2023-09-01 00:00:00] local.ALERT: Immediate action needed.
[2023-09-01 00:00:00] local.EMERGENCY: The app is down.
```
## Creating the custom logger class
In this section, you create the custom logger class that sends your Laravel app’s logs to Axiom. The class, named `AxiomHandler`, extends Monolog’s **`AbstractProcessingHandler`**, giving you a structured way to handle log messages and forward them to Axiom.
* **Initializing cURL**: The **`initializeCurl`** method sets up a cURL handle to communicate with Axiom’s API. It prepares the request with the appropriate headers, including the authorization header with your Axiom API token, and sets the content type to **`application/json`**.
* **Handling errors**: If there’s an error during the cURL request, it’s logged to PHP’s error log. This helps in diagnosing issues with log forwarding without disrupting your app’s normal operations.
* **Formatting logs**: Lastly, the **`getDefaultFormatter`** method specifies the log message format. By default, it returns Monolog’s **`JsonFormatter`** so that log messages are JSON encoded, making them easy to parse and analyze in Axiom.
```php
<?php

namespace App\Logging;

use Monolog\Formatter\FormatterInterface;
use Monolog\Handler\AbstractProcessingHandler;
use Monolog\LogRecord;

class AxiomHandler extends AbstractProcessingHandler
{
private string $apiToken;
private string $dataset;

public function __construct(string $apiToken, string $dataset)
{
parent::__construct();
$this->apiToken = $apiToken;
$this->dataset = $dataset;
}
private function initializeCurl(): \CurlHandle
{
$endpoint = "https://api.axiom.co/v1/datasets/{$this->dataset}/ingest";
$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Authorization: Bearer ' . $this->apiToken,
'Content-Type: application/json',
]);
return $ch;
}
protected function write(LogRecord $record): void
{
$ch = $this->initializeCurl();
$data = [
'message' => $record->message,
'context' => $record->context,
'level' => $record->level->getName(),
'channel' => $record->channel,
'extra' => $record->extra,
];
$payload = json_encode([$data]);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_exec($ch);
if (curl_errno($ch)) {
// Optionally log the curl error to PHP error log
error_log('Curl error: ' . curl_error($ch));
}
curl_close($ch);
}
protected function getDefaultFormatter(): FormatterInterface
{
return new \Monolog\Formatter\JsonFormatter();
}
}
```
## Creating the test controller
This section shows how to verify that your custom Axiom logger is set up and working in your Laravel app. To do this, create a simple test controller with a method that sends log messages through the Axiom channel, then define a route that triggers this logging action so you can test the logger by visiting a URL in your browser or with a tool like cURL.
Create a new controller called `TestController` in your `app/Http/Controllers` directory. In this controller, add a method named `logTest`. This method uses Laravel’s logging facade to send test log messages to your Axiom dataset. Here’s how to set it up:
```php
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Log;
use Monolog\Logger;

class TestController extends Controller
{
public function logTest()
{
// Add the authenticated user's ID (or 'guest') to every log record
$customProcessor = function ($record) {
$record->extra['user_id'] = auth()->check() ? auth()->user()->id : 'guest';
return $record;
};
// Get the Monolog instance for the 'axiom' channel and push the custom processor
$logger = Log::channel('axiom')->getLogger();
if ($logger instanceof Logger) {
$logger->pushProcessor($customProcessor);
}
Log::channel('axiom')->debug("Checking details.", ['action' => 'detailCheck', 'status' => 'initiated']);
Log::channel('axiom')->info("User logged in.", ['user_id' => 'exampleUserId', 'method' => 'standardLogin']);
Log::channel('axiom')->info("User tried a feature.", ['feature' => 'experimentalFeatureX', 'status' => 'trial']);
Log::channel('axiom')->warning("Feature might not work as expected.", ['feature' => 'experimentalFeature', 'warning' => 'betaStage']);
Log::channel('axiom')->warning("Feature failed to load.", ['feature' => 'featureY', 'error_code' => 500]);
Log::channel('axiom')->error("Major issue with the app.", ['system' => 'paymentProcessing', 'error' => 'serviceUnavailable']);
Log::channel('axiom')->warning("Immediate action needed.", ['issue' => 'security', 'level' => 'high']);
Log::channel('axiom')->error("The app is down.", ['system' => 'entireApplication', 'status' => 'offline']);
return 'Log messages sent to Axiom';
}
}
```
The `logTest` method targets the `axiom` channel, which you previously configured to forward logs to your Axiom account. The log messages should then appear in your Axiom dataset, confirming that the logger is working as expected.
## Registering the route
Next, you need to make this test accessible via a web route. Open your `routes/web.php` file and add a new route that points to the **`logTest`** method in your **`TestController`**. This enables you to trigger the log message by visiting a specific URL in your web browser.
```php
// routes/web.php
use App\Http\Controllers\TestController;

// Example URI; choose any path you prefer
Route::get('/test-log', [TestController::class, 'logTest']);
```
## Conclusion
This guide has introduced you to integrating Axiom for logging in Laravel apps. You've learned how to create a custom logger, configure log channels, and understand the significance of log levels. With this knowledge, you’re set to track errors and analyze log data effectively using Axiom.
# Send logs from a Ruby on Rails application using Faraday
This guide provides step-by-step instructions on how to send logs from a Ruby on Rails application to Axiom using the Faraday library.
By following it, you configure your Rails app to send logs to Axiom, allowing you to monitor and analyze your application logs effectively.
## Prerequisites
* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset](/reference/settings#data) where you want to send data.
* [Create an API token in Axiom with permissions to ingest and query data](/reference/tokens).
* Install a [Ruby version manager](https://www.ruby-lang.org/en/documentation/installation/) like `rbenv` and use it to install the latest Ruby version.
* Install [Ruby on Rails](https://guides.rubyonrails.org/v5.0/getting_started.html) using the `gem install rails` command.
## Set up the Ruby on Rails application
1. Create a new Rails app using the `rails new myapp` command.
2. Navigate to the app directory: `cd myapp`
## Setting up the Gemfile
Open the `Gemfile` in your Rails app, and then add the following gems:
```ruby
gem 'faraday'
gem 'dotenv-rails', groups: [:development, :test]
```
Install the dependencies by running `bundle install`.
## Create and configure the Axiom logger
1. Create a new file named `axiom_logger.rb` in the `app/services` directory of your Rails app.
2. Add the following code to `axiom_logger.rb`:
```ruby
# app/services/axiom_logger.rb
require 'faraday'
require 'json'
class AxiomLogger
def self.send_log(log_data)
dataset_name = "DATASET_NAME"
axiom_ingest_api_url = "https://api.axiom.co/v1/datasets/#{dataset_name}/ingest"
ingest_token = "API_TOKEN"
conn = Faraday.new(url: axiom_ingest_api_url) do |faraday|
faraday.request :url_encoded
faraday.adapter Faraday.default_adapter
end
wrapped_log_data = [log_data]
response = conn.post do |req|
req.headers['Content-Type'] = 'application/json'
req.headers['Authorization'] = "Bearer #{ingest_token}"
req.body = wrapped_log_data.to_json
end
puts "AxiomLogger Response status: #{response.status}, body: #{response.body}"
if response.status != 200
Rails.logger.error "Failed to send log to Axiom: #{response.body}"
end
end
end
```
In the code above, make the following changes:
* Replace `API_TOKEN` with your Axiom API token.
* Replace `DATASET_NAME` with the name of the Axiom dataset where you want to send data.
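Because the Gemfile already includes `dotenv-rails`, you can keep these values out of source control instead of hard-coding them. A minimal sketch, assuming a `.env` file at the project root that defines `AXIOM_API_TOKEN` and `AXIOM_DATASET`:
```ruby
# app/services/axiom_logger.rb — read the values from the environment
dataset_name = ENV.fetch("AXIOM_DATASET")
ingest_token = ENV.fetch("AXIOM_API_TOKEN")
axiom_ingest_api_url = "https://api.axiom.co/v1/datasets/#{dataset_name}/ingest"
```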
## Test with the Axiom logger
1. Create a new file named `axiom_logger_test.rb` in the `config/initializers` directory.
2. Add the following code to `axiom_logger_test.rb`:
```ruby
# config/initializers/axiom_logger_test.rb
Rails.application.config.after_initialize do
puts "Sending test logs to Axiom using Ruby on Rails Faraday..."
# Info logs
AxiomLogger.send_log({ message: "Application started successfully", level: "info", service: "initializer" })
AxiomLogger.send_log({ message: "User authentication successful", level: "inf