# avg This page explains how to use the avg aggregation function in APL. The `avg` aggregation in APL calculates the average value of a numeric field across a set of records. You can use this aggregation when you need to determine the mean value of numerical data, such as request durations, response times, or other performance metrics. It is useful in scenarios such as performance analysis, trend identification, and general statistical analysis. When to use `avg`: * When you want to analyze the average of numeric values over a specific time range or set of data. * For comparing trends, like average request duration or latency across HTTP requests. * To provide insight into system or user performance, such as the average duration of transactions in a service. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `avg` function works similarly, but the syntax differs slightly. Here’s how to write the equivalent query in APL. ```sql Splunk example | stats avg(req_duration_ms) by status ``` ```kusto APL equivalent ['sample-http-logs'] | summarize avg(req_duration_ms) by status ``` In ANSI SQL, the `avg` aggregation is used similarly, but APL has a different syntax for structuring the query. ```sql SQL example SELECT status, AVG(req_duration_ms) FROM sample_http_logs GROUP BY status ``` ```kusto APL equivalent ['sample-http-logs'] | summarize avg(req_duration_ms) by status ``` ## Usage ### Syntax ```kusto summarize avg(ColumnName) [by GroupingColumn] ``` ### Parameters * **ColumnName**: The numeric field you want to calculate the average of. * **GroupingColumn** (optional): A column to group the results by. If not specified, the average is calculated over all records. ### Returns * A table with the average value for the specified field, optionally grouped by another column. 
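Beyond grouping by a field, you can combine `avg` with time binning to see how the mean evolves over time. The following is a minimal sketch against the `sample-http-logs` dataset used throughout these docs; the five-minute bin size is an arbitrary choice:

```kusto
['sample-http-logs']
| summarize avg(req_duration_ms) by bin(_time, 5m)
```

Each row of the result holds the average request duration for one five-minute window, which is convenient for charting latency trends.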
## Use case examples This example calculates the average request duration for HTTP requests, grouped by status. **Query** ```kusto ['sample-http-logs'] | summarize avg(req_duration_ms) by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20status%22%7D) **Output** | status | avg\_req\_duration\_ms | | ------ | ---------------------- | | 200 | 350.4 | | 404 | 150.2 | This query calculates the average request duration (in milliseconds) for each HTTP status code. This example calculates the average span duration for each service to analyze performance across services. **Query** ```kusto ['otel-demo-traces'] | summarize avg(duration) by ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20avg\(duration\)%20by%20%5B'service.name'%5D%22%7D) **Output** | service.name | avg\_duration | | ------------ | ------------- | | frontend | 500ms | | cartservice | 250ms | This query calculates the average duration of spans for each service. In security logs, you can calculate the average request duration by country to analyze regional performance trends. **Query** ```kusto ['sample-http-logs'] | summarize avg(req_duration_ms) by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20avg\(req_duration_ms\)%20by%20%5B'geo.country'%5D%22%7D) **Output** | geo.country | avg\_req\_duration\_ms | | ----------- | ---------------------- | | US | 400.5 | | DE | 250.3 | This query calculates the average request duration for each country from where the requests originated. ## List of related aggregations * [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total of a numeric field. 
This is useful when you want the total of values rather than their average. * [**count**](/apl/aggregation-function/count): The `count` function returns the total number of records. It’s useful when you want to count occurrences rather than averaging numerical values. * [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field. Use this when you’re interested in the smallest value in your dataset. * [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field. This is useful for finding the largest value in the data. * [**stdev**](/apl/aggregation-function/stdev): This function calculates the standard deviation of a numeric field, providing insight into how spread out the data is around the mean. # avgif This page explains how to use the avgif aggregation function in APL. The `avgif` aggregation function in APL allows you to calculate the average value of a field, but only for records that satisfy a given condition. This function is particularly useful when you need to perform a filtered aggregation, such as finding the average response time for requests that returned a specific status code or filtering by geographic regions. The `avgif` function is highly valuable in scenarios like log analysis, performance monitoring, and anomaly detection, where focusing on subsets of data can provide more accurate insights. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, you achieve similar functionality using the combination of a `stats` function with conditional filtering. In APL, `avgif` provides this filtering inline as part of the aggregation function, which can simplify your queries. 
```sql Splunk example | where status="200" | stats avg(req_duration_ms) by id ``` ```kusto APL equivalent ['sample-http-logs'] | summarize avgif(req_duration_ms, status == "200") by id ``` In ANSI SQL, you can use a `CASE` statement inside an `AVG` function to achieve similar behavior. APL simplifies this with `avgif`, allowing you to specify the condition directly. ```sql SQL example SELECT id, AVG(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END) FROM sample_http_logs GROUP BY id ``` ```kusto APL equivalent ['sample-http-logs'] | summarize avgif(req_duration_ms, status == "200") by id ``` ## Usage ### Syntax ```kusto summarize avgif(expr, predicate) by grouping_field ``` ### Parameters * **`expr`**: The field for which you want to calculate the average. * **`predicate`**: A boolean condition that filters which records are included in the calculation. * **`grouping_field`**: (Optional) A field by which you want to group the results. ### Returns The function returns the average of the values from the `expr` field for the records that satisfy the `predicate`. If no records match the condition, the result is `null`. ## Use case examples In this example, you calculate the average request duration for HTTP status 200 in different cities. **Query** ```kusto ['sample-http-logs'] | summarize avgif(req_duration_ms, status == "200") by ['geo.city'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20status%20%3D%3D%20%22200%22%29%20by%20%5B%27geo.city%27%5D%22%7D) **Output** | geo.city | avg\_req\_duration\_ms | | -------- | ---------------------- | | New York | 325 | | London | 400 | | Tokyo | 275 | This query calculates the average request duration (`req_duration_ms`) for HTTP requests that returned a status of 200 (`status == "200"`), grouped by the city where the request originated (`geo.city`). 
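The predicate can be any boolean expression, so you can tighten the filter by combining conditions with `and`. The following sketch restricts the same average to requests from a single country; the `geo.country` field and the `US` value are illustrative:

```kusto
['sample-http-logs']
| summarize avgif(req_duration_ms, status == "200" and ['geo.country'] == "US") by ['geo.city']
```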
In this example, you calculate the average span duration for traces that ended with HTTP status 500. **Query** ```kusto ['otel-demo-traces'] | summarize avgif(duration, status == "500") by ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avgif%28duration%2C%20status%20%3D%3D%20%22500%22%29%20by%20%5B%27service.name%27%5D%22%7D) **Output** | service.name | avg\_duration | | --------------- | ------------- | | checkoutservice | 500ms | | frontend | 600ms | | cartservice | 475ms | This query calculates the average span duration (`duration`) for traces where the status code is 500 (`status == "500"`), grouped by the service name (`service.name`). In this example, you calculate the average request duration for failed HTTP requests (status code 400 or higher) by country. **Query** ```kusto ['sample-http-logs'] | summarize avgif(req_duration_ms, toint(status) >= 400) by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20avgif%28req_duration_ms%2C%20toint%28status%29%20%3E%3D%20400%29%20by%20%5B%27geo.country%27%5D%22%7D) **Output** | geo.country | avg\_req\_duration\_ms | | ----------- | ---------------------- | | USA | 450 | | Canada | 500 | | Germany | 425 | This query calculates the average request duration (`req_duration_ms`) for failed HTTP requests (`status >= 400`), grouped by the country of origin (`geo.country`). ## List of related aggregations * [**minif**](/apl/aggregation-function/minif): Returns the minimum value of an expression, filtered by a predicate. Use when you want to find the smallest value for a subset of data. * [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of an expression, filtered by a predicate. Use when you are looking for the largest value within specific conditions. 
* [**countif**](/apl/aggregation-function/countif): Counts the number of records that match a condition. Use when you want to know how many records meet a specific criterion. * [**sumif**](/apl/aggregation-function/sumif): Sums the values of a field that match a given condition. Ideal for calculating the total of a subset of data. # count This page explains how to use the count aggregation function in APL. The `count` aggregation in APL returns the total number of records in a dataset or the total number of records that match specific criteria. This function is useful when you need to quantify occurrences, such as counting log entries, user actions, or security events. When to use `count`: * To count the total number of events in log analysis, such as the number of HTTP requests or errors. * To monitor system usage, such as the number of transactions or API calls. * To identify security incidents by counting failed login attempts or suspicious activities. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `count` function works similarly to APL, but the syntax differs slightly. ```sql Splunk example | stats count by status ``` ```kusto APL equivalent ['sample-http-logs'] | summarize count() by status ``` In ANSI SQL, the `count` function works similarly, but APL uses different syntax for querying. ```sql SQL example SELECT status, COUNT(*) FROM sample_http_logs GROUP BY status ``` ```kusto APL equivalent ['sample-http-logs'] | summarize count() by status ``` ## Usage ### Syntax ```kusto summarize count() [by GroupingColumn] ``` ### Parameters * **GroupingColumn** (optional): A column to group the count results by. If not specified, the total number of records across the dataset is returned. ### Returns * A table with the count of records for the entire dataset or grouped by the specified column. 
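You can also pair `count` with time binning to track event volume over time. A minimal sketch against `sample-http-logs`; the one-hour bin size is an arbitrary choice:

```kusto
['sample-http-logs']
| summarize count() by bin(_time, 1h)
```

Each row of the result holds the number of requests that arrived in one hour, which is useful for spotting traffic spikes.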
## Use case examples In log analysis, you can count the number of HTTP requests by status to get a sense of how many requests result in different HTTP status codes. **Query** ```kusto ['sample-http-logs'] | summarize count() by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20status%22%7D) **Output** | status | count | | ------ | ----- | | 200 | 1500 | | 404 | 200 | This query counts the total number of HTTP requests for each status code in the logs. For OpenTelemetry traces, you can count the total number of spans for each service, which helps you monitor the distribution of requests across services. **Query** ```kusto ['otel-demo-traces'] | summarize count() by ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'service.name'%5D%22%7D) **Output** | service.name | count | | ------------ | ----- | | frontend | 1000 | | cartservice | 500 | This query counts the number of spans for each service in the OpenTelemetry traces dataset. In security logs, you can count the number of requests by country to identify where the majority of traffic or suspicious activity originates. **Query** ```kusto ['sample-http-logs'] | summarize count() by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count\(\)%20by%20%5B'geo.country'%5D%22%7D) **Output** | geo.country | count | | ----------- | ----- | | US | 3000 | | DE | 500 | This query counts the number of requests originating from each country. ## List of related aggregations * [**sum**](/apl/aggregation-function/sum): Use `sum` to calculate the total sum of a numeric field, as opposed to counting the number of records. 
* [**avg**](/apl/aggregation-function/avg): The `avg` function calculates the average of a numeric field. Use it when you want to determine the mean value of data instead of the count. * [**min**](/apl/aggregation-function/min): The `min` function returns the minimum value of a numeric field, helping to identify the smallest value in a dataset. * [**max**](/apl/aggregation-function/max): The `max` function returns the maximum value of a numeric field, useful for identifying the largest value. * [**countif**](/apl/aggregation-function/countif): The `countif` function allows you to count only records that meet specific conditions, giving you more flexibility in your count queries. # countif This page explains how to use the countif aggregation function in APL. The `countif` aggregation function in Axiom Processing Language (APL) counts the number of records that meet a specified condition. You can use this aggregation to filter records based on a specific condition and return a count of matching records. This is particularly useful for log analysis, security audits, and tracing events when you need to isolate and count specific data subsets. Use `countif` when you want to count occurrences of certain conditions, such as HTTP status codes, errors, or actions in telemetry traces. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, conditional counting is typically done using the `eval` function combined with `stats`. APL provides a more streamlined approach with the `countif` function, which performs conditional counting directly. ```sql Splunk example | stats count(eval(status="500")) AS error_count ``` ```kusto APL equivalent ['sample-http-logs'] | summarize countif(status == '500') ``` In ANSI SQL, conditional counting is achieved by using the `COUNT` function with a `CASE` statement. 
In APL, `countif` simplifies this process by offering a direct approach to conditional counting. ```sql SQL example SELECT COUNT(CASE WHEN status = '500' THEN 1 END) AS error_count FROM sample_http_logs ``` ```kusto APL equivalent ['sample-http-logs'] | summarize countif(status == '500') ``` ## Usage ### Syntax ```kusto countif(condition) ``` ### Parameters * **condition**: A boolean expression that filters the records based on a condition. Only records where the condition evaluates to `true` are counted. ### Returns The function returns the number of records that match the specified condition. ## Use case examples In log analysis, you might want to count how many HTTP requests returned a 500 status code to detect server errors. **Query** ```kusto ['sample-http-logs'] | summarize countif(status == '500') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(status%20%3D%3D%20'500'\)%22%7D) **Output** | count\_errors | | ------------- | | 72 | This query counts the number of HTTP requests with a `500` status, helping you identify how many server errors occurred. In OpenTelemetry traces, you might want to count how many requests were initiated by the client service kind. **Query** ```kusto ['otel-demo-traces'] | summarize countif(kind == 'client') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20countif\(kind%20%3D%3D%20'client'\)%22%7D) **Output** | count\_client\_kind | | ------------------- | | 345 | This query counts how many requests were initiated by the `client` service kind, providing insight into the volume of client-side traffic. In security logs, you might want to count how many HTTP requests originated from a specific city, such as New York. 
**Query** ```kusto ['sample-http-logs'] | summarize countif(['geo.city'] == 'New York') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20countif\(%5B'geo.city'%5D%20%3D%3D%20'New%20York'\)%22%7D) **Output** | count\_nyc\_requests | | -------------------- | | 87 | This query counts how many HTTP requests originated from New York, which can help detect traffic from a particular location for security analysis. ## List of related aggregations * [**count**](/apl/aggregation-function/count): Counts all records in a dataset without applying a condition. Use this when you need the total count of records, regardless of any specific condition. * [**sumif**](/apl/aggregation-function/sumif): Adds up the values of a field for records that meet a specific condition. Use `sumif` when you want to sum values based on a filter. * [**dcountif**](/apl/aggregation-function/dcountif): Counts distinct values of a field for records that meet a condition. This is helpful when you need to count unique occurrences. * [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a field for records that match a condition, useful for performance monitoring. * [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a field for records that meet a condition. Use this when you want to find the highest value in filtered data. # dcount This page explains how to use the dcount aggregation function in APL. The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files. 
Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in application logs, or unique trace IDs in OpenTelemetry traces. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you can count distinct values using the `dc` function within the `stats` command. In APL, the `dcount` function offers similar functionality. ```sql Splunk example | stats dc(user_id) AS distinct_users ``` ```kusto APL equivalent ['sample-http-logs'] | summarize dcount(id) ``` In ANSI SQL, distinct counting is typically done using `COUNT` with the `DISTINCT` keyword. In APL, `dcount` provides a direct and efficient way to count distinct values. ```sql SQL example SELECT COUNT(DISTINCT user_id) AS distinct_users FROM sample_http_logs ``` ```kusto APL equivalent ['sample-http-logs'] | summarize dcount(id) ``` ## Usage ### Syntax ```kusto dcount(column_name) ``` ### Parameters * **column\_name**: The name of the column for which you want to count distinct values. ### Returns The function returns the count of distinct values found in the specified column. ## Use case examples In log analysis, you can count how many distinct users accessed the service. **Query** ```kusto ['sample-http-logs'] | summarize dcount(id) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(id\)%22%7D) **Output** | distinct\_users | | --------------- | | 45 | This query counts the distinct values in the `id` field, representing the number of unique users who accessed the system. In OpenTelemetry traces, you can count how many unique trace IDs are recorded. 
**Query** ```kusto ['otel-demo-traces'] | summarize dcount(trace_id) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcount\(trace_id\)%22%7D) **Output** | distinct\_traces | | ---------------- | | 321 | This query counts the distinct trace IDs in the dataset, helping you determine how many unique traces are being captured. In security logs, you can count how many distinct cities requests originated from. **Query** ```kusto ['sample-http-logs'] | summarize dcount(['geo.city']) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcount\(%5B'geo.city'%5D\)%22%7D) **Output** | distinct\_cities | | ---------------- | | 35 | This query counts the number of distinct cities recorded in the logs, which helps analyze the geographic distribution of traffic. ## List of related aggregations * [**count**](/apl/aggregation-function/count): Counts the total number of records in the dataset, including duplicates. Use it when you need to know the overall number of records. * [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition. Use `countif` when you want to count records based on a filter or condition. * [**dcountif**](/apl/aggregation-function/dcountif): Counts the distinct values in a column but only for records that meet a condition. It’s useful when you need a filtered distinct count. * [**sum**](/apl/aggregation-function/sum): Sums the values in a column. Use this when you need to add up values rather than counting distinct occurrences. * [**avg**](/apl/aggregation-function/avg): Calculates the average value for a column. Use this when you want to find the average of a specific numeric field. # dcountif This page explains how to use the dcountif aggregation function in APL. 
The `dcountif` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column that meet a specific condition. This is useful when you want to filter records and count only the unique occurrences that satisfy a given criterion. Use `dcountif` in scenarios where you need a distinct count but only for a subset of the data, such as counting unique users from a specific region, unique error codes for specific HTTP statuses, or distinct traces that match a particular service type. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, counting distinct values conditionally is typically achieved using a combination of `eval` and `dc` in the `stats` function. APL simplifies this with the `dcountif` function, which handles both filtering and distinct counting in a single step. ```sql Splunk example | stats dc(eval(status="200")) AS distinct_successful_users ``` ```kusto APL equivalent ['sample-http-logs'] | summarize dcountif(id, status == '200') ``` In ANSI SQL, conditional distinct counting can be done using a combination of `COUNT(DISTINCT)` and `CASE`. APL's `dcountif` function provides a more concise and readable way to handle conditional distinct counting. ```sql SQL example SELECT COUNT(DISTINCT CASE WHEN status = '200' THEN user_id END) AS distinct_successful_users FROM sample_http_logs ``` ```kusto APL equivalent ['sample-http-logs'] | summarize dcountif(id, status == '200') ``` ## Usage ### Syntax ```kusto dcountif(column_name, condition) ``` ### Parameters * **column\_name**: The name of the column for which you want to count distinct values. * **condition**: A boolean expression that filters the records. Only records that meet the condition will be included in the distinct count. ### Returns The function returns the count of distinct values that meet the specified condition. 
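As with other conditional aggregations, you can combine the predicate with a grouping field. A sketch that counts distinct successful users per country, using fields from the `sample-http-logs` dataset:

```kusto
['sample-http-logs']
| summarize dcountif(id, status == '200') by ['geo.country']
```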
## Use case examples In log analysis, you might want to count how many distinct users accessed the service and received a successful response (HTTP status 200). **Query** ```kusto ['sample-http-logs'] | summarize dcountif(id, status == '200') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(id%2C%20status%20%3D%3D%20'200'\)%22%7D) **Output** | distinct\_successful\_users | | --------------------------- | | 50 | This query counts the distinct users (`id` field) who received a successful HTTP response (status 200), helping you understand how many unique users had successful requests. In OpenTelemetry traces, you might want to count how many unique trace IDs are recorded for a specific service, such as `frontend`. **Query** ```kusto ['otel-demo-traces'] | summarize dcountif(trace_id, ['service.name'] == 'frontend') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20dcountif\(trace_id%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%22%7D) **Output** | distinct\_frontend\_traces | | -------------------------- | | 123 | This query counts the number of distinct trace IDs that belong to the `frontend` service, providing insight into the volume of unique traces for that service. In security logs, you might want to count the unique cities from which requests resulted in a 403 status (forbidden access). 
**Query** ```kusto ['sample-http-logs'] | summarize dcountif(['geo.city'], status == '403') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20dcountif\(%5B'geo.city'%5D%2C%20status%20%3D%3D%20'403'\)%22%7D) **Output** | distinct\_cities\_forbidden | | --------------------------- | | 20 | This query counts the number of distinct cities (`geo.city` field) where requests resulted in a `403` status, helping you identify potential unauthorized access attempts from different regions. ## List of related aggregations * [**dcount**](/apl/aggregation-function/dcount): Counts distinct values without applying any condition. Use this when you need to count unique values across the entire dataset. * [**countif**](/apl/aggregation-function/countif): Counts records that match a specific condition, without focusing on distinct values. Use this when you need to count records based on a filter. * [**sumif**](/apl/aggregation-function/sumif): Sums values in a column for records that meet a condition. This is useful when you need to sum data points after filtering. * [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of a column for records that match a condition. Use this when you need to find the average based on a filter. # histogram This page explains how to use the histogram aggregation function in APL. The `histogram` aggregation in APL allows you to create a histogram that groups numeric values into intervals or "bins." This is useful for visualizing the distribution of data, such as the frequency of response times, request durations, or other continuous numerical fields. You can use it to analyze patterns and trends in datasets like logs, traces, or metrics. 
It is especially helpful when you need to summarize a large volume of data into a digestible form, providing insights into the distribution of values. The `histogram` aggregation is ideal for identifying peaks, valleys, and outliers in your data. For example, you can analyze the distribution of request durations in web server logs or span durations in OpenTelemetry traces to understand performance bottlenecks. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you typically build a histogram by bucketing a numeric field with the `bin` command and then counting events per bucket with `stats`. In APL, the `histogram` function performs the bucketing and counting in a single aggregation, letting you control the bin size directly. ```sql Splunk example | bin req_duration_ms span=10 | stats count by req_duration_ms ``` ```kusto APL equivalent ['sample-http-logs'] | summarize histogram(req_duration_ms, 10) ``` In ANSI SQL, you can use the `GROUP BY` clause combined with range calculations to achieve a similar result to APL’s `histogram`. However, APL’s `histogram` function simplifies the process by automatically calculating bin intervals. ```sql SQL example SELECT COUNT(*), FLOOR(req_duration_ms/10)*10 as duration_bin FROM sample_http_logs GROUP BY duration_bin ``` ```kusto APL equivalent ['sample-http-logs'] | summarize histogram(req_duration_ms, 10) ``` ## Usage ### Syntax ```kusto histogram(numeric_field, bin_size) ``` ### Parameters * `numeric_field`: The numeric field you want to create a histogram for. This can be a field like request duration or span duration. * `bin_size`: The size of each bin, or interval, into which the numeric values will be grouped. ### Returns The `histogram` aggregation returns a table where each row represents a bin, along with the number of occurrences (counts) that fall within each bin. 
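The bin size controls the trade-off between resolution and noise: smaller bins reveal finer structure but produce more rows. A sketch that first narrows the value range and then applies a finer bin size; the 1000 ms cutoff and the 25 ms bin are illustrative choices:

```kusto
['sample-http-logs']
| where req_duration_ms < 1000
| summarize histogram(req_duration_ms, 25) by bin_auto(_time)
```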
## Use case examples You can use the `histogram` aggregation to analyze the distribution of request durations in web server logs. **Query** ```kusto ['sample-http-logs'] | summarize histogram(req_duration_ms, 100) by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%20100\)%20by%20bin_auto\(_time\)%22%7D) **Output** | req\_duration\_ms\_bin | count | | ---------------------- | ----- | | 0 | 50 | | 100 | 200 | | 200 | 120 | This query creates a histogram that groups request durations into bins of 100 milliseconds and shows the count of requests in each bin. It helps you visualize how frequently requests fall within certain duration ranges. In OpenTelemetry traces, you can use the `histogram` aggregation to analyze the distribution of span durations. **Query** ```kusto ['otel-demo-traces'] | summarize histogram(duration, 100) by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20histogram\(duration%2C%20100\)%20by%20bin_auto\(_time\)%22%7D) **Output** | duration\_bin | count | | ------------- | ----- | | 0.1s | 30 | | 0.2s | 120 | | 0.3s | 50 | This query groups the span durations into 100ms intervals, making it easier to spot latency issues in your traces. In security logs, the `histogram` aggregation helps you understand the frequency distribution of request durations to detect anomalies or attacks. 
**Query** ```kusto ['sample-http-logs'] | where status == '200' | summarize histogram(req_duration_ms, 50) by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20histogram\(req_duration_ms%2C%2050\)%20by%20bin_auto\(_time\)%22%7D) **Output** | req\_duration\_ms\_bin | count | | ---------------------- | ----- | | 0 | 150 | | 50 | 400 | | 100 | 100 | This query analyzes the request durations for HTTP 200 (Success) responses, helping you identify patterns in security-related events. ## List of related aggregations * [**percentile**](/apl/aggregation-function/percentile): Use `percentile` when you need to find the specific value below which a percentage of observations fall, which can provide more precise distribution analysis. * [**avg**](/apl/aggregation-function/avg): Use `avg` for calculating the average value of a numeric field, useful when you are more interested in the central tendency rather than distribution. * [**sum**](/apl/aggregation-function/sum): The `sum` function adds up the total values in a numeric field, helpful for determining overall totals. * [**count**](/apl/aggregation-function/count): Use `count` when you need a simple tally of rows or events, often in conjunction with `histogram` for more basic summarization. # make_list This page explains how to use the make_list aggregation function in APL. The `make_list` aggregation function in Axiom Processing Language (APL) collects all values from a specified column into a dynamic array for each group of rows in a dataset. This aggregation is particularly useful when you want to consolidate multiple values from distinct rows into a single grouped result. For example, if you have multiple log entries for a particular user, you can use `make_list` to gather all request URIs accessed by that user into a single list. 
You can also apply `make_list` to various contexts, such as trace aggregation, log analysis, or security monitoring, where you need to collate related events into a compact form. Key uses of `make_list`: * Consolidating values from multiple rows into a list per group. * Summarizing activity (e.g., list all HTTP requests by a user). * Generating traces or timelines from distributed logs. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the closest equivalent to `make_list` is `list`, which gathers all values, including duplicates, into a multivalue field. (The `values` function deduplicates and is closer to APL’s `make_set`.) In APL, `make_list` behaves similarly by collecting values from rows into a dynamic array. ```sql Splunk example index=logs | stats list(uri) by user ``` ```kusto APL equivalent ['sample-http-logs'] | summarize uris=make_list(uri) by id ``` In ANSI SQL, the `make_list` function is similar to `ARRAY_AGG`, which aggregates column values into an array for each group. In APL, `make_list` performs the same role, grouping the column values into a dynamic array. ```sql SQL example SELECT ARRAY_AGG(uri) AS uris FROM sample_http_logs GROUP BY id; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize uris=make_list(uri) by id ``` ## Usage ### Syntax ```kusto make_list(column) ``` ### Parameters * `column`: The name of the column to collect into a list. ### Returns The `make_list` function returns a dynamic array that contains all values of the specified column for each group of rows. ## Use case examples In log analysis, `make_list` is useful for collecting all URIs a user has accessed in a session. This can help in identifying browsing patterns or tracking user activity.
**Query** ```kusto ['sample-http-logs'] | summarize uris=make_list(uri) by id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20uris%3Dmake_list%28uri%29%20by%20id%22%7D) **Output** | id | uris | | ------- | --------------------------------- | | user123 | \[‘/home’, ‘/profile’, ‘/cart’] | | user456 | \[‘/search’, ‘/checkout’, ‘/pay’] | This query collects all URIs accessed by each user, providing a compact view of user activity in the logs. In OpenTelemetry traces, `make_list` can help in gathering the list of services involved in a trace by consolidating all service names related to a trace ID. **Query** ```kusto ['otel-demo-traces'] | summarize services=make_list(['service.name']) by trace_id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20services%3Dmake_list%28%5B%27service.name%27%5D%29%20by%20trace_id%22%7D) **Output** | trace\_id | services | | --------- | ----------------------------------------------- | | trace\_a | \[‘frontend’, ‘cartservice’, ‘checkoutservice’] | | trace\_b | \[‘productcatalogservice’, ‘loadgenerator’] | This query aggregates all service names associated with a particular trace, helping trace spans across different services. In security logs, `make_list` is useful for collecting all IPs or cities from where a user has initiated requests, aiding in detecting anomalies or patterns. 
**Query** ```kusto ['sample-http-logs'] | summarize cities=make_list(['geo.city']) by id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20cities%3Dmake_list%28%5B%27geo.city%27%5D%29%20by%20id%22%7D) **Output** | id | cities | | ------- | ---------------------------- | | user123 | \[‘New York’, ‘Los Angeles’] | | user456 | \[‘Berlin’, ‘London’] | This query collects the cities from which each user has made HTTP requests, useful for geographical analysis or anomaly detection. ## List of related aggregations * [**make\_set**](/apl/aggregation-function/make-set): Similar to `make_list`, but only unique values are collected in the set. Use `make_set` when duplicates aren’t relevant. * [**count**](/apl/aggregation-function/count): Returns the count of rows in each group. Use this instead of `make_list` when you're interested in row totals rather than individual values. * [**max**](/apl/aggregation-function/max): Aggregates values by returning the maximum value from each group. Useful for numeric comparison across rows. * [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values for each group. Use this when you need unique value counts instead of listing them. # make_list_if This page explains how to use the make_list_if aggregation function in APL. The `make_list_if` aggregation function in APL creates a list of values from a given field, conditioned on a Boolean expression. This function is useful when you need to gather values from a column that meet specific criteria into a single array. By using `make_list_if`, you can aggregate data based on dynamic conditions, making it easier to perform detailed analysis. This aggregation is ideal in scenarios where filtering at the aggregation level is required, such as gathering only the successful requests or collecting trace spans of a specific service in OpenTelemetry data. 
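For instance, to collect only the URIs of failed requests per user while leaving successful ones out, a minimal sketch over the `['sample-http-logs']` dataset used elsewhere on these pages (the `failed_uris` name is illustrative):

```kusto
['sample-http-logs']
| summarize failed_uris=make_list_if(uri, status != '200') by id
```

The condition is evaluated per row, so only values from matching rows enter each group's list.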
It’s particularly useful when analyzing logs, tracing information, or security events, where conditional aggregation is essential for understanding trends or identifying issues. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, you typically filter with `where` before `stats` to create conditional lists. In APL, the `make_list_if` function serves a similar purpose by allowing you to aggregate data into a list based on a condition. ```sql Splunk example | where condition | stats list(field) as field_list ``` ```kusto APL equivalent summarize make_list_if(field, condition) ``` In ANSI SQL, conditional aggregation often involves the use of `CASE` statements combined with aggregation functions such as `ARRAY_AGG`. In APL, `make_list_if` directly applies a condition to the aggregation. ```sql SQL example SELECT ARRAY_AGG(CASE WHEN condition THEN field END) FROM table ``` ```kusto APL equivalent summarize make_list_if(field, condition) ``` ## Usage ### Syntax ```kusto summarize make_list_if(expression, condition) ``` ### Parameters * `expression`: The field or expression whose values are included in the list. * `condition`: A Boolean condition that determines which values from `expression` are included in the result. ### Returns The function returns an array containing all values from `expression` that meet the specified `condition`. ## Use case examples In this example, you gather a list of request durations for successful HTTP requests.
**Query** ```kusto ['sample-http-logs'] | summarize make_list_if(req_duration_ms, status == '200') by id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28req_duration_ms%2C+status+%3D%3D+%27200%27%29+by+id%22%7D) **Output** | id | req\_duration\_ms\_list | | --- | ----------------------- | | 123 | \[100, 150, 200] | | 456 | \[300, 350, 400] | This query aggregates request durations for HTTP requests that returned a status of ‘200’ for each user ID. Here, you aggregate the span durations for `cartservice` where the status code indicates success. **Query** ```kusto ['otel-demo-traces'] | summarize make_list_if(duration, status_code == '200' and ['service.name'] == 'cartservice') by trace_id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_list_if%28duration%2C+status_code+%3D%3D+%27200%27+and+%5B%27service.name%27%5D+%3D%3D+%27cartservice%27%29+by+trace_id%22%7D) **Output** | trace\_id | duration\_list | | --------- | --------------------- | | abc123 | \[00:01:23, 00:01:45] | | def456 | \[00:02:12, 00:03:15] | This query collects span durations for successful requests to the `cartservice` by `trace_id`. In this case, you gather a list of URIs from security logs where the HTTP status is `403` (Forbidden), grouped by the country of origin.
**Query** ```kusto ['sample-http-logs'] | summarize make_list_if(uri, status == '403') by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_list_if%28uri%2C+status+%3D%3D+%27403%27%29+by+%5B%27geo.country%27%5D%22%7D) **Output** | geo.country | uri\_list | | ----------- | ---------------------- | | USA | \['/login', '/admin'] | | Canada | \['/admin', '/secure'] | This query collects a list of URIs that resulted in a `403` error, grouped by the country where the request originated. ## List of related aggregations * [**make\_list**](/apl/aggregation-function/make-list): Aggregates all values into a list without any conditions. Use `make_list` when you don’t need to filter the values based on a condition. * [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a specific condition. Use `countif` when you need a count of occurrences rather than a list of values. * [**avgif**](/apl/aggregation-function/avgif): Calculates the average of values that meet a specified condition. Use `avgif` for numerical aggregations where you want a conditional average instead of a list. # make_set This page explains how to use the make_set aggregation function in APL. The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It is useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis. You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data. 
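As a quick illustration of that first scenario, here is a minimal sketch over the `['sample-http-logs']` dataset that also uses the optional second argument of `make_set` (documented in the Usage section of this page) to cap the set size; the `methods` name is illustrative:

```kusto
['sample-http-logs']
| summarize methods=make_set(method, 5) by id
```

The second argument limits each set to at most five unique values, which keeps results compact on high-cardinality columns.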
## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `values` function is similar to `make_set` in APL. Both collect unique values; the main difference is that `values` returns them as a multivalue field, while `make_set` returns them in a dynamic array. ```sql Splunk example | stats values(method) by id ``` ```kusto APL equivalent ['sample-http-logs'] | summarize make_set(method) by id ``` In ANSI SQL, the `GROUP_CONCAT` or `ARRAY_AGG(DISTINCT)` functions are commonly used to aggregate unique values in a column. `make_set` in APL works similarly by aggregating distinct values from a specific column into an array, but it offers better performance for large datasets. ```sql SQL example SELECT id, ARRAY_AGG(DISTINCT method) FROM sample_http_logs GROUP BY id; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize make_set(method) by id ``` ## Usage ### Syntax ```kusto make_set(column, [limit]) ``` ### Parameters * `column`: The column from which unique values are aggregated. * `limit`: (Optional) The maximum number of unique values to return. Defaults to 128 if not specified. ### Returns An array of unique values from the specified column. ## Use case examples In this use case, you want to collect all unique HTTP methods used by each user in the log data. **Query** ```kusto ['sample-http-logs'] | summarize make_set(method) by id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28method%29+by+id%22%7D) **Output** | id | make\_set\_method | | ------- | ----------------- | | user123 | \['GET', 'POST'] | | user456 | \['GET'] | This query groups the log entries by `id` and returns all unique HTTP methods used by each user. In this use case, you want to gather the unique service names involved in a trace.
**Query** ```kusto ['otel-demo-traces'] | summarize make_set(['service.name']) by trace_id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D+%7C+summarize+make_set%28%5B%27service.name%27%5D%29+by+trace_id%22%7D) **Output** | trace\_id | make\_set\_service.name | | --------- | -------------------------------- | | traceA | \['frontend', 'checkoutservice'] | | traceB | \['cartservice'] | This query groups the telemetry data by `trace_id` and collects the unique services involved in each trace. In this use case, you want to collect all unique HTTP status codes for each country where the requests originated. **Query** ```kusto ['sample-http-logs'] | summarize make_set(status) by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D+%7C+summarize+make_set%28status%29+by+%5B%27geo.country%27%5D%22%7D) **Output** | geo.country | make\_set\_status | | ----------- | ----------------- | | USA | \['200', '404'] | | UK | \['200'] | This query collects all unique HTTP status codes returned for each country from which requests were made. ## List of related aggregations * [**make\_list**](/apl/aggregation-function/make-list): Similar to `make_set`, but returns all values, including duplicates, in a list. Use `make_list` if you want to preserve duplicates. * [**count**](/apl/aggregation-function/count): Counts the number of records in each group. Use `count` when you need the total count rather than the unique values. * [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values in a column. Use `dcount` when you need the number of unique values, rather than an array of them. * [**max**](/apl/aggregation-function/max): Finds the maximum value in a group. Use `max` when you are interested in the largest value rather than collecting values. 
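To see how `make_set` differs from `make_list`, you can compute both in a single `summarize` over the same sample dataset (a sketch; the output names are illustrative):

```kusto
['sample-http-logs']
| summarize all_methods=make_list(method), unique_methods=make_set(method) by id
```

`all_methods` keeps duplicates, while `unique_methods` contains each method at most once.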
# make_set_if This page explains how to use the make_set_if aggregation function in APL. The `make_set_if` aggregation function in APL allows you to create a set of distinct values from a column based on a condition. You can use this function to aggregate values that meet specific criteria, helping you filter and reduce data to unique entries while applying a conditional filter. This is especially useful when analyzing large datasets to extract relevant, distinct information without duplicates. You can use `make_set_if` in scenarios where you need to aggregate conditional data points, such as log analysis, tracing information, or security logs, to summarize distinct occurrences based on particular conditions. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you may use `values` after a `where` condition to achieve similar functionality to `make_set_if`. However, in APL, the `make_set_if` function is explicitly designed to create a distinct set of values based on a conditional filter within the aggregation step itself. ```sql Splunk example | where condition | stats values(field) by another_field ``` ```kusto APL equivalent summarize make_set_if(field, condition) by another_field ``` In ANSI SQL, you would typically use `GROUP BY` in combination with conditional aggregation, such as using `CASE WHEN` inside aggregate functions. In APL, the `make_set_if` function directly aggregates distinct values conditionally without requiring a `CASE WHEN`. ```sql SQL example SELECT another_field, ARRAY_AGG(DISTINCT CASE WHEN condition THEN field END) FROM table GROUP BY another_field ``` ```kusto APL equivalent summarize make_set_if(field, condition) by another_field ``` ## Usage ### Syntax ```kusto make_set_if(column, predicate, [max_size]) ``` ### Parameters * `column`: The column from which distinct values will be aggregated.
* `predicate`: A condition that filters the values to be aggregated. * `[max_size]`: (Optional) Specifies the maximum number of elements in the resulting set. If omitted, the default is 1048576. ### Returns The `make_set_if` function returns a dynamic array of distinct values from the specified column that satisfy the given condition. ## Use case examples In this use case, you're analyzing HTTP logs and want to get the distinct cities from which requests originated, but only for requests that took longer than 500 ms. **Query** ```kusto ['sample-http-logs'] | summarize make_set_if(['geo.city'], req_duration_ms > 500) by ['method'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28%5B%27geo.city%27%5D%2C%20req_duration_ms%20%3E%20500%29%20by%20%5B%27method%27%5D%22%7D) **Output** | method | make\_set\_if\_geo.city | | ------ | ------------------------------ | | GET | \[‘New York’, ‘San Francisco’] | | POST | \[‘Berlin’, ‘Tokyo’] | This query returns the distinct cities from which requests took more than 500 ms, grouped by HTTP request method. Here, you're analyzing OpenTelemetry traces and want to identify the distinct services that processed spans with a duration greater than 1 second, grouped by trace ID. 
**Query** ```kusto ['otel-demo-traces'] | summarize make_set_if(['service.name'], duration > 1s) by ['trace_id'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20make_set_if%28%5B%27service.name%27%5D%2C%20duration%20%3E%201s%29%20by%20%5B%27trace_id%27%5D%22%7D) **Output** | trace\_id | make\_set\_if\_service.name | | --------- | ------------------------------------- | | abc123 | \[‘frontend’, ‘cartservice’] | | def456 | \[‘checkoutservice’, ‘loadgenerator’] | This query extracts distinct services that have processed spans longer than 1 second for each trace. In security log analysis, you may want to find out which HTTP status codes were encountered for each city, but only for POST requests. **Query** ```kusto ['sample-http-logs'] | summarize make_set_if(status, method == 'POST') by ['geo.city'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20make_set_if%28status%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D) **Output** | geo.city | make\_set\_if\_status | | -------- | --------------------- | | Berlin | \[‘200’, ‘404’] | | Tokyo | \[‘500’, ‘403’] | This query identifies the distinct HTTP status codes for POST requests grouped by the originating city. ## List of related aggregations * [**make\_list\_if**](/apl/aggregation-function/make-list-if): Similar to `make_set_if`, but returns a list that can include duplicates instead of a distinct set. * [**make\_set**](/apl/aggregation-function/make-set): Aggregates distinct values without a conditional filter. * [**countif**](/apl/aggregation-function/countif): Counts rows that satisfy a specific condition, useful for when you need to count rather than aggregate distinct values. # max This page explains how to use the max aggregation function in APL. 
The `max` aggregation in APL allows you to find the highest value in a specific column of your dataset. This is useful when you need to identify the maximum value of numerical data, such as the longest request duration, highest sales figures, or the latest timestamp in logs. The `max` function is ideal when you are working with large datasets and need to quickly retrieve the largest value, ensuring you're focusing on the most critical or recent data point. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `max` function works similarly, used to find the maximum value in a given field. The syntax in APL, however, requires you to specify the column to aggregate within a query and make use of APL's structured flow. ```sql Splunk example | stats max(req_duration_ms) ``` ```kusto APL equivalent ['sample-http-logs'] | summarize max(req_duration_ms) ``` In ANSI SQL, `MAX` works similarly to APL’s `max`. In SQL, you aggregate over a column using the `MAX` function in a `SELECT` statement. In APL, you achieve the same result using the `summarize` operator followed by the `max` function. ```sql SQL example SELECT MAX(req_duration_ms) FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize max(req_duration_ms) ``` ## Usage ### Syntax ```kusto summarize max(ColumnName) ``` ### Parameters * `ColumnName`: The column or field from which you want to retrieve the maximum value. The column should contain numerical data, timespans, or dates. ### Returns The maximum value from the specified column. ## Use case examples In log analysis, you might want to find the longest request duration to diagnose performance issues. 
**Query** ```kusto ['sample-http-logs'] | summarize max(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(req_duration_ms\)%22%7D) **Output** | max\_req\_duration\_ms | | ---------------------- | | 5400 | This query returns the highest request duration from the `req_duration_ms` field, which helps you identify the slowest requests. When analyzing OpenTelemetry traces, you can find the longest span duration to determine performance bottlenecks in distributed services. **Query** ```kusto ['otel-demo-traces'] | summarize max(duration) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20max\(duration\)%22%7D) **Output** | max\_duration | | ------------- | | 00:00:07.234 | This query returns the longest trace span from the `duration` field, helping you pinpoint the most time-consuming operations. In security log analysis, you may want to identify the most recent event for monitoring threats or auditing activities. **Query** ```kusto ['sample-http-logs'] | summarize max(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20max\(_time\)%22%7D) **Output** | max\_time | | ------------------- | | 2024-09-25 12:45:01 | This query returns the most recent timestamp from your logs, allowing you to monitor the latest security events. ## List of related aggregations * [**min**](/apl/aggregation-function/min): Retrieves the minimum value from a column, which is useful when you need to find the smallest or earliest value, such as the lowest request duration or first event in a log. * [**avg**](/apl/aggregation-function/avg): Calculates the average value of a column. This function helps when you want to understand the central tendency, such as the average response time for requests. 
* [**sum**](/apl/aggregation-function/sum): Sums all values in a column, making it useful when calculating totals, such as total sales or total number of requests over a period. * [**count**](/apl/aggregation-function/count): Counts the number of records or non-null values in a column. It’s useful for finding the total number of log entries or transactions. * [**percentile**](/apl/aggregation-function/percentile): Finds a value below which a specified percentage of data falls. This aggregation is helpful when you need to analyze performance metrics like latency at the 95th percentile. # maxif This page explains how to use the maxif aggregation function in APL. ## Introduction The `maxif` aggregation function in APL is useful when you want to return the maximum value from a dataset based on a conditional expression. This allows you to filter the dataset dynamically and only return the maximum for rows that satisfy the given condition. It’s particularly helpful for scenarios where you want to find the highest value of a specific metric, like response time or duration, but only for a subset of the data (e.g., successful responses, specific users, or requests from a particular geographic location). You can use the `maxif` function when analyzing logs, monitoring system traces, or inspecting security-related data to get insights into the maximum value under certain conditions. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you might use the `stats max()` function alongside a conditional filtering step to achieve a similar result. APL’s `maxif` function combines both operations into one, streamlining the query.
```sql Splunk example | where status="200" | stats max(req_duration_ms) as max_duration ``` ```kusto APL equivalent ['sample-http-logs'] | summarize maxif(req_duration_ms, status == "200") ``` In ANSI SQL, you typically use the `MAX` function in conjunction with a `WHERE` clause. APL’s `maxif` allows you to perform the same operation with a single aggregation function. ```sql SQL example SELECT MAX(req_duration_ms) FROM sample_http_logs WHERE status = '200'; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize maxif(req_duration_ms, status == "200") ``` ## Usage ### Syntax ```kusto summarize maxif(column, condition) ``` ### Parameters * `column`: The column containing the values to aggregate. * `condition`: The condition that must be true for the values to be considered in the aggregation. ### Returns The maximum value from `column` for rows that meet the `condition`. If no rows match the condition, it returns `null`. ## Use case examples In log analysis, you might want to find the maximum request duration, but only for successful requests. **Query** ```kusto ['sample-http-logs'] | summarize maxif(req_duration_ms, status == "200") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%22%7D) **Output** | max\_req\_duration | | ------------------ | | 1250 | This query returns the maximum request duration (`req_duration_ms`) for HTTP requests with a `200` status. In OpenTelemetry traces, you might want to find the longest span duration for a specific service type.
**Query** ```kusto ['otel-demo-traces'] | summarize maxif(duration, ['service.name'] == "checkoutservice" and kind == "server") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20maxif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'checkoutservice'%20and%20kind%20%3D%3D%20'server'\)%22%7D) **Output** | max\_duration | | ------------- | | 2.05s | This query returns the maximum span duration (`duration`) for server spans in the `checkoutservice`. For security logs, you might want to identify the longest request duration for any requests originating from a specific country, such as the United States. **Query** ```kusto ['sample-http-logs'] | summarize maxif(req_duration_ms, ['geo.country'] == "United States") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20maxif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'\)%22%7D) **Output** | max\_req\_duration | | ------------------ | | 980 | This query returns the maximum request duration for requests coming from the United States (`geo.country`). ## List of related aggregations * [**minif**](/apl/aggregation-function/minif): Returns the minimum value from a column for rows that satisfy a condition. Use `minif` when you're interested in the lowest value under specific conditions. * [**max**](/apl/aggregation-function/max): Returns the maximum value from a column without filtering. Use `max` when you want the highest value across the entire dataset without conditions. * [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values for rows that satisfy a condition. Use `sumif` when you want the total value of a column under specific conditions. * [**avgif**](/apl/aggregation-function/avgif): Returns the average of values for rows that satisfy a condition. 
Use `avgif` when you want to calculate the mean value based on a filter. * [**countif**](/apl/aggregation-function/countif): Returns the count of rows that satisfy a condition. Use `countif` when you want to count occurrences that meet certain criteria. # min This page explains how to use the min aggregation function in APL. The `min` aggregation function in APL returns the minimum value from a set of input values. You can use this function to identify the smallest numeric or comparable value in a column of data. This is useful when you want to find the quickest response time, the lowest transaction amount, or the earliest date in log data. It’s ideal for analyzing performance metrics, filtering out abnormal low points in your data, or discovering outliers. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, the `min` function works similarly to APL's `min` aggregation, allowing you to find the minimum value in a field across your dataset. The main difference is in the query structure and syntax between the two. ```sql Splunk example | stats min(req_duration_ms) by id ``` ```kusto APL equivalent ['sample-http-logs'] | summarize min(req_duration_ms) by id ``` In ANSI SQL, the `MIN` function works almost identically to the APL `min` aggregation. You use it to return the smallest value in a column of data, grouped by one or more fields. ```sql SQL example SELECT MIN(req_duration_ms), id FROM sample_http_logs GROUP BY id; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize min(req_duration_ms) by id ``` ## Usage ### Syntax ```kusto summarize min(Expression) ``` ### Parameters * `Expression`: The expression from which to calculate the minimum value. Typically, this is a numeric or date/time field. ### Returns The function returns the smallest value found in the specified column or expression.
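Because `summarize` accepts several aggregations at once, you can report the minimum alongside related statistics in a single pass (a minimal sketch; `max` and `avg` are covered on their own pages):

```kusto
['sample-http-logs']
| summarize min(req_duration_ms), max(req_duration_ms), avg(req_duration_ms)
```

This is often cheaper than running three separate queries over the same time range.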
## Use case examples In this use case, you analyze HTTP logs to find the minimum request duration for each unique user. **Query** ```kusto ['sample-http-logs'] | summarize min(req_duration_ms) by id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20id%22%7D) **Output** | id | min\_req\_duration\_ms | | --------- | ---------------------- | | user\_123 | 32 | | user\_456 | 45 | This query returns the minimum request duration for each user, helping you identify the fastest responses. Here, you analyze OpenTelemetry trace data to find the minimum span duration per service. **Query** ```kusto ['otel-demo-traces'] | summarize min(duration) by ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20min\(duration\)%20by%20%5B'service.name'%5D%22%7D) **Output** | service.name | min\_duration | | --------------- | ------------- | | frontend | 2ms | | checkoutservice | 5ms | This query returns the minimum span duration for each service in the trace logs. In this example, you analyze security logs to find the minimum request duration for each HTTP status code. **Query** ```kusto ['sample-http-logs'] | summarize min(req_duration_ms) by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20min\(req_duration_ms\)%20by%20status%22%7D) **Output** | status | min\_req\_duration\_ms | | ------ | ---------------------- | | 200 | 10 | | 404 | 40 | This query returns the minimum request duration for each HTTP status code, helping you identify if certain statuses are associated with faster or slower response times. ## List of related aggregations * [**max**](/apl/aggregation-function/max): Returns the maximum value from a set of values. 
Use `max` when you need to find the highest value instead of the lowest. * [**avg**](/apl/aggregation-function/avg): Calculates the average of a set of values. Use `avg` to find the mean value instead of the minimum. * [**count**](/apl/aggregation-function/count): Counts the number of records or distinct values. Use `count` when you need to know how many records or unique values exist, rather than calculating the minimum. * [**sum**](/apl/aggregation-function/sum): Adds all values together. Use `sum` when you need the total of a set of values rather than the minimum. * [**percentile**](/apl/aggregation-function/percentile): Returns the value at a specified percentile. Use `percentile` if you need a value that falls at a certain point in the distribution of your data, rather than the minimum. # minif This page explains how to use the minif aggregation function in APL. ## Introduction The `minif` aggregation in Axiom Processing Language (APL) allows you to calculate the minimum value of a numeric expression, but only for records that meet a specific condition. This aggregation is useful when you want to find the smallest value in a subset of data that satisfies a given predicate. For example, you can use `minif` to find the shortest request duration for successful HTTP requests, or the minimum span duration for a specific service in your OpenTelemetry traces. The `minif` aggregation is especially useful in scenarios where you need conditional aggregations, such as log analysis, monitoring distributed systems, or examining security-related events. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, you might use the `min` function in combination with `where` to filter results. In APL, the `minif` function combines both the filtering condition and the minimum calculation into one step. 
```sql Splunk example | stats min(req_duration_ms) as min_duration where status="200" ``` ```kusto APL equivalent ['sample-http-logs'] | summarize minif(req_duration_ms, status == "200") by id ``` In ANSI SQL, you would typically use a `CASE` statement with `MIN` to apply conditional logic for aggregation. In APL, the `minif` function simplifies this by combining both the condition and the aggregation. ```sql SQL example SELECT MIN(CASE WHEN status = '200' THEN req_duration_ms ELSE NULL END) as min_duration FROM sample_http_logs GROUP BY id; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize minif(req_duration_ms, status == "200") by id ``` ## Usage ### Syntax ```kusto summarize minif(Expression, Predicate) ``` ### Parameters | Parameter | Description | | ------------ | ------------------------------------------------------------ | | `Expression` | The numeric expression whose minimum value you want to find. | | `Predicate` | The condition that determines which records to include. | ### Returns The `minif` aggregation returns the minimum value of the specified `Expression` for the records that satisfy the `Predicate`. ## Use case examples In log analysis, you might want to find the minimum request duration for successful HTTP requests. **Query** ```kusto ['sample-http-logs'] | summarize minif(req_duration_ms, status == '200') by ['geo.city'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20status%20%3D%3D%20'200'\)%20by%20%5B'geo.city'%5D%22%7D) **Output** | geo.city | min\_duration | | --------- | ------------- | | San Diego | 120 | | New York | 95 | This query finds the minimum request duration for HTTP requests with a `200` status code, grouped by city. For distributed tracing, you can use `minif` to find the minimum span duration for a specific service. 
**Query** ```kusto ['otel-demo-traces'] | summarize minif(duration, ['service.name'] == 'frontend') by trace_id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20minif\(duration,%20%5B'service.name'%5D%20%3D%3D%20'frontend'\)%20by%20trace_id%22%7D) **Output** | trace\_id | min\_duration | | --------- | ------------- | | abc123 | 50ms | | def456 | 40ms | This query returns the minimum span duration for traces from the `frontend` service, grouped by `trace_id`. In security logs, you can use `minif` to find the minimum request duration for HTTP requests from a specific country. **Query** ```kusto ['sample-http-logs'] | summarize minif(req_duration_ms, ['geo.country'] == 'US') by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20minif\(req_duration_ms,%20%5B'geo.country'%5D%20%3D%3D%20'US'\)%20by%20status%22%7D) **Output** | status | min\_duration | | ------ | ------------- | | 200 | 95 | | 404 | 120 | This query returns the minimum request duration for HTTP requests originating from the United States, grouped by HTTP status code. ## List of related aggregations * [**maxif**](/apl/aggregation-function/maxif): Finds the maximum value of an expression that satisfies a condition. Use `maxif` when you need the maximum value under a condition, rather than the minimum. * [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of an expression that meets a specified condition. Useful when you want an average instead of a minimum. * [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a given condition. Use this for counting records rather than calculating a minimum. * [**sumif**](/apl/aggregation-function/sumif): Sums the values of an expression for records that meet a condition. 
Helpful when you're interested in the total rather than the minimum. # percentile This page explains how to use the percentile aggregation function in APL. The `percentile` aggregation function in Axiom Processing Language (APL) allows you to calculate the value below which a given percentage of data points fall. It is particularly useful when you need to analyze distributions and want to summarize the data using specific thresholds, such as the 90th or 95th percentile. This function can be valuable in performance analysis, trend detection, or identifying outliers across large datasets. You can apply the `percentile` function to various use cases, such as analyzing log data for request durations, OpenTelemetry traces for service latencies, or security logs to assess risk patterns. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `percentile` function is referred to as `perc` or `percentile`. APL's `percentile` function works similarly, but the syntax is different. The main difference is that APL requires you to explicitly define the column on which you want to apply the percentile and the target percentile value. ```sql Splunk example | stats perc95(req_duration_ms) ``` ```kusto APL equivalent ['sample-http-logs'] | summarize percentile(req_duration_ms, 95) ``` In ANSI SQL, you might use the `PERCENTILE_CONT` or `PERCENTILE_DISC` functions to compute percentiles. In APL, the `percentile` function provides a simpler syntax while offering similar functionality. ```sql SQL example SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY req_duration_ms) FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize percentile(req_duration_ms, 95) ``` ## Usage ### Syntax ```kusto percentile(column, percentile) ``` ### Parameters * **column**: The name of the column to calculate the percentile on. 
This must be a numeric field. * **percentile**: The target percentile value (between 0 and 100). ### Returns The function returns the value from the specified column that corresponds to the given percentile. ## Use case examples In log analysis, you can use the `percentile` function to identify the 95th percentile of request durations, which gives you an idea of the tail-end latencies of requests in your system. **Query** ```kusto ['sample-http-logs'] | summarize percentile(req_duration_ms, 95) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentile%28req_duration_ms%2C%2095%29%22%7D) **Output** | percentile\_req\_duration\_ms | | ----------------------------- | | 1200 | This query calculates the 95th percentile of request durations, showing that 95% of requests take less than or equal to 1200ms. For OpenTelemetry traces, you can use the `percentile` function to identify the 90th percentile of span durations for specific services, which helps to understand the performance of different services. **Query** ```kusto ['otel-demo-traces'] | where ['service.name'] == 'checkoutservice' | summarize percentile(duration, 90) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20summarize%20percentile%28duration%2C%2090%29%22%7D) **Output** | percentile\_duration | | -------------------- | | 300ms | This query calculates the 90th percentile of span durations for the `checkoutservice`, helping to assess high-latency spans. In security logs, you can use the `percentile` function to calculate the 99th percentile of response times for a specific set of status codes, helping you focus on outliers. 
**Query** ```kusto ['sample-http-logs'] | where status == '500' | summarize percentile(req_duration_ms, 99) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20summarize%20percentile%28req_duration_ms%2C%2099%29%22%7D) **Output** | percentile\_req\_duration\_ms | | ----------------------------- | | 2500 | This query identifies that 99% of requests resulting in HTTP 500 errors take less than or equal to 2500ms. ## List of related aggregations * [**avg**](/apl/aggregation-function/avg): Use `avg` to calculate the average of a column, which gives you the central tendency of your data. In contrast, `percentile` provides more insight into the distribution and tail values. * [**min**](/apl/aggregation-function/min): The `min` function returns the smallest value in a column. Use this when you need the absolute lowest value instead of a specific percentile. * [**max**](/apl/aggregation-function/max): The `max` function returns the highest value in a column. It’s useful for finding the upper bound, while `percentile` allows you to focus on a specific point in the data distribution. * [**stdev**](/apl/aggregation-function/stdev): `stdev` calculates the standard deviation of a column, which helps measure data variability. While `stdev` provides insight into overall data spread, `percentile` focuses on specific distribution points. # rate This page explains how to use the rate aggregation function in APL. The `rate` aggregation function in APL (Axiom Processing Language) helps you calculate the rate of change over a specific time interval. This is especially useful for scenarios where you need to monitor how frequently an event occurs or how a value changes over time. For example, you can use the `rate` function to track request rates in web logs or changes in metrics like CPU usage or memory consumption. 
The `rate` function is useful for analyzing trends in time series data and identifying unusual spikes or drops in activity. It can help you understand patterns in logs, metrics, and traces over specific intervals, such as per minute, per second, or per hour. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the equivalent of the `rate` function can be achieved using the `timechart` command with a `per_second` option or by calculating the difference between successive values over time. In APL, the `rate` function simplifies this process by directly calculating the rate over a specified time interval. ```splunk Splunk example | timechart per_second count by resp_body_size_bytes ``` ```kusto APL equivalent ['sample-http-logs'] | summarize rate(resp_body_size_bytes) by bin(_time, 1s) ``` In ANSI SQL, calculating rates typically involves using window functions like `LAG` or `LEAD` to calculate the difference between successive rows in a time series. In APL, the `rate` function abstracts this complexity by allowing you to directly compute the rate over time without needing window functions. ```sql SQL example SELECT resp_body_size_bytes, COUNT(*) / TIMESTAMPDIFF(SECOND, MIN(_time), MAX(_time)) AS rate FROM http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize rate(resp_body_size_bytes) by bin(_time, 1s) ``` ## Usage ### Syntax ```kusto rate(field) ``` ### Parameters * `field`: The numeric field for which you want to calculate the rate. ### Returns Returns the rate of change or occurrence of the specified `field` over the time interval specified in the query. Specify the time interval in the query in the following way: * `| summarize rate(field)` calculates the rate value of the field over the entire query window. 
* `| summarize rate(field) by bin(_time, 1h)` calculates the rate value of the field over a one-hour time window. * `| summarize rate(field) by bin_auto(_time)` calculates the rate value of the field bucketed by an automatic time window computed by `bin_auto()`. Use two `summarize` statements to visualize the average rate over one minute per hour. For example: ```kusto ['sample-http-logs'] | summarize respBodyRate = rate(resp_body_size_bytes) by bin(_time, 1m) | summarize avg(respBodyRate) by bin(_time, 1h) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20respBodyRate%20%3D%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201m\)%20%7C%20summarize%20avg\(respBodyRate\)%20by%20bin\(_time%2C%201h\)%22%2C%20%22queryOptions%22%3A%7B%22quickRange%22%3A%226h%22%7D%7D) ## Use case examples In this example, the `rate` aggregation calculates the rate of HTTP response sizes per second. **Query** ```kusto ['sample-http-logs'] | summarize rate(resp_body_size_bytes) by bin(_time, 1s) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(resp_body_size_bytes\)%20by%20bin\(_time%2C%201s\)%22%7D) **Output** | rate | \_time | | ------ | ------------------- | | 854 kB | 2024-01-01 12:00:00 | | 635 kB | 2024-01-01 12:00:01 | This query calculates the rate of HTTP response sizes per second. This example calculates the rate of span duration per second. 
**Query** ```kusto ['otel-demo-traces'] | summarize rate(toint(duration)) by bin(_time, 1s) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20rate\(toint\(duration\)\)%20by%20bin\(_time%2C%201s\)%22%7D) **Output** | rate | \_time | | ---------- | ------------------- | | 26,393,768 | 2024-01-01 12:00:00 | | 19,303,456 | 2024-01-01 12:00:01 | This query calculates the rate of span duration per second. In this example, the `rate` aggregation calculates the rate of HTTP request duration per second, which can be useful for detecting an increase in malicious requests. **Query** ```kusto ['sample-http-logs'] | summarize rate(req_duration_ms) by bin(_time, 1s) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20rate\(req_duration_ms\)%20by%20bin\(_time%2C%201s\)%22%7D) **Output** | rate | \_time | | ---------- | ------------------- | | 240.668 ms | 2024-01-01 12:00:00 | | 264.17 ms | 2024-01-01 12:00:01 | This query calculates the rate of HTTP request duration per second. ## List of related aggregations * [**count**](/apl/aggregation-function/count): Returns the total number of records. Use `count` when you want an absolute total instead of a rate over time. * [**sum**](/apl/aggregation-function/sum): Returns the sum of values in a field. Use `sum` when you want to aggregate the total value, not its rate of change. * [**avg**](/apl/aggregation-function/avg): Returns the average value of a field. Use `avg` when you want to know the mean value rather than how it changes over time. * [**max**](/apl/aggregation-function/max): Returns the maximum value of a field. Use `max` when you need to find the peak value instead of how often or quickly something occurs. * [**min**](/apl/aggregation-function/min): Returns the minimum value of a field.
Use `min` when you’re looking for the lowest value rather than a rate. # Aggregation functions This section explains how to use and combine different aggregation functions in APL. The table summarizes the aggregation functions available in APL. Use all these aggregation functions in the context of the [summarize operator](/apl/tabular-operators/summarize-operator). | Function | Description | | -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | [avg](/apl/aggregation-function/avg) | Returns an average value across the group. | | [avgif](/apl/aggregation-function/avgif) | Calculates the average value of an expression in records for which the predicate evaluates to true. | | [count](/apl/aggregation-function/count) | Returns a count of the group, with or without a predicate. | | [countif](/apl/aggregation-function/countif) | Returns a count of rows for which the predicate evaluates to true. | | [dcount](/apl/aggregation-function/dcount) | Returns an estimate of the number of distinct values taken by a scalar expression in the summary group. | | [dcountif](/apl/aggregation-function/dcountif) | Returns an estimate of the number of distinct values of an expression for rows in which the predicate evaluates to true. | | [histogram](/apl/aggregation-function/histogram) | Returns a timeseries heatmap chart across the group. | | [make\_list](/apl/aggregation-function/make-list) | Creates a dynamic JSON object (array) of all the values of an expression in the group. | | [make\_list\_if](/apl/aggregation-function/make-list-if) | Creates a dynamic JSON object (array) of an expression’s values in the group for which the predicate evaluates to true. | | [make\_set](/apl/aggregation-function/make-set) | Creates a dynamic JSON array of the set of distinct values that an expression takes in the group.
| | [make\_set\_if](/apl/aggregation-function/make-set-if) | Creates a dynamic JSON object (array) of the set of distinct values that an expression takes in records for which the predicate evaluates to true. | | [max](/apl/aggregation-function/max) | Returns the maximum value across the group. | | [maxif](/apl/aggregation-function/maxif) | Calculates the maximum value of an expression in records for which the predicate evaluates to true. | | [min](/apl/aggregation-function/min) | Returns the minimum value across the group. | | [minif](/apl/aggregation-function/minif) | Returns the minimum of an expression in records for which the predicate evaluates to true. | | [percentile](/apl/aggregation-function/percentile) | Calculates the requested percentiles of the group and produces a timeseries chart. | | [rate](/apl/aggregation-function/rate) | Calculates the rate of values in a group per second. | | [stdev](/apl/aggregation-function/stdev) | Calculates the standard deviation of an expression across the group. | | [stdevif](/apl/aggregation-function/stdevif) | Calculates the standard deviation of an expression in records for which the predicate evaluates to true. | | [sum](/apl/aggregation-function/sum) | Calculates the sum of an expression across the group. | | [sumif](/apl/aggregation-function/sumif) | Calculates the sum of an expression in records for which the predicate evaluates to true. | | [topk](/apl/aggregation-function/topk) | Calculates the top values of an expression across the group in a dataset. | | [variance](/apl/aggregation-function/variance) | Calculates the variance of an expression across the group. | | [varianceif](/apl/aggregation-function/varianceif) | Calculates the variance of an expression in records for which the predicate evaluates to true. | # stdev This page explains how to use the stdev aggregation function in APL. The `stdev` aggregation in APL computes the standard deviation of a numeric field within a dataset.
This is useful for understanding the variability or dispersion of data points around the mean. You can apply this aggregation to various use cases, such as performance monitoring, anomaly detection, and statistical analysis of logs and traces. Use the `stdev` function to determine how spread out values like request duration, span duration, or response times are. This is particularly helpful when analyzing data trends and identifying inconsistencies, outliers, or abnormal behavior. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `stdev` aggregation function works similarly but has a different syntax. While SPL uses the `stdev` command within the `stats` function, APL users will find the aggregation works similarly in APL with just minor differences in syntax. ```sql Splunk example | stats stdev(duration) as duration_std ``` ```kusto APL equivalent ['dataset'] | summarize duration_std = stdev(duration) ``` In ANSI SQL, the standard deviation is computed using the `STDDEV` function. APL's `stdev` function is the direct equivalent of SQL’s `STDDEV`, although APL uses pipes (`|`) for chaining operations and different keyword formatting. ```sql SQL example SELECT STDDEV(duration) AS duration_std FROM dataset; ``` ```kusto APL equivalent ['dataset'] | summarize duration_std = stdev(duration) ``` ## Usage ### Syntax ```kusto stdev(numeric_field) ``` ### Parameters * **`numeric_field`**: The field containing numeric values for which the standard deviation is calculated. ### Returns The `stdev` aggregation returns a single numeric value representing the standard deviation of the specified numeric field in the dataset. ## Use case examples You can use the `stdev` aggregation to analyze HTTP request durations and identify performance variations across different requests. 
For instance, you can calculate the standard deviation of request durations to identify potential anomalies. **Query** ```kusto ['sample-http-logs'] | summarize req_duration_std = stdev(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20req_duration_std%20%3D%20stdev\(req_duration_ms\)%22%7D) **Output** | req\_duration\_std | | ------------------ | | 345.67 | This query calculates the standard deviation of the `req_duration_ms` field in the `sample-http-logs` dataset, helping to understand how much variability there is in request durations. In distributed tracing, calculating the standard deviation of span durations can help identify inconsistent spans that might indicate performance issues or bottlenecks. **Query** ```kusto ['otel-demo-traces'] | summarize span_duration_std = stdev(duration) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20span_duration_std%20%3D%20stdev\(duration\)%22%7D) **Output** | span\_duration\_std | | ------------------- | | 0:00:02.456 | This query computes the standard deviation of span durations in the `otel-demo-traces` dataset, providing insight into how much variation exists between trace spans. In security logs, the `stdev` function can help analyze the response times of various HTTP requests, potentially identifying patterns that might be related to security incidents or abnormal behavior. 
**Query** ```kusto ['sample-http-logs'] | summarize resp_time_std = stdev(req_duration_ms) by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20resp_time_std%20%3D%20stdev\(req_duration_ms\)%20by%20status%22%7D) **Output** | status | resp\_time\_std | | ------ | --------------- | | 200 | 123.45 | | 500 | 567.89 | This query calculates the standard deviation of request durations grouped by the HTTP status code, providing insight into the performance of different status codes. ## List of related aggregations * [**avg**](/apl/aggregation-function/avg): Calculates the average value of a numeric field. Use `avg` to understand the central tendency of the data. * [**min**](/apl/aggregation-function/min): Returns the smallest value in a numeric field. Use `min` when you need to find the minimum value. * [**max**](/apl/aggregation-function/max): Returns the largest value in a numeric field. Use `max` to identify the peak value in a dataset. * [**sum**](/apl/aggregation-function/sum): Adds up all the values in a numeric field. Use `sum` to get a total across records. * [**count**](/apl/aggregation-function/count): Returns the number of records in a dataset. Use `count` when you need the number of occurrences or entries. # stdevif This page explains how to use the stdevif aggregation function in APL. The `stdevif` aggregation function in APL computes the standard deviation of values in a group based on a specified condition. This is useful when you want to calculate variability in data, but only for rows that meet a particular condition. For example, you can use `stdevif` to find the standard deviation of response times in an HTTP log, but only for requests that resulted in a 200 status code. 
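The example just described can be sketched as a single ungrouped `summarize`: ```kusto ['sample-http-logs'] | summarize stdevif(req_duration_ms, status == '200') ``` This returns one row with the standard deviation of request durations across all requests that returned status `200`; the use case examples further down add grouping on top of this.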
The `stdevif` function is useful when you want to analyze the spread of data values filtered by specific criteria, such as analyzing request durations in successful transactions or monitoring trace durations of specific services in OpenTelemetry data. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `stdev` function is used to calculate the standard deviation, but you need to use an `if` function or a `where` clause to filter data. APL simplifies this by combining both operations in `stdevif`. ```sql Splunk example | stats stdev(req_duration_ms) as stdev_req where status="200" ``` ```kusto APL equivalent ['sample-http-logs'] | summarize stdevif(req_duration_ms, status == "200") by ['geo.country'] ``` In ANSI SQL, the `STDDEV` function is used to compute the standard deviation, but it requires the use of a `CASE WHEN` expression to apply a conditional filter. APL integrates the condition directly into the `stdevif` function. ```sql SQL example SELECT STDDEV(CASE WHEN status = '200' THEN req_duration_ms END) FROM sample_http_logs GROUP BY geo.country; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize stdevif(req_duration_ms, status == "200") by ['geo.country'] ``` ## Usage ### Syntax ```kusto summarize stdevif(column, condition) ``` ### Parameters * **column**: The column that contains the numeric values for which you want to calculate the standard deviation. * **condition**: The condition that must be true for the values to be included in the standard deviation calculation. ### Returns The `stdevif` function returns a floating-point number representing the standard deviation of the specified column for the rows that satisfy the condition. ## Use case examples In this example, you calculate the standard deviation of request durations (`req_duration_ms`), but only for successful HTTP requests (status code 200).
**Query** ```kusto ['sample-http-logs'] | summarize stdevif(req_duration_ms, status == '200') by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%20by%20%5B%27geo.country%27%5D%22%7D) **Output** | geo.country | stdev\_req\_duration\_ms | | ----------- | ------------------------ | | US | 120.45 | | Canada | 98.77 | | Germany | 134.92 | This query calculates the standard deviation of request durations for HTTP 200 responses, grouped by country. In this example, you calculate the standard deviation of span durations, but only for traces from the `frontend` service. **Query** ```kusto ['otel-demo-traces'] | summarize stdevif(duration, ['service.name'] == "frontend") by kind ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20stdevif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%20by%20kind%22%7D) **Output** | kind | stdev\_duration | | ------ | --------------- | | server | 45.78 | | client | 23.54 | This query computes the standard deviation of span durations for the `frontend` service, grouped by span type (`kind`). In this example, you calculate the standard deviation of request durations for security events from specific HTTP methods, filtered by `POST` requests. 
**Query** ```kusto ['sample-http-logs'] | summarize stdevif(req_duration_ms, method == "POST") by ['geo.city'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20stdevif%28req_duration_ms%2C%20method%20%3D%3D%20%27POST%27%29%20by%20%5B%27geo.city%27%5D%22%7D) **Output** | geo.city | stdev\_req\_duration\_ms | | -------- | ------------------------ | | New York | 150.12 | | Berlin | 130.33 | This query calculates the standard deviation of request durations for `POST` HTTP requests, grouped by the originating city. ## List of related aggregations * [**avgif**](/apl/aggregation-function/avgif): Similar to `stdevif`, but instead of calculating the standard deviation, `avgif` computes the average of values that meet the condition. * [**sumif**](/apl/aggregation-function/sumif): Computes the sum of values that meet the condition. Use `sumif` when you want to aggregate total values instead of analyzing data spread. * [**varianceif**](/apl/aggregation-function/varianceif): Returns the variance of values that meet the condition, which is a measure of how spread out the data points are. * [**countif**](/apl/aggregation-function/countif): Counts the number of rows that satisfy the specified condition. * [**minif**](/apl/aggregation-function/minif): Retrieves the minimum value that satisfies the given condition, useful when finding the smallest value in filtered data. # sum This page explains how to use the sum aggregation function in APL. The `sum` aggregation in APL is used to compute the total sum of a specific numeric field in a dataset. This aggregation is useful when you want to find the cumulative value for a certain metric, such as the total duration of requests, total sales revenue, or any other numeric field that can be summed. You can use the `sum` aggregation in a wide range of scenarios, such as analyzing log data, monitoring traces, or examining security logs. 
It is particularly helpful when you want to get a quick overview of your data in terms of totals or cumulative statistics. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, you use the `sum` function in combination with the `stats` command to aggregate data. In APL, the `sum` aggregation works similarly but is structured differently in terms of syntax. ```splunk Splunk example | stats sum(req_duration_ms) as total_duration ``` ```kusto APL equivalent ['sample-http-logs'] | summarize total_duration = sum(req_duration_ms) ``` In ANSI SQL, the `SUM` function is commonly used with the `GROUP BY` clause to aggregate data by a specific field. In APL, the `sum` function works similarly but can be used without requiring a `GROUP BY` clause for simple summations. ```sql SQL example SELECT SUM(req_duration_ms) AS total_duration FROM sample_http_logs ``` ```kusto APL equivalent ['sample-http-logs'] | summarize total_duration = sum(req_duration_ms) ``` ## Usage ### Syntax ```kusto summarize [ColumnAlias =] sum(ColumnName) ``` ### Parameters * **ColumnAlias**: (Optional) The name you want to assign to the resulting column that contains the sum. * **ColumnName**: The field in your dataset that contains the numeric values you want to sum. ### Returns The `sum` aggregation returns a single row with the sum of the specified numeric field. If used with a `by` clause, it returns multiple rows with the sum per group. ## Use case examples The `sum` aggregation can be used to calculate the total request duration in an HTTP log dataset.
**Query** ```kusto ['sample-http-logs'] | summarize total_duration = sum(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(req_duration_ms\)%22%7D) **Output** | total\_duration | | --------------- | | 123456 | This query calculates the total request duration across all HTTP requests in the dataset. The `sum` aggregation can be applied to OpenTelemetry traces to calculate the total span duration. **Query** ```kusto ['otel-demo-traces'] | summarize total_duration = sum(duration) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20total_duration%20%3D%20sum\(duration\)%22%7D) **Output** | total\_duration | | --------------- | | 7890 | This query calculates the total duration of all spans in the dataset. You can use the `sum` aggregation to calculate the total number of requests based on a specific HTTP status in security logs. **Query** ```kusto ['sample-http-logs'] | where status == '200' | summarize request_count = sum(1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'200'%20%7C%20summarize%20request_count%20%3D%20sum\(1\)%22%7D) **Output** | request\_count | | -------------- | | 500 | This query counts the total number of successful requests (status 200) in the dataset. ## List of related aggregations * [**count**](/apl/aggregation-function/count): Counts the number of records in a dataset. Use `count` when you want to count the number of rows, not aggregate numeric values. * [**avg**](/apl/aggregation-function/avg): Computes the average value of a numeric field. Use `avg` when you need to find the mean instead of the total sum. * [**min**](/apl/aggregation-function/min): Returns the minimum value of a numeric field. 
Use `min` when you're interested in the lowest value. * [**max**](/apl/aggregation-function/max): Returns the maximum value of a numeric field. Use `max` when you're interested in the highest value. * [**sumif**](/apl/aggregation-function/sumif): Sums a numeric field conditionally. Use `sumif` when you only want to sum values that meet a specific condition. # sumif This page explains how to use the sumif aggregation function in APL. The `sumif` aggregation function in Axiom Processing Language (APL) computes the sum of a numeric expression for records that meet a specified condition. This function is useful when you want to filter data based on specific criteria and aggregate the numeric values that match the condition. Use `sumif` when you need to apply conditional logic to sums, such as calculating the total request duration for successful HTTP requests or summing the span durations in OpenTelemetry traces for a specific service. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `sumif` equivalent functionality requires using a `stats` command with a `where` clause to filter the data. In APL, you can use `sumif` to simplify this operation by combining both the condition and the summing logic into one function. ```sql Splunk example | stats sum(duration) as total_duration where status="200" ``` ```kusto APL equivalent summarize total_duration = sumif(duration, status == '200') ``` In ANSI SQL, achieving a similar result typically involves using a `CASE` statement inside the `SUM` function to conditionally sum values based on a specified condition. In APL, `sumif` provides a more concise approach by allowing you to filter and sum in a single function. 
```sql SQL example SELECT SUM(CASE WHEN status = '200' THEN duration ELSE 0 END) AS total_duration FROM http_logs ``` ```kusto APL equivalent summarize total_duration = sumif(duration, status == '200') ``` ## Usage ### Syntax ```kusto sumif(numeric_expression, condition) ``` ### Parameters * `numeric_expression`: The numeric field or expression you want to sum. * `condition`: A boolean expression that determines which records contribute to the sum. Only the records that satisfy the condition are considered. ### Returns `sumif` returns the sum of the values in `numeric_expression` for records where the `condition` is true. If no records meet the condition, the result is 0. ## Use case examples In this use case, we calculate the total request duration for HTTP requests that returned a `200` status code. **Query** ```kusto ['sample-http-logs'] | summarize total_req_duration = sumif(req_duration_ms, status == '200') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration%20%3D%20sumif%28req_duration_ms%2C%20status%20%3D%3D%20%27200%27%29%22%7D) **Output** | total\_req\_duration | | -------------------- | | 145000 | This query computes the total request duration (in milliseconds) for all successful HTTP requests (those with a status code of `200`). In this example, we sum the span durations for the `frontend` service in OpenTelemetry traces. 
**Query** ```kusto ['otel-demo-traces'] | summarize total_duration = sumif(duration, ['service.name'] == 'frontend') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20total_duration%20%3D%20sumif%28duration%2C%20%5B%27service.name%27%5D%20%3D%3D%20%27frontend%27%29%22%7D) **Output** | total\_duration | | --------------- | | 32000 | This query sums the span durations for traces related to the `frontend` service, providing insight into the cumulative time spent processing requests in this service. Here, we calculate the total request duration for failed HTTP requests (those with status codes other than `200`). **Query** ```kusto ['sample-http-logs'] | summarize total_req_duration_failed = sumif(req_duration_ms, status != '200') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20total_req_duration_failed%20%3D%20sumif%28req_duration_ms%2C%20status%20%21%3D%20%27200%27%29%22%7D) **Output** | total\_req\_duration\_failed | | ---------------------------- | | 64000 | This query computes the total request duration for all failed HTTP requests (where the status code is not `200`), which can be useful for security log analysis. ## List of related aggregations * [**avgif**](/apl/aggregation-function/avgif): Computes the average of a numeric expression for records that meet a specified condition. Use `avgif` when you're interested in the average value, not the total sum. * [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a condition. Use `countif` when you need to know how many records match a specific criterion. * [**minif**](/apl/aggregation-function/minif): Returns the minimum value of a numeric expression for records that meet a condition. Useful when you need the smallest value under certain criteria.
* [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a numeric expression for records that meet a condition. Use `maxif` to identify the highest values under certain conditions. # topk This page explains how to use the topk aggregation function in APL. The `topk` aggregation in Axiom Processing Language (APL) allows you to identify the top *k* results based on a specified field. This is especially useful when you want to quickly analyze large datasets and extract the most significant values, such as the top-performing queries, most frequent errors, or highest latency requests. Use `topk` to find the most common or relevant entries in datasets, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you focus on the most important data points, filtering out the noise. The `topk` aggregation in APL is estimated. The estimation comes with the benefit of speed at the expense of accuracy. This means that `topk` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide the most accurate results. For completely accurate results, use the [`top` operator](/apl/tabular-operators/top-operator). ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. Splunk SPL doesn’t have the equivalent of the `topk` function. You can achieve similar results with SPL’s `top` command, which is equivalent to APL’s `top` operator. The `topk` function in APL behaves similarly by returning the top `k` values of a specified field, but its syntax is unique to APL. The main difference between `top` (supported by both SPL and APL) and `topk` (supported only by APL) is that `topk` is estimated. This means that APL’s `topk` is faster, less resource intensive, but less accurate than SPL’s `top`.
```sql Splunk example | top limit=5 status by method ``` ```kusto APL equivalent ['sample-http-logs'] | summarize topk(status, 5) by method ``` In ANSI SQL, identifying the top *k* rows often involves using the `ORDER BY` and `LIMIT` clauses. While the logic remains similar, APL’s `topk` simplifies this process by directly returning the top *k* values of a field in an aggregation. The main difference between SQL’s solution and APL’s `topk` is that `topk` is estimated. This means that APL’s `topk` is faster, less resource intensive, but less accurate than SQL’s combination of `ORDER BY` and `LIMIT` clauses. ```sql SQL example SELECT status, COUNT(*) FROM sample_http_logs GROUP BY status ORDER BY COUNT(*) DESC LIMIT 5; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize topk(status, 5) ``` ## Usage ### Syntax ```kusto topk(field, k) ``` ### Parameters * **`field`**: The field or expression to rank the results by. * **`k`**: The number of top results to return. ### Returns A subset of the original dataset with the top *k* values based on the specified field. ## Use case examples When analyzing HTTP logs, you can use the `topk` function to find the top 5 most frequent HTTP status codes. **Query** ```kusto ['sample-http-logs'] | summarize topk(status, 5) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(status%2C%205\)%22%7D) **Output** | status | count\_ | | ------ | ------- | | 200 | 1500 | | 404 | 400 | | 500 | 200 | | 301 | 150 | | 302 | 100 | This query groups the logs by HTTP status and returns the 5 most frequent statuses. In OpenTelemetry traces, you can use `topk` to find the top five status codes by service.
**Query** ```kusto ['otel-demo-traces'] | summarize topk(['attributes.http.status_code'], 5) by ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20topk\(%5B'attributes.http.status_code'%5D%2C%205\)%20by%20%5B'service.name'%5D%22%7D) **Output** | service.name | attributes.http.status\_code | \_count | | ------------- | ---------------------------- | ---------- | | frontendproxy | 200 | 34,862,088 | | | 203 | 3,095,223 | | | 404 | 154,417 | | | 500 | 153,823 | | | 504 | 3,497 | This query shows the top five status codes by service. You can use `topk` in security log analysis to find the top 5 cities generating the most HTTP requests. **Query** ```kusto ['sample-http-logs'] | summarize topk(['geo.city'], 5) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%20%7C%20summarize%20topk\(%5B'geo.city'%5D%2C%205\)%22%7D) **Output** | geo.city | count\_ | | -------- | ------- | | New York | 500 | | London | 400 | | Paris | 350 | | Tokyo | 300 | | Berlin | 250 | This query returns the top 5 cities based on the number of HTTP requests. ## List of related aggregations * [**top**](/apl/tabular-operators/top-operator): Returns the top values based on a field without requiring a specific number of results (`k`), making it useful when you're unsure how many top values to retrieve. * [**sort**](/apl/tabular-operators/sort-operator): Orders the dataset based on one or more fields, which is useful if you need a complete ordered list rather than the top *k* values. * [**extend**](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, which can be useful in combination with `topk` to create custom rankings. * [**count**](/apl/aggregation-function/count): Aggregates the dataset by counting occurrences, often used in conjunction with `topk` to find the most common values. 
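To make the estimation trade-off concrete, it helps to see what an exact top-k computation looks like outside APL. The following Python sketch is purely illustrative: it shows the exact computation that an estimated `topk` approximates, not how Axiom implements the aggregation, and the sample data is made up.

```python
from collections import Counter

# Made-up stand-in for the 'status' field of HTTP log records.
statuses = ["200"] * 15 + ["404"] * 4 + ["500"] * 2 + ["301"]

def exact_topk(values, k):
    """Exact top-k: a full frequency count followed by a sort.

    Estimated top-k algorithms trade this exactness for bounded
    memory and speed on large, high-cardinality fields.
    """
    return Counter(values).most_common(k)

print(exact_topk(statuses, 3))  # [('200', 15), ('404', 4), ('500', 2)]
```

The exact approach must keep every distinct value and its count in memory before sorting, which is why estimated variants are attractive for high-cardinality fields.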
# variance This page explains how to use the variance aggregation function in APL. The `variance` aggregation function in APL calculates the variance of a numeric expression across a set of records. Variance is a statistical measurement that represents the spread of data points in a dataset. It's useful for understanding how much variation exists in your data. In scenarios such as performance analysis, network traffic monitoring, or anomaly detection, `variance` helps identify outliers and patterns by showing how data points deviate from the mean. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In SPL, variance is computed using the `stats` command with the `var` function, whereas in APL, you can use `variance` for the same functionality. ```sql Splunk example | stats var(req_duration_ms) as variance ``` ```kusto APL equivalent ['sample-http-logs'] | summarize variance(req_duration_ms) ``` In ANSI SQL, variance is typically calculated using `VAR_POP` or `VAR_SAMP`. APL provides a simpler approach using the `variance` function without needing to specify population or sample. ```sql SQL example SELECT VAR_POP(req_duration_ms) FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize variance(req_duration_ms) ``` ## Usage ### Syntax ```kusto summarize variance(Expression) ``` ### Parameters * `Expression`: A numeric expression or field for which you want to compute the variance. The expression should evaluate to a numeric data type. ### Returns The function returns the variance (a numeric value) of the specified expression across the records. ## Use case examples You can use the `variance` function to measure the variability of request durations, which helps in identifying performance bottlenecks or anomalies in web services. 
**Query** ```kusto ['sample-http-logs'] | summarize variance(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%22%7D) **Output** | variance\_req\_duration\_ms | | --------------------------- | | 1024.5 | This query calculates the variance of request durations from a dataset of HTTP logs. A high variance indicates greater variability in request durations, potentially signaling performance issues. For OpenTelemetry traces, `variance` can be used to measure how much span durations differ across service invocations, helping in performance optimization and anomaly detection. **Query** ```kusto ['otel-demo-traces'] | summarize variance(duration) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20variance\(duration\)%22%7D) **Output** | variance\_duration | | ------------------ | | 1287.3 | This query computes the variance of span durations across traces, which helps in understanding how consistent the service performance is. A higher variance might indicate unstable or inconsistent performance. You can use the `variance` function on security logs to detect abnormal patterns in request behavior, such as unusual fluctuations in response times, which may point to potential security threats. **Query** ```kusto ['sample-http-logs'] | summarize variance(req_duration_ms) by status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20variance\(req_duration_ms\)%20by%20status%22%7D) **Output** | status | variance\_req\_duration\_ms | | ------ | --------------------------- | | 200 | 1534.8 | | 404 | 2103.4 | This query calculates the variance of request durations grouped by HTTP status codes. 
High variance in certain status codes (e.g., 404 errors) can indicate network or application issues. ## List of related aggregations * [**stdev**](/apl/aggregation-function/stdev): Computes the standard deviation, which is the square root of the variance. Use `stdev` when you need the spread of data in the same units as the original dataset. * [**avg**](/apl/aggregation-function/avg): Computes the average of a numeric field. Combine `avg` with `variance` to analyze both the central tendency and the spread of data. * [**count**](/apl/aggregation-function/count): Counts the number of records. Use `count` alongside `variance` to get a sense of data size relative to variance. * [**percentile**](/apl/aggregation-function/percentile): Returns a value below which a given percentage of observations fall. Use `percentile` for a more detailed distribution analysis. * [**max**](/apl/aggregation-function/max): Returns the maximum value. Use `max` when you are looking for extreme values in addition to variance to detect anomalies. # varianceif This page explains how to use the varianceif aggregation function in APL. The `varianceif` aggregation in APL calculates the variance of values that meet a specified condition. This is useful when you want to understand the variability of a subset of data without considering all data points. For example, you can use `varianceif` to compute the variance of request durations for HTTP requests that resulted in a specific status code or to track anomalies in trace durations for a particular service. You can use the `varianceif` aggregation when analyzing logs, telemetry data, or security events where conditions on subsets of the data are critical to your analysis. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, you would use the `eval` function to filter data and calculate variance for specific conditions. 
In APL, `varianceif` combines the filtering and aggregation into a single function, making your queries more concise. ```sql Splunk example | eval filtered_var=if(status=="200",req_duration_ms,null()) | stats var(filtered_var) ``` ```kusto APL equivalent ['sample-http-logs'] | summarize varianceif(req_duration_ms, status == '200') ``` In ANSI SQL, you typically use a `CASE` statement to apply conditional logic and then compute the variance. In APL, `varianceif` simplifies this by combining both the condition and the aggregation. ```sql SQL example SELECT VARIANCE(CASE WHEN status = '200' THEN req_duration_ms END) FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | summarize varianceif(req_duration_ms, status == '200') ``` ## Usage ### Syntax ```kusto summarize varianceif(Expr, Predicate) ``` ### Parameters * `Expr`: The expression (numeric) for which you want to calculate the variance. * `Predicate`: A boolean condition that determines which records to include in the calculation. ### Returns Returns the variance of `Expr` for the records where the `Predicate` is true. If no records match the condition, it returns `null`. ## Use case examples You can use the `varianceif` function to calculate the variance of HTTP request durations for requests that succeeded (`status == '200'`). **Query** ```kusto ['sample-http-logs'] | summarize varianceif(req_duration_ms, status == '200') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20status%20%3D%3D%20'200'%29%22%7D) **Output** | varianceif\_req\_duration\_ms | | ----------------------------- | | 15.6 | This query calculates the variance of request durations for all HTTP requests that returned a status code of 200 (successful requests). You can use the `varianceif` function to monitor the variance in span durations for a specific service, such as the `frontend` service. 
**Query** ```kusto ['otel-demo-traces'] | summarize varianceif(duration, ['service.name'] == 'frontend') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20summarize%20varianceif%28duration%2C%20%5B'service.name'%5D%20%3D%3D%20'frontend'%29%22%7D) **Output** | varianceif\_duration | | -------------------- | | 32.7 | This query calculates the variance in the duration of spans generated by the `frontend` service. The `varianceif` function can also be used to track the variance in request durations for requests from a specific geographic region, such as requests from `geo.country == 'United States'`. **Query** ```kusto ['sample-http-logs'] | summarize varianceif(req_duration_ms, ['geo.country'] == 'United States') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20varianceif%28req_duration_ms%2C%20%5B'geo.country'%5D%20%3D%3D%20'United%20States'%29%22%7D) **Output** | varianceif\_req\_duration\_ms | | ----------------------------- | | 22.9 | This query calculates the variance in request durations for requests originating from the United States. ## List of related aggregations * [**avgif**](/apl/aggregation-function/avgif): Computes the average value of an expression for records that match a given condition. Use `avgif` when you want the average instead of variance. * [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values that meet a specified condition. Use `sumif` when you're interested in totals, not variance. * [**stdevif**](/apl/aggregation-function/stdevif): Returns the standard deviation of values based on a condition. Use `stdevif` when you want to measure dispersion using standard deviation instead of variance. # Null values This page explains how APL represents missing values. All scalar data types in APL have a special value that represents a missing value. 
This value is called the null value, or null. ## Null literals The null value of a scalar type `D` is represented in the query language by the null literal `D(null)`. The following query returns a single row full of null values: ```kusto print bool(null), datetime(null), dynamic(null), int(null), long(null), real(null), double(null), time(null) ``` ## Predicates on null values The scalar function [isnull()](/apl/scalar-functions/string-functions#isnull\(\)) can be used to determine if a scalar value is the null value. The corresponding function [isnotnull()](/apl/scalar-functions/string-functions#isnotnull\(\)) can be used to determine if a scalar value isn’t the null value. ## Equality and inequality of null values * Equality (`==`): Applying the equality operator to two null values yields `bool(null)`. Applying the equality operator to a null value and a non-null value yields `bool(false)`. * Inequality (`!=`): Applying the inequality operator to two null values yields `bool(null)`. Applying the inequality operator to a null value and a non-null value yields `bool(true)`. # Scalar data types This page explains the data types in APL. Axiom Processing Language supplies a set of system data types that define all the types of data that can be used with APL. The following table lists the data types supported by APL, alongside additional aliases you can use to refer to them.
| **Type** | **Additional name(s)** | **gettype()** | | ------------------------------------- | ----------------------------- | ------------------------------------------------------------ | | [bool()](#the-bool-data-type) | **boolean** | **int8** | | [datetime()](#the-datetime-data-type) | **date** | **datetime** | | [dynamic()](#the-dynamic-data-type) | | **array** or **dictionary** or any of the other values | | [int()](#the-int-data-type) | **int** has an alias **long** | **int** | | [long()](#the-long-data-type) | | **long** | | [real()](#the-real-data-type) | **double** | **real** | | [string()](#the-string-data-type) | | **string** | | [timespan()](#the-timespan-data-type) | **time** | **timespan** | ## The bool data type The bool (boolean) data type can have one of two states: `true` or `false` (internally encoded as 1 and 0, respectively), as well as the null value. ### bool literals The bool data type has the following literals: * true and bool(true): Representing trueness * false and bool(false): Representing falsehood * null and bool(null): Representing the null value ### bool operators The `bool` data type supports the following operators: equality (`==`), inequality (`!=`), logical-and (`and`), and logical-or (`or`). ## The datetime data type The datetime (date) data type represents an instant in time, typically expressed as a date and time of day. Values range from 00:00:00 (midnight), January 1, 0001 Anno Domini (Common Era) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) in the Gregorian calendar. ### datetime literals Literals of type **datetime** have the syntax **datetime** (`value`), where a number of formats are supported for value, as indicated by the following table: | **Example** | **Value** | | ------------------------------------------------------------ | -------------------------------------------------------------- | | **datetime(2019-11-30 23:59:59.9)** **datetime(2015-12-31)** | Times are always in UTC.
Omitting the date gives a time today. | | **datetime(null)** | Check out our [null values](/apl/data-types/null-values) | | **now()** | The current time. | | **now(-timespan)** | now()-timespan | | **ago(timespan)** | now()-timespan | **now()** and **ago()** indicate a `datetime` value relative to the moment in time when APL starts to execute the query. ### Supported formats We support the **ISO 8601** format, which is the standard format for representing dates and times in the Gregorian calendar. ### [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) | **Format** | **Example** | | ------------------- | --------------------------- | | %Y-%m-%dT%H:%M:%s%z | 2016-06-26T08:20:03.123456Z | | %Y-%m-%dT%H:%M:%s | 2016-06-26T08:20:03.123456 | | %Y-%m-%dT%H:%M | 2016-06-26T08:20 | | %Y-%m-%d %H:%M:%s%z | 2016-10-06 15:55:55.123456Z | | %Y-%m-%d %H:%M:%s | 2016-10-06 15:55:55 | | %Y-%m-%d %H:%M | 2016-10-06 15:55 | | %Y-%m-%d | 2014-11-08 | ## The dynamic data type The **dynamic** scalar data type is special in that it can take on any value of other scalar data types from the list below, as well as arrays and property bags. Specifically, a **dynamic** value can be: * null * A value of any of the primitive scalar data types: **bool**, **datetime**, **int**, **long**, **real**, **string**, and **timespan**. * An array of **dynamic** values, holding zero or more values with zero-based indexing. * A property bag, holding zero or more key-value pairs. ### Dynamic literals A literal of type dynamic looks like this: dynamic (`Value`) Value can be: * null, in which case the literal represents the null dynamic value: **dynamic(null)**. * Another scalar data type literal, in which case the literal represents the **dynamic** literal of the "inner" type. For example, **dynamic(6)** is a dynamic value holding the value 6 of the long scalar data type. * An array of dynamic or other literals: \[`ListOfValues`].
For example, dynamic(\[3, 4, "bye"]) is a dynamic array of three elements, two **long** values and one **string** value. * A property bag: \{`Name`=`Value ...`}. For example, `dynamic(\{"a":1, "b":\{"a":2\}\})` is a property bag with two slots, a and b, with the second slot being another property bag. ## The int data type The **int** data type represents a signed, 64-bit wide, integer. The special form **int(null)** represents the [null value.](/apl/data-types/null-values) **int** has an alias **[long](/apl/data-types/scalar-data-types#the-long-data-type)** ## The long data type The **long** data type represents a signed, 64-bit wide, integer. ### long literals Literals of the long data type can be specified in the following syntax: long(`Value`) Where Value can take the following forms: * One or more digits, in which case the literal value is the decimal representation of these digits. For example, **long(11)** is the number eleven of type long. * A minus (`-`) sign followed by one or more digits. For example, **long(-3)** is the number minus three of type **long**. * null, in which case this is the [null value](/apl/data-types/null-values) of the **long** data type. Thus, the null value of type **long** is **long(null)**. ## The real data type The **real** data type represents a 64-bit wide, double-precision, floating-point number. ## The string data type The **string** data type represents a sequence of zero or more [Unicode](https://home.unicode.org/) characters. ### String literals There are several ways to encode literals of the **string** data type in a query text: * Enclose the string in double-quotes (`"`): "This is a string literal. Single quote characters (') don’t require escaping. Double quote characters (") are escaped by a backslash (\\)" * Enclose the string in single-quotes (`'`): Another string literal. Single quote characters (') require escaping by a backslash (\\). Double quote characters (") do not require escaping.
In the two representations above, the backslash (`\`) character indicates escaping. The backslash is used to escape the enclosing quote characters, tab characters (`\t`), newline characters (`\n`), and itself (`\\`). ### Raw string literals Raw string literals are also supported. In this form, the backslash character (`\`) stands for itself, and does not denote an escape sequence. * Enclosed in double-quotes (`"`): **@"This is a raw string literal"** * Enclosed in single-quotes (`'`): **@'This is a raw string literal'** Raw strings are particularly useful for **regexes** where you can use **@"^\[\d]+$"** instead of **"^[\\d]+$"**. ## The timespan data type The **timespan** `(time)` data type represents a time interval. ### timespan literals Literals of type **timespan** have the syntax **timespan(value)**, where a number of formats are supported for value, as indicated by the following table: | **Value** | **Length of time** | | ----------------- | ------------------ | | **2d** | 2 days | | **1.5h** | 1.5 hours | | **30m** | 30 minutes | | **10s** | 10 seconds | | **timespan(15s)** | 15 seconds | | **0.1s** | 0.1 second | | **timespan(2d)** | 2 days | ## Type conversions APL provides a set of functions to convert values between different scalar data types. These conversion functions allow you to convert a value from one type to another. Some of the commonly used conversion functions include: * `tobool()`: Converts input to boolean representation. * `todatetime()`: Converts input to datetime scalar. * `todouble()` or `toreal()`: Converts input to a value of type real. * `tostring()`: Converts input to a string representation. * `totimespan()`: Converts input to timespan scalar. * `tolong()`: Converts input to long (signed 64-bit) number representation. * `toint()`: Converts input to an integer value (signed 64-bit) number representation.
For a complete list of conversion functions and their detailed descriptions and examples, refer to the [Conversion functions](/apl/scalar-functions/conversion-functions) documentation. # Entity names This page explains how to use entity names in your APL query. APL entities (datasets, tables, columns, and operators) are named. For example, two fields or columns in the same dataset can have the same name if the casing is different, and a table and a dataset may have the same name because they aren’t in the same scope. ## Columns * Column names are case-sensitive for resolving purposes, and they have a specific position in the dataset’s collection of columns. * Column names are unique within a dataset and table. * In queries, columns are generally referenced by name only. They can only appear in expressions, and the query operator under which the expression appears determines the table or tabular data stream. ## Identifier naming rules Axiom uses identifiers to name various entities. Valid identifier names follow these rules: * Between 1 and 1024 characters long. * Allowed characters: * Alphanumeric characters (letters and digits) * Underscore (`_`) * Space (` `) * Dot (`.`) * Dash (`-`) Identifier names are case-sensitive. ## Quote identifiers Quote an identifier in your APL query if any of the following is true: * The identifier name contains at least one of the following special characters: * Space (` `) * Dot (`.`) * Dash (`-`) * The identifier name is identical to one of the reserved keywords of the APL query language. For example, `project` or `where`. If any of the above is true, you must quote the identifier by putting it in quotation marks (`'`) and square brackets (`[]`). For example, `['my-field']`. If none of the above is true, you don’t need to quote the identifier in your APL query. For example, `myfield`. In this case, quoting the identifier name is optional.
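The quoting rules above are mechanical enough to express as a short check. This Python sketch is only an illustration of the rules as stated on this page, not an Axiom API, and the keyword set is a small hypothetical sample rather than the full list of APL reserved keywords.

```python
# Hypothetical, abbreviated set of APL reserved keywords for illustration.
RESERVED_KEYWORDS = {"project", "where", "summarize", "extend", "sort"}
# Special characters that force quoting per the rules above.
SPECIAL_CHARS = {" ", ".", "-"}

def quote_identifier(name: str) -> str:
    """Return the identifier as it should appear in an APL query.

    Quoting is required when the name contains a space, dot, or dash,
    or collides with a reserved keyword; otherwise it is optional and
    the bare name is returned.
    """
    needs_quoting = (
        any(ch in SPECIAL_CHARS for ch in name)
        or name in RESERVED_KEYWORDS
    )
    return f"['{name}']" if needs_quoting else name

print(quote_identifier("geo.city"))  # ['geo.city']
print(quote_identifier("project"))   # ['project']
print(quote_identifier("myfield"))   # myfield
```

Applying the quoted form unconditionally is always safe, since quoting is optional for plain identifiers.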
# Migrate from SQL to APL This guide helps you migrate from SQL to APL by explaining key differences and providing query examples. ## Introduction As data grows exponentially, organizations are continuously seeking more efficient and powerful tools to manage and analyze their data. The Query tab, which utilizes the Axiom Processing Language (APL), is one such service that offers fast, scalable, and interactive data exploration capabilities. If you are an SQL user looking to migrate to APL, this guide provides a gentle introduction to help you make the transition smoothly. ## Introduction to Axiom Processing Language (APL) Axiom Processing Language (APL) is the language used by the Query tab, a fast and highly scalable data exploration service. APL is optimized for real-time and historical data analytics, making it a suitable choice for various data analysis tasks. **Tabular operators**: In APL, there are several tabular operators that help you manipulate and filter data, similar to SQL’s SELECT, FROM, WHERE, GROUP BY, and ORDER BY clauses. Some of the commonly used tabular operators are: * `extend`: Adds new columns to the result set. * `project`: Selects specific columns from the result set. * `where`: Filters rows based on a condition. * `summarize`: Groups and aggregates data, similar to the GROUP BY clause in SQL. * `sort`: Sorts the result set based on one or more columns, similar to ORDER BY in SQL. ## Key differences between SQL and APL While SQL and APL are both query languages, there are some key differences to consider: * APL is designed for querying large volumes of structured, semi-structured, and unstructured data. * APL is a pipe-based language, meaning you can chain multiple operations using the pipe operator (`|`) to create a data transformation flow.
* APL doesn’t use SELECT and FROM clauses like SQL does. Instead, it uses operators such as `summarize`, `extend`, `where`, and `project`. * APL is case-sensitive, whereas SQL isn’t. ## Benefits of migrating from SQL to APL * **Time Series Analysis:** APL is particularly strong when it comes to analyzing time-series data (logs, telemetry data, etc.). It has a rich set of operators designed specifically for such scenarios, making it much easier to handle time-based analysis. * **Pipelining:** APL uses a pipelining model, much like the UNIX command line. You can chain commands together using the pipe (`|`) symbol, with each command operating on the results of the previous command. This makes it very easy to write complex queries. * **Easy to Learn:** APL is designed to be simple and easy to learn, especially for those already familiar with SQL. It does not require any knowledge of database schemas or the need to specify joins. * **Scalability:** APL is built to handle larger volumes of data than a typical SQL database, so queries remain fast as your data grows. * **Flexibility:** APL can query structured, semi-structured, and unstructured data in a single language. * **Features:** APL offers capabilities beyond standard SQL, such as real-time analytics and time-based analysis. ## Basic APL Syntax A basic APL query starts with a dataset name followed by a chain of tabular operators, each introduced by the pipe (`|`) character: ```kusto ['dataset-name'] | where <condition> | summarize <aggregation> by <grouping-column> | order by <column> desc ``` ## Query Examples Let’s see some examples of how to convert SQL queries to APL.
## SELECT with a simple filter **SQL:** ```sql SELECT * FROM [Sample-http-logs] WHERE method = 'GET'; ``` **APL:** ```kusto ['sample-http-logs'] | where method == 'GET' ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## COUNT with GROUP BY **SQL:** ```sql SELECT method, COUNT(*) FROM [Sample-http-logs] GROUP BY method; ``` **APL:** ```kusto ['sample-http-logs'] | summarize count() by method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Top N results **SQL:** ```sql SELECT TOP 10 Status, Method FROM [Sample-http-logs] ORDER BY Method DESC; ``` **APL:** ```kusto ['sample-http-logs'] | top 10 by method desc | project status, method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|top%2010%20by%20method%20desc%20\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Simple filtering and projection **SQL:** ```sql SELECT method, status, geo.country FROM [Sample-http-logs] WHERE resp_header_size_bytes >= 18; ``` **APL:** ```kusto ['sample-http-logs'] | where resp_header_size_bytes >= 18 | project method, status, ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|where%20resp_header_size_bytes%20%3E=18%20\n|%20project%20method,%20status,%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## COUNT with a HAVING clause **SQL:** ```sql SELECT geo.country FROM [Sample-http-logs] GROUP BY geo.country HAVING COUNT(*) > 100; ``` **APL:** ```kusto ['sample-http-logs'] | summarize count() by ['geo.country'] | where count_ > 100
``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20\[%27geo.country%27]\n|%20where%20count_%20%3E%20100%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Multiple Aggregations **SQL:** ```sql SELECT geo.country, COUNT(*) AS TotalRequests, AVG(req_duration_ms) AS AverageRequest, MIN(req_duration_ms) AS MinRequest, MAX(req_duration_ms) AS MaxRequest FROM [Sample-http-logs] GROUP BY geo.country; ``` **APL:** ```kusto ['sample-http-logs'] | summarize TotalRequests = count(), AverageRequest = avg(req_duration_ms), MinRequest = min(req_duration_ms), MaxRequest = max(req_duration_ms) by ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20totalRequests%20=%20count\(\),%20Averagerequest%20=%20avg\(req_duration_ms\),%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%20by%20\[%27geo.country%27]%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ### Sum of a column **SQL:** ```sql SELECT SUM(resp_body_size_bytes) AS TotalBytes FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize TotalBytes = sum(resp_body_size_bytes) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalBytes%20=%20sum\(resp_body_size_bytes\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ### Average of a column **SQL:** ```sql SELECT AVG(req_duration_ms) AS AverageRequest FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize AverageRequest = avg(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20AverageRequest%20=%20avg\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Minimum and Maximum Values of a
column **SQL:** ```sql SELECT MIN(req_duration_ms) AS MinRequest, MAX(req_duration_ms) AS MaxRequest FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize MinRequest = min(req_duration_ms), MaxRequest = max(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20MinRequest%20=%20min\(req_duration_ms\),%20MaxRequest%20=%20max\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Count distinct values **SQL:** ```sql SELECT COUNT(DISTINCT method) AS UniqueMethods FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize UniqueMethods = dcount(method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|summarize%20UniqueMethods%20=%20dcount\(method\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Standard deviation of a column **SQL:** ```sql SELECT STDDEV(req_duration_ms) AS StdDevRequest FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize StdDevRequest = stdev(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20stdDEVRequest%20=%20stdev\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Variance of a column **SQL:** ```sql SELECT VAR(req_duration_ms) AS VarRequest FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize VarRequest = variance(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20VarRequest%20=%20variance\(req_duration_ms\)%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Multiple aggregation functions **SQL:** ```sql SELECT COUNT(*) AS TotalOrders, SUM(req_duration_ms) AS TotalDuration, AVG(req_duration_ms) AS AverageDuration FROM
[Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | summarize TotalOrders = count(), TotalDuration = sum(req_duration_ms), AverageDuration = avg(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalOrders%20=%20count\(\),%20TotalDuration%20=%20sum\(req_duration_ms\),%20AverageDuration%20=%20avg\(req_duration_ms\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Aggregation with GROUP BY and ORDER BY **SQL:** ```sql SELECT status, COUNT(*) AS TotalStatus, SUM(resp_header_size_bytes) AS TotalRequest FROM [Sample-http-logs] GROUP BY status ORDER BY TotalRequest DESC; ``` **APL:** ```kusto ['sample-http-logs'] | summarize TotalStatus = count(), TotalRequest = sum(resp_header_size_bytes) by status | order by TotalRequest desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20TotalStatus%20=%20count\(\),%20TotalRequest%20=%20sum\(resp_header_size_bytes\)%20by%20status\n|%20order%20by%20TotalRequest%20desc%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Count with a condition **SQL:** ```sql SELECT COUNT(*) AS HighContentStatus FROM [Sample-http-logs] WHERE resp_header_size_bytes > 1; ``` **APL:** ```kusto ['sample-http-logs'] | where resp_header_size_bytes > 1 | summarize HighContentStatus = count() ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20resp_header_size_bytes%20%3E%201\n|%20summarize%20HighContentStatus%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Aggregation with HAVING **SQL:** ```sql SELECT Status FROM [Sample-http-logs] GROUP BY Status HAVING COUNT(*) > 10; ``` **APL:** ```kusto ['sample-http-logs'] | summarize OrderCount = count() by status | where OrderCount > 10 ``` [Run in
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20OrderCount%20=%20count\(\)%20by%20status\n|%20where%20OrderCount%20%3E%2010%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Count occurrences of a value in a field **SQL:** ```sql SELECT content_type, COUNT(*) AS RequestCount FROM [Sample-http-logs] WHERE content_type = 'text/csv' GROUP BY content_type; ``` **APL:** ```kusto ['sample-http-logs'] | where content_type == 'text/csv' | summarize RequestCount = count() ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20==%20%27text/csv%27%20\n|%20summarize%20RequestCount%20=%20count\(\)%20%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## String Functions ## Length of a string **SQL:** ```sql SELECT LEN(Status) AS NameLength FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend NameLength = strlen(status) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20NameLength%20=%20strlen\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Concatenation **SQL:** ```sql SELECT CONCAT(content_type, ' ', method) AS FullLength FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend FullLength = strcat(content_type, ' ', method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20FullLength%20=%20strcat\(content_type,%20%27%20%27,%20method\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Substring **SQL:** ```sql SELECT SUBSTRING(content_type, 1, 10) AS ShortDescription FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend ShortDescription = substring(content_type, 0, 10) ``` [Run in
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ShortDescription%20=%20substring\(content_type,%200,%2010\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Left and Right **SQL:** ```sql SELECT LEFT(content_type, 3) AS LeftTitle, RIGHT(content_type, 3) AS RightTitle FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend LeftTitle = substring(content_type, 0, 3), RightTitle = substring(content_type, strlen(content_type) - 3, 3) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20LeftTitle%20=%20substring\(content_type,%200,%203\),%20RightTitle%20=%20substring\(content_type,%20strlen\(content_type\)%20-%203,%203\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Replace **SQL:** ```sql SELECT REPLACE(Status, 'old', 'new') AS UpdatedStatus FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend UpdatedStatus = replace('old', 'new', status) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20UpdatedStatus%20=%20replace\(%27old%27,%20%27new%27,%20status\)%20%20%22,%22queryOptions%22:\{%22quickRange%22:%2215d%22}}) ## Upper and Lower **SQL:** ```sql SELECT UPPER(FirstName) AS UpperFirstName, LOWER(LastName) AS LowerLastName FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | project UpperFirstName = toupper(content_type), LowerLastName = tolower(status) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20upperFirstName%20=%20toupper\(content_type\),%20LowerLastNmae%20=%20tolower\(status\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## LTrim and RTrim **SQL:** ```sql SELECT LTRIM(content_type) AS LeftTrimmedFirstName, RTRIM(content_type) AS
RightTrimmedLastName FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend LeftTrimmedFirstName = trim_start(' ', content_type), RightTrimmedLastName = trim_end(' ', content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20LeftTrimmedFirstName%20=%20trim_start\(%27%27,%20content_type\),%20RightTrimmedLastName%20=%20trim_end\(%27%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Trim **SQL:** ```sql SELECT TRIM(content_type) AS TrimmedFirstName FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend TrimmedFirstName = trim(' ', content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20TrimmedFirstName%20=%20trim\(%27%20%27,%20content_type\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Reverse **SQL:** ```sql SELECT REVERSE(Method) AS ReversedFirstName FROM [Sample-http-logs]; ``` **APL:** ```kusto ['sample-http-logs'] | extend ReversedFirstName = reverse(method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20project%20ReservedFirstnName%20=%20reverse\(method\)%20%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Case-insensitive search **SQL:** ```sql SELECT Status, Method FROM [Sample-http-logs] WHERE LOWER(Method) LIKE '%get%'; ``` **APL:** ```kusto ['sample-http-logs'] | where tolower(method) contains 'get' | project status, method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20tolower\(method\)%20contains%20%27GET%27\n|%20project%20status,%20method%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Take the First Step Today: Dive into APL The journey from SQL to APL might seem daunting at first, but with the right
approach, it can become an empowering transition. It is about expanding your data query capabilities to leverage the advanced, versatile, and fast querying infrastructure that APL provides. In the end, the goal is to enable you to draw more value from your data, make faster decisions, and ultimately propel your business forward. Try converting some of your existing SQL queries to APL and observe the performance difference. Explore the Axiom Processing Language and start experimenting with its unique features. **Happy querying!** # Migrate from Sumo Logic Query Language to APL This guide dives into why APL could be a superior choice for your data needs, and the differences between Sumo Logic and APL. ## Introduction In the sphere of data analytics and log management, being able to query data efficiently and effectively is of paramount importance. This guide dives into why APL could be a superior choice for your data needs, the differences between Sumo Logic and APL, and the potential benefits you could reap from migrating from Sumo Logic to APL. Let’s explore the compelling case for APL as a robust, powerful tool for handling your complex data querying requirements. APL is powerful and flexible: it uses a pipe (`|`) operator for chaining commands, and it provides a richer set of functions and operators for more complex queries. ## Benefits of Migrating from Sumo Logic to APL * **Scalability and Performance:** APL was built with scalability in mind. It handles very large volumes of data more efficiently and provides quicker query execution compared to Sumo Logic, making it a suitable choice for organizations with extensive data requirements. APL is designed for high-speed data ingestion, real-time analytics, and providing insights across structured and semi-structured data. It’s also optimized for time-series data analysis, making it highly efficient for log and telemetry data.
* **Advanced Analytics Capabilities:** With APL’s support for aggregation and conversion functions and more advanced statistical visualization, organizations can derive more sophisticated insights from their data. ## Query Examples Let’s see some examples of how to convert Sumo Logic queries to APL. ## Parse and Extract Operators Extract `from` and `to` fields. For example, if a raw event contains `From: Jane To: John`, then `from=Jane` and `to=John`. **Sumo Logic:** ```bash * | parse "From: * To: *" as (from, to) ``` **APL:** ```kusto ['sample-http-logs'] | extend from = extract("From: (.*?) To: (.*)", 1, method), to = extract("From: (.*?) To: (.*)", 2, method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20\(method\)%20==%20extract\(%22From:%20\(.*?\)%20To:%20\(.*\)%22,%201,%20method\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Extract Source IP with Regex In this section, we will utilize a regular expression to identify the four octets of an IP address. This will help us efficiently extract the source IP addresses from the data. **Sumo Logic:** ```bash *| parse regex "(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" ``` **APL:** ```kusto ['sample-http-logs'] | extend ip = extract("(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})", 1, "23.45.67.90") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ip%20=%20extract\(%22\(\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\\\\.\\\d\{1,3}\)%22,%201,%20%2223.45.67.90%22\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Extract Visited URLs This section focuses on identifying all URL addresses visited and extracting them to populate the `url` field. This method provides an organized way to track user activity using APL.
**Sumo Logic:** ```bash _sourceCategory=apache | parse "GET * " as url ``` **APL:** ```kusto ['sample-http-logs'] | where method == "GET" | project url = extract(@"(\w+)", 1, method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%5Cn%7C%20project%20url%20%3D%20extract\(%40%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Extract Data from Source Category Traffic This section aims to identify and analyze traffic originating from the Source Category. We will extract critical information including the source addresses, the sizes of messages transmitted, and the URLs visited, providing valuable insights into the nature of the traffic using APL. **Sumo Logic:** ```bash _sourceCategory=apache | parse "* " as src_IP | parse " 200 * " as size | parse "GET * " as url ``` **APL:** ```kusto ['sample-http-logs'] | extend src_IP = extract("^(\\S+)", 0, uri) | extend size = extract("^(\\S+)", 1, status) | extend url = extract("^(\\S+)", 1, method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%200%2C%20uri\)%5Cn%7C%20extend%20size%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20status\)%5Cn%7C%20extend%20url%20%3D%20extract\(%5C%22%5E\(%40S%2B\)%5C%22%2C%201%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Calculate Bytes Transferred per Source IP In this part, we will compute the total number of bytes transferred to each source IP address. This will allow us to gauge the data volume associated with each source using APL. 
**Sumo Logic:** ```bash _sourceCategory=apache | parse "* " as src_IP | parse " 200 * " as size | count, sum(size) by src_IP ``` **APL:** ```kusto ['sample-http-logs'] | extend src_IP = extract("^(\\S+)", 1, uri) | extend size = toint(extract("200", 0, status)) | summarize count(), sum(size) by src_IP ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20toint\(extract\(%22200%22,%200,%20status\)\)\n|%20summarize%20count\(\),%20sum\(size\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Compute Average HTTP Response Size In this section, we will calculate the average size of all successful HTTP responses. This metric helps us to understand the typical data load associated with successful server responses. **Sumo Logic:** ```bash _sourceCategory=apache | parse " 200 * " as size | avg(size) ``` **APL:** Get the average value from a string: ```kusto ['sample-http-logs'] | extend number = todouble(extract("\\d+(\\.\\d+)?", 0, status)) | summarize Average = avg(number) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20number%20=%20todouble\(status\)\n|%20summarize%20Average%20=%20avg\(number\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Extract Data with Missing Size Field (NoDrop) This section focuses on extracting key parameters like `src`, `size`, and `URL`, even when the `size` field may be absent from the log message. 
**Sumo Logic:** ```bash _sourceCategory=apache | parse "* " as src_IP | parse " 200 * " as size nodrop | parse "GET * " as url ``` **APL:** ```kusto ['sample-http-logs'] | where content_type == "text/css" | extend src_IP = extract("^(\\S+)", 1, ['id']) | extend size = toint(extract("(\\w+)", 1, status)) | extend url = extract("GET", 0, method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20content_type%20%3D%3D%20%5C%22text%2Fcss%5C%22%20%7C%20extend%20src_IP%20%3D%20extract\(%5C%22%5E\(%5C%5CS%2B\)%5C%22%2C%201%2C%20%5B%27id%27%5D\)%20%7C%20extend%20size%20%3D%20toint\(extract\(%5C%22\(%5C%5Cw%2B\)%5C%22%2C%201%2C%20status\)\)%20%7C%20extend%20url%20%3D%20extract\(%5C%22GET%5C%22%2C%200%2C%20method\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Count URL Visits This section is dedicated to identifying the frequency of visits to a specific URL. By counting these occurrences, we can gain insights into website popularity and user behavior. **Sumo Logic:** ```bash _sourceCategory=apache | parse "GET * " as url | count by url ``` **APL:** ```kusto ['sample-http-logs'] | extend url = extract("^(\\S+)", 1, method) | summarize Count = count() by url ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?qid=RsnK4jahgNC-rviz3s) ## Page Count by Source IP In this section, we will identify the total number of pages associated with each source IP address. This analysis will allow us to understand the volume of content generated or hosted by each source. 
**Sumo Logic:** ```bash _sourceCategory=apache | parse "* -" as src_ip | count by src_ip ``` **APL:** ```kusto ['sample-http-logs'] | extend src_ip = extract(".*", 0, ['id']) | summarize Count = count() by src_ip ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20%20\[%27id%27]\)\n|%20summarize%20Count%20=%20count\(\)%20by%20src_ip%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Reorder Pages by Load Frequency We aim to identify the total number of pages per source IP address in this section. Following this, the pages will be reordered based on the frequency of loads, which will provide insights into the most accessed content. **Sumo Logic:** ```bash _sourceCategory=apache | parse "* " as src_ip | parse "GET * " as url | count by url | sort by _count ``` **APL:** ```kusto ['sample-http-logs'] | extend src_ip = extract(".*", 0, ['id']) | extend url = extract("(GET)", 1, method) | where isnotnull(url) | summarize _count = count() by url, src_ip | order by _count desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20src_ip%20=%20extract\(%22.*%22,%200,%20\[%27id%27]\)\n|%20extend%20url%20=%20extract\(%22\(GET\)%22,%201,%20method\)\n|%20where%20isnotnull\(url\)\n|%20summarize%20_count%20=%20count\(\)%20by%20url,%20src_ip\n|%20order%20by%20_count%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Identify the top 10 requested pages. 
**Sumo Logic:** ```bash * | parse "GET * " as url | count by url | top 10 url by _count ``` **APL:** ```kusto ['sample-http-logs'] | where method == "GET" | top 10 by method desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22GET%22\n|%20top%2010%20by%20method%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}}) ## Top 10 IPs by Bandwidth Usage In this section, we aim to identify the top 10 source IP addresses based on their bandwidth consumption. **Sumo Logic:** ```bash _sourceCategory=apache | parse " 200 * " as size | parse "* -" as src_ip | sum(size) as total_bytes by src_ip | top 10 src_ip by total_bytes ``` **APL:** ```kusto ['sample-http-logs'] | extend size = req_duration_ms | summarize total_bytes = sum(size) by ['id'] | top 10 by total_bytes desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20size%20=%20req_duration_ms\n|%20summarize%20total_bytes%20=%20sum\(size\)%20by%20\[%27id%27]\n|%20top%2010%20by%20total_bytes%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Top 6 IPs by Number of Hits This section focuses on identifying the top six source IP addresses according to the number of hits they generate. This will provide insight into the most frequently accessed or active sources in the network. 
**Sumo Logic:** ```bash _sourceCategory=apache | parse "* -" as src_ip | count by src_ip | top 6 src_ip by _count ``` **APL:** ```kusto ['sample-http-logs'] | extend src_ip = extract("^(\\S+)", 1, user_agent) | summarize _count = count() by src_ip | top 6 by _count desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20_count%20=%20count\(\)%20by%20user_agent\n|%20order%20by%20_count%20desc\n|%20limit%206%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Timeslice and Transpose For the Source Category "apache", count by status\_code and timeslice of 1 hour. **Sumo Logic:** ```bash _sourceCategory=apache* | parse "HTTP/1.1\" * * \"" as (status_code, size) | timeslice 1h | count by _timeslice, status_code ``` **APL:** ```kusto ['sample-http-logs'] | extend status_code = extract("^(\\S+)", 1, method) | where status_code == "POST" | summarize count() by status_code, bin(_time, 1h) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%22POST%22\n|%20summarize%20count\(\)%20by%20method,%20bin\(_time,%201h\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Hourly Status Code Count for "Text" Source In this section, we aim to count instances by `status_code`, grouped into one-hour timeslices, and then transpose `status_code` to column format. This will help us understand the frequency and timing of different status codes.
**Sumo Logic:** ```bash _sourceCategory=text* | parse "HTTP/1.1\" * * \"" as (status_code, size) | timeslice 1h | count by _timeslice, status_code | transpose row _timeslice column status_code ``` **APL:** ```kusto ['sample-http-logs'] | where content_type startswith 'text/css' | extend status_code = status | summarize count() by bin(_time, 1h), content_type, status_code ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%201h\),%20content_type,%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Status Code Count in 5 Time Buckets In this example, we will perform a count by `status_code`, sliced into five time buckets across the search results. This will help analyze the distribution and frequency of status codes over specific time intervals. **Sumo Logic:** ```bash _sourceCategory=apache* | parse "HTTP/1.1\" * * \"" as (status_code, size) | timeslice 5 buckets | count by _timeslice, status_code ``` **APL:** ```kusto ['sample-http-logs'] | where content_type startswith 'text/css' | extend p = ("HTTP/1.1\" * * \""), tostring(is_tls) | extend status_code = status | summarize count() by bin(_time, 12m), status_code ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p=\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\),%20tostring\(is_tls\)\n|%20extend%20status_code%20=%20status\n|%20summarize%20count\(\)%20by%20bin\(_time,%2012m\),%20status_code%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}}) ## Grouped Status Code Count In this example, we will count messages by status code categories.
We will group all messages with status codes in the `200s`, `300s`, `400s`, and `500s` together. We also group the method requests by the `GET`, `POST`, `PUT`, and `DELETE` attributes. This will provide an overview of the response status distribution.

**Sumo Logic:**

```bash
_sourceCategory=Apache/Access
| timeslice 15m
| if (status_code matches "20*",1,0) as resp_200
| if (status_code matches "30*",1,0) as resp_300
| if (status_code matches "40*",1,0) as resp_400
| if (status_code matches "50*",1,0) as resp_500
| if (!(status_code matches "20*" or status_code matches "30*" or status_code matches "40*" or status_code matches "50*"),1,0) as resp_others
| count(*), sum(resp_200) as tot_200, sum(resp_300) as tot_300, sum(resp_400) as tot_400, sum(resp_500) as tot_500, sum(resp_others) as tot_others by _timeslice
```

**APL:**

```kusto
['sample-http-logs']
| extend MethodCategory = case(
    method == "GET", "GET Requests",
    method == "POST", "POST Requests",
    method == "PUT", "PUT Requests",
    method == "DELETE", "DELETE Requests",
    "Other Methods")
| extend StatusCodeCategory = case(
    status startswith "2", "Success",
    status startswith "3", "Redirection",
    status startswith "4", "Client Error",
    status startswith "5", "Server Error",
    "Unknown Status")
| extend ContentTypeCategory = case(
    content_type == "text/csv", "CSV",
    content_type == "application/json", "JSON",
    content_type == "text/html", "HTML",
    "Other Types")
| summarize Count=count() by bin_auto(_time), StatusCodeCategory, MethodCategory, ContentTypeCategory
```

[Run in
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20MethodCategory%20=%20case\(\n%20%20%20method%20==%20%22GET%22,%20%22GET%20Requests%22,\n%20%20%20method%20==%20%22POST%22,%20%22POST%20Requests%22,\n%20%20%20method%20==%20%22PUT%22,%20%22PUT%20Requests%22,\n%20%20%20method%20==%20%22DELETE%22,%20%22DELETE%20Requests%22,\n%20%20%20%22Other%20Methods%22\)\n|%20extend%20StatusCodeCategory%20=%20case\(\n%20%20%20status%20startswith%20%222%22,%20%22Success%22,\n%20%20%20status%20startswith%20%223%22,%20%22Redirection%22,\n%20%20%20status%20startswith%20%224%22,%20%22Client%20Error%22,\n%20%20%20status%20startswith%20%225%22,%20%22Server%20Error%22,\n%20%20%20%22Unknown%20Status%22\)\n|%20extend%20ContentTypeCategory%20=%20case\(\n%20%20%20content_type%20==%20%22text/csv%22,%20%22CSV%22,\n%20%20%20content_type%20==%20%22application/json%22,%20%22JSON%22,\n%20%20%20content_type%20==%20%22text/html%22,%20%22HTML%22,\n%20%20%20%22Other%20Types%22\)\n|%20summarize%20Count=count\(\)%20by%20bin_auto\(_time\),%20StatusCodeCategory,%20MethodCategory,%20ContentTypeCategory%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})

## Conditional Operators

For the Source Category "apache", find all messages with a client error status code (40\*):

**Sumo Logic:**

```bash
_sourceCategory=apache*
| parse "HTTP/1.1\" * * \"" as (status_code, size)
| where status_code matches "40*"
```

**APL:**

```kusto
['sample-http-logs']
| where content_type startswith 'text/css'
| extend p = ("HTTP/1.1\" * * \"")
| where status startswith "40"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20startswith%20%27text/css%27\n|%20extend%20p%20=%20\(%22HTTP/1.1\\%22%20*%20*%20\\%22%22\)\n|%20where%20status%20startswith%20%2240%22%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})

## Browser-based Hit Count

In this query example, we aim to count the
number of hits by browser. This analysis will provide insights into the different browsers used to access the source and their respective frequencies.

**Sumo Logic:**

```bash
_sourceCategory=Apache/Access
| extract "\"[A-Z]+ \S+ HTTP/[\d\.]+\" \S+ \S+ \S+ \"(?<agent>[^\"]+?)\""
| if (agent matches "*MSIE*",1,0) as ie
| if (agent matches "*Firefox*",1,0) as firefox
| if (agent matches "*Safari*",1,0) as safari
| if (agent matches "*Chrome*",1,0) as chrome
| sum(ie) as ie, sum(firefox) as firefox, sum(safari) as safari, sum(chrome) as chrome
```

**APL:**

```kusto
['sample-http-logs']
| extend ie = case(tolower(user_agent) contains "msie", 1, 0)
| extend firefox = case(tolower(user_agent) contains "firefox", 1, 0)
| extend safari = case(tolower(user_agent) contains "safari", 1, 0)
| extend chrome = case(tolower(user_agent) contains "chrome", 1, 0)
| summarize ie = sum(ie), firefox = sum(firefox), safari = sum(safari), chrome = sum(chrome)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20ie%20=%20case\(tolower\(user_agent\)%20contains%20%22msie%22,%201,%200\)\n|%20extend%20firefox%20=%20case\(tolower\(user_agent\)%20contains%20%22firefox%22,%201,%200\)\n|%20extend%20safari%20=%20case\(tolower\(user_agent\)%20contains%20%22safari%22,%201,%200\)\n|%20extend%20chrome%20=%20case\(tolower\(user_agent\)%20contains%20%22chrome%22,%201,%200\)\n|%20summarize%20ie%20=%20sum\(ie\),%20firefox%20=%20sum\(firefox\),%20safari%20=%20sum\(safari\),%20chrome%20=%20sum\(chrome\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})

## Use the where operator to match only weekend days
**Sumo Logic:**

```bash
*
| parse "day=*:" as day_of_week
| where day_of_week in ("Saturday","Sunday")
```

**APL:**

```kusto
['sample-http-logs']
| extend day_of_week = dayofweek(_time)
| where day_of_week == 1 or day_of_week == 0
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20day_of_week%20=%20dayofweek\(_time\)\n|%20where%20day_of_week%20==%201%20or%20day_of_week%20==%200%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})

## Extract Numeric Version Numbers

In this section, we will identify version numbers that match numeric values 2, 3, or 6. We will utilize the `num` operator to convert these strings into numerical format, facilitating easier analysis and comparison.

**Sumo Logic:**

```bash
*
| parse "Version=*." as number
| num(number)
| where number in (2,3,6)
```

**APL:**

```kusto
['sample-http-logs']
| extend p = (req_duration_ms)
| extend number = toint(p)
| where number in (2,3,6)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20p=%20\(req_duration_ms\)\n|%20extend%20number=toint\(p\)\n|%20where%20number%20in%20\(2,3,6\)%22,%22queryOptions%22:\{%22quickRange%22:%2290d%22}})

## Making the Leap: Transform Your Data Analytics with APL

As we’ve navigated through the process of migrating from Sumo Logic to APL, we hope you’ve found the insights valuable. The powerful capabilities of Axiom Processing Language are now within your reach, ready to empower your data analytics journey.

Ready to take the next step in your data analytics journey? Dive deeper into APL and discover how it can unlock even more potential in your data. Check out our APL [learning resources](/apl/guides/migrating-from-sql-to-apl) and [tutorials](/apl/tutorial) to become proficient in APL, and join our [community forums](http://axiom.co/discord) to engage with other APL users.
Together, we can redefine what’s possible in data analytics. Remember, the migration to APL is not just a change, it’s an upgrade. Embrace the change, because better data analytics await you.

Begin your APL journey today!

# Migrate from Splunk SPL to APL

This step-by-step guide provides a high-level mapping from Splunk SPL to APL.

Splunk and Axiom are powerful tools for log analysis and data exploration. The data explorer interface uses Axiom Processing Language (APL). There are some differences between the query languages for Splunk and Axiom. When transitioning from Splunk to APL, you will need to understand how to convert your Splunk SPL queries into APL.

## Basic Searching

Splunk uses a `search` command for basic searching, while in APL, simply specify the dataset name followed by a filter.

**Splunk:**

```bash
search index="myIndex" error
```

**APL:**

```kusto
['myDataset']
| where FieldName contains "error"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20contains%20%27GET%27%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Filtering

In Splunk, perform filtering using the `search` command, usually specifying field names and their desired values. In APL, perform filtering by using the `where` operator.

**Splunk:**

```bash
search index="myIndex" error
| stats count
```

**APL:**

```kusto
['myDataset']
| where fieldName contains "error"
| count
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20content_type%20contains%20%27text%27\n|%20count\n|%20limit%2010%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Aggregation

In Splunk, the `stats` command is used for aggregation. In APL, perform aggregation using the `summarize` operator.
**Splunk:**

```bash
search index="myIndex" | stats count by status
```

**APL:**

```kusto
['myDataset']
| summarize count() by status
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20summarize%20count\(\)%20by%20status%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Time Frames

In Splunk, select a time range for a search in the time picker on the search page. In APL, filter by a time range using the `where` operator on the `_time` field of the dataset.

**Splunk:**

```bash
search index="myIndex" earliest=-1d@d latest=now
```

**APL:**

```kusto
['myDataset']
| where _time >= ago(1d) and _time <= now()
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20_time%20%3E=%20ago\(1d\)%20and%20_time%20%3C=%20now\(\)%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Sorting

In Splunk, the `sort` command is used to order the results of a search. In APL, perform sorting by using the `sort by` operator.

**Splunk:**

```bash
search index="myIndex" | sort - content_type
```

**APL:**

```kusto
['myDataset']
| sort by content_type desc
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20sort%20by%20content_type%20desc%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Selecting Fields

In Splunk, use the `fields` command to specify which fields to include or exclude in the search results. In APL, use the `project` operator, `project-away` operator, or the `project-keep` operator to specify which fields to include in the query results.
**Splunk:**

```bash
index=main sourcetype=mySourceType | fields status, responseTime
```

**APL:**

```kusto
['myDataset']
| project status, responseTime
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20newStatus%20=%20status%20\n|%20project-away%20status%20%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Renaming Fields

In Splunk, rename fields using the `rename` command, while in APL rename fields using the `extend` and `project` operators. Here is the general syntax:

**Splunk:**

```bash
index="myIndex" sourcetype="mySourceType"
| rename oldFieldName AS newFieldName
```

**APL:**

```kusto
['myDataset']
| where method == "GET"
| extend new_field_name = content_type
| project-away content_type
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20where%20method%20==%20%27GET%27\n|%20extend%20new_field_name%20=%20content_type\n|%20project-away%20content_type%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Calculated Fields

In Splunk, use the `eval` command to create calculated fields based on the values of other fields, while in APL use the `extend` operator to create calculated fields based on the values of other fields.

**Splunk:**

```bash
search index="myIndex" | eval newField=field1+field2
```

**APL:**

```kusto
['myDataset']
| extend newField = field1 + field2
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=\{%22apl%22:%22\[%27sample-http-logs%27]\n|%20extend%20calculatedFields%20=%20req_duration_ms%20%2b%20resp_body_size_bytes%22,%22queryOptions%22:\{%22quickRange%22:%2230d%22}})

## Structure and Concepts

The following table compares concepts and data structures between Splunk and APL logs.
| Concept | Splunk | APL | Comment |
| ------- | ------ | --- | ------- |
| data caches | buckets | caching and retention policies | Controls the period and caching level for the data. This setting directly affects the performance of queries. |
| logical partition of data | index | dataset | Allows logical separation of the data. |
| structured event metadata | N/A | dataset | Splunk doesn’t expose the concept of metadata to the search language. APL logs have the concept of a dataset, which has fields and columns. Each event instance is mapped to a row. |
| data record | event | row | Terminology change only. |
| types | datatype | datatype | APL data types are more explicit because they are set on the fields. Both have the ability to work dynamically with data types and roughly equivalent sets of data types. |
| query and search | search | query | Concepts are essentially the same between APL and Splunk. |

## Functions

The following table specifies functions in APL that are equivalent to Splunk Functions.

| Splunk | APL |
| ------ | --- |
| strcat | strcat() |
| split | split() |
| if | iff() |
| tonumber | todouble(), tolong(), toint() |
| upper, lower | toupper(), tolower() |
| replace | replace\_string() or replace\_regex() |
| substr | substring() |
| tolower | tolower() |
| toupper | toupper() |
| match | matches regex |
| regex | matches regex **(in Splunk, regex is an operator.
In APL, it’s a relational operator.)** |
| searchmatch | == **(In Splunk, `searchmatch` allows searching the exact string.)** |
| random | rand(), rand(n) **(Splunk’s function returns a number between zero and 2^31 - 1. APL returns a number between 0.0 and 1.0, or if a parameter is provided, between 0 and n-1.)** |
| now | now() |

In Splunk, these functions are invoked with the `eval` operator. In APL, they are used as part of the `extend`, `project`, or `where` operators.

## Filter

APL log queries start from a tabular result set in which a filter is applied. In Splunk, filtering is the default operation on the current index. You may also use the `where` operator in Splunk, but we don’t recommend it.

| Product | Operator | Example |
| :------ | :--------- | :------------------------------------------------------------------------- |
| Splunk | **search** | Sample.Logs="330009.2" method="GET" \_indextime>-24h |
| APL | **where** | \['sample-http-logs']
\| where method == "GET" and \_time > ago(24h) | ## Get n events or rows for inspection APL log queries also support `take` as an alias to `limit`. In Splunk, if the results are ordered, `head` returns the first n results. In APL, `limit` isn’t ordered, but it returns the first n rows that are found. | Product | Operator | Example | | ------- | -------- | ---------------------------------------- | | Splunk | head | Sample.Logs=330009.2
\| head 100 | | APL | limit | \['sample-http-logs']
\| limit 100 | ## Get the first *n* events or rows ordered by a field or column For the bottom results, in Splunk, use `tail`. In APL, specify ordering direction by using `asc`. | Product | Operator | Example | | :------ | :------- | :------------------------------------------------------------------ | | Splunk | head | Sample.Logs="33009.2"
\| sort Event.Sequence
\| head 20 | | APL | top | \['sample-http-logs']
\| top 20 by method |

## Extend the result set with new fields or columns

Splunk has an `eval` function, but it’s not comparable to the `extend` operator in APL. Both the `eval` operator in Splunk and the `extend` operator in APL support only scalar functions and arithmetic operators.

| Product | Operator | Example |
| :------ | :------- | :------------------------------------------------------------------------------------ |
| Splunk | eval | Sample.Logs=330009.2
\| eval state=if(Data.Exception == "0", "success", "error") | | APL | extend | \['sample-http-logs']
\| extend Grade = iff(req\_duration\_ms >= 80, "A", "B") | ## Rename APL uses the `project` operator to rename a field. In the `project` operator, a query can take advantage of any indexes that are prebuilt for a field. Splunk has a `rename` operator that does the same. | Product | Operator | Example | | :------ | :------- | :-------------------------------------------------------------- | | Splunk | rename | Sample.Logs=330009.2
\| rename Date.Exception as exception | | APL | project | \['sample-http-logs']
\| project updated\_status = status | ## Format results and projection Splunk uses the `table` command to select which columns to include in the results. APL has a `project` operator that does the same and [more](/apl/tabular-operators/project-operator). | Product | Operator | Example | | :------ | :------- | :--------------------------------------------------- | | Splunk | table | Event.Rule=330009.2
\| table rule, state | | APL | project | \['sample-http-logs']
\| project status, method |

Splunk uses the `fields -` command to select which columns to exclude from the results. APL has a `project-away` operator that does the same.

| Product | Operator | Example |
| :------ | :--------------- | :-------------------------------------------------------------- |
| Splunk | **fields -** | Sample.Logs=330009.2
\| fields - quota, highest\_seller | | APL | **project-away** | \['sample-http-logs']
\| project-away method, status |

## Aggregation

See the [list of summarize aggregation functions](/apl/aggregation-function/statistical-functions) that are available.

| Splunk operator | Splunk example | APL operator | APL example |
| :-------------- | :------------------------------------------------------------- | :----------- | :----------------------------------------------------------------------- |
| **stats** | search (Rule=120502.\*)
\| stats count by OSEnv, Audience | summarize | \['sample-http-logs']
\| summarize count() by content\_type, status | ## Sort In Splunk, to sort in ascending order, you must use the `reverse` operator. APL also supports defining where to put nulls, either at the beginning or at the end. | Product | Operator | Example | | :------ | :------- | :------------------------------------------------------------- | | Splunk | sort | Sample.logs=120103
\| sort Data.Hresult
\| reverse | | APL | order by | \['sample-http-logs']
\| order by status desc |

Whether you’re just starting your transition or you’re in the thick of it, this guide can serve as a helpful roadmap to assist you in your journey from Splunk to Axiom Processing Language. Dive into the Axiom Processing Language, start converting your Splunk queries to APL, and explore the rich capabilities of the Query tab. Embrace the learning curve, and remember, every complex query you master is another step forward in your data analytics journey.

# Axiom Processing Language (APL)

APL is a query language that is perfect for getting deeper insights from your data. Whether logs or metrics, APL provides the flexibility to filter and summarize your data exactly the way you need it.

## Introduction

The Axiom Processing Language (APL) is a query language that is perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.

## Get started

Go to the Query tab and click one of your datasets to get started. The APL editor has full auto-completion, so you can poke around, or you can get a better understanding of all the features by using the reference menu to the left of this page.

## APL Query Structure

At a minimum, a query consists of a source data reference (the name of a dataset) and zero or more query operators applied in sequence. Individual operators are delimited using the pipe character (`|`).

An APL query has the following structure:

```kusto
DataSource
| operator ...
| operator ...
```

Where:

* DataSource is the name of the dataset you want to query
* Operator is a function that will be applied to the data

Let’s look at an example query.
```kusto
['github-issue-comment-event']
| extend bot = actor contains "-bot" or actor contains "[bot]"
| where bot == true
| summarize count() by bin_auto(_time), actor
```

The query above begins with a reference to a dataset called **github-issue-comment-event** and contains several operators, [extend](/apl/tabular-operators/extend-operator), [where](/apl/tabular-operators/where-operator), and [summarize](/apl/tabular-operators/summarize-operator), each separated by a pipe. The **extend** operator creates the **bot** column in the returned result and sets its values depending on the value of the **actor** column, the **where** operator filters the rows to those where **bot** is true, and the **summarize** operator produces a chart from the aggregation.

The most common kind of query statement is a tabular expression statement. Tabular statements contain operators, each of which starts with a tabular input and returns a tabular output.

* Explore the [tabular operators](/apl/tabular-operators/extend-operator) we support.
* Check out our [entity names and identifier naming rules](/apl/entities/entity-names).

Axiom Processing Language supplies a set of system [data types](/apl/data-types/scalar-data-types) that define all the types of [data](/apl/data-types/null-values) that can be used with Axiom Processing Language.

# Set statement

The `set` statement is used to set a query option in your APL query. Options enabled with the `set` statement only have effect for the duration of the query, and they affect how your query is processed and the returned results.

## Syntax

```kusto
set OptionName=OptionValue
```

## Strict types

The `stricttypes` query option lets you specify only the exact data type declared in your query; otherwise, the query fails with a **QueryFailed** error.
## Example

```kusto
set stricttypes;
['Dataset']
| where number == 5
```

# Special field attributes

This page explains how to implement special fields within APL queries to enhance the functionality and interactivity of datasets. Use these fields in APL queries to add unique behaviors to the Axiom user interface.

## Add link to table

* Name: `_row_url`
* Type: string
* Description: Define the URL to which the entire table links.
* APL query example: `extend _row_url = 'https://axiom.co/'`
* Expected behavior: Make rows clickable. When clicked, go to the specified URL.

If you specify a static string as the URL, all rows link to that page. To specify a different URL for each row, use a dynamic expression like `extend _row_url = strcat('https://axiom.co/', uri)` where `uri` is a field in your data.

## Add link to values in a field

* Name: `_FIELDNAME_url`
* Type: string
* Description: Define a URL to which values in a field link.
* APL query example: `extend _website_url = 'https://axiom.co/'`
* Expected behavior: Make values in the `website` field clickable. When clicked, go to the specified URL.

Replace `FIELDNAME` with the actual name of the field.

## Add tooltip to values in a field

* Name: `_FIELDNAME_tooltip`
* Type: string
* Description: Define text to be displayed when hovering over values in a field.
* APL query example: `extend _errors_tooltip = 'Number of errors'`
* Expected behavior: Display a tooltip with the specified text when the user hovers over values in a field.

Replace `FIELDNAME` with the actual name of the field.

## Add description to values in a field

* Name: `_FIELDNAME_description`
* Type: string
* Description: Define additional information to be displayed under the values in a field.
* APL query example: `extend _diskusage_description = 'Current disk usage'`
* Expected behavior: Display additional text under the values in a field for more context.

Replace `FIELDNAME` with the actual name of the field.
## Example

The example APL query below adds a tooltip and a description to the values of the `status` field. Clicking one of the values in this field leads to a page about status codes.

```apl
['http-logs']
| extend _status_tooltip = "The status of the HTTP request is the response code from the server. It shows if an HTTP request has been successfully completed."
| extend _status_description = "This is the status of the HTTP request."
| extend _status_url = "https://developer.mozilla.org/en-US/docs/Web/HTTP/Status"
```

# Array functions

Learn how to use and combine different array functions in APL

## Array Functions

* Most of the array functions are used with the `dynamic` data type. A dynamic array lets you add or remove elements, allocates memory at run time, and can be modified with any array function.
* In APL, a dynamic array expands as you add more objects, so you don’t need to determine its size ahead of time. This also lets you write functions that are reusable.

| **Function Name** | **Description** |
| ----------------- | --------------- |
| [array\_concat()](#array-concat) | Concatenates a number of dynamic arrays to a single array. |
| [array\_iff()](#array-iff) | Returns a new array containing elements from the input array that satisfy the condition. |
| [array\_index\_of()](#array-index-of) | Searches the array for the specified item, and returns its position. |
| [array\_length()](#array-length) | Calculates the number of elements in a dynamic array. |
| [array\_reverse()](#array-reverse) | Reverses the order of the elements in a dynamic array. |
| [array\_rotate\_left](#array-rotate-left) | Rotates values inside a `dynamic` array to the left.
|
| [array\_rotate\_right](#array-rotate-right) | Rotates values inside a `dynamic` array to the right. |
| [array\_select\_dict()](#array-select-dict) | Selects a dictionary from an array of dictionaries. |
| [array\_shift\_left()](#array-shift-left) | Shifts the values inside a `dynamic` array to the left. |
| [array\_shift\_right()](#array-shift-right) | Shifts values inside a `dynamic` array to the right. |
| [array\_slice()](#array-slice) | Extracts a slice of a dynamic array. |
| [array\_split()](#array-split) | Splits an array to multiple arrays according to the split indices and packs the generated array in a dynamic array. |
| [array\_sum()](#array-sum) | Calculates the sum of elements in a dynamic array. |
| [isarray()](#isarray) | Checks whether a value is an array. |
| [pack\_array()](#pack-array) | Packs all input values into a dynamic array. |

Each argument is marked as `required` or `optional`:

* If it’s marked `required`, you must pass that argument for the function to work.
* If it’s marked `optional`, the function works without passing the argument value.

## array\_concat()

Concatenates a number of dynamic arrays to a single array.

### Arguments

| **Name** | **Type** | **Required or Optional** | **Description** |
| -------- | -------- | ------------------------ | --------------- |
| arr1...arrN | dynamic | Required | Input arrays to be concatenated into a dynamic array. All arguments must be dynamic arrays. |

### Returns

Dynamic array of arrays with arr1, arr2, ... , arrN.

### Examples

```kusto
array_concat(array ...)
``` ```kusto ['github-issues-event'] | extend array1 = dynamic([{"name": "status", "color": "ededed", "description": ""}]) | extend array2 = dynamic([{"name": "puffer-ai-hpc-portal", "color": "ededed", "description": ""}]) | extend concatenate = array_concat(array1, array2) | project concatenate ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20array1%20%3D%20dynamic%28%5B%7B%27name%27%3A%20%27status%27%2C%20%27color%27%3A%20%27ededed%27%2C%20%27description%27%3A%20%27%27%7D%5D%29%5Cn%7C%20extend%20array2%20%3D%20dynamic%28%5B%7B%27name%27%3A%20%27puffer-ai-hpc-portal%27%2C%20%27color%27%3A%20%27ededed%27%2C%20%27description%27%3A%20%27%27%7D%5D%29%5Cn%7C%20extend%20concatenate%20%3D%20array_concat%28array1%2C%20array2%29%5Cn%7C%20project%20concatenate%22%7D) * Result ```json { "concatenate": [ { "name": "status", "description": "", "color": "ededed" }, { "name": "puffer-ai-hpc-portal", "description": "", "color": "ededed" } ] } ``` ```kusto ['github-issues-event'] | extend array1 = dynamic([{"app": "App1", "status": "running", "method": "GET"}]) | extend array2 = dynamic([{"app": "App2", "status": "stopped", "method": "POST"}]) | extend concatenatedLogs = array_concat(array1, array2) | project concatenatedLogs ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20array1%20%3D%20dynamic%28%5B%7B%27app%27%3A%20%27App1%27%2C%20%27status%27%3A%20%27running%27%2C%20%27method%27%3A%20%27GET%27%7D%5D%29%5Cn%7C%20extend%20array2%20%3D%20dynamic%28%5B%7B%27app%27%3A%20%27App2%27%2C%20%27status%27%3A%20%27stopped%27%2C%20%27method%27%3A%20%27POST%27%7D%5D%29%5Cn%7C%20extend%20concatenatedLogs%20%3D%20array_concat%28array1%2C%20array2%29%5Cn%7C%20project%20concatenatedLogs%22%7D) * Result ```json { "concatenatedLogs": [ { "app": "App1", "status": "running", "method": "GET" }, { 
"app": "App2", "status": "stopped", "method": "POST" } ] } ``` ## array\_iff() Returns a new array containing elements from the input array that satisfy the condition. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ---------------- | -------- | ------------------------ | ---------------------------------------------------------------------------------------------------- | | condition\_array | dynamic | Required | Input array of boolean or numeric values | | when\_true | | Required | Input array of values - the result value(s) when the corresponding value of ConditionArray is true. | | when\_false | | Required | Input array of values - the result value(s) when the corresponding value of ConditionArray is false. | ### Returns Dynamic array of the values taken either from the when\_true or when\_false \[array] values, according to the corresponding value of the Condition array. ### Examples ```kusto array_iff(Condition_array, when_true, when_false) ``` ```kusto ['github-issues-event'] | extend return_array = array_iff(dynamic([true,false,true]), dynamic([4,2,1]), dynamic([7,8,4])) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20return_array%20%3D%20array_iff%28dynamic%28%5Btrue%2Cfalse%2Ctrue%5D%29%2C%20dynamic%28%5B4%2C2%2C1%5D%29%2C%20dynamic%28%5B7%2C8%2C4%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "return_array": [4, 8, 1] } ``` ## array\_index\_of() Searches the array for the specified item, and returns its position. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ------------- | --------------------------------------- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------- | | array | array | Required | Input array to search. 
| | lookup\_value | scalar | Required | The value to search for. The value should be of type `long`, `integer`, `double`, `datetime`, `timespan`, or `string`. | | start\_index | number | Optional | Search start position. A negative value offsets the starting search position from the end of the array by `abs(start_index)` steps. | | length | number | Optional | Number of values to examine. A value of `-1` means unlimited length. | | occurrence | number | Optional | The number of the occurrence to return. The default is 1. | ### Returns Zero-based index position of the lookup value. Returns `-1` if the value isn’t found in the array. Returns null for irrelevant inputs (occurrence \< 0 or length \< -1). ### Examples ```kusto array_index_of(array, value, [start], [length], [occurrence]) ``` ```kusto ['github-issues-event'] | extend index_of_array = array_index_of(dynamic(["this", "is", "an", "example", "an", "example"]), "pn") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?qid=ftntZpvUx5N-s2e5ft) * Result ```json { "index_of_array": -1 } ``` ## array\_length() Calculates the number of elements in a dynamic array. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | A dynamic value | ### Returns The number of elements in array, or null if array is not an array. ### Examples ```kusto array_length(array) ``` ```kusto ['github-issues-event'] | project return_length = array_length(labels) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20return_length%20%3D%20array_length%28labels%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "return_length": 2 } ``` ## array\_reverse() Reverses the order of the elements in a dynamic array. ### Arguments * array: Input array to reverse.
### Returns An array that contains exactly the same elements as the input array, but in reverse order. ### Examples ```kusto array_reverse(array) ``` ```kusto ['github-issues-event'] | project reversed_array = array_reverse(labels) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20reversed_array%20%3D%20array_reverse%28labels%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "reversed_array": [ { "name": "axiom", "color": "d73a4a", "description": "Axiom observability data" } ] } ``` ## array\_rotate\_left() Rotates values inside a `dynamic` array to the left. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ------------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array to rotate. Must be a dynamic array. | | rotate\_count | integer | Required | Number of positions that array elements will be rotated to the left. If the value is negative, the elements will be rotated to the right. | ### Returns Dynamic array containing the same number of elements as the original array, where each element was rotated according to rotate\_count. ### Examples ```kusto array_rotate_left(array, rotate_count) ``` ```kusto ['github-issues-event'] | project rotate_array_left = array_rotate_left(dynamic([1,2,3,4,5]), 1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20rotate_array_left%20%3D%20array_rotate_left%28dynamic%28%5B1%2C2%2C3%2C4%2C5%5D%29%2C%201%29%22%7D) * Result ```json { "rotate_array_left": [2, 3, 4, 5, 1] } ``` ## array\_rotate\_right() Rotates values inside a `dynamic` array to the right.
### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ------------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array to rotate. Must be a dynamic array. | | rotate\_count | integer | Required | Number of positions that array elements will be rotated to the right. If the value is negative, the elements will be rotated to the left. | ### Returns Dynamic array containing the same number of elements as the original array, where each element was rotated according to rotate\_count. ### Examples ```kusto array_rotate_right(array, rotate_count) ``` ```kusto ['github-issues-event'] | project rotate_array_right = array_rotate_right(dynamic([1,2,3,4,5]), 1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20rotate_array_right%20%3D%20array_rotate_right%28dynamic%28%5B1%2C2%2C3%2C4%2C5%5D%29%2C%201%29%22%7D) * Result ```json { "rotate_array_right": [5, 1, 2, 3, 4] } ``` ## array\_select\_dict() Selects a dictionary from an array of dictionaries. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array of dictionaries. Must be a dynamic array. | | key | string | Required | Key to use for selection in the dictionaries. | | value | scalar | Required | Value of the selected key to create a match. | ### Returns The first dictionary in the array whose key and value match the parameters, or null if there is no match. If a value in the array is not a dictionary, it will be ignored.
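The selection rule above can be modeled in Python. This is an illustrative sketch, not APL; the `select_dict` helper name and the plain list-of-dicts input are assumptions made for the sketch.

```python
# Illustrative Python model of the array_select_dict() selection rule:
# return the first dictionary whose key maps to the given value,
# skipping any array element that is not a dictionary.
def select_dict(array, key, value):
    for item in array:
        if isinstance(item, dict) and item.get(key) == value:
            return item
    return None  # no match found


labels = [42, {"key": 5, "extra": "data"}, {"key": 6, "extra": "other_data"}]
print(select_dict(labels, "key", 5))  # the non-dict element 42 is ignored
print(select_dict(labels, "key", 7))  # no match, so None
```

Only the first match is returned, even if later dictionaries would also match.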
### Examples ```kusto array_select_dict(array, "search key", "matching value") ``` ```kusto datatable(a: dynamic)[dynamic([{"key": 5, "extra": "data"}, {"key": 6, "extra": "other_data"}])] | project selected = array_select_dict(a, "key", 5) ``` * Result ```json { "selected": { "key": 5, "extra": "data" } } ``` ## array\_shift\_left() Shifts the values inside a `dynamic` array to the left. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array to shift. Must be a dynamic array. | | shift\_count | integer | Required | Number of positions that array elements will be shifted to the left. If the value is negative, the elements will be shifted to the right. | | default\_value | scalar | Optional | Value used for inserting elements instead of the ones that were shifted and removed. The default is null or an empty string depending on the array type. | ### Returns Dynamic array containing the same number of elements as in the original array. Each element has been shifted according to shift\_count. ### Examples ```kusto array_shift_left(array, shift_count [, default_value ]) ``` ```kusto ['github-issues-event'] | project shift_array_left = array_shift_left(dynamic([1,2,3,4,5]), 1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20shift_array_left%20%3D%20array_shift_left%28dynamic%28%5B1%2C2%2C3%2C4%2C5%5D%29%2C%201%29%22%7D) * Result ```json { "shift_array_left": [2, 3, 4, 5, null] } ``` ## array\_shift\_right() Shifts the values inside a `dynamic` array to the right.
### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array to shift. Must be a dynamic array. | | shift\_count | integer | Required | Number of positions that array elements will be shifted to the right. If the value is negative, the elements will be shifted to the left. | | default\_value | scalar | Optional | Value used for inserting elements instead of the ones that were shifted and removed. The default is null or an empty string depending on the array type. | ### Returns Dynamic array containing the same number of elements as in the original array. Each element has been shifted according to shift\_count. New elements that are added instead of the removed elements will have a value of default\_value. ### Examples ```kusto array_shift_right(array, shift_count [, default_value ]) ``` ```kusto ['github-issues-event'] | project shift_array_right = array_shift_right(dynamic([1,2,3,4,5]), 1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20shift_array_right%20%3D%20array_shift_right%28dynamic%28%5B1%2C2%2C3%2C4%2C5%5D%29%2C%201%29%22%7D) * Result ```json { "shift_array_right": [null, 1, 2, 3, 4] } ``` ## array\_slice() Extracts a slice of a dynamic array. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Input array to extract the slice from. | | start | number | Required | Start index of the slice (inclusive). Negative values are converted to `array_length`+`start`.
| | end | number | Required | Last index of the slice (inclusive). Negative values are converted to `array_length`+`end`. | ### Returns Dynamic array of the values in the range \[start..end] from array. ### Example ```kusto array_slice(array, start, end) ``` ```kusto ['github-issues-event'] | project slice_array = array_slice(dynamic([1,2,3]), 1, 2) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20slice_array%20%3D%20array_slice%28dynamic%28%5B1%2C2%2C3%5D%29%2C%201%2C%202%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "slice_array": [2] } ``` ## array\_split() Splits an array into multiple arrays according to the split indices and packs the generated arrays into a dynamic array. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | Array to split. | | indices | integer | Required | Split indices (zero based). This can be a single integer or a dynamic array of integers. Negative values are converted to `array_length` + `value`. | ### Returns Dynamic array containing `N+1` arrays, where **N** is the number of input indices. The arrays hold the values of array in the ranges `[0..indices[0])`, `[indices[0]..indices[1])`, …, `[indices[N-1]..array_length)`.
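The split rule described above can be sketched in Python. This is an illustrative model of the documented semantics, not APL itself; the `array_split` helper name and plain Python lists are assumptions made for the sketch.

```python
# Illustrative Python model of array_split() semantics:
# split at each zero-based index; negative indices count from the end,
# and a single integer is treated like a one-element index list.
def array_split(array, indices):
    if isinstance(indices, int):
        indices = [indices]
    bounds = [i if i >= 0 else len(array) + i for i in indices]
    parts, start = [], 0
    for b in bounds:
        parts.append(array[start:b])
        start = b
    parts.append(array[start:])  # the final range runs to the end of the array
    return parts


print(array_split([1, 2, 3, 4, 5], 4))       # [[1, 2, 3, 4], [5]]
print(array_split([1, 2, 3, 4, 5], [1, 3]))  # [[1], [2, 3], [4, 5]]
print(array_split([1, 2, 3, 4, 5], -2))      # [[1, 2, 3], [4, 5]]
```

The first call mirrors the documented example below; the others show an index list and a negative index.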
### Examples ```kusto array_split(array, indices) ``` ```kusto ['github-issues-event'] | project split_array = array_split(dynamic([1,2,3,4,5]), 4) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20split_array%20%3D%20array_split%28dynamic%28%5B1%2C2%2C3%2C4%2C5%5D%29%2C%204%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "split_array": [[1, 2, 3, 4], [5]] } ``` ## array\_sum() Calculates the sum of elements in a dynamic array. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | array | dynamic | Required | The input array to sum. | ### Returns A `double` value with the sum of the elements of the array. ### Examples ```kusto array_sum(array) ``` ```kusto ['github-issues-event'] | project sum_array = array_sum(dynamic([2,5,6])) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20sum_array%20%3D%20array_sum%28dynamic%28%5B2%2C5%2C6%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## isarray() Returns a Boolean value indicating whether the passed value is an array. ### Arguments * Expression: The input value passed to the function. ### Returns Returns `True` if the expression is an array; otherwise, it returns `False`.
### Examples ```kusto isarray(expression) ``` ```kusto ['github-issues-event'] | project is_array = isarray( ['milestone.creator'] ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20%20is_array%20%3D%20isarray%28%20%5B%27milestone.creator%27%5D%20%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "is_array": false } ``` ## pack\_array() Packs all input values into a dynamic array. ### Arguments * Expression: Input expression value to be packed into a dynamic array. ### Returns Dynamic array which includes the values of Expr1, Expr2, ... , ExprN. ### Examples ```kusto pack_array(value, ...) ``` ```kusto ['github-issues-event'] | project packed_array = pack_array( creator, ['milestone.creator'] ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20packed_array%20%3D%20pack_array%28%20creator%2C%20%5B%27milestone.creator%27%5D%20%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "packed_array": ["axiomhq", "next-axiom"] } ``` # Conditional functions Learn how to use and combine different conditional functions in APL ## Conditional functions | **Function Name** | **Description** | | ----------------- | ----------------------------------------------------------------------------------------------------------- | | [case()](#case) | Evaluates a list of conditions and returns the first result expression whose condition is satisfied. | | [iff()](#iff) | Evaluates the first argument (the predicate), and returns the value of either the second or third arguments | ## case() Evaluates a list of conditions and returns the first result whose condition is satisfied. ### Arguments * condition: An expression that evaluates to a Boolean. 
* result: An expression that Axiom evaluates and returns the value if its condition is the first that evaluates to true. * nothingMatchedResult: An expression that Axiom evaluates and returns the value if none of the conditional expressions evaluates to true. ### Returns Axiom returns the value of the first result whose condition evaluates to true. If none of the conditions is satisfied, Axiom returns the value of `nothingMatchedResult`. ### Example ```kusto case(condition1, result1, condition2, result2, condition3, result3, ..., nothingMatchedResult) ``` ```kusto ['sample-http-logs'] | extend status_human_readable = case( status_int == 200, 'OK', status_int == 201, 'Created', status_int == 301, 'Moved Permanently', status_int == 500, 'Internal Server Error', 'Other' ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20status_code%20%3D%20case\(status_int%20%3D%3D%20200%2C%20'OK'%2C%20status_int%20%3D%3D%20201%2C%20'Created'%2C%20status_int%20%3D%3D%20301%2C%20'Moved%20Permanently'%2C%20status_int%20%3D%3D%20500%2C%20'Internal%20Server%20Error'%2C%20'Other'\)%22%7D) ## iff() Evaluates the first argument (the predicate), and returns the value of either the second or third arguments. The second and third arguments must be of the same type. ### Arguments * predicate: An expression that evaluates to a boolean value. * ifTrue: An expression that gets evaluated and its value returned from the function if predicate evaluates to `true`. * ifFalse: An expression that gets evaluated and its value returned from the function if predicate evaluates to `false`. ### Returns This function returns the value of ifTrue if predicate evaluates to true, or the value of ifFalse otherwise. 
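The evaluation rules of `case()` and `iff()` can be modeled in Python. These are illustrative sketches, not APL: the helper names are assumptions, and Python evaluates every argument up front, which glosses over any lazy evaluation the real engine may perform.

```python
# Illustrative model of case(): conditions and results alternate, with a
# final fallback; the first satisfied condition wins.
def case(*args):
    *pairs, nothing_matched_result = args
    for condition, result in zip(pairs[0::2], pairs[1::2]):
        if condition:
            return result  # first match wins; later pairs are never reached
    return nothing_matched_result


# Illustrative model of iff(): a plain ternary on the predicate.
def iff(predicate, if_true, if_false):
    return if_true if predicate else if_false


status = 301
print(case(status == 200, "OK", status == 301, "Moved Permanently", "Other"))
print(iff(status == 200, "OK", "Not OK"))
```

The first call returns "Moved Permanently" because the second condition is the first to hold; the fallback is only used when no condition matches.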
### Examples ```kusto iff(predicate, ifTrue, ifFalse) ``` ```kusto ['sample-http-logs'] | project Status = iff(req_duration_ms == 1, "numeric", "Inactive") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20Status%20%3D%20iff%28req_duration_ms%20%3D%3D%201%2C%20%5C%22numeric%5C%22%2C%20%5C%22Inactive%5C%22%29%22%7D) # Conversion functions Learn how to use and combine different conversion functions in APL ## Conversion functions | **Function Name** | **Description** | | --------------------------------------------- | --------------- | | [ensure\_field()](#ensure-field) | Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist. | | [tobool()](#tobool) | Converts input to boolean (signed 8-bit) representation. | | [todatetime()](#todatetime) | Converts input to datetime scalar. | | [todouble(), toreal()](#todouble\(\),-toreal) | Converts the input to a value of type `real`. `todouble()` and `toreal()` are synonyms. | | [tostring()](#tostring) | Converts input to a string representation. | | [totimespan()](#totimespan) | Converts input to timespan scalar. | | [tohex()](#tohex) | Converts input to a hexadecimal string. | | [tolong()](#tolong) | Converts input to long (signed 64-bit) number representation. | | [dynamic\_to\_json()](#dynamic-to-json) | Converts a scalar value of type dynamic to a canonical string representation. | | [isbool()](#isbool) | Returns `true` if the expression value is a boolean, `false` otherwise. | | [toint()](#toint) | Converts the input to an integer value (signed 64-bit) number representation. | ## ensure\_field() Ensures the existence of a field and returns its value or a typed nil if it doesn’t exist.
### Arguments | **name** | **type** | **description** | | ----------- | -------- | ------------------------------------------------------------------------------------------------------ | | field\_name | string | The name of the field to ensure exists. | | field\_type | type | The type of the field. See [scalar data types](/apl/data-types/scalar-data-types) for supported types. | ### Returns This function returns the value of the specified field if it exists, otherwise it returns a typed nil. ### Examples ```kusto ensure_field(field_name, field_type) ``` ### Handle missing fields In this example, the value of `show_field` is nil because the `myfield` field doesn’t exist. ```kusto ['sample-http-logs'] | extend show_field = ensure_field("myfield", typeof(string)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20show_field%20%3D%20ensure_field%28%27myfield%27%2C%20typeof%28string%29%29%22%7D) ### Access existing fields In this example, the value of `newstatus` is the value of `status` because the `status` field exists. ```kusto ['sample-http-logs'] | extend newstatus = ensure_field("status", typeof(string)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20newstatus%20%3D%20ensure_field%28%27status%27%2C%20typeof%28string%29%29%22%7D) ### Future-proof queries In this example, the query is prepared for a field named `upcoming_field` that is expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available. ```kusto ['sample-http-logs'] | extend new_field = ensure_field("upcoming_field", typeof(int)) | where new_field > 100 ``` ## tobool() Converts input to boolean (signed 8-bit) representation. ### Arguments * Expr: Expression that will be converted to boolean. 
### Returns * If conversion is successful, result will be a boolean. If conversion isn’t successful, result will be `false`. ### Examples ```kusto tobool(Expr) toboolean(Expr) (alias) ``` ```kusto tobool("true") == true ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tobool%28%5C%22true%5C%22%29%20%3D%3D%20true%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": true } ``` ## todatetime() Converts input to datetime scalar. ### Arguments * Expr: Expression that will be converted to datetime. ### Returns If the conversion is successful, the result will be a datetime value. Otherwise, the result will be `false`. ### Examples ```kusto todatetime(Expr) ``` ```kusto todatetime("2022-11-13") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20todatetime%28%5C%222022-11-13%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "conversion_function": "2022-11-13T00:00:00Z" } ``` ## todouble(), toreal() Converts the input to a value of type real. `todouble()` and `toreal()` are synonyms. ### Arguments * Expr: An expression whose value will be converted to a value of type `real`. ### Returns If conversion is successful, the result is a value of type real. If conversion isn’t successful, the result is `false`.
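The success/failure contract described above can be sketched in Python. This is an illustrative model only; the `to_real` helper name is an assumption, and returning `False` on failure mirrors the behavior the documentation describes, not a real APL implementation.

```python
# Illustrative model of the toreal()/todouble() conversion rule:
# a successful parse yields a real number, a failed parse yields false.
def to_real(expr):
    try:
        return float(expr)
    except (TypeError, ValueError):
        return False


print(to_real("1567"))   # 1567.0
print(to_real("axiom"))  # False
```

The same pattern applies to the other conversions above that are documented as returning `false` on failure.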
### Examples ```kusto toreal(Expr) todouble(Expr) (alias) ``` ```kusto toreal("1567") == 1567 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20toreal%28%5C%221567%5C%22%29%20%3D%3D%201567%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": true } ``` ## tostring() Converts input to a string representation. ### Arguments * `Expr`: Expression that will be converted to string. ### Returns If the Expression value is non-null, the result will be a string representation of the Expression. If the Expression value is null, the result will be an empty string. ### Examples ```kusto tostring(Expr) ``` ```kusto tostring("axiom") == "axiom" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tostring%28%5C%22axiom%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": "axiom" } ``` ## totimespan() Converts input to timespan scalar. ### Arguments * `Expr`: Expression that will be converted to timespan. ### Returns If conversion is successful, result will be a timespan value. Otherwise, the result will be `false`. ### Examples ```kusto totimespan(Expr) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20totimespan%282022-11-13%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "conversion_function": "1.998µs" } ``` ## tohex() Converts input to a hexadecimal string. ### Arguments * Expr: int or long value that will be converted to a hex string. Other types are not supported. ### Returns If conversion is successful, result will be a string value.
If conversion is not successful, result will be false. ### Examples ```kusto tohex(value) ``` ```kusto tohex(-546) == 'fffffffffffffdde' ``` ```kusto tohex(546) == '222' ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tohex%28-546%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": "fffffffffffffdde" } ``` ## tolong() Converts input to long (signed 64-bit) number representation. ### Arguments * Expr: Expression that will be converted to long. ### Returns If conversion is successful, result will be a long number. If conversion is not successful, result will be false. ### Examples ```kusto tolong(Expr) ``` ```kusto tolong("241") == 241 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20tolong%28%5C%22241%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": 241 } ``` ## dynamic\_to\_json() Converts a scalar value of type `dynamic` to a canonical `string` representation. ### Arguments * Expr: The `dynamic` input value. The function accepts one argument. ### Returns Returns a canonical representation of the input as a value of type `string`, according to the following rules: * If the input is a scalar value of type other than `dynamic`, the output is the application of `tostring()` to that value. * If the input is an array of values, the output is composed of the characters `[`, `,`, and `]` interspersed with the canonical representation of each array element. * If the input is a property bag, the output is composed of the characters `{`, `,`, and `}` interspersed with the colon (:)-delimited name/value pairs of the properties.
The pairs are sorted by name, and the values are in the canonical representation described here. ### Examples ```kusto dynamic_to_json(dynamic) ``` ```kusto ['sample-http-logs'] | project conversion_function = dynamic_to_json(dynamic([1,2,3])) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20dynamic_to_json%28dynamic%28%5B1%2C2%2C3%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": "[1,2,3]" } ``` ## isbool() Returns `true` if the expression value is a boolean, `false` otherwise. ### Arguments * Expr: The expression to evaluate. The function accepts one argument. ### Returns Returns `true` if the expression value is a bool, `false` otherwise. ### Examples ```kusto isbool(expression) ``` ```kusto isbool("pow") == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20conversion_function%20%3D%20isbool%28%5C%22pow%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "conversion_function": false } ``` *** ## toint() Converts the input to an integer value (signed 64-bit) number representation. ### Arguments * Value: The value to convert to an [integer](/apl/data-types/scalar-data-types#the-int-data-type). ### Returns If the conversion is successful, the result will be an integer. Otherwise, the result will be `null`.
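Note the contrast with the conversions above, which are documented as returning `false` on failure: `toint()` is described as returning null instead. A Python sketch of that contract (illustrative only; the `to_int` helper name is an assumption):

```python
# Illustrative model of toint(): an integer on success, null (None) on failure.
def to_int(value):
    try:
        return int(value)
    except (TypeError, ValueError):
        return None


print(to_int("456"))    # 456
print(to_int("axiom"))  # None
```

Treating the failure value as null rather than `false` matters when the result feeds a later comparison or filter.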
### Examples ```kusto toint(value) ``` ```kusto ['sample-http-logs'] | project toint("456") == 456 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20toint%28%5C%22456%5C%22%29%20%3D%3D%20456%22%7D) # Datetime functions Learn how to use and combine different datetime functions in APL ## Datetime/Timespan functions | **Function Name** | **Description** | | ---------------------------------- | --------------- | | [ago()](#ago) | Subtracts the given timespan from the current UTC clock time. | | [datetime\_add()](#datetime-add) | Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime. | | [datetime\_part()](#datetime-part) | Extracts the requested date part as an integer value. | | [datetime\_diff()](#datetime-diff) | Calculates calendarian difference between two datetime values. | | [dayofmonth()](#dayofmonth) | Returns the integer number representing the day number of the given month. | | [dayofweek()](#dayofweek) | Returns the integer number of days since the preceding Sunday, as a timespan. | | [dayofyear()](#dayofyear) | Returns the integer number representing the day number of the given year. | | [endofyear()](#endofyear) | Returns the end of the year containing the date. | | [getmonth()](#getmonth) | Get the month number (1-12) from a datetime. | | [getyear()](#getyear) | Returns the year part of the `datetime` argument. | | [hourofday()](#hourofday) | Returns the integer number representing the hour number of the given date. | | [endofday()](#endofday) | Returns the end of the day containing the date. | | [now()](#now) | Returns the current UTC clock time, optionally offset by a given timespan.
| | [endofmonth()](#endofmonth) | Returns the end of the month containing the date. | | [endofweek()](#endofweek) | Returns the end of the week containing the date. | | [monthofyear()](#monthofyear) | Returns the integer number representing the month number of the given date. | | [startofday()](#startofday) | Returns the start of the day containing the date. | | [startofmonth()](#startofmonth) | Returns the start of the month containing the date. | | [startofweek()](#startofweek) | Returns the start of the week containing the date. | | [startofyear()](#startofyear) | Returns the start of the year containing the date. | * We support the ISO 8601 format, which is the standard format for representing dates and times in the Gregorian calendar. [Check them out here](/apl/data-types/scalar-data-types#supported-formats) ## ago() Subtracts the given timespan from the current UTC clock time. ### Arguments * Interval to subtract from the current UTC clock time ### Returns `now() - a_timespan` ### Example ```kusto ago(6h) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%286h%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "date_time_functions": "2023-09-11T20:12:39Z" } ``` ```kusto ago(3d) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20date_time_functions%20%3D%20ago%283d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "date_time_functions": "2023-09-09T02:13:29Z" } ``` ## datetime\_add() Calculates a new datetime from a specified datepart multiplied by a specified amount, added to a specified datetime. ### Arguments * period: string. * amount: integer. * datetime: datetime value. ### Returns A date after a certain time/date interval has been added.
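Calendar-aware addition of a `month` period can be sketched in Python. This is an illustrative model, not APL: the `add_months` helper is an assumption, and the day-clamping rule at month ends (Jan 31 + 1 month landing on the last day of February) is an assumed convention rather than documented APL behavior.

```python
from datetime import date
import calendar

# Illustrative model of datetime_add("month", amount, dt):
# advance the month index, then clamp the day to the target month's length.
def add_months(d, amount):
    months = d.year * 12 + (d.month - 1) + amount
    year, month = divmod(months, 12)
    month += 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


print(add_months(date(2016, 10, 6), 1))   # 2016-11-06
print(add_months(date(2020, 1, 31), 1))   # 2020-02-29 (day clamped; assumed rule)
```

The first call mirrors the documented example below; the second shows why month arithmetic is not a fixed-length interval.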
### Example ```kusto datetime_add(period,amount,datetime) ``` ```kusto ['sample-http-logs'] | project new_datetime = datetime_add( "month", 1, datetime(2016-10-06)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_add%28%20%5C%22month%5C%22%2C%201%2C%20datetime%282016-10-06%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "new_datetime": "2016-11-06T00:00:00Z" } ``` ## datetime\_part() Extracts the requested date part as an integer value. ### Arguments * date: datetime * part: string ### Returns An integer representing the extracted part. ### Examples ```kusto datetime_part(part,datetime) ``` ```kusto ['sample-http-logs'] | project new_datetime = datetime_part("Day", datetime(2016-06-26T08:20:03.123456Z)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_part%28%5C%22Day%5C%22%2C%20datetime%282016-06-26T08%3A20%3A03.123456Z%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "new_datetime": 26 } ``` ## datetime\_diff() Calculates calendarian difference between two datetime values. ### Arguments * period: string. * datetime\_1: datetime value. * datetime\_2: datetime value. ### Returns An integer that represents the number of periods in the result of the subtraction (datetime\_1 - datetime\_2).
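For the `week` period, one reading that is consistent with the examples that follow is whole elapsed weeks. A Python sketch of that reading (illustrative only; the `week_diff` helper is an assumption, and APL's exact calendar boundary rules may differ):

```python
from datetime import datetime

# Illustrative model of datetime_diff("week", dt1, dt2) as whole elapsed
# weeks. This matches the documented examples; the real engine's calendar
# boundary counting may differ for other inputs.
def week_diff(dt1, dt2):
    return (dt1 - dt2).days // 7


print(week_diff(datetime(2019, 6, 26, 8, 20, 3), datetime(2014, 6, 26, 8, 19, 3)))  # 260
print(week_diff(datetime(2015, 11, 8), datetime(2014, 11, 8)))  # 52
```

Both calls reproduce the results shown in the examples below.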
### Example ```kusto datetime_diff(period,datetime_1,datetime_2) ``` ```kusto ['sample-http-logs'] | project new_datetime = datetime_diff("week", datetime(2019-06-26T08:20:03.123456Z), datetime(2014-06-26T08:19:03.123456Z)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222019-06-26T08%3A20%3A03.123456Z%5C%22%29%2C%20datetime%28%5C%222014-06-26T08%3A19%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "new_datetime": 260 } ``` ```kusto ['sample-http-logs'] | project new_datetime = datetime_diff("week", datetime(2015-11-08), datetime(2014-11-08)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20new_datetime%20%3D%20datetime_diff%28%5C%22week%5C%22%2C%20datetime%28%5C%222015-11-08%5C%22%29%2C%20datetime%28%5C%222014-11-08%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "new_datetime": 52 } ``` ## dayofmonth() Returns the integer number representing the day number of the given month. ### Arguments * `a_date`: A `datetime`. ### Returns The day number of the given month. ### Example ```kusto dayofmonth(a_date) ``` ```kusto ['sample-http-logs'] | project day_of_the_month = dayofmonth(datetime(2017-11-30)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_month%20%3D%20dayofmonth%28datetime%28%5C%222017-11-30%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "day_of_the_month": 30 } ``` ## dayofweek() Returns the integer number of days since the preceding Sunday, as a timespan. ### Arguments * a\_date: A datetime.
### Returns The `timespan` since midnight at the beginning of the preceding Sunday, rounded down to an integer number of days. ### Example ```kusto dayofweek(a_date) ``` ```kusto ['sample-http-logs'] | project day_of_the_week = dayofweek(datetime(2019-05-18)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_week%20%3D%20dayofweek%28datetime%28%5C%222019-05-18%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "day_of_the_week": 6 } ``` ## dayofyear() Returns the integer number representing the day number of the given year. ### Arguments * `a_date`: A `datetime`. ### Returns The day number of the given year. ### Example ```kusto dayofyear(a_date) ``` ```kusto ['sample-http-logs'] | project day_of_the_year = dayofyear(datetime(2020-07-20)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20day_of_the_year%20%3D%20dayofyear%28datetime%28%5C%222020-07-20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "day_of_the_year": 202 } ``` ## endofyear() Returns the end of the year containing the date. ### Arguments * date: The input date. ### Returns A datetime representing the end of the year for the given date value. ### Example ```kusto endofyear(date) ``` ```kusto ['sample-http-logs'] | extend end_of_the_year = endofyear(datetime(2016-06-26T08:20)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20%20end_of_the_year%20%3D%20endofyear%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "end_of_the_year": "2016-12-31T23:59:59.999999999Z" } ``` ## getmonth() Get the month number (1-12) from a datetime. ### Arguments * date: The input date. ### Returns The month number (1-12) of the given datetime. ### Example
```kusto ['sample-http-logs'] | extend get_specific_month = getmonth(datetime(2020-07-26T08:20)) ``` ## getyear() Returns the year part of the `datetime` argument. ### Example ```kusto getyear(datetime()) ``` ```kusto ['sample-http-logs'] | project get_specific_year = getyear(datetime(2020-07-26)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_year%20%3D%20getyear%28datetime%28%5C%222020-07-26%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "get_specific_year": 2020 } ``` ## hourofday() Returns the integer number representing the hour number of the given date ### Arguments * a\_date: A datetime. ### Returns hour number of the day (0-23). ### Example ```kusto hourofday(a_date) ``` ```kusto ['sample-http-logs'] | project get_specific_hour = hourofday(datetime(2016-06-26T08:20:03.123456Z)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20get_specific_hour%20%3D%20hourofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "get_specific_hour": 8 } ``` ## endofday() Returns the end of the day containing the date ### Arguments * date: The input date. ### Returns A datetime representing the end of the day for the given date value. 
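For intuition, the same push-to-end-of-day can be sketched with Python's standard `datetime` (this is an illustration only; Python stops at microsecond precision, while the APL result carries nanoseconds):

```python
from datetime import datetime

def end_of_day(dt: datetime) -> datetime:
    # Keep the date, push the time to the last representable instant.
    # Python datetimes stop at microseconds; APL returns 23:59:59.999999999.
    return dt.replace(hour=23, minute=59, second=59, microsecond=999999)

print(end_of_day(datetime(2016, 6, 26, 8, 20, 3)))  # 2016-06-26 23:59:59.999999
```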
### Example ```kusto endofday(date) ``` ```kusto ['sample-http-logs'] | project end_of_day_series = endofday(datetime(2016-06-26T08:20:03.123456Z)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_day_series%20%3D%20endofday%28datetime%28%5C%222016-06-26T08%3A20%3A03.123456Z%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "end_of_day_series": "2016-06-26T23:59:59.999999999Z" } ``` ## now() Returns the current UTC clock time, optionally offset by a given timespan. This function can be used multiple times in a statement and the clock time being referenced will be the same for all instances. ### Arguments * offset: A timespan, added to the current UTC clock time. Default: 0. ### Returns The current UTC clock time as a datetime. ### Example ```kusto now([offset]) ``` ```kusto ['sample-http-logs'] | project returns_clock_time = now(-5d) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20returns_clock_time%20%3D%20now%28-5d%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "returns_clock_time": "2023-09-07T02:54:50Z" } ``` ## endofmonth() Returns the end of the month containing the date ### Arguments * date: The input date. ### Returns A datetime representing the end of the month for the given date value. 
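The same last-instant-of-the-month computation can be sketched with Python's standard `calendar` module (an illustration of the semantics, not the APL implementation; Python stops at microsecond precision):

```python
import calendar
from datetime import datetime

def end_of_month(dt: datetime) -> datetime:
    # monthrange returns (weekday of day 1, number of days in the month);
    # the second element gives the last calendar day, leap years included.
    last_day = calendar.monthrange(dt.year, dt.month)[1]
    return dt.replace(day=last_day, hour=23, minute=59, second=59,
                      microsecond=999999)

print(end_of_month(datetime(2016, 6, 26, 8, 20)).date())  # 2016-06-30
```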
### Example ```kusto endofmonth(date) ``` ```kusto ['sample-http-logs'] | project end_of_the_month = endofmonth(datetime(2016-06-26T08:20)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_month%20%3D%20endofmonth%28datetime%28%5C%222016-06-26T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "end_of_the_month": "2016-06-30T23:59:59.999999999Z" } ``` ## endofweek() Returns the end of the week containing the date. ### Arguments * date: The input date. ### Returns A datetime representing the end of the week for the given date value. ### Example ```kusto endofweek(date) ``` ```kusto ['sample-http-logs'] | project end_of_the_week = endofweek(datetime(2019-04-18T08:20)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20end_of_the_week%20%3D%20endofweek%28datetime%28%5C%222019-04-18T08%3A20%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "end_of_the_week": "2019-04-20T23:59:59.999999999Z" } ``` ## monthofyear() Returns the integer number representing the month number of the given year. ### Arguments * `date`: A datetime. ### Returns The month number of the given year. ### Example ```kusto monthofyear(datetime("2018-11-21")) ``` ```kusto ['sample-http-logs'] | project month_of_the_year = monthofyear(datetime(2018-11-11)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20month_of_the_year%20%3D%20monthofyear%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "month_of_the_year": 11 } ``` ## startofday() Returns the start of the day containing the date. ### Arguments * date: The input date.
### Returns A datetime representing the start of the day for the given date value ### Examples ```kusto startofday(datetime(2020-08-31)) ``` ```kusto ['sample-http-logs'] | project start_of_the_day = startofday(datetime(2018-11-11)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20start_of_the_day%20%3D%20startofday%28datetime%28%5C%222018-11-11%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "start_of_the_day": "2018-11-11T00:00:00Z" } ``` ## startofmonth() Returns the start of the month containing the date ### Arguments * date: The input date. ### Returns A datetime representing the start of the month for the given date value ### Example ```kusto ['github-issues-event'] | project start_of_the_month = startofmonth(datetime(2020-08-01)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "start_of_the_month": "2020-08-01T00:00:00Z" } ``` ```kusto ['hackernews'] | extend start_of_the_month = startofmonth(datetime(2020-08-01)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_month%20%3D%20startofmonth%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "start_of_the_month": "2020-08-01T00:00:00Z" } ``` ## startofweek() Returns the start of the week containing the date Start of the week is considered to be a Sunday. ### Arguments * date: The input date. 
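The Sunday-based week start can be sketched in Python (`start_of_week` is a hypothetical helper used only for illustration):

```python
from datetime import datetime, timedelta

def start_of_week(dt: datetime) -> datetime:
    # weekday() is Monday=0 .. Sunday=6; shifting by one makes Sunday day 0,
    # matching the Sunday-based week used here. A Sunday maps to itself.
    days_since_sunday = (dt.weekday() + 1) % 7
    start = dt - timedelta(days=days_since_sunday)
    return start.replace(hour=0, minute=0, second=0, microsecond=0)

print(start_of_week(datetime(2020, 8, 1)))  # 2020-07-26 00:00:00
```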
### Returns A datetime representing the start of the week for the given date value ### Examples ```kusto ['github-issues-event'] | extend start_of_the_week = startofweek(datetime(2020-08-01)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "start_of_the_week": "2020-07-26T00:00:00Z" } ``` ```kusto ['hackernews'] | extend start_of_the_week = startofweek(datetime(2020-08-01)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20project%20start_of_the_week%20%3D%20startofweek%28datetime%28%5C%222020-08-01%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "start_of_the_week": "2020-07-26T00:00:00Z" } ``` ```kusto ['sample-http-logs'] | extend start_of_the_week = startofweek(datetime(2018-06-11T00:00:00Z)) ``` ## startofyear() Returns the start of the year containing the date ### Arguments * date: The input date. 
### Returns A datetime representing the start of the year for the given date value. ### Examples ```kusto ['sample-http-logs'] | project yearStart = startofyear(datetime(2019-04-03)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-04-03%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "yearStart": "2019-01-01T00:00:00Z" } ``` ```kusto ['sample-http-logs'] | project yearStart = startofyear(datetime(2019-10-09 01:00:00.0000000)) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20yearStart%20%3D%20startofyear%28datetime%28%5C%222019-10-09%2001%3A00%3A00.0000000%5C%22%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result: ```json { "yearStart": "2019-01-01T00:00:00Z" } ``` # Hash functions Learn how to use and combine various hash functions in APL ## Hash functions | **Function Name** | **Description** | | ------------------------------ | ------------------------------------------------ | | [hash\_md5()](#hash-md5) | Returns an MD5 hash value for the input value. | | [hash\_sha1()](#hash-sha1) | Returns a SHA1 hash value for the input value. | | [hash\_sha256()](#hash-sha256) | Returns a SHA256 hash value for the input value. | | [hash\_sha512()](#hash-sha512) | Returns a SHA512 hash value for the input value. | ## hash\_md5() Returns an MD5 hash value for the input value. ### Arguments * source: The value to be hashed. ### Returns The MD5 hash value of the given scalar, encoded as a hex string (a string in which each pair of hex characters represents a single byte value between 0 and 255).
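The hex encoding described above can be reproduced with Python's standard `hashlib`; the input `b"text/html"` is an arbitrary sample value, not the value behind the result shown in the APL example:

```python
import hashlib

# An MD5 digest is 16 bytes; hex encoding uses two characters per byte,
# so the returned string is always 32 lowercase hex characters long.
digest = hashlib.md5(b"text/html").hexdigest()
print(len(digest))  # 32
```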
### Examples ```kusto hash_md5(source) ``` ```kusto ['sample-http-logs'] | project md5_hash_value = hash_md5(content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20md5_hash_value%20%3D%20hash_md5%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "md5_hash_value": "b980a9c041dbd33d5893fad65d33284b" } ``` ## hash\_sha1() Returns a SHA1 hash value for the input value. ### Arguments * source: The value to be hashed. ### Returns The sha1 hash value of the given scalar, encoded as a hex string ### Examples ```kusto hash_sha1(source) ``` ```kusto ['sample-http-logs'] | project sha1_hash_value = hash_sha1(content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha1_hash_value%20%3D%20hash_sha1%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "sha1_hash_value": "9f9af029585ba014e07cd3910ca976cf56160616" } ``` ## hash\_sha256() Returns a SHA256 hash value for the input value. ### Arguments * source: The value to be hashed. ### Returns The sha256 hash value of the given scalar, encoded as a hex string (a string of characters, each two of which represent a single Hex number between 0 and 255). ### Examples ```kusto hash_sha256(source) ``` ```kusto ['sample-http-logs'] | project sha256_hash_value = hash_sha256(content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha256_hash_value%20%3D%20hash_sha256%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "sha256_hash_value": "bb4770ff4ac5b7d2be41a088cb27d8bcaad53b574b6f27941e8e48e9e10fc25a" } ``` ## hash\_sha512() Returns a SHA512 hash value for the input value. 
### Arguments * source: The value to be hashed. ### Returns The SHA512 hash value of the given scalar, encoded as a hex string (a string in which each pair of hex characters represents a single byte value between 0 and 255). ### Examples ```kusto hash_sha512(source) ``` ```kusto ['sample-http-logs'] | project sha512_hash_value = hash_sha512(status) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28status%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "sha512_hash_value": "0878a61b503dd5a9fe9ea3545d6d3bd41c3b50a47f3594cb8bbab3e47558d68fc8fcc409cd0831e91afc4e609ef9da84e0696c50354ad86b25f2609efef6a834" } ``` *** ```kusto ['sample-http-logs'] | project sha512_hash_value = hash_sha512(content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sha512_hash_value%20%3D%20hash_sha512%28content_type%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "sha512_hash_value": "95c6eacdd41170b129c3c287cfe088d4fafea34e371422b94eb78b9653a89d4132af33ef39dd6b3d80e18c33b21ae167ec9e9c2d820860689c647ffb725498c4" } ``` # IP functions Learn how to use and combine different IP functions in APL ## IP functions | **Function Name** | **Description** | | ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | | [format\_ipv4()](#format-ipv4) | Parses input with a netmask and returns a string representing the IPv4 address. | | [parse\_ipv4()](#parse-ipv4) | Converts input to long (signed 64-bit) number representation. | | [parse\_ipv4\_mask()](#parse-ipv4-mask) | Converts input string and IP-prefix mask to long (signed 64-bit) number representation.
| | [ipv4\_is\_in\_range()](#ipv4-is-in-range) | Checks if IPv4 string address is in IPv4-prefix notation range. | | [ipv4\_is\_private()](#ipv4-is-private) | Checks if IPv4 string address belongs to a set of private network IPs. | | [ipv4\_netmask\_suffix()](#ipv4-netmask-suffix) | Returns the value of the IPv4 netmask suffix from IPv4 string address. | | [geo\_info\_from\_ip\_address()](#geo-info-from-ip-address) | Extracts geographical, geolocation, and network information from IP addresses. It supports both IPv4 and IPv6 addresses. | ### IP-prefix notation IP addresses can be defined with `IP-prefix notation` using a slash (**/**) character. The IP address to the LEFT of the slash (**/**) is the base IP address. The number (1 to 32) to the RIGHT of the slash (**/**) is the number of contiguous 1 bits in the netmask. For example, `192.168.2.0/24` will have an associated net/subnetmask containing 24 contiguous bits, or `255.255.255.0` in dotted decimal format. ## format\_ipv4() Parses input with a netmask and returns a string representing the IPv4 address. ### Arguments * Expr(IP): A string or number representation of the IPv4 address. ### Returns If conversion is successful, the result will be a string representing the IPv4 address. If conversion isn’t successful, the result will be an empty string. ### Example ```kusto format_ipv4(ip) ``` ```kusto ['sample-http-logs'] | project str_ipv4 = format_ipv4("192.168.2.0") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20str_ipv4%20%3D%20format_ipv4%28%5C%22192.168.2.0%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "str_ipv4": "192.168.2.0" } ``` ## parse\_ipv4() Converts IPv4 string to long (signed 64-bit) number representation. ### Arguments * Expr: A string expression representing an IPv4 address that will be converted to long. The string may include a netmask using IP-prefix notation.
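The long-number representation is plain base-256 positional arithmetic over the four octets. A minimal Python sketch (the function name mirrors the APL one purely for illustration, and the optional netmask suffix is not handled):

```python
def parse_ipv4(addr: str) -> int:
    # a.b.c.d -> (a << 24) | (b << 16) | (c << 8) | d
    # Equivalent to a*256**3 + b*256**2 + c*256 + d.
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(parse_ipv4("192.168.2.0"))  # 3232236032
```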
### Returns If conversion is successful, the result will be a long number. If conversion isn’t successful, the result will be `null.` ### Example ```kusto parse_ipv4(Expr) ``` ```kusto ['sample-http-logs'] | project parsed_ipv4 = parse_ipv4("192.168.2.0") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20parsed_ipv4%20%3D%20parse_ipv4%28%5C%22192.168.2.0%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "parsed_ipv4": 3232236032 } ``` ## parse\_ipv4\_mask() Converts the input string of IPv4 and netmask to long number representation (signed 64-bit). ### Arguments * `Expr:` A string representation of the IPv4 address that will be converted to long. * `PrefixMask:` An integer from 0 to 32 representing the number of most-significant bits that are taken into account. ### Returns If conversion is successful, the result will be a long number. If conversion isn’t successful, the result will be `null.` ### Example ```kusto parse_ipv4_mask(Expr, PrefixMask) ``` ```kusto ['sample-http-logs'] | project parsed_ipv4 = parse_ipv4_mask("192.5.1.4", 24) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20parsed_ipv4%20%3D%20parse_ipv4_mask%28%5C%22192.5.1.4%5C%22%2C%2024%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "parsed_ipv4": 3221553408 } ``` ## ipv4\_is\_in\_range() Checks if IPv4 string address is in IPv4-prefix notation range. ### Arguments * Ipv4Address: A string expression representing an IPv4 address. * Ipv4Range: A string expression representing an IPv4 range using [IP-prefix notation](/apl/scalar-functions/ip-functions#ip-prefix-notation). ### Returns * `true`: If the long representation of the first IPv4 string argument is in range of the second IPv4 string argument. * `false`: Otherwise. 
* `null`: If conversion for one of the two IPv4 strings wasn’t successful. ### Examples ```kusto ipv4_is_in_range('192.168.1.5', '192.168.1.2/24') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20ipv4_range%20%3D%20ipv4_is_in_range%28'192.168.1.5'%2C%20'192.168.1.2%2F24'%29%22%7D) * Result ```json { "ipv4_range": true } ``` ```kusto ipv4_is_in_range("127.2.3.1", "127.2.3.1") == true ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ipv4_range%20%3D%20ipv4_is_in_range%28%5C%22127.2.3.1%5C%22%2C%20%5C%22127.2.3.1%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "ipv4_range": true } ``` ```kusto ipv4_is_in_range('192.168.1.5', '192.168.1.5/24') == true ``` ```kusto ipv4_is_in_range('192.168.1.5', '192.168.2.1/24') == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%7Cproject%20ipv4_range%20%3D%20ipv4_is_in_range%28%27192.168.1.5%27%2C%20%27192.168.2.1%2F24%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "ipv4_range": false } ``` ## ipv4\_is\_private() Checks if IPv4 string address belongs to a set of private network IPs.
### Private IPv4 addresses The private IPv4 addresses reserved for private networks by the Internet Assigned Numbers Authority (IANA) are: | **IP address range** | **Number of addresses** | **Largest CIDR block (subnet mask)** | | --------------------------------- | ----------------------- | ------------------------------------ | | **10.0.0.0 – 10.255.255.255** | **16777216** | **10.0.0.0/8 (255.0.0.0)** | | **172.16.0.0 – 172.31.255.255** | **1048576** | **172.16.0.0/12 (255.240.0.0)** | | **192.168.0.0 – 192.168.255.255** | **65536** | **192.168.0.0/16 (255.255.0.0)** | ### Arguments * Expr: A string expression representing an IPv4 address. IPv4 strings can be masked using [IP-prefix notation](/apl/scalar-functions/ip-functions#ip-prefix-notation). ### Returns * `true`: If the IPv4 address belongs to any of the private network ranges. * `false`: Otherwise. * `null`: If parsing of the input as an IPv4 address string wasn’t successful. ### Example ```kusto ipv4_is_private('192.168.2.1') == true ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ipv4_private%20%3D%20ipv4_is_private%28%27192.168.2.1%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "ipv4_private": true } ``` ```kusto ipv4_is_private('208.1.2.3') == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ipv4_private%20%3D%20ipv4_is_private%28%27208.1.2.3%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "ipv4_private": false } ``` ## ipv4\_netmask\_suffix() Returns the value of the IPv4 netmask suffix from IPv4 string address. ### Arguments * Expr: A string expression representing an IPv4 address. IPv4 strings can be masked using [IP-prefix notation](/apl/scalar-functions/ip-functions#ip-prefix-notation).
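Extracting the suffix amounts to splitting on the slash, with 32 as the default for an unmasked address. A Python sketch (`netmask_suffix` is a hypothetical helper; it performs no validation of the address itself):

```python
def netmask_suffix(addr: str) -> int:
    # A missing suffix is treated as a full 32-bit netmask.
    return int(addr.split("/")[1]) if "/" in addr else 32

print(netmask_suffix("192.164.2.2/24"), netmask_suffix("192.166.1.2"))  # 24 32
```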
### Returns * The value of the netmask suffix of the IPv4 address. If the suffix isn’t present in the input, a value of **32** (full netmask suffix) is returned. * `null`: If parsing of the input as an IPv4 address string wasn’t successful. ### Example ```kusto ipv4_netmask_suffix('192.164.2.2/24') == 24 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20netmask_suffix%20%3D%20ipv4_netmask_suffix%28%27192.164.2.2%2F24%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "netmask_suffix": 24 } ``` ```kusto ipv4_netmask_suffix('192.166.1.2') == 32 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20netmask_suffix%20%3D%20ipv4_netmask_suffix%28%27192.166.1.2%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) * Result ```json { "netmask_suffix": 32 } ``` ## geo\_info\_from\_ip\_address() Extracts geographical, geolocation, and network information from IP addresses. It supports both IPv4 and IPv6 addresses. ### Arguments | Name | Type | Required | Description | | --------- | ------ | -------- | --------------------------------------------------------------- | | ipAddress | String | Yes | The IP address to extract information from. Can be IPv4 or IPv6. | ### Returns A dynamic object containing the information on the IP address’s whereabouts (if the information is available).
The object contains the following fields: | Name | Type | Description | | ------------ | ------ | -------------------------------------------- | | country | string | Country name | | state | string | State (subdivision) name | | city | string | City name | | latitude | real | Latitude coordinate | | longitude | real | Longitude coordinate | | country\_iso | string | ISO code of the country | | time\_zone | string | Time zone in which the IP address is located | ### Examples ```kusto geo_info_from_ip_address(IpAddress) ``` ### IPv4 Examples ### Extracting geolocation information from IPv4 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('172.217.11.4') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%27172.217.11.4%27%29%22%7D) ### Projecting geolocation information from IPv4 address ```kusto ['sample-http-logs'] | project ip_location=geo_info_from_ip_address('20.53.203.50') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%2720.53.203.50%27%29%22%7D) ### Filtering geolocation information from IPv4 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('20.53.203.50') | where ip_location.country == "Australia" and ip_location.country_iso == "AU" and ip_location.state == "New South Wales" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22Australia%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22AU%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22New%20South%20Wales%5C%22%22%7D) ### Grouping 
geolocation information from IPv4 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('20.53.203.50') | summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%2720.53.203.50%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D) ### IPv6 Examples ### Extracting geolocation information from IPv6 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('2607:f8b0:4005:805::200e') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272607%3Af8b0%3A4005%3A805%3A%3A200e%27%29%22%7D) ### Projecting geolocation information from IPv6 address ```kusto ['sample-http-logs'] | project ip_location=geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20ip_location%3Dgeo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%22%7D) ### Filtering geolocation information from IPv6 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de') | where ip_location.country == "United States" and ip_location.country_iso == "US" and ip_location.state == "Florida" ``` [Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20where%20ip_location.country%20%3D%3D%20%5C%22United%20States%5C%22%20and%20ip_location.country_iso%20%3D%3D%20%5C%22US%5C%22%20and%20ip_location.state%20%3D%3D%20%5C%22Florida%5C%22%22%7D) ### Grouping geolocation information from IPv6 address ```kusto ['sample-http-logs'] | extend ip_location = geo_info_from_ip_address('2a03:2880:f12c:83:face:b00c::25de') | summarize Count=count() by ip_location.state, ip_location.city, ip_location.latitude, ip_location.longitude ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20ip_location%20%3D%20geo_info_from_ip_address%28%272a03%3A2880%3Af12c%3A83%3Aface%3Ab00c%3A%3A25de%27%29%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20ip_location.state%2C%20ip_location.city%2C%20ip_location.latitude%2C%20ip_location.longitude%22%7D) # Mathematical functions Learn how to use and combine different mathematical functions in APL ## Mathematical functions | **Function Name** | **Description** | | ----------------------- | -------------------------------------------------------------------------------------------------------------- | | [abs()](#abs) | Calculates the absolute value of the input. | | [acos()](#acos) | Returns the angle whose cosine is the specified number (the inverse operation of cos()). | | [asin()](#asin) | Returns the angle whose sine is the specified number (the inverse operation of sin()). | | [atan()](#atan) | Returns the angle whose tangent is the specified number (the inverse operation of tan()). | | [atan2()](#atan2) | Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x). | | [cos()](#cos) | Returns the cosine function. 
| | [degrees()](#degrees) | Converts angle value in radians into value in degrees, using formula degrees = (180 / PI) \* angle-in-radians. | | [exp()](#exp) | The base-e exponential function of x, which is e raised to the power x: e^x. | | [exp2()](#exp2) | The base-2 exponential function of x, which is 2 raised to the power x: 2^x. | | [gamma()](#gamma) | Computes gamma function. | | [isinf()](#isinf) | Returns whether input is an infinite (positive or negative) value. | | [isnan()](#isnan) | Returns whether input is Not-a-Number (NaN) value. | | [log()](#log) | Returns the natural logarithm function. | | [log10()](#log10) | Returns the common (base-10) logarithm function. | | [log2()](#log2) | Returns the base-2 logarithm function. | | [loggamma()](#loggamma) | Computes log of absolute value of the gamma function. | | [not()](#not) | Reverses the value of its bool argument. | | [pi()](#pi) | Returns the constant value of Pi (π). | | [pow()](#pow) | Returns a result of raising to power. | | [radians()](#radians) | Converts angle value in degrees into value in radians, using formula radians = (PI / 180) \* angle-in-degrees. | | [round()](#round) | Returns the rounded source to the specified precision. | | [sign()](#sign) | Sign of a numeric expression. | | [sin()](#sin) | Returns the sine function. | | [sqrt()](#sqrt) | Returns the square root function. | | [tan()](#tan) | Returns the tangent function. | | [exp10()](#exp10) | The base-10 exponential function of x, which is 10 raised to the power x: 10^x. | | [isint()](#isint) | Returns whether input is an integer (positive or negative) value | | [isfinite()](#isfinite) | Returns whether input is a finite value (is neither infinite nor NaN). | ## abs() Calculates the absolute value of the input. 
### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | --------------------- | ------------------------ | -------------------------- | | x | int, real or timespan | Required | The value to make absolute | ### Returns * Absolute value of x. ### Examples ```kusto abs(x) ``` ```kusto abs(80.5) == 80.5 ``` ```kusto ['sample-http-logs'] | project absolute_value = abs(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20absolute_value%20%3D%20abs%28req_duration_ms%29%22%7D) ## acos() Returns the angle whose cosine is the specified number (the inverse operation of cos()). ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------------------- | | x | real | Required | A real number in range \[-1, 1] | ### Returns * The value of the arc cosine of x * `null` if `x` \< -1 or `x` > 1 ### Examples ```kusto acos(x) ``` ```kusto acos(-1) == 3.141592653589793 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_angle%20%3D%20acos%28-1%29%22%7D) ## asin() Returns the angle whose sine is the specified number (the inverse operation of sin()). ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------------------- | | x | real | Required | A real number in range \[-1, 1] |
### Returns * The value of the arc sine of x * null if x \< -1 or x > 1 ### Examples ```kusto asin(x) ``` ```kusto ['sample-http-logs'] | project inverse_sin_angle = asin(-1) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_sin_angle%20%3D%20asin%28-1%29%22%7D) ## atan() Returns the angle whose tangent is the specified number (the inverse operation of tan()). ### Arguments x: A real number. ### Returns The value of the arc tangent of x ### Examples ```kusto atan(x) ``` ```kusto atan(-1) == -0.7853981633974483 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20inverse_tan_angle%20%3D%20atan%28-1%29%22%7D) ## atan2() Calculates the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x). ### Arguments x: X coordinate (a real number). y: Y coordinate (a real number). ### Returns The angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x). ### Examples ```kusto atan2(y,x) ``` ```kusto atan2(-1, 1) == -0.7853981633974483 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20angle_in_rads%20%3D%20atan2%28-1%2C%201%29%22%7D) ## cos() Returns the cosine function. ### Arguments x: A real number.
### Returns The result of cos(x) ### Examples ```kusto cos(x) ``` ```kusto cos(-1) == 0.5403023058681398 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20cosine_function%20%3D%20cos%28-1%29%22%7D) ## degrees() Converts angle value in radians into value in degrees, using formula degrees = (180 / PI ) \* angle\_in\_radians ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ----------------- | | a | real | Required | Angle in radians. | ### Returns The corresponding angle in degrees for an angle specified in radians. ### Examples ```kusto degrees(a) ``` ```kusto degrees(3.14) == 179.9087476710785 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20degree_rads%20%3D%20degrees%283.14%29%22%7D) ## exp() The base-e exponential function of x, which is e raised to the power x: e^x. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | ----------- | ------------------------ | ---------------------- | | x | real number | Required | Value of the exponent. | ### Returns * Exponential value of x. * For natural (base-e) logarithms, see [log()](/apl/scalar-functions/mathematical-functions#log\(\)). * For exponential functions of base-2 and base-10 logarithms, see [exp2()](/apl/scalar-functions/mathematical-functions#exp2\(\)), [exp10()](/apl/scalar-functions/mathematical-functions#exp10\(\)) ### Examples ```kusto exp(x) ``` ```kusto exp(1) == 2.718281828459045 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20exponential_value%20%3D%20exp%281%29%22%7D) ## exp2() The base-2 exponential function of x, which is 2 raised to the power x: 2^x. 
### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | ----------- | ------------------------ | ---------------------- | | x | real number | Required | Value of the exponent. | ### Returns * Exponential value of x. * For base-2 logarithms, see [log2()](/apl/scalar-functions/mathematical-functions#log2\(\)). * For exponential functions of base-e and base-10 logarithms, see [exp()](/apl/scalar-functions/mathematical-functions#exp\(\)), [exp10()](/apl/scalar-functions/mathematical-functions#exp10\(\)) ### Examples ```kusto exp2(x) ``` ```kusto ['sample-http-logs'] | project base_2_exponential_value = exp2(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_2_exponential_value%20%3D%20exp2%28req_duration_ms%29%22%7D) ## gamma() Computes the [gamma function](https://en.wikipedia.org/wiki/Gamma_function). ### Arguments * x: Parameter for the gamma function ### Returns * Gamma function of x. * For the log-gamma function, see loggamma(). ### Examples ```kusto gamma(x) ``` ```kusto gamma(4) == 6 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20gamma%284%29%22%7D) ## isinf() Returns whether input is an infinite (positive or negative) value. ### Example ```kusto isinf(x) ``` ```kusto isinf(45.56) == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20infinite_value%20%3D%20isinf%2845.56%29%22%7D) ### Arguments x: A real number. ### Returns A non-zero value (true) if x is positive or negative infinity; and zero (false) otherwise. ## isnan() Returns whether the input is a Not-a-Number (NaN) value. ### Arguments x: A real number. ### Returns A non-zero value (true) if x is NaN; and zero (false) otherwise.
### Examples ```kusto isnan(x) ``` ```kusto isnan(45.56) == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20nan%20%3D%20isnan%2845.56%29%22%7D) ## log() log() returns the natural logarithm function. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------ | | x | real | Required | A real number > 0. | ### Returns The natural logarithm is the base-e logarithm: the inverse of the natural exponential function (exp). null if the argument is negative or null or can’t be converted to a real value. ### Examples ```kusto log(x) ``` ```kusto log(1) == 0 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20natural_log%20%3D%20log%281%29%22%7D) ## log10() log10() returns the common (base-10) logarithm function. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------ | | x | real | Required | A real number > 0. | ### Returns The common logarithm is the base-10 logarithm: the inverse of the exponential function (exp) with base 10. null if the argument is negative or null or can’t be converted to a real value. ### Examples ```kusto log10(x) ``` ```kusto log10(4) == 0.6020599913279624 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base10%20%3D%20log10%284%29%22%7D) ## log2() log2() returns the base-2 logarithm function. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------ | | x | real | Required | A real number > 0. 
| ### Returns The logarithm is the base-2 logarithm: the inverse of the exponential function (exp) with base 2. null if the argument is negative or null or can’t be converted to a real value. ### Examples ```kusto log2(x) ``` ```kusto log2(6) == 2.584962500721156 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base2_log%20%3D%20log2%286%29%22%7D) ## loggamma() Computes the log of the absolute value of the [gamma function](https://en.wikipedia.org/wiki/Gamma_function). ### Arguments x: Parameter for the gamma function ### Returns * Returns the natural logarithm of the absolute value of the gamma function of x. ### Examples ```kusto loggamma(x) ``` ```kusto loggamma(16) == 27.89927138384089 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20gamma_function%20%3D%20loggamma%2816%29%22%7D) ## not() Reverses the value of its bool argument. ### Examples ```kusto not(expr) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20reverse%20%3D%20not%28false%29%22%7D) ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ----------------------------------- | | Expr | bool | Required | A `bool` expression to be reversed. | ### Returns Returns the reversed logical value of its bool argument. ## pi() Returns the constant value of Pi. ### Returns * The double value of Pi (3.1415926...)
### Examples ```kusto pi() ``` ```kusto ['sample-http-logs'] | project pie = pi() ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20pie%20%3D%20pi%28%29%22%7D) ## pow() Returns the result of raising a number to a power. ### Examples ```kusto pow(base, exponent) ``` ```kusto pow(2, 6) == 64 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20power%20%3D%20pow%282%2C%206%29%22%7D) ### Arguments * *base:* Base value. * *exponent:* Exponent value. ### Returns Returns base raised to the power exponent: base ^ exponent. ## radians() Converts angle value in degrees into value in radians, using formula `radians = (PI / 180) * angle_in_degrees` ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------------------------- | | a | real | Required | Angle in degrees (a real number). | ### Returns The corresponding angle in radians for an angle specified in degrees. ### Examples ```kusto radians(a) ``` ```kusto radians(60) == 1.0471975511965976 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20radians%20%3D%20radians%2860%29%22%7D) ## round() Returns the rounded source to the specified precision. ### Arguments * source: The source scalar the round is calculated on. * Precision: The number of digits to round the source to (default value is 0). ### Returns The rounded source to the specified precision.
### Examples ```kusto round(source [, Precision]) ``` ```kusto round(25.563663) == 26 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%2825.563663%29%22%7D) ## sign() Returns the sign of a numeric expression. ### Examples ```kusto sign(x) ``` ```kusto sign(25.563663) == 1 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20numeric_expression%20%3D%20sign%2825.563663%29%22%7D) ### Arguments * x: A real number. ### Returns * The positive (+1), zero (0), or negative (-1) sign of the specified expression. ## sin() Returns the sine function. ### Examples ```kusto sin(x) ``` ```kusto sin(25.563663) == 0.41770848373492825 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20sine_function%20%3D%20sin%2825.563663%29%22%7D) ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | x | real | Required | A real number. | ### Returns The result of sin(x) ## sqrt() Returns the square root function. ### Examples ```kusto sqrt(x) ``` ```kusto sqrt(25.563663) == 5.0560521160288685 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20squareroot%20%3D%20sqrt%2825.563663%29%22%7D) ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------- | | x | real | Required | A real number >= 0. | ### Returns * A positive number such that `sqrt(x) * sqrt(x) == x` * null if the argument is negative or cannot be converted to a real value. ## tan() Returns the tangent function.
### Examples ```kusto tan(x) ``` ```kusto tan(25.563663) == 0.4597371460602336 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20tangent_function%20%3D%20tan%2825.563663%29%22%7D) ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------- | | x | real | Required | A real number. | ### Returns * The result of `tan(x)` ## exp10() The base-10 exponential function of x, which is 10 raised to the power x: 10^x. ### Examples ```kusto exp10(x) ``` ```kusto exp10(25.563663) == 36,615,333,994,520,800,000,000,000 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20base_10_exponential%20%3D%20pow%2810%2C%2025.563663%29%22%7D) ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------------------------- | | x | real | Required | A real number, value of the exponent. | ### Returns * Exponential value of x. * For base-10 logarithms, see [log10()](/apl/scalar-functions/mathematical-functions#log10\(\)). * For exponential functions of base-e and base-2 logarithms, see [exp()](/apl/scalar-functions/mathematical-functions#exp\(\)), [exp2()](/apl/scalar-functions/mathematical-functions#exp2\(\)) ## isint() Returns whether input is an integer (positive or negative) value. ### Arguments * Expr: An expression whose value can be a real number. ### Returns A non-zero value (true) if expression is a positive or negative integer; and zero (false) otherwise.
### Examples ```kusto isint(expression) ``` ```kusto isint(resp_body_size_bytes) == true ``` ```kusto isint(25.563663) == false ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20integer_value%20%3D%20isint%2825.563663%29%22%7D) ## isfinite() Returns whether input is a finite value (is neither infinite nor NaN). ### Arguments * number: A real number. ### Returns A non-zero value (true) if number is finite; and zero (false) otherwise. ### Examples ```kusto isfinite(number) ``` ```kusto isfinite(25.563663) == true ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20isfinite_value%20%3D%20isfinite%2825.563663%29%22%7D) # Pair functions Learn how to use and combine different pair functions in APL ## Pair functions | **Function Name** | **Description** | | ---------------------------- | ------------------------------------ | | [pair()](#pair) | Creates a pair from a key and value. | | [parse\_pair()](#parse-pair) | Parses a string to form a pair. | Each argument is marked as `required` or `optional`: * If an argument is marked `required`, you must pass it for the function to work. * If an argument is marked `optional`, the function works without it. ## pair() Creates a pair from a key and value. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | --------- | -------- | ------------------------ | ----------------------------------------------- | | Key | string | Required | String for the key in the pair | | Value | string | Required | String for the value in the pair | | Separator | string | Optional (Default: ":") | Separator between the key and value in the pair | ### Returns Returns a pair with the key **Key** and the value **Value** with the separator **Separator**.
### Examples ```kusto pair("key", "value", ".") ``` ```kusto ['logs'] | where tags contains pair("host", "mymachine") ``` ## parse\_pair() Parses a string to form a pair. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | --------- | -------- | ------------------------ | ----------------------------------------------- | | Pair | string | Required | String that contains a key-value pair to pull out | | Separator | string | Optional (Default: ":") | Separator between the key and value in the pair | ### Returns Returns a pair with the key and value separated by the separator **Separator** in **Pair**. If no separator is found, a pair with the value of **Pair** and an empty key is returned. ### Examples ```kusto parse_pair("key.value", ".") ``` ```kusto ['logs'] | where parse_pair(tags[0]).key == "host" ``` # Rounding functions Learn how to use and combine different rounding functions in APL ## Rounding functions | **Function Name** | **Description** | | ------------------------ | ------------------------------------------------------------------------------------------------------------------------- | | [ceiling()](#ceiling) | Calculates the smallest integer greater than, or equal to, the specified numeric expression. | | [bin()](#bin) | Rounds values down to an integer multiple of a given bin size. | | [bin\_auto()](#bin-auto) | Rounds values down to a fixed-size "bin", with control over the bin size and starting point provided by a query property. | | [floor()](#floor) | Calculates the largest integer less than, or equal to, the specified numeric expression. | ## ceiling() Calculates the smallest integer greater than, or equal to, the specified numeric expression. ### Arguments * x: A real number. ### Returns * The smallest integer greater than, or equal to, the specified numeric expression.
### Examples ```kusto ceiling(x) ``` ```kusto ceiling(25.43) == 26 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20smallest_integer%20%3D%20ceiling%2825.43%29%22%7D) ## bin() Rounds values down to an integer multiple of a given bin size. The `bin()` function is used with the [summarize operator](/apl/tabular-operators/summarize-operator). If your values are scattered, `bin()` groups them into a smaller set of specific values. ### Arguments * value: A date, number, or [timespan](/apl/data-types/scalar-data-types#timespan-literals) * roundTo: The "bin size", a number or timespan that divides value. ### Returns The nearest multiple of roundTo below value. ### Examples ```kusto bin(value,roundTo) ``` ```kusto bin(25.73, 4) == 24 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20round_value%20%3D%20bin%2825.73%2C%204%29%22%7D) ## bin\_auto() Rounds values down to a fixed-size "bin". The `bin_auto()` function can only be used with the [summarize operator](/apl/tabular-operators/summarize-operator) by statement with the `_time` column. ### Arguments * Expression: A scalar expression of a numeric type indicating the value to round. ### Returns The nearest multiple of `query_bin_auto_at` below Expression, shifted so that `query_bin_auto_at` will be translated into itself. ### Example ```kusto summarize count() by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%7D) ## floor() Calculates the largest integer less than, or equal to, the specified numeric expression. ### Arguments * number: A real number. ### Returns * The largest integer less than, or equal to, the specified numeric expression.
### Examples ```kusto floor(number) ``` ```kusto floor(25.73) == 25 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20largest_integer_number%20%3D%20floor%2825.73%29%22%7D) # SQL functions Learn how to use SQL functions in APL ## SQL functions | **Function Name** | **Description** | | ---------------------------- | ------------------------------------------------------------------------------------------------------------------ | | [parse\_sql()](#parse-sql) | Interprets and analyzes SQL queries, making it easier to extract and understand SQL statements within datasets. | | [format\_sql()](#format-sql) | Converts the data model produced by `parse_sql()` back into a SQL statement for validation or formatting purposes. | ## parse\_sql() Analyzes an SQL statement and constructs a data model, enabling insights into the SQL content within a dataset. ### Limitations * It is mainly used for simple SQL queries. SQL statements like stored procedures, window functions, common table expressions (CTEs), recursive queries, advanced statistical functions, and special joins are not supported. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------------- | -------- | ------------------------ | ----------------------------- | | sql\_statement | string | Required | The SQL statement to analyze. | ### Returns A dictionary representing the structured data model of the provided SQL statement. This model includes maps or slices that detail the various components of the SQL statement, such as tables, fields, conditions, etc. ### Examples ### Basic data retrieval The SQL statement **`SELECT * FROM db`** retrieves all columns and rows from the table named **`db`**.
```kusto hn | project parse_sql("select * from db") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\('select%20*%20from%20db'\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### WHERE Clause This example parses a **`SELECT`** statement with a **`WHERE`** clause, filtering **`customers`** by **`subscription_status`**. ```kusto hn | project parse_sql("SELECT id, email FROM customers WHERE subscription_status = 'active'") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20id%2C%20email%20FROM%20customers%20WHERE%20subscription_status%20%3D%20'active'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### JOIN operation This example shows parsing an SQL statement that performs a **`JOIN`** operation between **`orders`** and **`customers`** tables to match orders with customer names. ```kusto hn | project parse_sql("SELECT orders.id, customers.name FROM orders JOIN customers ON orders.customer_id = customers.id") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20orders.id%2C%20customers.name%20FROM%20orders%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### GROUP BY Clause In this example, the **`parse_sql()`** function is used to parse an SQL statement that aggregates order counts by **`product_id`** using the **`GROUP BY`** clause. 
```kusto hn | project parse_sql("SELECT product_id, COUNT(*) as order_count FROM orders GROUP BY product_id") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20order_count%20FROM%20orders%20GROUP%20BY%20product_id%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Nested Queries This example demonstrates parsing a nested SQL query, where the inner query selects **`user_id`** from **`orders`** based on **`purchase_date`**, and the outer query selects names from **`users`** based on those IDs. ```kusto hn | project parse_sql("SELECT name FROM users WHERE id IN (SELECT user_id FROM orders WHERE purchase_date > '2022-01-01')") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%20FROM%20users%20WHERE%20id%20IN%20\(SELECT%20user_id%20FROM%20orders%20WHERE%20purchase_date%20%3E%20'2022-01-01'\)%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### ORDER BY Clause Here, the example shows how to parse an SQL statement that orders **`users`** by **`registration_date`** in descending order. ```kusto hn | project parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Sorting users by registration date This example demonstrates parsing an SQL statement that retrieves the **`name`** and **`registration_date`** of users from the **`users`** table, and orders the results by **`registration_date`** in descending order, showing how to sort data based on a specific column.
```kusto hn | extend parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Querying with index hints to use a specific index This query hints to MySQL to use a specific index named **`index_name`** when executing the SELECT statement on the **`users`** table. ```kusto hn | project parse_sql("SELECT * FROM users USE INDEX (index_name) WHERE user_id = 101") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20*%20FROM%20users%20USE%20INDEX%20\(index_name\)%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Inserting data with ON DUPLICATE KEY UPDATE This example showcases MySQL’s ability to handle duplicate key entries elegantly by updating the existing record if the insert operation encounters a duplicate key. ```kusto hn | project parse_sql("INSERT INTO settings (user_id, setting, value) VALUES (1, 'theme', 'dark') ON DUPLICATE KEY UPDATE value='dark'") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22INSERT%20INTO%20settings%20\(user_id%2C%20setting%2C%20value\)%20VALUES%20\(1%2C%20'theme'%2C%20'dark'\)%20ON%20DUPLICATE%20KEY%20UPDATE%20value%3D'dark'%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Using JSON functions This query demonstrates MySQL’s support for JSON data types and functions, extracting the age from a JSON object stored in the **`user_info`** column.
```kusto hn | project parse_sql("SELECT JSON_EXTRACT(user_info, '$.age') AS age FROM users WHERE user_id = 101") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20project%20parse_sql\(%5C%22SELECT%20JSON_EXTRACT\(user_info%2C%20%27%24.age%27\)%20AS%20age%20FROM%20users%20WHERE%20user_id%20%3D%20101%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ## format\_sql() Transforms the data model output by `parse_sql()` back into a SQL statement. Useful for testing and ensuring that the parsing accurately retains the original structure and intent of the SQL statement. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ------------------ | ---------- | ------------------------ | -------------------------------------------------- | | parsed\_sql\_model | dictionary | Required | The structured data model output by `parse_sql()`. | ### Returns A string that represents the SQL statement reconstructed from the provided data model. ### Examples ### Reformatting a basic SELECT Query After parsing a SQL statement, you can reformat it back to its original or a standard SQL format. ```kusto hn | extend parsed = parse_sql("SELECT * FROM db") | project formatted_sql = format_sql(parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20*%20FROM%20db%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Formatting SQL Queries This example first parses a SQL statement to analyze its structure and then formats the parsed structure back into a SQL string using `format_sql`. 
```kusto hn | extend parsed = parse_sql("SELECT name, registration_date FROM users ORDER BY registration_date DESC") | project formatted_sql = format_sql(parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20name%2C%20registration_date%20FROM%20users%20ORDER%20BY%20registration_date%20DESC%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Formatting a simple SELECT Statement This example demonstrates parsing a straightforward `SELECT` statement that retrieves user IDs and usernames from a `user_accounts` table where the `active` status is `1`. After parsing, it uses `format_sql` to convert the parsed data back into a SQL string. ```kusto hn | extend parsed = parse_sql("SELECT user_id, username FROM user_accounts WHERE active = 1") | project formatted_sql = format_sql(parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20user_id%2C%20username%20FROM%20user_accounts%20WHERE%20active%20%3D%201%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Reformatting a complex query with JOINs In this example, a more complex SQL statement involving an `INNER JOIN` between `orders` and `customers` tables is parsed. The query selects orders and customer names for orders placed after January 1, 2023. `format_sql` is then used to reformat the parsed structure into a SQL string.
```kusto hn | extend parsed = parse_sql("SELECT orders.order_id, customers.name FROM orders INNER JOIN customers ON orders.customer_id = customers.id WHERE orders.order_date > '2023-01-01'") | project formatted_sql = format_sql(parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20orders.order_id%2C%20customers.name%20FROM%20orders%20INNER%20JOIN%20customers%20ON%20orders.customer_id%20%3D%20customers.id%20WHERE%20orders.order_date%20%3E%20'2023-01-01'%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) ### Using format\_sql with aggregation functions This example focuses on parsing an SQL statement that performs aggregation. It selects product IDs and counts of total sales from a `sales` table, grouping by `product_id` and having a condition on the count. After parsing, `format_sql` reformats the output into an SQL string. 
```kusto hn | extend parsed = parse_sql("SELECT product_id, COUNT(*) as total_sales FROM sales GROUP BY product_id HAVING COUNT(*) > 100") | project formatted_sql = format_sql(parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22hn%20%7C%20extend%20parsed%20%3D%20parse_sql\(%5C%22SELECT%20product_id%2C%20COUNT\(*\)%20as%20total_sales%20FROM%20sales%20GROUP%20BY%20product_id%20HAVING%20COUNT\(*\)%20%3E%20100%5C%22\)%20%7C%20project%20formatted_sql%20%3D%20format_sql\(parsed\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D) # String functions Learn how to use and combine different string functions in APL ## String functions | **Function Name** | **Description** | | ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | | [base64\_encode\_tostring()](#base64-encode-tostring) | Encodes a string as a base64 string. | | [base64\_decode\_tostring()](#base64-decode-tostring) | Decodes a base64 string to a UTF-8 string. | | [countof()](#countof) | Counts occurrences of a substring in a string. | | [countof\_regex()](#countof-regex) | Counts occurrences of a regular expression in a string. Matches don’t overlap. | | [coalesce()](#coalesce) | Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression. | | [extract()](#extract) | Get a match for a regular expression from a text string. | | [extract\_all()](#extract-all) | Get all matches for a regular expression from a text string. | | [format\_bytes()](#format-bytes) | Formats a number of bytes as a string including byte units. | | [format\_url()](#format-url) | Formats an input string into a valid URL by adding the necessary protocol and escaping illegal URL characters. | | [indexof()](#indexof) | Reports the zero-based index of the first occurrence of a specified string within the input string.
| | [isempty()](#isempty) | Returns true if the argument is an empty string or is null. | | [isnotempty()](#isnotempty) | Returns true if the argument isn’t an empty string or null. | | [isnotnull()](#isnotnull) | Returns true if the argument is not null. | | [isnull()](#isnull) | Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value. | | [parse\_bytes()](#parse-bytes) | Parses a string including byte size units and returns the number of bytes. | | [parse\_json()](#parse-json) | Interprets a string as a JSON value and returns the value as dynamic. | | [parse\_url()](#parse-url) | Parses an absolute URL string and returns a dynamic object containing all parts of the URL. | | [parse\_urlquery()](#parse-urlquery) | Parses a URL query string and returns a dynamic object containing the query parameters. | | [replace()](#replace) | Replaces all regex matches with another string. | | [replace\_regex()](#replace-regex) | Replaces all regex matches with another string. | | [replace\_string()](#replace-string) | Replaces all string matches with another string. | | [reverse()](#reverse) | Reverses the input string. | | [split()](#split) | Splits a given string according to a given delimiter and returns a string array with the contained substrings. | | [strcat()](#strcat) | Concatenates between 1 and 64 arguments. | | [strcat\_delim()](#strcat-delim) | Concatenates between 2 and 64 arguments, with a delimiter provided as the first argument. | | [strcmp()](#strcmp) | Compares two strings. | | [strlen()](#strlen) | Returns the length, in characters, of the input string. | | [strrep()](#strrep) | Repeats a given string a provided number of times (default = 1). | | [substring()](#substring) | Extracts a substring from a source string starting from some index to the end of the string. | | [toupper()](#toupper) | Converts a string to upper case. | | [tolower()](#tolower) | Converts a string to lower case.
| | [trim()](#trim) | Removes all leading and trailing matches of the specified cutset. | | [trim\_regex()](#trim-regex) | Removes all leading and trailing matches of the specified regular expression. | | [trim\_end()](#trim-end) | Removes trailing match of the specified cutset. | | [trim\_end\_regex()](#trim-end-regex) | Removes trailing match of the specified regular expression. | | [trim\_start()](#trim-start) | Removes leading match of the specified cutset. | | [trim\_start\_regex()](#trim-start-regex) | Removes leading match of the specified regular expression. | | [url\_decode()](#url-decode) | Converts an encoded URL into a regular URL representation. | | [url\_encode()](#url-encode) | Converts characters of the input URL into a format that can be transmitted over the Internet. | | [gettype()](#gettype) | Returns the runtime type of its single argument. | | [parse\_csv()](#parse-csv) | Splits a given string representing a single record of comma-separated values and returns a string array with these values. | Each argument is marked as `required` or `optional`: * `required` means you must pass the argument for the function to work. * `optional` means the function works without the argument. ## base64\_encode\_tostring() Encodes a string as a base64 string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | -------------------------------------------------------------- | | String | string | Required | Input string or string field to be encoded as a base64 string. | ### Returns Returns the string encoded as a base64 string.
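For reference, the same encoding can be reproduced outside APL — a minimal Python sketch of the behavior (Python, not APL syntax), assuming a UTF-8 input string:

```python
import base64

def base64_encode_tostring(value: str) -> str:
    # Encode the UTF-8 bytes of the input string, then return the base64 text.
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print(base64_encode_tostring("This is an encoded message."))
# VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu
```

Decoding with `base64_decode_tostring()` reverses this transformation exactly.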
* To decode base64 strings to UTF-8 strings, see [base64\_decode\_tostring()](#base64-decode-tostring) ### Examples ```kusto base64_encode_tostring(string) ``` ```kusto ['sample-http-logs'] | project encoded_base64_string = base64_encode_tostring(content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20encoded_base64_string%20%3D%20base64_encode_tostring\(content_type\)%22%7D) ## base64\_decode\_tostring() Decodes a base64 string to a UTF-8 string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | -------------------------------------------------------------------------- | | String | string | Required | Input string or string field to be decoded from base64 to a UTF-8 string. | ### Returns Returns the UTF-8 string decoded from the base64 string. * To encode a string to a base64 string, see [base64\_encode\_tostring()](#base64-encode-tostring) ### Examples ```kusto base64_decode_tostring(string) ``` ```kusto ['sample-http-logs'] | project decoded_base64_string = base64_decode_tostring("VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20decoded_base64_string%20%3D%20base64_decode_tostring\(%5C%22VGhpcyBpcyBhbiBlbmNvZGVkIG1lc3NhZ2Uu%5C%22\)%22%7D) ## countof() Counts occurrences of a substring in a string. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | ----------- | ---------- | ------------------------------------------ | ------------------------ | | text source | **string** | The source string to count occurrences in. | Required | | search | **string** | The plain string to match inside source. | Required | ### Returns The number of times that the search string can be matched.
### Examples ```kusto countof(search, text) ``` ```kusto ['sample-http-logs'] | project count = countof("con", "content_type") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof\(%5C%22con%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D) ## countof\_regex() Counts occurrences of a regular expression in a string. Matches don’t overlap. ### Arguments * text source: A string. * regex search: A regular expression to match inside your text source. ### Returns The number of times that the regular expression can be matched in the source string. Matches don’t overlap. ### Examples ```kusto countof_regex(regex, text) ``` ```kusto ['sample-http-logs'] | project count = countof_regex("c.n", "content_type") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20count%20%3D%20countof_regex\(%5C%22c.n%5C%22%2C%20%5C%22content_type%5C%22\)%22%7D) ## coalesce() Evaluates a list of expressions and returns the first non-null (or non-empty for string) expression. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | --------- | ---------- | ---------------------------------------- | ------------------------ | | arguments | **scalar** | The expression or field to be evaluated. | Required | ### Returns The value of the first argument whose value isn’t null (or not-empty for string expressions).
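The selection rule — return the first argument that is neither null nor an empty string — can be sketched in Python (a hypothetical analogue, not the APL implementation):

```python
def coalesce(*args):
    # Return the first argument that is neither None nor an empty string.
    for value in args:
        if value is not None and value != "":
            return value
    return None

print(coalesce(None, "", "text/html", "GET"))  # text/html
```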
### Examples ```kusto ['sample-http-logs'] | project coalesced = coalesce(content_type, ['geo.city'], method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20coalesced%20%3D%20coalesce\(content_type%2C%20%5B%27geo.city%27%5D%2C%20method\)%22%7D) ```kusto ['sample-http-logs'] | project req_duration_ms, server_datacenter, predicate = coalesce(content_type, method, status) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20req_duration_ms%2C%20server_datacenter%2C%20predicate%20%3D%20coalesce\(content_type%2C%20method%2C%20status\)%22%7D) ## extract() Gets a match for a regular expression from a text string. ### Arguments | **name** | **type** | **description** | | ------------ | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | regex | **expression** | A regular expression. | | captureGroup | **int** | A positive `int` constant indicating the capture group to extract. 0 stands for the entire match, 1 for the value matched by the first parenthesis in the regular expression, 2 or more for subsequent parentheses. | | source | **string** | A string to search | ### Returns If regex finds a match in source: the substring matched against the indicated capture group captureGroup, optionally converted to typeLiteral.
If there’s no match, or if the type conversion fails: `-1` or a string error ### Examples ```kusto extract(regex, captureGroup, source) ``` ```kusto ['sample-http-logs'] | project extract_sub = extract("^.{2,2}(.{4,4})", 1, content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22%5E.%7B2%2C2%7D\(.%7B4%2C4%7D\)%5C%22%2C%201%2C%20content_type\)%22%7D) ```kusto extract("x=([0-9.]+)", 1, "axiom x=65.6|po") == "65.6" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_sub%20%3D%20%20extract\(%5C%22x%3D\(%5B0-9.%5D%2B\)%5C%22%2C%201%2C%20%5C%22axiom%20x%3D65.6%7Cpo%5C%22\)%20%3D%3D%20%5C%2265.6%5C%22%22%7D) ## extract\_all() Gets all matches for a regular expression from a text string. Optionally retrieves a subset of matching groups. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | ------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | | regex | **expression** | A regular expression containing between one and 16 capture groups. Examples of a valid regex: @"(\d+)". Examples of an invalid regex: @"\d+" | Required | | captureGroups | **array** | A dynamic array constant that indicates the capture group to extract. Valid values are from 1 to the number of capturing groups in the regular expression. | Optional | | source | **string** | A string to search | Required | ### Returns * If regex finds a match in source: Returns dynamic array including all matches against the indicated capture groups captureGroups, or all capturing groups in the regex. * If number of captureGroups is 1: The returned array has a single dimension of matched values.
* If number of captureGroups is more than 1: The returned array is a two-dimensional collection of multi-value matches per captureGroups selection, or all capture groups present in the regex if captureGroups is omitted. * If there’s no match: `-1` ### Examples ```kusto extract_all(regex, [captureGroups,] source) ``` ```kusto ['sample-http-logs'] | project extract_match = extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20extract_match%20%3D%20extract_all%28%40%5C%22%28%5C%5Cw%29%28%5C%5Cw%2B%29%28%5C%5Cw%29%5C%22%2C%20dynamic%28%5B1%2C3%5D%29%2C%20content_type%29%22%2C%20%22queryOptions%22%3A%20%7B%22quickRange%22%3A%20%2290d%22%7D%7D) ```kusto extract_all(@"(\w)(\w+)(\w)", dynamic([1,3]), content_type) == [["t", "t"],["c","v"]] ``` ## format\_bytes() Formats a number as a string representing data size in bytes. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | --------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | | value | **number** | a number to be formatted as data size in bytes | Required | | precision | **number** | Number of digits the value will be rounded to. (default value is zero) | Optional | | units | **string** | Units of the target data size the string formatting will use (base 2 suffixes: `Bytes`, `KiB`, `KB`, `MiB`, `MB`, `GiB`, `GB`, `TiB`, `TB`, `PiB`, `EiB`, `ZiB`, `YiB`; base 10 suffixes: `kB` `MB` `GB` `TB` `PB` `EB` `ZB` `YB`). If the parameter is empty the units will be auto-selected based on input value. 
| Optional | | base | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type. (default value is 2) | Optional | ### Returns * A string representing the formatted data size ### Examples ```kusto format_bytes(4000, 13) == "3.9062500000000 KB" ``` ```kusto format_bytes(value [, precision [, units [, base]]]) format_bytes(1024) == "1 KB" format_bytes(8000000, 2, "MB", 10) == "8.00 MB" ``` ```kusto ['github-issues-event'] | project formatted_bytes = format_bytes(4783549035, 2, "GB", 10) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20formatted_bytes%20%3D%20format_bytes\(4783549035%2C%202%2C%20%5C%22GB%5C%22%2C%2010\)%22%7D) ## format\_url() Formats an input string into a valid URL. This function returns a string that is a properly formatted URL. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | -------- | ----------- | ------------------------------------------ | ------------------------ | | url | **dynamic** | string input you want to format into a URL | Required | ### Returns * A string that represents a properly formatted URL.
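To build intuition for what `format_url` assembles, here is a Python sketch using the standard library, covering only the scheme, host, port, path, and fragment keys (an analogy under those assumptions, not the APL implementation):

```python
from urllib.parse import urlunsplit

def format_url(parts: dict) -> str:
    # Assemble "scheme://host[:port]/path#fragment" from a dictionary of URL parts.
    netloc = parts["host"]
    if "port" in parts:
        netloc = f"{netloc}:{parts['port']}"
    return urlunsplit((parts["scheme"], netloc, parts.get("path", ""), "", parts.get("fragment", "")))

print(format_url({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom"}))
# https://github.com/axiomhq/next-axiom
```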
### Examples ```kusto ['sample-http-logs'] | project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom"})) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%7D%29%29%22%7D) ```kusto ['sample-http-logs'] | project formatted_url = format_url(dynamic({"scheme": "https", "host": "github.com", "path": "/axiomhq/next-axiom", "port": 443, "fragment": "axiom","user": "axiom", "password": "apl"})) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20formatted_url%20%3D%20format_url%28dynamic%28%7B%5C%22scheme%5C%22%3A%20%5C%22https%5C%22%2C%20%5C%22host%5C%22%3A%20%5C%22github.com%5C%22%2C%20%5C%22path%5C%22%3A%20%5C%22%2Faxiomhq%2Fnext-axiom%5C%22%2C%20%5C%22port%5C%22%3A%20443%2C%20%5C%22fragment%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22user%5C%22%3A%20%5C%22axiom%5C%22%2C%20%5C%22password%5C%22%3A%20%5C%22apl%5C%22%7D%29%29%22%7D) * These are all the supported keys when using the `format_url` function: scheme, host, port, fragment, user, password, query. ## indexof() Reports the zero-based index of the first occurrence of a specified string within the input string. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | ------------ | -------------- | ------------------------------------------------------------------------------- | ------------------------ | | source | **string** | Input string | Required | | lookup | **string** | String to look up | Required | | start\_index | **number** | Search start position. | Optional | | length | **number** | Number of character positions to examine. A value of -1 means unlimited length.
| Optional | | occurrence | **number** | The number of the occurrence. Default 1. | Optional | ### Returns * Zero-based index position of lookup. * Returns -1 if the string isn’t found in the input. ### Examples ```kusto indexof(body, ['id'], 2, 1, number) == -1 ``` ```kusto indexof(source, lookup [, start_index [, length [, occurrence]]]) ``` ```kusto ['github-issues-event'] | project occurrence = indexof( body, ['id'], 23, 5, number ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20occurrence%20%3D%20indexof%28%20body%2C%20%5B%27id%27%5D%2C%2023%2C%205%2C%20number%20%29%22%7D) ## isempty() Returns `true` if the argument is an empty string or is null. ### Returns Indicates whether the argument is an empty string or is null. ### Examples ```kusto isempty("") == true ``` ```kusto isempty([value]) ``` ```kusto ['github-issues-event'] | project empty = isempty(num_comments) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20empty%20%3D%20isempty%28num_comments%29%22%7D) ## isnotempty() Returns `true` if the argument isn’t an empty string, and it isn’t null. ### Examples ```kusto isnotempty("") == false ``` ```kusto isnotempty([value]) notempty([value]) -- alias of isnotempty ``` ```kusto ['github-issues-event'] | project not_empty = isnotempty(num_comments) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_empty%20%3D%20isnotempty%28num_comments%29%22%7D) ## isnotnull() Returns `true` if the argument is not null.
### Examples ```kusto isnotnull( num_comments ) == true ``` ```kusto isnotnull([value]) notnull([value]) - alias for `isnotnull` ``` ```kusto ['github-issues-event'] | project not_null = isnotnull(num_comments) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20not_null%20%3D%20isnotnull%28num_comments%29%22%7D) ## isnull() Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value. ### Returns True or false, depending on whether or not the value is null. ### Examples ```kusto isnull(Expr) ``` ```kusto ['github-issues-event'] | project is_null = isnull(creator) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20is_null%20%3D%20isnull%28creator%29%22%7D) ## parse\_bytes() Parses a string including byte size units and returns the number of bytes ### Arguments | **name** | **type** | **description** | **Required or Optional** | | ------------- | ---------- | -------------------------------------------------------------------------------------------------------------------- | ------------------------ | | bytes\_string | **string** | A formatted string defining the number of bytes | Required | | base | **number** | Either 2 or 10 to specify whether the prefix is calculated using 1000s or 1024s for each type.
(default value is 2) | Optional | ### Returns * The number of bytes or zero if unable to parse ### Examples ```kusto parse_bytes(bytes_string [, base]) parse_bytes("1 KB") == 1024 parse_bytes("1 KB", 10) == 1000 parse_bytes("128 Bytes") == 128 parse_bytes("bad data") == 0 ``` ```kusto ['github-issues-event'] | extend parsed_bytes = parse_bytes("300 KB", 10) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D) ```kusto ['github-issues-event'] | project parsed_bytes = parse_bytes("300 KB", 10) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed_bytes%20%3D%20%20parse_bytes%28%5C%22300%20KB%5C%22%2C%2010%29%22%7D) ## parse\_json() Interprets a string as a JSON value and returns the value as dynamic. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | --------- | -------- | ------------------------ | -------------------------------------------------------------------- | | Json Expr | string | Required | Expression that will be used, also represents a JSON-formatted value | ### Returns An object of type json that is determined by the value of json: * If json is of type string, and is a properly formatted JSON string, then the string is parsed, and the value produced is returned. * If json is of type string, but it isn’t a properly formatted JSON string, then the returned value is an object of type dynamic that holds the original string value.
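The parse-or-fall-back behavior described above can be sketched in Python (a hypothetical analogue, not the APL implementation):

```python
import json

def parse_json(text):
    # Return the parsed value for a well-formed JSON string;
    # otherwise fall back to the original input unchanged.
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return text

print(parse_json('{"name":"vercel", "statuscode":200}'))  # {'name': 'vercel', 'statuscode': 200}
print(parse_json("not json"))                             # not json
```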
### Examples ```kusto parse_json(json) ``` ```kusto ['vercel'] | extend parsed = parse_json('{"name":"vercel", "statuscode":200, "region": { "route": "usage streams", "number": 9 }}') ``` ```kusto ['github-issues-event'] | extend parsed = parse_json(creator) | where isnotnull( parsed) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20parsed%20%3D%20parse_json%28creator%29%5Cn%7C%20where%20isnotnull%28parsed%29%22%7D) ## parse\_url() Parses an absolute URL `string` and returns a dynamic object containing the URL parts. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | -------------------------------------------------------- | | URL | string | Required | A string representing a URL or the query part of a URL. | ### Returns An object of type dynamic that includes the URL components: Scheme, Host, Port, Path, Username, Password, Query Parameters, Fragment.
### Examples ```kusto parse_url(url) ``` ```kusto ['sample-http-logs'] | extend ParsedURL = parse_url("https://www.example.com/path/to/page?query=example") | project Scheme = ParsedURL["scheme"], Host = ParsedURL["host"], Path = ParsedURL["path"], Query = ParsedURL["query"] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURL%20%3D%20parse_url%28%5C%22https%3A%2F%2Fwww.example.com%2Fpath%2Fto%2Fpage%3Fquery%3Dexample%5C%22%29%5Cn%7C%20project%20%5Cn%20%20Scheme%20%3D%20ParsedURL%5B%5C%22scheme%5C%22%5D%2C%5Cn%20%20Host%20%3D%20ParsedURL%5B%5C%22host%5C%22%5D%2C%5Cn%20%20Path%20%3D%20ParsedURL%5B%5C%22path%5C%22%5D%2C%5Cn%20%20Query%20%3D%20ParsedURL%5B%5C%22query%5C%22%5D%22%7D) * Result ```json { "Host": "www.example.com", "Path": "/path/to/page", "Query": { "query": "example" }, "Scheme": "https" } ``` ## parse\_urlquery() Parses a URL query string and returns a `dynamic` object containing the query parameters. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ---------------------------------- | | Query | string | Required | A string representing a URL query. | ### Returns An object of type dynamic that includes the query parameters.
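The same decomposition exists in Python's standard library, which helps illustrate the shape of the result (an analogy only; in APL the parameters are nested under a `Result` key):

```python
from urllib.parse import parse_qs

def parse_urlquery(query: str) -> dict:
    # Map each query parameter name to its first value.
    return {key: values[0] for key, values in parse_qs(query).items()}

print(parse_urlquery("a1=b1&a2=b2&a3=b3"))  # {'a1': 'b1', 'a2': 'b2', 'a3': 'b3'}
```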
### Examples ```kusto parse_urlquery("a1=b1&a2=b2&a3=b3") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20ParsedURLQUERY%20%3D%20parse_urlquery%28%5C%22a1%3Db1%26a2%3Db2%26a3%3Db3%5C%22%29%22%7D) * Result ```json { "Result": { "a3": "b3", "a2": "b2", "a1": "b1" } } ``` ```kusto parse_urlquery(query) ``` ```kusto ['github-issues-event'] | project parsed = parse_urlquery("https://play.axiom.co/axiom-play-qf1k/explorer?qid=fUKgiQgLjKE-rd7wjy") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parsed%20%3D%20parse_urlquery%28%5C%22https%3A%2F%2Fplay.axiom.co%2Faxiom-play-qf1k%2Fexplorer%3Fqid%3DfUKgiQgLjKE-rd7wjy%5C%22%29%22%7D) ## replace() Replaces all regex matches with another string. ### Arguments * regex: The regular expression to search source. It can contain capture groups in parentheses. * rewrite: The replacement regex for any match made by *regex*. Use $0 to refer to the whole match, $1 for the first capture group, $2 and so on for subsequent capture groups. * source: A string. ### Returns * source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap. ### Examples ```kusto replace(regex, rewrite, source) ``` ```kusto ['sample-http-logs'] | project content_type, Comment = replace("[html]", "[censored]", method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20content_type%2C%20Comment%20%3D%20replace%28%5C%22%5Bhtml%5D%5C%22%2C%20%5C%22%5Bcensored%5D%5C%22%2C%20method%29%22%7D) ## replace\_regex() Replaces all regex matches with another string. ### Arguments * regex: The regular expression to search text. * rewrite: The replacement regex for any match made by *regex*. * text: A string.
### Returns source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap. ### Examples ```kusto replace_regex(@'^logging', 'axiom', 'logging-data') ``` * Result ```json { "replaced": "axiom-data" } ``` ```kusto replace_regex(regex, rewrite, text) ``` ```kusto ['github-issues-event'] | extend replaced = replace_regex(@'^logging', 'axiom', 'logging-data') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20replaced_regex%20%3D%20replace_regex%28%40'%5Elogging'%2C%20'axiom'%2C%20'logging-data'%29%22%7D) ### Backreferences Backreferences match the same text as previously matched by a capturing group. With backreferences, you can identify a repeated character or substring within a string. * Backreferences in APL are implemented using the `$` sign. #### Examples ```kusto ['github-issues-event'] | project backreferences = replace_regex(@'observability=(.+)', 'axiom=$1', creator) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20project%20backreferences%20%3D%20replace_regex\(%40'observability%3D\(.%2B\)'%2C%20'axiom%3D%241'%2C%20creator\)%22%7D) ## replace\_string() Replaces all string matches with another string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------------------------------------------------------------ | | lookup | string | Required | A string which Axiom matches in `text` and replaces with `rewrite`. | | rewrite | string | Required | A string with which Axiom replaces parts of `text` that match `lookup`. | | text | string | Required | A string where Axiom replaces parts matching `lookup` with `rewrite`. | ### Returns `text` after replacing all matches of `lookup` with evaluations of `rewrite`. Matches don’t overlap.
### Examples ```kusto replace_string("github", "axiom", "The project is hosted on github") ``` * Result ```json { "replaced_string": "The project is hosted on axiom" } ``` ```kusto replace_string(lookup, rewrite, text) ``` ```kusto ['sample-http-logs'] | extend replaced_string = replace_string("github", "axiom", "The project is hosted on github") | project replaced_string ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%27github%27%2C%20%27axiom%27%2C%20%27The%20project%20is%20hosted%20on%20github%27%29%5Cn%7C%20project%20replaced_string%22%7D) ## reverse() Reverses the input string. ### Arguments | **name** | **type** | **description** | **Required or Optional** | | -------- | -------- | ----------------- | ------------------------ | | Field | `string` | Field input value | Required | ### Returns The input value in reverse order. ### Examples ```kusto reverse(value) ``` ```kusto ['github-issues-event'] | project reversed_value = reverse("axiom") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20reversed_value%20%3D%20reverse%28'axiom'%29%22%7D) * Result ```json moixa ``` ## split() Splits a given string according to a given delimiter and returns a string array with the contained substrings. Optionally, a specific substring can be returned if it exists. ### Arguments * source: The source string that will be split according to the given delimiter. * delimiter: The delimiter (Field) that will be used in order to split the source string. ### Returns * A string array that contains the substrings of the given source string that are delimited by the given delimiter.
### Examples ```kusto split(source, delimiter) ``` ```kusto project split_str = split("axiom_observability_monitoring", "_") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22axiom_observability_monitoring%5C%22%2C%20%5C%22_%5C%22%29%22%7D) * Result ```json { "split_str": ["axiom", "observability", "monitoring"] } ``` ## strcat() Concatenates between 1 and 64 arguments. If the arguments aren’t of string type, they'll be forcibly converted to string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ------------------------------- | | Expr | string | Required | Expressions to be concatenated. | ### Returns Arguments, concatenated to a single string. ### Examples ```kusto strcat(argument1, argument2[, argumentN]) ``` ```kusto ['github-issues-event'] | project stract_con = strcat( ['milestone.creator'], number ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20%5B'milestone.creator'%5D%2C%20number%20%29%22%7D) ```kusto ['github-issues-event'] | project stract_con = strcat( 'axiom', number ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20stract_con%20%3D%20strcat%28%20'axiom'%2C%20number%20%29%22%7D) * Result ```json { "stract_con": "axiom3249" } ``` ## strcat\_delim() Concatenates between 2 and 64 arguments, with delimiter, provided as first argument. * If arguments aren’t of string type, they'll be forcibly converted to string. 
### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ------------ | -------- | ------------------------ | --------------------------------------------------- | | delimiter | string | Required | A string expression to use as the separator. | | argument1 .. | string | Required | Expressions to be concatenated. | ### Returns Arguments, concatenated to a single string with delimiter. ### Examples ```kusto strcat_delim(delimiter, argument1, argument2[, argumentN]) ``` ```kusto ['github-issues-event'] | project strcat = strcat_delim(":", actor, creator) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28'%3A'%2C%20actor%2C%20creator%29%22%7D) ```kusto project strcat = strcat_delim(":", "axiom", "monitoring") ``` * Result ```json { "strcat": "axiom:monitoring" } ``` ## strcmp() Compares two strings. The function starts comparing the first character of each string. If they are equal to each other, it continues with the following pairs until the characters differ or until the end of the shorter string is reached. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | --------------------------------------- | | string1 | string | Required | The first input string for comparison. | | string2 | string | Required | The second input string for comparison. | ### Returns Returns an integral value indicating the relationship between the strings: * When the result is 0: The contents of both strings are equal. * When the result is -1: The first character that does not match has a lower value in string1 than in string2. * When the result is 1: The first character that does not match has a higher value in string1 than in string2.
### Examples ```kusto strcmp(string1, string2) ``` ```kusto ['github-issues-event'] | extend cmp = strcmp( body, repo ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20cmp%20%3D%20strcmp%28%20body%2C%20repo%20%29%22%7D) ```kusto project cmp = strcmp( "axiom", "observability") ``` * Result ```json { "cmp": -1 } ``` ## strlen() Returns the length, in characters, of the input string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | -------- | -------- | ------------------------ | ---------------------------------------------------------- | | source | string | Required | The source string that will be measured for string length. | ### Returns Returns the length, in characters, of the input string. ### Examples ```kusto strlen(source) ``` ```kusto project str_len = strlen("axiom") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20str_len%20%3D%20strlen\(%5C%22axiom%5C%22\)%22%7D) * Result ```json { "str_len": 5 } ``` ## strrep() Repeats the given string the specified number of times. * If the first or third argument is not of string type, it is forcibly converted to string. ### Arguments | **Name** | **Type** | **Required or Optional** | **Description** | | ---------- | -------- | ------------------------ | ----------------------------------------------------- | | value | Expr | Required | Input expression | | multiplier | integer | Required | A positive integer value (from 1 to 1024) | | delimiter | string | Optional | An optional string expression (default: empty string) | ### Returns * The value repeated the specified number of times, concatenated with the delimiter. * If the multiplier is greater than the maximal allowed value (1024), the input string is repeated 1024 times.
### Examples ```kusto strrep(value,multiplier,[delimiter]) ``` ```kusto ['github-issues-event'] | extend repeat_string = strrep( repo, 5, "::" ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20repeat_string%20%3D%20strrep\(%20repo%2C%205%2C%20%5C%22%3A%3A%5C%22%20\)%22%7D) ```kusto project repeat_string = strrep( "axiom", 3, "::" ) ``` * Result ```json { "repeat_string": "axiom::axiom::axiom" } ``` ## substring() Extracts a substring from a source string starting from some index to the end of the string. ### Arguments * source: The source string that the substring will be taken from. * startingIndex: The zero-based starting character position of the requested substring. * length: A parameter that can be used to specify the requested number of characters in the substring. ### Returns A substring from the given string. The substring starts at startingIndex (zero-based) character position and continues to the end of the string or length characters if specified. ### Examples ```kusto substring(source, startingIndex [, length]) ``` ```kusto ['github-issues-event'] | extend extract_string = substring( repo, 4, 5 ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20extract_string%20%3D%20substring\(%20repo%2C%204%2C%205%20\)%22%7D) ```kusto project extract_string = substring( "axiom", 4, 5 ) ``` ```json { "extract_string": "m" } ``` ## toupper() Converts a string to upper case. ```kusto toupper("axiom") == "AXIOM" ``` ```kusto ['github-issues-event'] | project upper = toupper( body ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20upper%20%3D%20toupper\(%20body%20\)%22%7D) ## tolower() Converts a string to lower case. 
```kusto tolower("AXIOM") == "axiom" ``` ```kusto ['github-issues-event'] | project low = tolower( body ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20low%20%3D%20tolower%28body%29%22%7D) ## trim() Removes all leading and trailing matches of the specified cutset. ### Arguments * cutset: A string containing the characters to be removed. * source: A string. ### Returns source after trimming matches of the cutset found in the beginning and/or the end of source. ### Examples ```kusto trim(cutset, source) ``` ```kusto ['github-issues-event'] | extend remove_leading_matches = trim( "locked", repo) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_leading_matches%20%3D%20trim\(%5C%22locked%5C%22%2C%20repo\)%22%7D) ```kusto project remove_leading_matches = trim( "axiom", "observability") ``` * Result ```json { "remove_leading_matches": "bservability" } ``` ## trim\_regex() Removes all leading and trailing matches of the specified regular expression. ### Arguments * regex: String or regular expression to be trimmed from the beginning and/or the end of source. * source: A string. ### Returns source after trimming matches of regex found in the beginning and/or the end of source. ### Examples ```kusto trim_regex(regex, source) ``` ```kusto ['github-issues-event'] | extend remove_trailing_match_regex = trim_regex( "^github", action ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20extend%20remove_trailing_match_regex%20%3D%20trim_regex\(%5C%22%5Egithub%5C%22%2C%20action\)%22%7D) * Result ```json { "remove_trailing_match_regex": "closed" } ``` ## trim\_end() Removes trailing match of the specified cutset. ### Arguments * source: A string.
* cutset: A string containing the characters to be removed. ### Returns source after trimming matches of the cutset found in the end of source. ### Examples ```kusto trim_end(cutset, source) ``` ```kusto ['github-issues-event'] | extend remove_cutset = trim_end(@"[^\w]+", body) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20extend%20remove_cutset%20%3D%20trim_end%28%40%5C%22%5B%5E%5C%5Cw%5D%2B%5C%22%2C%20body%29%22%7D) * Result ```json { "remove_cutset": "In [`9128d50`](https://7aa98788e07\n), **down**:\n- HTTP code: 0\n- Response time: 0 ms\n" } ``` ## trim\_end\_regex() Removes trailing match of the specified regular expression. ### Arguments * regex: String or regular expression to be trimmed from the end of source. * source: A string. ### Returns source after trimming matches of regex found in the end of source. ### Examples ```kusto trim_end_regex(regex, source) ``` ```kusto ['github-issues-event'] | project remove_cutset_regex = trim_end_regex( "^github", creator ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset_regex%20%3D%20trim_end_regex\(%20%5C%22%5Egithub%5C%22%2C%20creator%20\)%22%7D) * Result ```json { "remove_cutset_regex": "axiomhq" } ``` ## trim\_start() Removes leading match of the specified cutset. ### Arguments * cutset: A string containing the characters to be removed. * source: A string. ### Returns * source after trimming match of the specified cutset found in the beginning of source.
### Examples ```kusto trim_start(cutset, source) ``` ```kusto ['github-issues-event'] | project remove_cutset = trim_start( "github", repo) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start\(%20%5C%22github%5C%22%2C%20repo\)%22%7D) * Result ```json { "remove_cutset": "axiomhq/next-axiom" } ``` ## trim\_start\_regex() Removes leading match of the specified regular expression. ### Arguments * regex: String or regular expression to be trimmed from the beginning of source. * source: A string. ### Returns source after trimming match of regex found in the beginning of source. ### Examples ```kusto trim_start_regex(regex, source) ``` ```kusto ['github-issues-event'] | project remove_cutset = trim_start_regex( "github", repo) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex\(%20%5C%22github%5C%22%2C%20repo\)%22%7D) * Result ```json { "remove_cutset": "axiomhq/next-axiom" } ``` ## url\_decode() The function converts an encoded URL into its regular URL representation. ### Arguments * encoded\_url: An encoded URL (string). ### Returns The URL (string) in a regular representation. ### Examples ```kusto url_decode(encoded_url) ``` ```kusto ['github-issues-event'] | project decoded_link = url_decode( "https://www.axiom.co/" ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20decoded_link%20%3D%20url_decode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D) * Result ```json { "decoded_link": "https://www.axiom.co/" } ``` ## url\_encode() The function converts characters of the input URL into a format that can be transmitted over the Internet. ### Arguments * url: input URL (string).
### Returns URL (string) converted into a format that can be transmitted over the Internet. ### Examples ```kusto url_encode(url) ``` ```kusto ['github-issues-event'] | project encoded_url = url_encode( "https://www.axiom.co/" ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20encoded_url%20%3D%20url_encode\(%20%5C%22https%3A%2F%2Fwww.axiom.co%2F%5C%22%20\)%22%7D) * Result ```json { "encoded_url": "https%3A%2F%2Fwww.axiom.co%2F" } ``` ## gettype() Returns the runtime type of its single argument. ### Arguments * Expression: The expression whose runtime type is returned. ### Returns A string representing the runtime type of its single argument. ### Examples | **Expression** | **Returns** | | ----------------------------------------- | -------------- | | gettype("lima") | **string** | | gettype(2222) | **int** | | gettype(5==5) | **bool** | | gettype(now()) | **datetime** | | gettype(parse\_json('67')) | **int** | | gettype(parse\_json(' "polish" ')) | **string** | | gettype(parse\_json(' \{"axiom":1234} ')) | **dictionary** | | gettype(parse\_json(' \[6, 7, 8] ')) | **array** | | gettype(456.98) | **real** | | gettype(parse\_json('')) | **null** | ## parse\_csv() Splits a given string representing a single record of comma-separated values and returns a string array with these values. ### Arguments * csv\_text: A string representing a single record of comma-separated values. ### Returns A string array that contains the split values.
### Examples ```kusto parse_csv("axiom,logging,observability") == [ "axiom", "logging", "observability" ] ``` ```kusto parse_csv("axiom, processing, language") == [ "axiom", "processing", "language" ] ``` ```kusto ['github-issues-event'] | project parse_csv("github, body, repo") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%5Cn%7C%20project%20parse_csv\(%5C%22github%2C%20body%2C%20repo%5C%22\)%22%7D) # Logical operators Learn how to use and combine different logical operators in APL. ## Logical (binary) operators The following logical operators are supported between two values of the `bool` type: **These logical operators are sometimes referred to as Boolean operators, and sometimes as binary operators. The names are all synonyms.** | **Operator name** | **Syntax** | **Meaning** | | ----------------- | ---------- | ----------------------------------------------------------------------------------------------------------------------- | | Equality | **==** | Yields `true` if both operands are non-null and equal to each other. Otherwise, `false`. | | Inequality | **!=** | Yields `true` if either one (or both) of the operands are null, or they are not equal to each other. Otherwise, `false`. | | Logical and | **and** | Yields `true` if both operands are `true`. | | Logical or | **or** | Yields `true` if one of the operands is `true`, regardless of the other operand. | # Numerical operators Learn how to use and combine numerical operators in APL. ## Numerical operators The types `int`, `long`, and `real` represent numerical types.
The following operators can be used between pairs of these types: | **Operator** | **Description** | **Example** | | ------------ | --------------------------------- | ------------------------------------------------ | | `+` | Add | `3.19 + 3.19`, `ago(10m) + 10m` | | `-` | Subtract | `0.26 - 0.23` | | `*` | Multiply | `1s * 5`, `5 * 5` | | `/` | Divide | `10m / 1s`, `4 / 2` | | `%` | Modulo | `10 % 3`, `5 % 2` | | `<` | Less | `1 < 2` | | `>` | Greater | `0.23 > 0.22`, `10min > 1sec`, `now() > ago(1d)` | | `==` | Equals | `3 == 3` | | `!=` | Not equals | `2 != 1` | | `<=` | Less or Equal | `1 <= 1`, `5 <= 6` | | `>=` | Greater or Equal | `7 >= 6` | | `in` | Equals to one of the elements | `"abc" in ("123", "345", "abc")` | | `!in` | Not equals to any of the elements | `"bca" !in ("123", "345", "abc")` | # String operators Learn how to use and combine different query operators for searching string data types. ## String operators Axiom Processing Language provides you with different query operators for searching string data types. Below is the list of string operators supported in Axiom Processing Language. **Note:** The following abbreviations are used in the table below: * RHS = right-hand side of the expression. * LHS = left-hand side of the expression. Operators with an \_cs suffix are case sensitive. When two operators do the same task, use the case-sensitive one for better performance.
For example: * instead of `=~`, use `==` * instead of `in~`, use `in` * instead of `contains`, use `contains_cs` The table below shows the list of string operators supported by Axiom processing language: | **Operator** | **Description** | **Case-Sensitive** | **Example** | | ------------------- | --------------------------------------- | ------------------ | --------------------------------------- | | **==** | Equals | Yes | `"aBc" == "aBc"` | | **!=** | Not equals | Yes | `"abc" != "ABC"` | | **=\~** | Equals | No | `"abc" =~ "ABC"` | | **!\~** | Not equals | No | `"aBc" !~ "xyz"` | | **contains** | RHS occurs as a subsequence of LHS | No | `parentSpanId` contains `Span` | | **!contains** | RHS doesn’t occur in LHS | No | `parentSpanId` !contains `abc` | | **contains\_cs** | RHS occurs as a subsequence of LHS | Yes | `parentSpanId` contains\_cs "Id" | | **!contains\_cs** | RHS doesn’t occur in LHS | Yes | `parentSpanId` !contains\_cs "Id" | | **startswith** | RHS is an initial subsequence of LHS | No | `parentSpanId` startswith `parent` | | **!startswith** | RHS isn’t an initial subsequence of LHS | No | `parentSpanId` !startswith "Id" | | **startswith\_cs** | RHS is an initial subsequence of LHS | Yes | `parentSpanId` startswith\_cs "parent" | | **!startswith\_cs** | RHS isn’t an initial subsequence of LHS | Yes | `parentSpanId` !startswith\_cs "parent" | | **endswith** | RHS is a closing subsequence of LHS | No | `parentSpanId` endswith "Id" | | **!endswith** | RHS isn’t a closing subsequence of LHS | No | `parentSpanId` !endswith `Span` | | **endswith\_cs** | RHS is a closing subsequence of LHS | Yes | `parentSpanId` endswith\_cs `Id` | | **!endswith\_cs** | RHS isn’t a closing subsequence of LHS | Yes | `parentSpanId` !endswith\_cs `Span` | | **in** | Equals to one of the elements | Yes | `abc` in ("123", "345", "abc") | | **!in** | Not equals to any of the elements | Yes | "bca" !in ("123", "345", "abc") | | **in\~** | Equals to one of the elements | No | 
"abc" in\~ ("123", "345", "ABC") | | **!in\~** | Not equals to any of the elements | No | "bca" !in\~ ("123", "345", "ABC") | | **!matches regex** | LHS doesn’t contain a match for RHS | Yes | `parentSpanId` !matches regex `g.*r` | | **matches regex** | LHS contains a match for RHS | Yes | `parentSpanId` matches regex `g.*r` | | **has** | RHS is a whole term in LHS | No | `Content Type` has `text` | | **has\_cs** | RHS is a whole term in LHS | Yes | `Content Type` has\_cs `Text` | ## Use string operators efficiently String operators are fundamental in comparing, searching, or matching strings. Understanding the performance implications of different operators can significantly optimize your queries. Below are performance tips and query examples. ## Equality and Inequality Operators * Operators: `==`, `!=`, `=~`, `!~`, `in`, `!in`, `in~`, `!in~` Query Examples: ```kusto "get" == "get" "get" != "GET" "get" =~ "GET" "get" !~ "put" "get" in ("get", "put", "delete") ``` * Use `==` or `!=` for exact match comparisons when case sensitivity is important, as they are faster. * Use `=~` or `!~` for case-insensitive comparisons, or when the exact case is unknown. * Use `in` or `!in` for checking membership within a set of values, which can be efficient for a small set of values. ## Subsequence Matching Operators * Operators: `contains`, `!contains`, `contains_cs`, `!contains_cs`, `startswith`, `!startswith`, `startswith_cs`, `!startswith_cs`, `endswith`, `!endswith`, `endswith_cs`, `!endswith_cs`. 
Query Examples: ```kusto "parentSpanId" contains "Span" // True "parentSpanId" !contains "xyz" // True "parentSpanId" startswith "parent" // True "parentSpanId" endswith "Id" // True "parentSpanId" contains_cs "Span" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID" "parentSpanId" startswith_cs "parent" // True if parentSpanId is "parentSpanId", False if parentSpanId is "ParentSpanId" or "PARENTSPANID" "parentSpanId" endswith_cs "Id" // True if parentSpanId is "parentSpanId", False if parentSpanId is "parentspanid" or "PARENTSPANID" ``` * Use case-sensitive operators (`contains_cs`, `startswith_cs`, `endswith_cs`) when the case is known, as they are faster. ## Regular Expression Matching Operators * Operators: `matches regex`, `!matches regex` ```kusto "parentSpanId" matches regex "p.*Id" // True "parentSpanId" !matches regex "x.*z" // True ``` * Avoid complex regular expressions or use string operators for simple substring, prefix, or suffix matching. ## Term Matching Operators * Operators: `has`, `has_cs` Query Examples: ```kusto "content type" has "type" // True "content type" has_cs "Type" // False ``` * Use `has` or `has_cs` for term matching which can be more efficient than regular expression matching for simple term searches. * Use `has_cs` when the case is known, as it is faster due to case-sensitive matching. ## Best Practices * Always use case-sensitive operators when the case is known, as they are faster. * Avoid complex regular expressions for simple matching tasks; use simpler string operators instead. * When matching against a set of values, ensure the set is as small as possible to improve performance. * For substring matching, prefer prefix or suffix matching over general substring matching for better performance. ## has operator The `has` operator in APL filters rows based on whether a given term or phrase appears within a string field. 
## Importance of the `has` operator: * **Precision Filtering:** Unlike the `contains` operator, which matches any substring, the `has` operator looks for exact terms, ensuring more precise results. * **Simplicity:** Provides an easy and readable way to find exact terms in a string without resorting to regex or other more complex methods. The following table compares the `has` operators using the abbreviations provided: * RHS = right-hand side of the expression * LHS = left-hand side of the expression | Operator | Description | Case-Sensitive | Example | | ------------- | ------------------------------------------------------------- | -------------- | -------------------------------------- | | has | Right-hand-side (RHS) is a whole term in left-hand-side (LHS) | No | "North America" has "america" | | has\_cs | RHS is a whole term in LHS | Yes | "North America" has\_cs "America" | | hassuffix | LHS string ends with the RHS string | No | "documentation.docx" hassuffix ".docx" | | hasprefix | LHS string starts with the RHS string | No | "Admin\_User" hasprefix "Admin" | | hassuffix\_cs | LHS string ends with the RHS string | Yes | "Document.HTML" hassuffix\_cs ".HTML" | | hasprefix\_cs | LHS string starts with the RHS string | Yes | "DOCS\_file" hasprefix\_cs "DOCS" | ## Syntax ```kusto ['Dataset'] | where Field has (Expression) ``` ## Parameters | Name | Type | Required | Description | | ---------- | ----------------- | -------- | -------------------------------------------------------------------------------------------------------------- | | Field | string | ✓ | The field filters the events. | | Expression | scalar or tabular | ✓ | An expression for which to search. The first field is used if the value of the expression has multiple fields. | ## Returns The `has` operator returns rows from the dataset where the specified term is found in the given field. If the term is present, the row is included in the result set; otherwise, it is filtered out. 
## Example ```kusto ['sample-http-logs'] | summarize event_count = count() by content_type | where content_type has "text" | where event_count > 10 | project event_count, content_type ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20event_count%20%3D%20count%28%29%20by%20content_type%5Cn%7C%20where%20content_type%20has%20%5C%22text%5C%22%5Cn%7C%20where%20event_count%20%3E%2010%5Cn%7C%20project%20event_count%2C%20content_type%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ## Output | event\_count | content\_type | | ------------ | ------------------------ | | 132,765 | text/html | | 132,621 | text/plain-charset=utf-8 | | 89,085 | text/csv | | 88,436 | text/css | # count This page explains how to use the count operator function in APL. The `count` operator in Axiom Processing Language (APL) is a simple yet powerful aggregation function that returns the total number of records in a dataset. You can use it to calculate the number of rows in a table or the results of a query. The `count` operator is useful in scenarios such as log analysis, telemetry data processing, and security monitoring, where you need to know how many events, transactions, or data entries match certain criteria. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk’s SPL, the `stats count` function is used to count the number of events in a dataset. In APL, the equivalent operation is simply `count`. You can use `count` in APL without the need for additional function wrapping. ```splunk Splunk example index=web_logs | stats count ``` ```kusto APL equivalent ['sample-http-logs'] | count ``` In ANSI SQL, you typically use `COUNT(*)` or `COUNT(field)` to count the number of rows in a table. 
In APL, the `count` operator achieves the same functionality, but it doesn’t require a field name or `*`. ```sql SQL example SELECT COUNT(*) FROM web_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | count ``` ## Usage ### Syntax ```kusto | count ``` ### Parameters The `count` operator does not take any parameters. It simply returns the number of records in the dataset or query result. ### Returns `count` returns an integer representing the total number of records in the dataset. ## Use case examples In this example, you count the total number of HTTP requests in the `['sample-http-logs']` dataset. **Query** ```kusto ['sample-http-logs'] | count ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20count%22%7D) **Output** | count | | ----- | | 15000 | This query returns the total number of HTTP requests recorded in the logs. In this example, you count the number of traces in the `['otel-demo-traces']` dataset. **Query** ```kusto ['otel-demo-traces'] | count ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20count%22%7D) **Output** | count | | ----- | | 5000 | This query returns the total number of OpenTelemetry traces in the dataset. In this example, you count the number of security events in the `['sample-http-logs']` dataset where the status code indicates an error (status codes 4xx or 5xx). **Query** ```kusto ['sample-http-logs'] | where status startswith '4' or status startswith '5' | count ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20startswith%20'4'%20or%20status%20startswith%20'5'%20%7C%20count%22%7D) **Output** | count | | ----- | | 1200 | This query returns the number of HTTP requests that resulted in an error (HTTP status code 4xx or 5xx). 
## List of related operators * [**summarize**](/apl/tabular-operators/summarize-operator): The `summarize` operator is used to aggregate data based on one or more fields, allowing you to calculate sums, averages, and other statistics, including counts. Use `summarize` when you need to group data before counting. * [**extend**](/apl/tabular-operators/extend-operator): The `extend` operator adds calculated fields to a dataset. You can use `extend` alongside `count` if you want to add additional calculated data to your query results. * [**project**](/apl/tabular-operators/project-operator): The `project` operator selects specific fields from a dataset. While `count` returns the total number of records, `project` can limit or change which fields you see. * [**where**](/apl/tabular-operators/where-operator): The `where` operator filters rows based on a condition. Use `where` with `count` to only count records that meet certain criteria. * [**take**](/apl/tabular-operators/take-operator): The `take` operator returns a specified number of records. You can use `take` to limit results before applying `count` if you're interested in counting a sample of records. # distinct This page explains how to use the distinct operator function in APL. The `distinct` operator in APL (Axiom Processing Language) returns a unique set of values from a specified field or set of fields. This operator is useful when you need to filter out duplicate entries and focus only on distinct values, such as unique user IDs, event types, or error codes within your datasets. Use the `distinct` operator in scenarios where eliminating duplicates helps you gain clearer insights from your data, like when analyzing logs, monitoring system traces, or reviewing security incidents. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. 
In Splunk’s SPL, the `dedup` command is often used to retrieve distinct values. In APL, the equivalent is the `distinct` operator, which behaves similarly by returning unique values but without necessarily ordering them. ```splunk Splunk example index=web_logs | dedup user_id ``` ```kusto APL equivalent ['sample-http-logs'] | distinct id ``` In ANSI SQL, you use `SELECT DISTINCT` to return unique rows from a table. In APL, the `distinct` operator serves a similar function but is placed after the table reference rather than in the `SELECT` clause. ```sql SQL example SELECT DISTINCT user_id FROM web_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | distinct id ``` ## Usage ### Syntax ```kusto | distinct FieldName1 [, FieldName2, ...] ``` ### Parameters * `FieldName1, FieldName2, ...`: The fields to include in the distinct operation. If you specify multiple fields, the result will include rows where the combination of values across these fields is unique. ### Returns The `distinct` operator returns a dataset with unique values from the specified fields, removing any duplicate entries. ## Use case examples In this use case, the `distinct` operator helps identify unique users who made HTTP requests in a system. **Query** ```kusto ['sample-http-logs'] | distinct id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20id%22%7D) **Output** | id | | --------- | | user\_123 | | user\_456 | | user\_789 | This query returns a list of unique user IDs that have made HTTP requests, filtering out duplicate user activity. Here, the `distinct` operator is used to identify all unique services involved in traces. 
**Query** ```kusto ['otel-demo-traces'] | distinct ['service.name'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20distinct%20%5B'service.name'%5D%22%7D) **Output** | service.name | | --------------------- | | frontend | | checkoutservice | | productcatalogservice | This query returns a distinct list of services involved in traces. In this example, you use the `distinct` operator to find unique HTTP status codes from security logs. **Query** ```kusto ['sample-http-logs'] | distinct status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20distinct%20status%22%7D) **Output** | status | | ------ | | 200 | | 404 | | 500 | This query provides a distinct list of HTTP status codes that occurred in the logs. ## List of related operators * [**count**](/apl/tabular-operators/count-operator): Returns the total number of rows. Use it to count occurrences of data rather than filtering for distinct values. * [**summarize**](/apl/tabular-operators/summarize-operator): Allows you to aggregate data and perform calculations like sums or averages while grouping by distinct values. * [**project**](/apl/tabular-operators/project-operator): Selects specific fields from the dataset. Use it when you want to control which fields are returned before applying `distinct`. # extend This page explains how to use the extend operator in APL. The `extend` operator in APL allows you to create new calculated fields in your result set based on existing data. You can define expressions or functions to compute new values for each row, making `extend` particularly useful when you need to enrich your data without altering the original dataset. You typically use `extend` when you want to add additional fields to analyze trends, compare metrics, or generate new insights from your data. 
## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, the `eval` command is used to create new fields or modify existing ones. In APL, you can achieve this using the `extend` operator. ```sql Splunk example index=myindex | eval newField = duration * 1000 ``` ```kusto APL equivalent ['sample-http-logs'] | extend newField = req_duration_ms * 1000 ``` In ANSI SQL, you typically use the `SELECT` clause with expressions to create new fields. In APL, `extend` is used instead to define these new computed fields. ```sql SQL example SELECT id, req_duration_ms, req_duration_ms * 1000 AS newField FROM logs; ``` ```kusto APL equivalent ['sample-http-logs'] | extend newField = req_duration_ms * 1000 ``` ## Usage ### Syntax ```kusto | extend NewField = Expression ``` ### Parameters * `NewField`: The name of the new field to be created. * `Expression`: The expression used to compute values for the new field. This can include mathematical operations, string manipulations, or functions. ### Returns The operator returns a copy of the original dataset with the following changes: * If a field name specified in `extend` already exists in the input, the existing field is removed and a field with the same name and the newly calculated values is appended. * If a field name specified in `extend` doesn’t exist in the input, a field with the newly calculated values is appended. ## Use case examples In log analysis, you can use `extend` to compute the duration of each request in seconds from a millisecond value.
**Query** ```kusto ['sample-http-logs'] | extend duration_sec = req_duration_ms / 1000 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20duration_sec%20%3D%20req_duration_ms%20%2F%201000%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | duration\_sec | | ------------------- | ----------------- | ---- | ------ | ----- | ------ | -------- | ----------- | ------------- | | 2024-10-17 09:00:01 | 300 | 1234 | 200 | /home | GET | London | UK | 0.3 | This query calculates the duration of HTTP requests in seconds by dividing the `req_duration_ms` field by 1000. You can use `extend` to create a new field that categorizes the service type based on the service’s name. **Query** ```kusto ['otel-demo-traces'] | extend service_type = iff(['service.name'] in ('frontend', 'frontendproxy'), 'Web', 'Backend') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20extend%20service_type%20%3D%20iff%28%5B%27service.name%27%5D%20in%20%28%27frontend%27%2C%20%27frontendproxy%27%29%2C%20%27Web%27%2C%20%27Backend%27%29%22%7D) **Output** | \_time | span\_id | trace\_id | service.name | kind | status\_code | service\_type | | ------------------- | -------- | --------- | --------------- | ------ | ------------ | ------------- | | 2024-10-17 09:00:01 | abc123 | xyz789 | frontend | client | 200 | Web | | 2024-10-17 09:00:01 | def456 | uvw123 | checkoutservice | server | 500 | Backend | This query adds a new field `service_type` that categorizes the service into either Web or Backend based on the `service.name` field. For security logs, you can use `extend` to categorize HTTP statuses as success or failure. 
**Query** ```kusto ['sample-http-logs'] | extend status_category = iff(status == '200', 'Success', 'Failure') ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status_category%20%3D%20iff%28status%20%3D%3D%20%27200%27%2C%20%27Success%27%2C%20%27Failure%27%29%22%7D) **Output** | \_time | id | status | uri | status\_category | | ------------------- | ---- | ------ | ----- | ---------------- | | 2024-10-17 09:00:01 | 1234 | 200 | /home | Success | This query creates a new field `status_category` that labels each HTTP request as either a Success or Failure based on the status code. ## List of related operators * [**project**](/apl/tabular-operators/project-operator): Use `project` to select specific fields or rename them. Unlike `extend`, it does not add new fields. * [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data. Unlike `extend`, which only adds calculated fields, `summarize` groups and aggregates rows. # extend-valid This page explains how to use the extend-valid operator in APL. The `extend-valid` operator in Axiom Processing Language (APL) allows you to extend a set of fields with new calculated values, where these calculations are based on conditions of validity for each row. It’s particularly useful when working with datasets that contain missing or invalid data, as it enables you to calculate and assign values only when certain conditions are met. This operator helps you keep your data clean by applying calculations to valid data points and leaving invalid or missing values untouched. `extend-valid` is a shorthand operator that creates a field while also performing basic checks on the validity of the source data. In many cases, additional checks are required. In those cases, use a combination of the [extend](/apl/tabular-operators/extend-operator) and [where](/apl/tabular-operators/where-operator) operators instead.
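As a sketch of that recommended combination, the following query makes the validity check explicit: `where` with `isnotempty` filters out rows with a missing `method` field before `extend` computes the new value. The query is illustrative rather than taken from the Playground examples and uses fields from the `sample-http-logs` dataset shown on this page:

```kusto
['sample-http-logs']
| where isnotempty(method)
| extend upper_method = toupper(method)
```

Compared to `extend-valid`, this form drops invalid rows entirely instead of keeping them with null values, and it lets you add any extra conditions you need to the `where` clause.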
The basic checks that Axiom performs depend on the type of the expression: * **Dictionary:** Check if the dictionary is not null and has at least one entry. * **Array:** Check if the array is not null and has at least one value. * **String:** Check if the string is not empty and has at least one character. * **Other types:** The same logic as `tobool` and a check for true. You can use `extend-valid` to perform conditional transformations on large datasets, especially in scenarios where data quality varies or when dealing with complex log or telemetry data. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, similar functionality is achieved using the `eval` command with the `if` function to handle conditional logic for valid or invalid data. In APL, `extend-valid` is more specialized for handling valid data points directly, allowing you to extend fields based on conditions. ```sql Splunk example | eval new_field = if(isnotnull(field), field + 1, null()) ``` ```kusto APL equivalent ['sample-http-logs'] | extend-valid new_field = req_duration_ms + 100 ``` In ANSI SQL, similar functionality is often achieved using the `CASE WHEN` expression within a `SELECT` statement to handle conditional logic for fields. In APL, `extend-valid` directly extends a field conditionally, based on the validity of the data. ```sql SQL example SELECT CASE WHEN req_duration_ms IS NOT NULL THEN req_duration_ms + 100 ELSE NULL END AS new_field FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | extend-valid new_field = req_duration_ms + 100 ``` ## Usage ### Syntax ```kusto | extend-valid FieldName1 = Expression1, FieldName2 = Expression2, FieldName3 = ... ``` ### Parameters * `FieldName`: The name of the field to create or extend. * `Expression`: The expression to evaluate and apply for valid rows.
### Returns The operator returns a table where the specified fields are extended with new values based on the given expression for valid rows. For rows where the expression isn’t valid, the new field is null. ## Use case examples In this use case, you normalize the HTTP request methods by converting them to uppercase for valid entries. **Query** ```kusto ['sample-http-logs'] | extend-valid upper_method = toupper(method) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20upper_method%20%3D%20toupper\(method\)%22%7D) **Output** | \_time | method | upper\_method | | ------------------- | ------ | ------------- | | 2023-10-01 12:00:00 | get | GET | | 2023-10-01 12:01:00 | POST | POST | | 2023-10-01 12:02:00 | NULL | NULL | In this query, the `toupper` function converts the `method` field to uppercase, but only for valid entries. If the `method` field is null, the result remains null. In this use case, you extract the first part of the service namespace (before the hyphen) from valid namespaces in the OpenTelemetry traces. **Query** ```kusto ['otel-demo-traces'] | extend-valid namespace_prefix = extract('^(.*?)-', 1, ['service.namespace']) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend-valid%20namespace_prefix%20%3D%20extract\('%5E\(.*%3F\)-'%2C%201%2C%20%5B'service.namespace'%5D\)%22%7D) **Output** | \_time | service.namespace | namespace\_prefix | | ------------------- | ------------------ | ----------------- | | 2023-10-01 12:00:00 | opentelemetry-demo | opentelemetry | | 2023-10-01 12:01:00 | opentelemetry-prod | opentelemetry | | 2023-10-01 12:02:00 | NULL | NULL | In this query, the `extract` function pulls the first part of the service namespace. It only applies to valid `service.namespace` values, leaving nulls unchanged.
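The syntax also accepts several assignments in a single `extend-valid` statement, with each expression checked for validity independently. As an illustrative sketch (not a Playground example) over the same dataset:

```kusto
['sample-http-logs']
| extend-valid upper_method = toupper(method), lower_uri = tolower(uri)
```

Rows with a valid `method` but an empty `uri` get a value for `upper_method` and null for `lower_uri`, and vice versa.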
In this use case, you extract the first letter of the city names from the `geo.city` field for valid log entries. **Query** ```kusto ['sample-http-logs'] | extend-valid city_first_letter = extract('^([A-Za-z])', 1, ['geo.city']) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend-valid%20city_first_letter%20%3D%20extract\('%5E\(%5BA-Za-z%5D\)'%2C%201%2C%20%5B'geo.city'%5D\)%22%7D) **Output** | \_time | geo.city | city\_first\_letter | | ------------------- | -------- | ------------------- | | 2023-10-01 12:00:00 | New York | N | | 2023-10-01 12:01:00 | NULL | NULL | | 2023-10-01 12:02:00 | London | L | | 2023-10-01 12:03:00 | 1Paris | NULL | In this query, the `extract` function retrieves the first letter of the city names from the `geo.city` field for valid entries. If the `geo.city` field is null or starts with a non-alphabetical character, no letter is extracted, and the result remains null. ## List of related operators * [**extend**](/apl/tabular-operators/extend-operator): Use `extend` to add calculated fields unconditionally, without validating data. * [**project**](/apl/tabular-operators/project-operator): Use `project` to select and rename fields, without performing conditional extensions. * [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` for aggregation, often used before extending fields with further calculations. # limit This page explains how to use the limit operator in APL. The `limit` operator in Axiom Processing Language (APL) allows you to restrict the number of rows returned from a query. It is particularly useful when you want to see only a subset of results from large datasets, such as when debugging or previewing query outputs. The `limit` operator can help optimize performance and focus analysis by reducing the amount of data processed.
Use the `limit` operator when you want to return only the top rows from a dataset, especially in cases where the full result set is not necessary. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, the equivalent to APL’s `limit` is the `head` command, which also returns the top rows of a dataset. The main difference is in the syntax. ```sql Splunk example | head 10 ``` ```kusto APL equivalent ['sample-http-logs'] | limit 10 ``` In ANSI SQL, the `LIMIT` clause is equivalent to the `limit` operator in APL. The SQL `LIMIT` statement is placed at the end of a query, whereas in APL, the `limit` operator comes after the dataset reference. ```sql SQL example SELECT * FROM sample_http_logs LIMIT 10; ``` ```kusto APL equivalent ['sample-http-logs'] | limit 10 ``` ## Usage ### Syntax ```kusto | limit [N] ``` ### Parameters * `N`: The maximum number of rows to return. This must be a non-negative integer. ### Returns The `limit` operator returns the top **`N`** rows from the input dataset. If fewer than **`N`** rows are available, all rows are returned. ## Use case examples In log analysis, you often want to view only the most recent entries, and `limit` can help narrow the focus on those rows. 
**Query** ```kusto ['sample-http-logs'] | limit 5 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20limit%205%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | --- | ------ | -------------- | ------ | -------- | ----------- | | 2024-10-17T12:00:00 | 200 | 123 | 200 | /index.html | GET | New York | USA | | 2024-10-17T11:59:59 | 300 | 124 | 404 | /notfound.html | GET | London | UK | This query limits the output to the first 5 rows from the `['sample-http-logs']` dataset, returning recent HTTP log entries. When analyzing OpenTelemetry traces, you may want to focus on the most recent traces. **Query** ```kusto ['otel-demo-traces'] | limit 5 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20limit%205%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ | | 2024-10-17T12:00:00 | 500ms | 1abc | 123xyz | frontend | server | OK | | 2024-10-17T11:59:59 | 200ms | 2def | 124xyz | cartservice | client | OK | This query retrieves the first 5 rows from the `['otel-demo-traces']` dataset, helping you analyze the latest traces. For security log analysis, you might want to review the most recent login attempts to ensure no anomalies exist. 
**Query** ```kusto ['sample-http-logs'] | where status == '401' | limit 5 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'401'%20%7C%20limit%205%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | --- | ------ | ----------- | ------ | -------- | ----------- | | 2024-10-17T12:00:00 | 300 | 567 | 401 | /login.html | POST | Berlin | Germany | | 2024-10-17T11:59:59 | 250 | 568 | 401 | /login.html | POST | Sydney | Australia | This query limits the output to 5 unauthorized access attempts (`401` status code) from the `['sample-http-logs']` dataset. ## List of related operators * [**take**](/apl/tabular-operators/take-operator): Returns the specified number of rows from the dataset, equivalent to `limit`. * [**top**](/apl/tabular-operators/top-operator): Retrieves the top **N** rows sorted by a specific field. * [**sample**](/apl/tabular-operators/sample-operator): Randomly samples **N** rows from the dataset. # order This page explains how to use the order operator in APL. The `order` operator in Axiom Processing Language (APL) allows you to sort the rows of a result set by one or more specified fields. You can use this operator to organize data for easier interpretation, prioritize specific values, or prepare data for subsequent analysis steps. The `order` operator is particularly useful when working with logs, telemetry data, or any dataset where ranking or sorting by values (such as time, status, or user ID) is necessary. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the equivalent operator to `order` is `sort`. SPL uses a similar syntax to APL but with some differences.
In SPL, `sort` allows both ascending (`asc`) and descending (`desc`) sorting, while in APL, you achieve sorting by adding the `asc` or `desc` keyword after a field name. ```splunk Splunk example | sort - _time ``` ```kusto APL equivalent ['sample-http-logs'] | order by _time desc ``` In ANSI SQL, the equivalent of `order` is `ORDER BY`. SQL uses `ASC` for ascending and `DESC` for descending order. In APL, sorting works similarly, with the `asc` and `desc` keywords added after field names to specify the order. ```sql SQL example SELECT * FROM logs ORDER BY _time DESC; ``` ```kusto APL equivalent ['sample-http-logs'] | order by _time desc ``` ## Usage ### Syntax ```kusto | order by FieldName [asc | desc], FieldName [asc | desc] ``` ### Parameters * `FieldName`: The name of the field by which to sort. * `asc`: Sorts the field in ascending order. * `desc`: Sorts the field in descending order. ### Returns The `order` operator returns the input dataset, sorted according to the specified fields and order (ascending or descending). If multiple fields are specified, sorting is done based on the first field, then by the second if values in the first field are equal, and so on. ## Use case examples In this example, you sort HTTP logs by request duration in descending order to prioritize the longest requests.
**Query** ```kusto ['sample-http-logs'] | order by req_duration_ms desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20req_duration_ms%20desc%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | ------ | ------ | -------------------- | ------ | -------- | ----------- | | 2024-10-17 10:10:01 | 1500 | user12 | 200 | /api/v1/get-orders | GET | Seattle | US | | 2024-10-17 10:09:47 | 1350 | user23 | 404 | /api/v1/get-products | GET | New York | US | | 2024-10-17 10:08:21 | 1200 | user45 | 500 | /api/v1/post-order | POST | London | UK | This query sorts the logs by request duration, helping you identify which requests are taking the most time to complete. In this example, you sort OpenTelemetry trace data by span duration in descending order, which helps you identify the longest-running spans across your services. **Query** ```kusto ['otel-demo-traces'] | order by duration desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20order%20by%20duration%20desc%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | ------------------- | -------- | -------- | --------- | --------------------- | ------ | ------------ | | 2024-10-17 10:10:01 | 15.3s | span4567 | trace123 | frontend | server | 200 | | 2024-10-17 10:09:47 | 12.4s | span8910 | trace789 | checkoutservice | client | 200 | | 2024-10-17 10:08:21 | 10.7s | span1112 | trace456 | productcatalogservice | server | 500 | This query helps you detect performance bottlenecks by sorting spans based on their duration. In this example, you analyze security logs by sorting them by time to view the most recent logs. 
**Query** ```kusto ['sample-http-logs'] | order by _time desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20order%20by%20_time%20desc%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | ------ | ------ | ---------------------- | ------ | -------- | ----------- | | 2024-10-17 10:10:01 | 300 | user34 | 200 | /api/v1/login | POST | Berlin | DE | | 2024-10-17 10:09:47 | 150 | user78 | 401 | /api/v1/get-profile | GET | Paris | FR | | 2024-10-17 10:08:21 | 200 | user56 | 500 | /api/v1/update-profile | PUT | Madrid | ES | This query sorts the security logs by time to display the most recent log entries first, helping you quickly review recent security events. ## List of related operators * [**top**](/apl/tabular-operators/top-operator): The `top` operator returns the top N records based on a specific sorting criteria, which is similar to `order` but only retrieves a fixed number of results. * [**summarize**](/apl/tabular-operators/summarize-operator): The `summarize` operator groups data and often works in combination with `order` to rank summarized values. * [**extend**](/apl/tabular-operators/extend-operator): The `extend` operator can be used to create calculated fields, which can then be used as sorting criteria in the `order` operator. # Tabular operators This section explains how to use and combine tabular operators in APL. The table summarizes the tabular operators functions available in APL. | Function | Description | | ------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- | | [count](/apl/tabular-operators/count-operator) | Returns an integer representing the total number of records in the dataset. 
| | [distinct](/apl/tabular-operators/distinct-operator) | Returns a dataset with unique values from the specified fields, removing any duplicate entries. | | [extend](/apl/tabular-operators/extend-operator) | Returns the original dataset with one or more new fields appended, based on the defined expressions. | | [extend-valid](/apl/tabular-operators/extend-valid-operator) | Returns a table where the specified fields are extended with new values based on the given expression for valid rows. | | [limit](/apl/tabular-operators/limit-operator) | Returns the top N rows from the input dataset. | | [order](/apl/tabular-operators/order-operator) | Returns the input dataset, sorted according to the specified fields and order. | | [parse](/apl/tabular-operators/parse-operator) | Returns the input dataset with new fields added based on the specified parsing pattern. | | [project](/apl/tabular-operators/project-operator) | Returns a dataset containing only the specified fields. | | [project-away](/apl/tabular-operators/project-away-operator) | Returns the input dataset excluding the specified fields. | | [project-keep](/apl/tabular-operators/project-keep-operator) | Returns a dataset with only the specified fields. | | [project-reorder](/apl/tabular-operators/project-reorder-operator) | Returns a table with the specified fields reordered as requested followed by any unspecified fields in their original order. | | [sample](/apl/tabular-operators/sample-operator) | Returns a table containing the specified number of rows, selected randomly from the input dataset. | | [search](/apl/tabular-operators/search-operator) | Returns all rows where the specified keyword appears in any field. | | [sort](/apl/tabular-operators/sort-operator) | Returns a table with rows ordered based on the specified fields. 
| | [summarize](/apl/tabular-operators/summarize-operator) | Returns a table where each row represents a unique combination of values from the by fields, with the aggregated results calculated for the other fields. | | [take](/apl/tabular-operators/take-operator) | Returns the specified number of rows from the dataset. | | [top](/apl/tabular-operators/top-operator) | Returns the top N rows from the dataset based on the specified sorting criteria. | | [union](/apl/tabular-operators/union-operator) | Returns all rows from the specified tables or queries. | | [where](/apl/tabular-operators/where-operator) | Returns a filtered dataset containing only the rows where the condition evaluates to true. | # parse This page explains how to use the parse operator function in APL. The `parse` operator in APL enables you to extract and structure information from unstructured or semi-structured text data, such as log files or strings. You can use the operator to specify a pattern for parsing the data and define the fields to extract. This is useful when analyzing logs, tracing information from text fields, or extracting key-value pairs from message formats. You can find the `parse` operator helpful when you need to process raw text fields and convert them into a structured format for further analysis. It’s particularly effective when working with data that doesn't conform to a fixed schema, such as log entries or custom messages. ## Importance of the parse operator * **Data extraction:** It allows you to extract structured data from unstructured or semi-structured string fields, enabling you to transform raw data into a more usable format. * **Flexibility:** The parse operator supports different parsing modes (simple, relaxed, regex) and provides various options to define parsing patterns, making it adaptable to different data formats and requirements. 
* **Performance:** By extracting only the necessary information from string fields, the parse operator helps optimize query performance by reducing the amount of data processed and enabling more efficient filtering and aggregation. * **Readability:** The parse operator provides a clear and concise way to define parsing patterns, making the query code more readable and maintainable. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk, the `rex` command is often used to extract fields from raw events or text. In APL, the `parse` operator performs a similar function. You define the text pattern to match and the fields to extract, allowing you to turn unstructured strings into structured data. ```splunk Splunk example index=web_logs | rex field=_raw "duration=(?<req_duration_ms>\d+)" ``` ```kusto APL equivalent ['sample-http-logs'] | parse uri with * "duration=" req_duration_ms:int ``` In ANSI SQL, there isn’t a direct equivalent to the `parse` operator. Typically, you use string functions such as `SUBSTRING` or `REGEXP` to extract parts of a text field. However, APL’s `parse` operator simplifies this process by allowing you to define a text pattern and extract multiple fields in a single statement. ```sql SQL example SELECT SUBSTRING(uri, CHARINDEX('duration=', uri) + 9, 3) AS req_duration_ms FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | parse uri with * "duration=" req_duration_ms:int ``` ## Usage ### Syntax ```kusto | parse [kind=simple|regex|relaxed] Expression with [*] StringConstant FieldName [: FieldType] [*] ... ``` ### Parameters * `kind`: Optional parameter to specify the parsing mode. Its value can be `simple` for exact matches, `regex` for regular expressions, or `relaxed` for relaxed parsing. The default is `simple`. * `Expression`: The string expression to parse.
* `StringConstant`: A string literal or regular expression pattern to match against. * `FieldName`: The name of the field to assign the extracted value. * `FieldType`: Optional parameter to specify the data type of the extracted field. The default is `string`. * `*`: Wildcard to match any characters before or after the `StringConstant`. * `...`: You can specify additional `StringConstant` and `FieldName` pairs to extract multiple values. ### Returns The parse operator returns the input dataset with new fields added based on the specified parsing pattern. The new fields contain the extracted values from the parsed string expression. If the parsing fails for a particular row, the corresponding fields have null values. ## Use case examples For log analysis, you can extract the HTTP request duration from the `uri` field using the `parse` operator. **Query** ```kusto ['sample-http-logs'] | parse uri with * 'duration=' req_duration_ms:int | project _time, req_duration_ms, uri ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20uri%20with%20%2A%20'duration%3D'%20req_duration_ms%3Aint%20%7C%20project%20_time%2C%20req_duration_ms%2C%20uri%22%7D) **Output** | \_time | req\_duration\_ms | uri | | ------------------- | ----------------- | ----------------------------- | | 2024-10-18T12:00:00 | 200 | /api/v1/resource?duration=200 | | 2024-10-18T12:00:05 | 300 | /api/v1/resource?duration=300 | This query extracts the `req_duration_ms` from the `uri` field and projects the time and duration for each HTTP request. In OpenTelemetry traces, the `parse` operator is useful for extracting components of trace data, such as the service name or status code. 
**Query** ```kusto ['otel-demo-traces'] | parse trace_id with * '-' ['service.name'] | project _time, ['service.name'], trace_id ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20parse%20trace_id%20with%20%2A%20'-'%20%5B'service.name'%5D%20%7C%20project%20_time%2C%20%5B'service.name'%5D%2C%20trace_id%22%7D) **Output** | \_time | service.name | trace\_id | | ------------------- | ------------ | -------------------- | | 2024-10-18T12:00:00 | frontend | a1b2c3d4-frontend | | 2024-10-18T12:01:00 | cartservice | e5f6g7h8-cartservice | This query extracts the `service.name` from the `trace_id` and projects the time and service name for each trace. For security logs, you can use the `parse` operator to extract status codes and the method of HTTP requests. **Query** ```kusto ['sample-http-logs'] | parse method with * '/' status | project _time, method, status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20parse%20method%20with%20%2A%20'%2F'%20status%20%7C%20project%20_time%2C%20method%2C%20status%22%7D) **Output** | \_time | method | status | | ------------------- | ------ | ------ | | 2024-10-18T12:00:00 | GET | 200 | | 2024-10-18T12:00:05 | POST | 404 | This query extracts the HTTP method and status from the `method` field and shows them along with the timestamp. ## Other examples ### Parse content type This example parses the `content_type` field to extract the `datatype` and `format` values separated by a `/`. The extracted values are projected as separate fields. 
**Original string** ```bash application/charset=utf-8 ``` **Query** ```kusto ['sample-http-logs'] | parse content_type with datatype '/' format | project datatype, format ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20content_type%20with%20datatype%20'%2F'%20format%20%7C%20project%20datatype%2C%20format%22%7D) **Output** ```json { "datatype": "application", "format": "charset=utf-8" } ``` ### Parse user agent This example parses the `user_agent` field to extract the operating system name (`os_name`) and version (`os_version`) enclosed within parentheses. The extracted values are projected as separate fields. **Original string** ```bash Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 ``` **Query** ```kusto ['sample-http-logs'] | parse user_agent with * '(' os_name ' ' os_version ';' * ')' * | project os_name, os_version ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20user_agent%20with%20*%20'\('%20os_name%20'%20'%20os_version%20'%3B'%20*%20'\)'%20*%20%7C%20project%20os_name%2C%20os_version%22%7D) **Output** ```json { "os_name": "Windows NT 10.0; Win64; x64", "os_version": "10.0" } ``` ### Parse URI endpoint This example parses the `uri` field to extract the `endpoint` value that appears after `/api/v1/`. The extracted value is projected as a new field. 
**Original string** ```bash /api/v1/ping/user/textdata ``` **Query** ```kusto ['sample-http-logs'] | parse uri with '/api/v1/' endpoint | project endpoint ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20uri%20with%20'%2Fapi%2Fv1%2F'%20endpoint%20%7C%20project%20endpoint%22%7D) **Output** ```json { "endpoint": "ping/user/textdata" } ``` ### Parse ID into region, tenant, and user ID This example demonstrates how to parse the `id` field into three parts: `region`, `tenant`, and `userId`. The `id` field is structured with these parts separated by hyphens (`-`). The extracted parts are projected as separate fields. **Original string** ```bash usa-acmeinc-3iou24 ``` **Query** ```kusto ['sample-http-logs'] | parse id with region '-' tenant '-' userId | project region, tenant, userId ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20parse%20id%20with%20region%20'-'%20tenant%20'-'%20userId%20%7C%20project%20region%2C%20tenant%2C%20userId%22%7D) **Output** ```json { "region": "usa", "tenant": "acmeinc", "userId": "3iou24" } ``` ### Parse in relaxed mode The parse operator supports a relaxed mode that allows for more flexible parsing. In relaxed mode, Axiom treats the parsing pattern as a regular string and matches results in a relaxed manner. If some parts of the pattern are missing or do not match the expected type, Axiom assigns null values. This example parses the `log` field into four separate parts (`method`, `url`, `status`, and `responseTime`) based on a structured format. The extracted parts are projected as separate fields. 
**Original string** ```bash GET /home 200 123ms POST /login 500 nonValidResponseTime PUT /api/data 201 456ms DELETE /user/123 404 nonValidResponseTime ``` **Query** ```kusto ['HttpRequestLogs'] | parse kind=relaxed log with method " " url " " status:int " " responseTime | project method, url, status, responseTime ``` **Output** ```json [ { "method": "GET", "url": "/home", "status": 200, "responseTime": "123ms" }, { "method": "POST", "url": "/login", "status": 500, "responseTime": null }, { "method": "PUT", "url": "/api/data", "status": 201, "responseTime": "456ms" }, { "method": "DELETE", "url": "/user/123", "status": 404, "responseTime": null } ] ``` ### Parse in regex mode The parse operator supports a regex mode that allows you to parse using regular expressions. In regex mode, Axiom treats the parsing pattern as a regular expression and matches results based on the specified regex pattern. This example demonstrates how to parse Kubernetes pod log entries using regex mode to extract various fields such as `podName`, `namespace`, `phase`, `startTime`, `nodeName`, `hostIP`, and `podIP`. The parsing pattern is treated as a regular expression, and the extracted values are assigned to the respective fields.
**Original string** ```bash Log: PodStatusUpdate (podName=nginx-pod, namespace=default, phase=Running, startTime=2023-05-14 08:30:00, nodeName=node-1, hostIP=192.168.1.1, podIP=10.1.1.1) ``` **Query** ```kusto ['PodLogs'] | parse kind=regex AppName with @"Log: PodStatusUpdate \(podName=" podName: string @", namespace=" namespace: string @", phase=" phase: string @", startTime=" startTime: datetime @", nodeName=" nodeName: string @", hostIP=" hostIP: string @", podIP=" podIP: string @"\)" | project podName, namespace, phase, startTime, nodeName, hostIP, podIP ``` **Output** ```json { "podName": "nginx-pod", "namespace": "default", "phase": "Running", "startTime": "2023-05-14 08:30:00", "nodeName": "node-1", "hostIP": "192.168.1.1", "podIP": "10.1.1.1" } ``` ## Best practices When using the parse operator, consider the following best practices: * Use appropriate parsing modes: Choose the parsing mode (simple, relaxed, regex) based on the complexity and variability of the data being parsed. Simple mode is suitable for fixed patterns, while relaxed and regex modes offer more flexibility. * Handle missing or invalid data: Consider how to handle scenarios where the parsing pattern does not match or the extracted values do not conform to the expected types. Use the relaxed mode or provide default values to handle such cases. * Project only necessary fields: After parsing, use the project operator to select only the fields that are relevant for further querying. This helps reduce the amount of data transferred and improves query performance. * Use parse in combination with other operators: Combine parse with other APL operators like where, extend, and summarize to filter, transform, and aggregate the parsed data effectively. By following these best practices and understanding the capabilities of the parse operator, you can effectively extract and transform data from string fields in APL, enabling powerful querying and insights. 
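To sketch the last best practice, the following query combines `parse` with `where` and `summarize`. It builds on the `duration=` example shown earlier for the `['sample-http-logs']` dataset; the field names are the ones used throughout this page, so treat them as illustrative for your own schema.

```kusto
['sample-http-logs']
// Extract the numeric duration from the URI query string
| parse uri with * 'duration=' req_duration_ms:int
// Rows that don't match the pattern get null values, so drop them
| where isnotnull(req_duration_ms)
// Aggregate the successfully parsed rows per HTTP method
| summarize avg(req_duration_ms) by method
```

Filtering with `where isnotnull(...)` before aggregating keeps the result limited to rows the pattern actually matched.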
## List of related operators * [**extend**](/apl/tabular-operators/extend-operator): Use the `extend` operator when you want to add calculated fields without parsing text. * [**project**](/apl/tabular-operators/project-operator): Use `project` to select and rename fields after parsing text. # project-away This page explains how to use the project-away operator in APL. The `project-away` operator in APL is used to exclude specific fields from the output of a query. This operator is useful when you want to return a subset of fields from a dataset, without needing to manually specify every field you want to keep. Instead, you specify the fields you want to remove, and the operator returns all remaining fields. You can use `project-away` in scenarios where your dataset contains irrelevant or sensitive fields that you do not want in the results. It simplifies queries, especially when dealing with wide datasets, by allowing you to filter out fields without having to explicitly list every field to include. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you use the `fields` command to remove fields from your results. In APL, the `project-away` operator provides similar functionality, removing specified fields while returning the remaining ones. ```splunk Splunk example ... | fields - status, uri, method ``` ```kusto APL equivalent ['sample-http-logs'] | project-away status, uri, method ``` In SQL, you typically use the `SELECT` statement to explicitly include fields. In contrast, APL’s `project-away` operator allows you to exclude fields, offering a more concise approach when you want to keep many fields but remove a few.
```sql SQL example SELECT _time, req_duration_ms, id, geo.city, geo.country FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | project-away status, uri, method ``` ## Usage ### Syntax ```kusto | project-away FieldName1, FieldName2, ... ``` ### Parameters * `FieldName`: The field you want to exclude from the result set. ### Returns The `project-away` operator returns the input dataset excluding the specified fields. The result contains the same number of rows as the input table. ## Use case examples In log analysis, you might want to exclude unnecessary fields to focus on the relevant fields, such as timestamp, request duration, and user information. **Query** ```kusto ['sample-http-logs'] | project-away status, uri, method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20status%2C%20uri%2C%20method%22%7D) **Output** | \_time | req\_duration\_ms | id | geo.city | geo.country | | ------------------- | ----------------- | -- | -------- | ----------- | | 2023-10-17 10:23:00 | 120 | u1 | Seattle | USA | | 2023-10-17 10:24:00 | 135 | u2 | Berlin | Germany | The query removes the `status`, `uri`, and `method` fields from the output, keeping the focus on the key fields. When analyzing OpenTelemetry traces, you can remove fields that aren't necessary for specific trace evaluations, such as span IDs and statuses. 
**Query** ```kusto ['otel-demo-traces'] | project-away span_id, status_code ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-away%20span_id%2C%20status_code%22%7D) **Output** | \_time | duration | trace\_id | service.name | kind | | ------------------- | -------- | --------- | --------------- | ------ | | 2023-10-17 11:01:00 | 00:00:03 | t1 | frontend | server | | 2023-10-17 11:02:00 | 00:00:02 | t2 | checkoutservice | client | The query removes the `span_id` and `status_code` fields, focusing on key service information. In security log analysis, excluding unnecessary fields such as the HTTP method or URI can help focus on user behavior patterns and request durations. **Query** ```kusto ['sample-http-logs'] | project-away method, uri ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-away%20method%2C%20uri%22%7D) **Output** | \_time | req\_duration\_ms | id | status | geo.city | geo.country | | ------------------- | ----------------- | -- | ------ | -------- | ----------- | | 2023-10-17 10:25:00 | 95 | u3 | 200 | London | UK | | 2023-10-17 10:26:00 | 180 | u4 | 404 | Paris | France | The query excludes the `method` and `uri` fields, keeping information like status and geographical details. ## Wildcard Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches. The syntax for wildcard can either be `data*` or `['data.fo']*`. 
Here’s how you can use wildcards in `project-away`: ```kusto ['sample-http-logs'] | project-away status*, user*, is*, ['geo.']* ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-away%20status%2A%2C%20user%2A%2C%20is%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ```kusto ['github-push-event'] | project-away push*, repo*, ['commits']* ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20push%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## List of related operators * [**project**](/apl/tabular-operators/project-operator): The `project` operator lets you select specific fields to include, rather than excluding them. * [**extend**](/apl/tabular-operators/extend-operator): The `extend` operator is used to add new fields, whereas `project-away` is for removing fields. * [**summarize**](/apl/tabular-operators/summarize-operator): While `project-away` removes fields, `summarize` is useful for aggregating data across multiple fields. # project-keep This page explains how to use the project-keep operator in APL. The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator’s parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields. You can use `project-keep` when you need to focus on particular data points, such as in log analysis, security event monitoring, or extracting key fields from traces.
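`project-keep` is the complement of `project-away`: you name the fields to keep instead of the fields to drop. As an illustrative sketch, assuming (as in the earlier `project-away` example) a dataset with only the eight fields shown there, the following two queries return the same fields:

```kusto
// Name the five fields to keep ...
['sample-http-logs']
| project-keep _time, req_duration_ms, id, ['geo.city'], ['geo.country']

// ... or, equivalently under that assumption, drop the other three
['sample-http-logs']
| project-away status, uri, method
```

Prefer `project-keep` when the list of fields to keep is short, and `project-away` when the list to drop is short.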
## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `table` command performs a similar task to APL’s `project-keep`. It selects only the fields you specify and excludes any others. ```splunk Splunk example index=main | table _time, status, uri ``` ```kusto APL equivalent ['sample-http-logs'] | project-keep _time, status, uri ``` In ANSI SQL, the `SELECT` statement combined with field names performs a task similar to `project-keep` in APL. Both allow you to specify which fields to retrieve from the dataset. ```sql SQL example SELECT _time, status, uri FROM sample_http_logs ``` ```kusto APL equivalent ['sample-http-logs'] | project-keep _time, status, uri ``` ## Usage ### Syntax ```kusto | project-keep FieldName1, FieldName2, ... ``` ### Parameters * `FieldName`: The field you want to keep in the result set. ### Returns `project-keep` returns a dataset with only the specified fields. All other fields are removed from the output. The result contains the same number of rows as the input table. ## Use case examples For log analysis, you might want to keep only the fields that are relevant to investigating HTTP requests. 
**Query** ```kusto ['sample-http-logs'] | project-keep _time, status, uri, method, req_duration_ms ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20status%2C%20uri%2C%20method%2C%20req_duration_ms%22%7D) **Output** | \_time | status | uri | method | req\_duration\_ms | | ------------------- | ------ | ------------------ | ------ | ----------------- | | 2024-10-17 10:00:00 | 200 | /index.html | GET | 120 | | 2024-10-17 10:01:00 | 404 | /non-existent.html | GET | 50 | | 2024-10-17 10:02:00 | 500 | /server-error | POST | 300 | This query filters the dataset to show only the request timestamp, status, URI, method, and duration, which can help you analyze server performance or errors. For OpenTelemetry trace analysis, you may want to focus on key tracing details such as service names and trace IDs. **Query** ```kusto ['otel-demo-traces'] | project-keep _time, trace_id, span_id, ['service.name'], duration ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-keep%20_time%2C%20trace_id%2C%20span_id%2C%20%5B%27service.name%27%5D%2C%20duration%22%7D) **Output** | \_time | trace\_id | span\_id | service.name | duration | | ------------------- | --------- | -------- | --------------- | -------- | | 2024-10-17 10:03:00 | abc123 | xyz789 | frontend | 500ms | | 2024-10-17 10:04:00 | def456 | mno345 | checkoutservice | 250ms | This query extracts specific tracing information, such as trace and span IDs, the name of the service, and the span’s duration. In security log analysis, focusing on essential fields like user ID and HTTP status can help track suspicious activity. 
**Query** ```kusto ['sample-http-logs'] | project-keep _time, id, status, uri, ['geo.city'], ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-keep%20_time%2C%20id%2C%20status%2C%20uri%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D) **Output** | \_time | id | status | uri | geo.city | geo.country | | ------------------- | ------- | ------ | ------ | ------------- | ----------- | | 2024-10-17 10:05:00 | user123 | 403 | /admin | New York | USA | | 2024-10-17 10:06:00 | user456 | 200 | /login | San Francisco | USA | This query narrows down the data to track HTTP status codes by users, helping identify potential unauthorized access attempts. ## List of related operators * [**project**](/apl/tabular-operators/project-operator): Use `project` to explicitly specify the fields you want in your result, while also allowing transformations or calculations on those fields. * [**extend**](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or modify existing ones without dropping any fields. * [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` when you need to perform aggregation operations on your dataset, grouping data as necessary. ## Wildcard Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. Use wildcards to create more flexible queries and perform more powerful searches. The syntax for wildcard can either be `data*` or `['data.fo']*`. 
Here’s how you can use wildcards in `project-keep`: ```kusto ['sample-http-logs'] | project-keep resp*, content*, ['geo.']* ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-keep%20resp%2A%2C%20content%2A%2C%20%20%5B%27geo.%27%5D%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ```kusto ['github-push-event'] | project-keep size*, repo*, ['commits']*, id* ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20project-keep%20size%2A%2C%20repo%2A%2C%20%5B%27commits%27%5D%2A%2C%20id%2A%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) # project This page explains how to use the project operator in APL. # project operator The `project` operator in Axiom Processing Language (APL) is used to select specific fields from a dataset, potentially renaming them or applying calculations on the fly. With `project`, you can control which fields are returned by the query, allowing you to focus on only the data you need. This operator is useful when you want to refine your query results by reducing the number of fields, renaming them, or deriving new fields based on existing data. It’s a powerful tool for filtering out unnecessary fields and performing light transformations on your dataset. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the equivalent of the `project` operator is typically the `table` or `fields` command. While SPL’s `table` focuses on selecting fields, `fields` controls both selection and exclusion, similar to `project` in APL. 
```sql Splunk example | table _time, status, uri ``` ```kusto APL equivalent ['sample-http-logs'] | project _time, status, uri ``` In ANSI SQL, the `SELECT` statement serves a similar role to the `project` operator in APL. SQL users will recognize that `project` behaves like selecting fields from a table, with the ability to rename or transform fields inline. ```sql SQL example SELECT _time, status, uri FROM sample_http_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | project _time, status, uri ``` ## Usage ### Syntax ```kusto | project FieldName [= Expression] [, ...] ``` Or ```kusto | project FieldName, FieldName, FieldName, ... ``` Or ```kusto | project FieldName, FieldName = Expression, ... ``` ### Parameters * `FieldName`: The names of the fields in the order you want them to appear in the result set. If there is no Expression, then FieldName is compulsory and a field of that name must appear in the input. * `Expression`: Optional scalar expression referencing the input fields. ### Returns The `project` operator returns a dataset containing only the specified fields. ## Use case examples In this example, you’ll extract the timestamp, HTTP status code, and request URI from the sample HTTP logs. **Query** ```kusto ['sample-http-logs'] | project _time, status, uri ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20status%2C%20uri%22%7D) **Output** | \_time | status | uri | | ------------------- | ------ | --------------- | | 2024-10-17 12:00:00 | 200 | /api/v1/getData | | 2024-10-17 12:01:00 | 404 | /api/v1/getUser | The query returns only the timestamp, HTTP status code, and request URI, reducing unnecessary fields from the dataset. In this example, you’ll extract trace information such as the service name, span ID, and duration from OpenTelemetry traces.
**Query** ```kusto ['otel-demo-traces'] | project ['service.name'], span_id, duration ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20project%20%5B'service.name'%5D%2C%20span_id%2C%20duration%22%7D) **Output** | service.name | span\_id | duration | | ------------ | ------------- | -------- | | frontend | span-1234abcd | 00:00:02 | | cartservice | span-5678efgh | 00:00:05 | The query isolates relevant tracing data, such as the service name, span ID, and duration of spans. In this example, you’ll focus on security log entries by projecting only the timestamp, user ID, and HTTP status from the sample HTTP logs. **Query** ```kusto ['sample-http-logs'] | project _time, id, status ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20project%20_time%2C%20id%2C%20status%22%7D) **Output** | \_time | id | status | | ------------------- | ----- | ------ | | 2024-10-17 12:00:00 | user1 | 200 | | 2024-10-17 12:01:00 | user2 | 403 | The query extracts only the timestamp, user ID, and HTTP status for analysis of access control in security logs. ## List of related operators * [**extend**](/apl/tabular-operators/extend-operator): Use `extend` to add new fields or calculate values without removing any existing fields. * [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data across groups of rows, which is useful when you’re calculating totals or averages. * [**where**](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions, often paired with `project` to refine your dataset further. # project-reorder This page explains how to use the project-reorder operator in APL. The `project-reorder` operator in APL allows you to rearrange the fields of a dataset without modifying the underlying data. 
This operator is useful when you need to control the display order of fields in query results, making your data easier to read and analyze. It can be especially helpful when working with large datasets where field ordering impacts the clarity of the output. Use `project-reorder` when you want to emphasize specific fields by adjusting their order in the result set without changing their values or structure. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, you use the `table` command to reorder fields, which works similarly to how `project-reorder` functions in APL. ```splunk Splunk example | table FieldA, FieldB, FieldC ``` ```kusto APL equivalent ['dataset.name'] | project-reorder FieldA, FieldB, FieldC ``` In ANSI SQL, the order of fields in a `SELECT` statement determines their arrangement in the output. In APL, `project-reorder` provides more explicit control over the field order without requiring a full `SELECT` clause. ```sql SQL example SELECT FieldA, FieldB, FieldC FROM dataset; ``` ```kusto APL equivalent | project-reorder FieldA, FieldB, FieldC ``` ## Usage ### Syntax ```kusto | project-reorder Field1 [asc | desc | granny-asc | granny-desc], Field2 [asc | desc | granny-asc | granny-desc], ... ``` ### Parameters * `Field1, Field2, ...`: The names of the fields in the order you want them to appear in the result set. * `[asc | desc | granny-asc | granny-desc]`: Optional: Specifies the sort order for the reordered fields. `asc` or `desc` order fields by field name in ascending or descending manner. `granny-asc` or `granny-desc` order by ascending or descending while secondarily sorting by the next numeric value. For example, `b50` comes before `b9` when you use `granny-asc`. ### Returns A table with the specified fields reordered as requested followed by any unspecified fields in their original order. 
`project-reorder` doesn’t rename or remove fields from the dataset. All fields that existed in the dataset appear in the results table. ## Use case examples In this example, you reorder HTTP log fields to prioritize the most relevant ones for log analysis. **Query** ```kusto ['sample-http-logs'] | project-reorder _time, method, status, uri, req_duration_ms, ['geo.city'], ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20method%2C%20status%2C%20uri%2C%20req_duration_ms%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D) **Output** | \_time | method | status | uri | req\_duration\_ms | geo.city | geo.country | | ------------------- | ------ | ------ | ---------------- | ----------------- | -------- | ----------- | | 2024-10-17 12:34:56 | GET | 200 | /home | 120 | New York | USA | | 2024-10-17 12:35:01 | POST | 404 | /api/v1/resource | 250 | Berlin | Germany | This query rearranges the fields for clarity, placing the most crucial fields (`_time`, `method`, `status`) at the front for easier analysis. Here’s an example where OpenTelemetry trace fields are reordered to prioritize service and status information.
**Query** ```kusto ['otel-demo-traces'] | project-reorder _time, ['service.name'], kind, status_code, trace_id, span_id, duration ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20project-reorder%20_time%2C%20%5B%27service.name%27%5D%2C%20kind%2C%20status_code%2C%20trace_id%2C%20span_id%2C%20duration%22%7D) **Output** | \_time | service.name | kind | status\_code | trace\_id | span\_id | duration | | ------------------- | --------------------- | ------ | ------------ | --------- | -------- | -------- | | 2024-10-17 12:34:56 | frontend | client | 200 | abc123 | span456 | 00:00:01 | | 2024-10-17 12:35:01 | productcatalogservice | server | 500 | xyz789 | span012 | 00:00:05 | This query emphasizes service-related fields like `service.name` and `status_code` at the start of the output. In this example, fields in a security log are reordered to prioritize key fields for investigating HTTP request anomalies. **Query** ```kusto ['sample-http-logs'] | project-reorder _time, status, method, uri, id, ['geo.city'], ['geo.country'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project-reorder%20_time%2C%20status%2C%20method%2C%20uri%2C%20id%2C%20%5B%27geo.city%27%5D%2C%20%5B%27geo.country%27%5D%22%7D) **Output** | \_time | status | method | uri | id | geo.city | geo.country | | ------------------- | ------ | ------ | ---------------- | ------ | -------- | ----------- | | 2024-10-17 12:34:56 | 200 | GET | /home | user01 | New York | USA | | 2024-10-17 12:35:01 | 404 | POST | /api/v1/resource | user02 | Berlin | Germany | This query reorders the fields to focus on the HTTP status, request method, and URI, which are critical for security-related analyses. ## Wildcard Wildcard refers to a special character or a set of characters that can be used to substitute for any other character in a search pattern. 
Use wildcards to create more flexible queries and perform more powerful searches. The syntax for wildcard can either be `data*` or `['data.fo']*`. Here’s how you can use wildcards in `project-reorder`: Reorder all fields in ascending order: ```kusto ['sample-http-logs'] | project-reorder * asc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20%2A%20asc%22%7D) Reorder specific fields to the beginning: ```kusto ['sample-http-logs'] | project-reorder method, status, uri ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project-reorder%20method%2C%20status%2C%20uri%22%7D) Reorder fields using wildcards and sort in descending order: ```kusto ['github-push-event'] | project-reorder repo*, num_commits, push_id, ref, size, ['id'], size_large desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20project-reorder%20repo%2A%2C%20num_commits%2C%20push_id%2C%20ref%2C%20size%2C%20%5B%27id%27%5D%2C%20size_large%20desc%22%7D) Reorder specific fields and keep others in original order: ```kusto ['otel-demo-traces'] | project-reorder trace_id, *, span_id // orders the trace_id then everything else, then span_id fields ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27otel-demo-traces%27%5D%5Cn%7C%20project-reorder%20trace_id%2C%20%2A%2C%20span_id%22%7D) ## List of related operators * [**project**](/apl/tabular-operators/project-operator): Use the `project` operator to select and rename fields without changing their order. * [**extend**](/apl/tabular-operators/extend-operator): `extend` adds new calculated fields while keeping the original ones in place. 
* [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` to perform aggregations on fields, which can then be reordered using `project-reorder`. * [**sort**](/apl/tabular-operators/sort-operator): Sorts rows based on field values, and the results can then be reordered with `project-reorder`. # sample This page explains how to use the sample operator in APL. The `sample` operator in APL pseudo-randomly selects rows from the input dataset at a rate specified by a parameter. This operator is useful when you want to analyze a subset of data, reduce the dataset size for testing, or quickly explore patterns without processing the entire dataset. The sampling algorithm is not statistically rigorous but provides a way to explore and understand a dataset. For statistically rigorous analysis, use `summarize` instead. You can find the `sample` operator useful when working with large datasets, where processing the entire dataset is resource-intensive or unnecessary. It’s ideal for scenarios like log analysis, performance monitoring, or sampling for data quality checks. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `sample` command works similarly, returning a subset of data rows randomly. However, the APL `sample` operator uses a simpler syntax without additional arguments for biasing the randomness. ```sql Splunk example | sample 10 ``` ```kusto APL equivalent ['sample-http-logs'] | sample 0.1 ``` In ANSI SQL, there is no direct equivalent to the `sample` operator, but you can achieve similar results using the `TABLESAMPLE` clause. In APL, `sample` operates independently and is more flexible, as it’s not tied to a table scan.
```sql SQL example
SELECT * FROM table TABLESAMPLE (10 ROWS);
```

```kusto APL equivalent
['sample-http-logs']
| sample 0.1
```

## Usage

### Syntax

```kusto
| sample ProportionOfRows
```

### Parameters

* `ProportionOfRows`: A float greater than 0 and less than 1 that specifies the proportion of rows to return from the dataset. The rows are selected randomly.

### Returns

The operator returns a table containing a randomly selected proportion of the rows in the input dataset.

## Use case examples

In this use case, you sample a small number of rows from your HTTP logs to quickly analyze trends without working through the entire dataset.

**Query**

```kusto
['sample-http-logs']
| sample 0.05
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sample%200.05%22%7D)

**Output**

| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ----- | ------ | --------- | ------ | -------- | ----------- |
| 2023-10-16 12:45:00 | 234 | user1 | 200 | /index | GET | New York | US |
| 2023-10-16 12:47:00 | 120 | user2 | 404 | /login | POST | Paris | FR |
| 2023-10-16 12:48:00 | 543 | user3 | 500 | /checkout | POST | Tokyo | JP |

This query returns a random subset of 5 % of all rows from the HTTP logs, helping you quickly identify any potential issues or patterns without analyzing the entire dataset.

In this use case, you sample traces to investigate performance metrics for a particular service across different spans.
**Query** ```kusto ['otel-demo-traces'] | where ['service.name'] == 'checkoutservice' | sample 0.05 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20where%20%5B%27service.name%27%5D%20%3D%3D%20%27checkoutservice%27%20%7C%20sample%200.05%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ | | 2023-10-16 14:05:00 | 1.34s | span5678 | trace123 | checkoutservice | client | 200 | | 2023-10-16 14:06:00 | 0.89s | span3456 | trace456 | checkoutservice | server | 500 | This query returns 5 % of all traces for the `checkoutservice` to identify potential performance bottlenecks. In this use case, you sample security log data to spot irregular activity in requests, such as 500-level HTTP responses. **Query** ```kusto ['sample-http-logs'] | where status == '500' | sample 0.03 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20where%20status%20%3D%3D%20%27500%27%20%7C%20sample%200.03%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | ----- | ------ | -------- | ------ | -------- | ----------- | | 2023-10-16 14:30:00 | 543 | user4 | 500 | /payment | POST | Berlin | DE | | 2023-10-16 14:32:00 | 876 | user5 | 500 | /order | POST | London | GB | This query helps you quickly spot failed requests (HTTP 500 responses) and investigate any potential causes of these errors. ## List of related operators * [**take**](/apl/tabular-operators/take-operator): Use `take` when you want to return the first N rows in the dataset rather than a random subset. * [**where**](/apl/tabular-operators/where-operator): Use `where` to filter rows based on conditions rather than sampling randomly. 
* [**top**](/apl/tabular-operators/top-operator): Use `top` to return the highest N rows based on a sorting criterion.

# search

This page explains how to use the search operator in APL.

The `search` operator in APL is used to perform a full-text search across multiple fields in a dataset. This operator allows you to locate specific keywords, phrases, or patterns, helping you filter data quickly and efficiently. You can use `search` to query logs, traces, and other data sources without the need to specify individual fields, making it particularly useful when you’re unsure where the relevant data resides.

Use `search` when you want to search multiple fields in a dataset, especially for ad-hoc analysis or quick lookups across logs or traces. It’s commonly applied in log analysis, security monitoring, and trace analysis, where multiple fields may contain the desired data.

## Importance of the search operator

* **Versatility:** It allows you to find specific text or terms across any of the fields you choose to search, without the need to specify each field individually.
* **Efficiency:** It saves time when you aren’t sure which fields or datasets might contain the information you’re looking for.
* **User-friendliness:** It’s particularly useful for users or developers unfamiliar with the schema details of a given dataset.
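For example, when you don’t yet know which field contains the text you’re after, a single `search` can sweep every field at once. The following query is a minimal sketch against the `['sample-http-logs']` sample dataset used throughout this page; the search term `"error"` is only an illustration:

```kusto
['sample-http-logs']
| search "error"
```

This returns every row where any field contains the term "error". Once you know the relevant field, a field-specific `where` filter is the more efficient choice.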
## Usage

### Syntax

```kusto
search [kind=CaseSensitivity] SearchPredicate
```

or

```kusto
['dataset-name']
| search [kind=CaseSensitivity] SearchPredicate
```

### Parameters

| Name | Type | Required | Description |
| ------------------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **CaseSensitivity** | string | | A flag that controls the behavior of all `string` scalar operators, such as `has`, with respect to case sensitivity. Valid values are `default`, `case_insensitive`, `case_sensitive`. The options `default` and `case_insensitive` are synonymous, since the default behavior is case insensitive. |
| **SearchPredicate** | string | ✓ | A Boolean expression to be evaluated for every event in the input. If it returns `true`, the record is output. |

## Returns

Returns all rows where the specified keyword appears in any field.

## Search predicate syntax

The SearchPredicate allows you to search for specific terms in all fields of a dataset. The operator applied to a search term depends on the presence and placement of a wildcard asterisk (\*) in the term, as shown in the following table.

| Literal | Operator |
| ---------- | --------------- |
| `axiomk` | `has` |
| `*axiomk` | `hassuffix` |
| `axiomk*` | `hasprefix` |
| `*axiomk*` | `contains` |
| `ax*ig` | `matches regex` |

You can also restrict the search to a specific field, look for an exact match instead of a term match, or search by regular expression. The syntax for each of these cases is shown in the following table.
| Syntax | Explanation |
| ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| **FieldName**`:`**StringLiteral** | This syntax can be used to restrict the search to a specific field. The default behavior is to search all fields. |
| **FieldName**`==`**StringLiteral** | This syntax can be used to search for exact matches of a field against a string value. The default behavior is to look for a term-match. |
| **Field** `matches regex` **StringLiteral** | This syntax indicates regular expression matching, in which *StringLiteral* is the regex pattern. |

Use Boolean expressions to combine conditions and create more complex searches. For example, `"axiom" and b==789` would result in a search for events that have the term axiom in any field and the value 789 in the b field.

### Search predicate syntax examples

| # | Syntax | Meaning (equivalent `where`) | Comments |
| -- | ---------------------------------------- | --------------------------------------------------------- | ----------------------------------------- |
| 1 | `search "axiom"` | `where * has "axiom"` | |
| 2 | `search field:"axiom"` | `where field has "axiom"` | |
| 3 | `search field=="axiom"` | `where field=="axiom"` | |
| 4 | `search "axiom*"` | `where * hasprefix "axiom"` | |
| 5 | `search "*axiom"` | `where * hassuffix "axiom"` | |
| 6 | `search "*axiom*"` | `where * contains "axiom"` | |
| 7 | `search "Pad*FG"` | `where * matches regex @"\bPad.*FG\b"` | |
| 8 | `search *` | `where 0==0` | |
| 9 | `search field matches regex "..."` | `where field matches regex "..."` | |
| 10 | `search kind=case_sensitive` | | All string comparisons are case-sensitive |
| 11 | `search "axiom" and ("log" or "metric")` | `where * has "axiom" and (* has "log" or * has "metric")` | |
| 12 | `search "axiom" or (A>a and A<b)` | `where * has "axiom" or (A>a and A<b)` | |

## Use case examples

### Search for a term and filter by time

Search for events that contain the term "get" and occurred after September 16, 2022.

```kusto
['sample-http-logs']
| search "get" and _time > datetime('2022-09-16')
```

[Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22get%5C%22%20and%20_time%20%3E%20datetime%28%272022-09-16%27%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)

### Use kind=default

By default, the search is case-insensitive and uses the simple search.

```kusto
['sample-http-logs']
| search kind=default "INDIA"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Ddefault%20%5C%22INDIA%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)

### Use kind=case\_sensitive

Search for logs that contain the term "text" with case sensitivity.

```kusto
['sample-http-logs']
| search kind=case_sensitive "text"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_sensitive%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)

### Use kind=case\_insensitive

Explicitly search for logs that contain the term "CSS" without case sensitivity.

```kusto
['sample-http-logs']
| search kind=case_insensitive "CSS"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20kind%3Dcase_insensitive%20%5C%22CSS%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)

### Use search \*

Search all logs. This essentially returns all rows in the dataset.

```kusto
['sample-http-logs']
| search *
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%2A%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D)

### Contain any substring

Search for logs where any field contains the substring "brazil".
```kusto ['sample-http-logs'] | search "*brazil*" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22%2Abrazil%2A%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### Search for multiple independent terms Search the logs for entries that contain either the term "GET" or "covina", irrespective of their context or the fields they appear in. ```kusto ['sample-http-logs'] | search "GET" or "covina" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20search%20%5C%22GET%5C%22%20or%20%5C%22covina%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ## Use the search operator efficiently Using non-field-specific filters such as the `search` operator has an impact on performance, especially when used over a high volume of events in a wide time range. To use the `search` operator efficiently, follow these guidelines: * Use field-specific filters when possible. Field-specific filters narrow your query results to events where a field has a given value. They are more efficient than non-field-specific filters, such as the `search` operator, that narrow your query results by searching across all fields for a given value. When you know the target field, replace the `search` operator with `where` clauses that filter for values in a specific field. * After using the `search` operator in your query, use other operators, such as `project` statements, to limit the number of returned fields. * Use the `kind` flag when possible. When you know the pattern that string values in your data follow, use the `kind` flag to specify the case-sensitivity of the search. # sort This page explains how to use the sort operator function in APL. The `sort` operator in APL arranges the rows of a result set based on one or more fields in ascending or descending order. 
You can use it to organize your data logically or optimize subsequent operations that depend on ordered data. This operator is useful when analyzing logs, traces, or any dataset where the order of results matters, such as when you’re interested in top or bottom performers, chronological sequences, or sorting by status codes. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the equivalent of `sort` is the `sort` command, which orders search results based on one or more fields. However, in APL, you must explicitly specify the sorting direction for each field, and sorting by multiple fields requires chaining them with commas. ```splunk Splunk example | sort - _time, status ``` ```kusto APL equivalent ['sample-http-logs'] | sort by _time desc, status asc ``` In SQL, sorting is done using the `ORDER BY` clause. The APL `sort` operator behaves similarly but uses the `by` keyword instead of `ORDER BY`. Additionally, APL requires specifying the order direction (`asc` or `desc`) explicitly for each field. ```sql SQL example SELECT * FROM sample_http_logs ORDER BY _time DESC, status ASC ``` ```kusto APL equivalent ['sample-http-logs'] | sort by _time desc, status asc ``` ## Usage ### Syntax ```kusto | sort by Field1 [asc | desc], Field2 [asc | desc], ... ``` ### Parameters * `Field1`, `Field2`, ...: The fields to sort by. * \[asc | desc]: Specify the sorting direction for each field as either `asc` for ascending order or `desc` for descending order. ### Returns A table with rows ordered based on the specified fields. ## Use sort and project together When you use `project` and `sort` in the same query, ensure you project the fields that you want to sort on. Similarly, when you use `project-away` and `sort` in the same query, ensure you don’t remove the fields that you want to sort on. The above is also true for time fields. 
For example, to project the field `status` and sort on the field `_time`, project both fields, as in the following query:

```kusto
['sample-http-logs']
| project status, _time
| sort by _time desc
```

## Use case examples

Sorting HTTP logs by request duration and then by status code is useful to identify slow requests and their corresponding statuses.

**Query**

```kusto
['sample-http-logs']
| sort by req_duration_ms desc, status asc
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20req_duration_ms%20desc%2C%20status%20asc%22%7D)

**Output**

| \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country |
| ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- |
| 2024-10-18 12:34:56 | 5000 | abc1 | 500 | /api/data | GET | New York | US |
| 2024-10-18 12:35:56 | 4500 | abc2 | 200 | /api/users | POST | London | UK |

The query sorts the HTTP logs by the duration of each request in descending order, showing the longest-running requests at the top. If two requests have the same duration, they are sorted by status code in ascending order.

Sorting OpenTelemetry traces by span duration helps identify the longest-running spans within a specific service.
**Query** ```kusto ['otel-demo-traces'] | sort by duration desc, ['service.name'] asc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20sort%20by%20duration%20desc%2C%20%5B%27service.name%27%5D%20asc%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | ------------------- | -------- | -------- | --------- | ------------ | ------ | ------------ | | 2024-10-18 12:36:56 | 00:00:15 | span1 | trace1 | frontend | server | 200 | | 2024-10-18 12:37:56 | 00:00:14 | span2 | trace2 | cartservice | client | 500 | This query sorts spans by their duration in descending order, with the longest spans at the top, followed by the service name in ascending order. Sorting security logs by status code and then by timestamp can help in investigating recent failed requests. **Query** ```kusto ['sample-http-logs'] | sort by status asc, _time desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20sort%20by%20status%20asc%2C%20_time%20desc%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- | | 2024-10-18 12:40:56 | 3000 | abc3 | 400 | /api/login | POST | Toronto | CA | | 2024-10-18 12:39:56 | 2000 | abc4 | 400 | /api/auth | GET | Berlin | DE | This query sorts security logs by status code first (in ascending order) and then by the most recent events. ## List of related operators * [**top**](/apl/tabular-operators/top-operator): Use `top` to return a specified number of rows with the highest or lowest values, but unlike `sort`, `top` limits the result set. * [**project**](/apl/tabular-operators/project-operator): Use `project` to select and reorder fields without changing the order of rows. 
* [**extend**](/apl/tabular-operators/extend-operator): Use `extend` to create calculated fields that can then be used in conjunction with `sort` to refine your results. * [**summarize**](/apl/tabular-operators/summarize-operator): Use `summarize` to group and aggregate data before applying `sort` for detailed analysis. # summarize This page explains how to use the summarize operator function in APL. ## Introduction The `summarize` operator in APL enables you to perform data aggregation and create summary tables from large datasets. You can use it to group data by specified fields and apply aggregation functions such as `count()`, `sum()`, `avg()`, `min()`, `max()`, and many others. This is particularly useful when analyzing logs, tracing OpenTelemetry data, or reviewing security events. The `summarize` operator is helpful when you want to reduce the granularity of a dataset to extract insights or trends. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `stats` command performs a similar function to APL’s `summarize` operator. Both operators are used to group data and apply aggregation functions. In APL, `summarize` is more explicit about the fields to group by and the aggregation functions to apply. ```sql Splunk example index="sample-http-logs" | stats count by method ``` ```kusto APL equivalent ['sample-http-logs'] | summarize count() by method ``` The `summarize` operator in APL is conceptually similar to SQL’s `GROUP BY` clause with aggregation functions. In APL, you explicitly specify the aggregation function (like `count()`, `sum()`) and the fields to group by. 
```sql SQL example SELECT method, COUNT(*) FROM sample_http_logs GROUP BY method ``` ```kusto APL equivalent ['sample-http-logs'] | summarize count() by method ``` ## Usage ### Syntax ```kusto | summarize [[Field1 =] AggregationFunction [, ...]] [by [Field2 =] GroupExpression [, ...]] ``` ### Parameters * `Field1`: A field name. * `AggregationFunction`: The aggregation function to apply. Examples include `count()`, `sum()`, `avg()`, `min()`, and `max()`. * `GroupExpression`: A scalar expression that can reference the dataset. ### Returns The `summarize` operator returns a table where: * The input rows are arranged into groups having the same values of the `by` expressions. * The specified aggregation functions are computed over each group, producing a row for each group. * The result contains the `by` fields and also at least one field for each computed aggregate. Some aggregation functions return multiple fields. ## Use case examples In log analysis, you can use `summarize` to count the number of HTTP requests grouped by method, or to compute the average request duration. **Query** ```kusto ['sample-http-logs'] | summarize count() by method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20method%22%7D) **Output** | method | count\_ | | ------ | ------- | | GET | 1000 | | POST | 450 | This query groups the HTTP requests by the `method` field and counts how many times each method is used. You can use `summarize` to analyze OpenTelemetry traces by calculating the average span duration for each service. 
**Query**

```kusto
['otel-demo-traces']
| summarize avg(duration) by ['service.name']
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20summarize%20avg\(duration\)%20by%20%5B%27service.name%27%5D%22%7D)

**Output**

| service.name | avg\_duration |
| ------------ | ------------- |
| frontend | 50ms |
| cartservice | 75ms |

This query calculates the average duration of traces for each service in the dataset.

In security log analysis, `summarize` can help group events by status codes and see the distribution of HTTP responses.

**Query**

```kusto
['sample-http-logs']
| summarize count() by status
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%22%7D)

**Output**

| status | count\_ |
| ------ | ------- |
| 200 | 1200 |
| 404 | 300 |

This query summarizes HTTP status codes, giving insight into the distribution of responses in your logs.

## Other examples

```kusto
['sample-http-logs']
| summarize topk(content_type, 20)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20topk\(content_type%2C%2020\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

```kusto
['github-push-event']
| summarize topk(repo, 20) by bin(_time, 24h)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%7C%20summarize%20topk\(repo%2C%2020\)%20by%20bin\(_time%2C%2024h\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

Returns a table that shows the heatmap in each interval \[0, 30], \[30, 60], and so on. This example has a cell for `HISTOGRAM(req_duration_ms)`.
```kusto
['sample-http-logs']
| summarize histogram(req_duration_ms, 30)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20histogram\(req_duration_ms%2C%2030\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

```kusto
['github-push-event']
| where _time > ago(7d)
| where repo contains "axiom"
| summarize count(), numCommits=sum(size) by _time=bin(_time, 3h), repo
| take 100
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20repo%20contains%20%5C%22axiom%5C%22%20%7C%20summarize%20count\(\)%2C%20numCommits%3Dsum\(size\)%20by%20_time%3Dbin\(_time%2C%203h\)%2C%20repo%20%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## List of related operators

* [**count**](/apl/tabular-operators/count-operator): Use when you only need to count rows without grouping by specific fields.
* [**extend**](/apl/tabular-operators/extend-operator): Use to add new calculated fields to a dataset.
* [**project**](/apl/tabular-operators/project-operator): Use to select specific fields or create new calculated fields, often in combination with `summarize`.

# take

This page explains how to use the take operator in APL.

The `take` operator in APL allows you to retrieve a specified number of rows from a dataset. It’s useful when you want to preview data, limit the result set for performance reasons, or fetch an arbitrary subset of rows from large datasets. The `take` operator can be particularly effective in scenarios like log analysis, security monitoring, and telemetry where large amounts of data are processed, and only a subset is needed for analysis.

## For users of other query languages

If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, the `head` and `tail` commands perform similar operations to the APL `take` operator, where `head` returns the first N results, and `tail` returns the last N. In APL, `take` is a flexible way to fetch any subset of rows in a dataset. ```sql Splunk example | head 10 ``` ```kusto APL equivalent ['sample-http-logs'] | take 10 ``` In ANSI SQL, the equivalent of the APL `take` operator is `LIMIT`. While SQL requires you to specify a sorting order with `ORDER BY` for deterministic results, APL allows you to use `take` to fetch a specific number of rows without needing explicit sorting. ```sql SQL example SELECT * FROM sample_http_logs LIMIT 10; ``` ```kusto APL equivalent ['sample-http-logs'] | take 10 ``` ## Usage ### Syntax ```kusto | take N ``` ### Parameters * `N`: The number of rows to take from the dataset. If `N` is positive, it returns the first `N` rows. If `N` is negative, it returns the last `N` rows. ### Returns The operator returns the specified number of rows from the dataset. ## Use case examples The `take` operator is useful in log analysis when you need to view a subset of logs to quickly identify trends or errors without analyzing the entire dataset. **Query** ```kusto ['sample-http-logs'] | take 5 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%205%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | -------------------- | ----------------- | ---- | ------ | --------- | ------ | -------- | ----------- | | 2023-10-18T10:00:00Z | 120 | u123 | 200 | /home | GET | Berlin | Germany | | 2023-10-18T10:01:00Z | 85 | u124 | 404 | /login | POST | New York | USA | | 2023-10-18T10:02:00Z | 150 | u125 | 500 | /checkout | POST | Tokyo | Japan | This query retrieves the first 5 rows from the `sample-http-logs` dataset. 
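Because `take` applies no ordering of its own, pair it with `sort` when you want a deterministic preview rather than an arbitrary subset. A sketch that previews the five most recent log entries:

```kusto
['sample-http-logs']
| sort by _time desc
| take 5
```

The `sort` runs first, so `take` returns the top five rows of the sorted result.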
In the context of OpenTelemetry traces, the `take` operator helps extract a small number of traces to analyze span performance or trace behavior across services. **Query** ```kusto ['otel-demo-traces'] | take 3 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20take%203%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | -------------------- | -------- | -------- | --------- | --------------- | -------- | ------------ | | 2023-10-18T10:10:00Z | 250ms | s123 | t456 | frontend | server | OK | | 2023-10-18T10:11:00Z | 300ms | s124 | t457 | checkoutservice | client | OK | | 2023-10-18T10:12:00Z | 100ms | s125 | t458 | cartservice | internal | ERROR | This query retrieves the first 3 spans from the OpenTelemetry traces dataset. For security logs, `take` allows quick sampling of log entries to detect patterns or anomalies without needing the entire log file. **Query** ```kusto ['sample-http-logs'] | take 10 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2010%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | -------------------- | ----------------- | ---- | ------ | ---------- | ------ | -------- | ----------- | | 2023-10-18T10:20:00Z | 200 | u223 | 200 | /admin | GET | London | UK | | 2023-10-18T10:21:00Z | 190 | u224 | 403 | /dashboard | GET | Berlin | Germany | This query retrieves the first 10 security log entries, useful for quick investigations. ## List of related operators * [**limit**](/apl/tabular-operators/limit-operator): Similar to `take`, but explicitly limits the result set and often used for pagination or performance optimization. * [**sort**](/apl/tabular-operators/sort-operator): Used in combination with `take` when you want to fetch a subset of sorted data. 
* [**where**](/apl/tabular-operators/where-operator): Filters rows based on a condition before using `take` for sampling specific subsets. # top This page explains how to use the top operator function in APL. The `top` operator in Axiom Processing Language (APL) allows you to retrieve the top N rows from a dataset based on specified criteria. It is particularly useful when you need to analyze the highest values in large datasets or want to quickly identify trends, such as the highest request durations in logs or top error occurrences in traces. You can apply it in scenarios like log analysis, security investigations, or tracing system performance. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. The `top` operator in APL is similar to `top` in Splunk SPL but allows greater flexibility in specifying multiple sorting criteria. ```sql Splunk example index="sample_http_logs" | top limit=5 req_duration_ms ``` ```kusto APL equivalent ['sample-http-logs'] | top 5 by req_duration_ms ``` In ANSI SQL, the `TOP` operator is used with an `ORDER BY` clause to limit the number of rows. In APL, the syntax is similar but uses `top` in a pipeline and specifies the ordering criteria directly. ```sql SQL example SELECT TOP 5 req_duration_ms FROM sample_http_logs ORDER BY req_duration_ms DESC ``` ```kusto APL equivalent ['sample-http-logs'] | top 5 by req_duration_ms ``` ## Usage ### Syntax ```kusto | top N by Expression [asc | desc] ``` ### Parameters * `N`: The number of rows to return. * `Expression`: A scalar expression used for sorting. The type of the values must be numeric, date, time, or string. * `[asc | desc]`: Optional. Use to sort in ascending or descending order. The default is descending. ### Returns The `top` operator returns the top N rows from the dataset based on the specified sorting criteria. 
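Because the sort direction defaults to descending, add `asc` when you want the lowest values instead. For example, a sketch that returns the five fastest requests:

```kusto
['sample-http-logs']
| top 5 by req_duration_ms asc
```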
## Use case examples The `top` operator helps you find the HTTP requests with the longest durations. **Query** ```kusto ['sample-http-logs'] | top 5 by req_duration_ms ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20top%205%20by%20req_duration_ms%22%7D) **Output** | \_time | req\_duration\_ms | id | status | uri | method | geo.city | geo.country | | ------------------- | ----------------- | --- | ------ | ---------------- | ------ | -------- | ----------- | | 2024-10-01 10:12:34 | 5000 | 123 | 200 | /api/get-data | GET | New York | US | | 2024-10-01 11:14:20 | 4900 | 124 | 200 | /api/post-data | POST | Chicago | US | | 2024-10-01 12:15:45 | 4800 | 125 | 200 | /api/update-item | PUT | London | UK | This query returns the top 5 HTTP requests that took the longest time to process. The `top` operator is useful for identifying the spans with the longest duration in distributed tracing systems. **Query** ```kusto ['otel-demo-traces'] | top 5 by duration ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27otel-demo-traces%27%5D%20%7C%20top%205%20by%20duration%22%7D) **Output** | \_time | duration | span\_id | trace\_id | service.name | kind | status\_code | | ------------------- | -------- | -------- | --------- | --------------- | ------ | ------------ | | 2024-10-01 10:12:34 | 300ms | span123 | trace456 | frontend | server | 200 | | 2024-10-01 10:13:20 | 290ms | span124 | trace457 | cartservice | client | 200 | | 2024-10-01 10:15:45 | 280ms | span125 | trace458 | checkoutservice | server | 500 | This query returns the top 5 spans with the longest durations from the OpenTelemetry traces. The `top` operator is useful for identifying the most frequent HTTP status codes in security logs. 
**Query** ```kusto ['sample-http-logs'] | summarize count() by status | top 3 by count_ ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20count\(\)%20by%20status%20%7C%20top%203%20by%20count_%22%7D) **Output** | status | count\_ | | ------ | ------- | | 200 | 500 | | 404 | 50 | | 500 | 20 | This query shows the top 3 most common HTTP status codes in security logs. ## List of related operators * [**order**](/apl/tabular-operators/order-operator): Use when you need full control over row ordering without limiting the number of results. * [**summarize**](/apl/tabular-operators/summarize-operator): Useful when aggregating data over fields and obtaining summarized results. * [**take**](/apl/tabular-operators/take-operator): Returns the first N rows without sorting. Use when ordering is not necessary. # union This page explains how to use the union operator in APL. The `union` operator in APL allows you to combine the results of two or more queries into a single output. The operator is useful when you need to analyze or compare data from different datasets or tables in a unified manner. By using `union`, you can merge multiple sets of records, keeping all data from the source tables without applying any aggregation or filtering. The `union` operator is particularly helpful in scenarios like log analysis, tracing OpenTelemetry events, or correlating security logs across multiple sources. You can use it to perform comprehensive investigations by bringing together information from different datasets into one query. 
## Union of two datasets To understand how the `union` operator works, consider these datasets: **Server requests** | \_time | status | method | trace\_id | | ------ | ------ | ------ | --------- | | 12:10 | 200 | GET | 1 | | 12:15 | 200 | POST | 2 | | 12:20 | 503 | POST | 3 | | 12:25 | 200 | POST | 4 | **App logs** | \_time | trace\_id | message | | ------ | --------- | ------- | | 12:12 | 1 | foo | | 12:21 | 3 | bar | | 13:35 | 27 | baz | Performing a union on **Server requests** and **App logs** results in a new dataset that contains all the rows from both source datasets: | \_time | status | method | trace\_id | message | | ------ | ------ | ------ | --------- | ------- | | 12:10 | 200 | GET | 1 | | | 12:12 | | | 1 | foo | | 12:15 | 200 | POST | 2 | | | 12:20 | 503 | POST | 3 | | | 12:21 | | | 3 | bar | | 12:25 | 200 | POST | 4 | | | 13:35 | | | 27 | baz | This result combines the rows and merges types for overlapping fields. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `append` command works similarly to the `union` operator in APL. Both operators are used to combine multiple datasets. However, while `append` in Splunk typically adds one dataset to the end of another, APL’s `union` merges datasets while preserving all records. ```sql Splunk example index=web OR index=security ``` ```kusto APL equivalent ['sample-http-logs'] | union ['security-logs'] ``` In ANSI SQL, the `UNION` operator performs a similar function to the APL `union` operator. Both are used to combine the results of two or more queries. However, SQL’s `UNION` removes duplicates by default, whereas APL’s `union` keeps all rows unless you use `union with=kind=unique`.
```sql SQL example SELECT * FROM web_logs UNION SELECT * FROM security_logs; ``` ```kusto APL equivalent ['sample-http-logs'] | union ['security-logs'] ``` ## Usage ### Syntax ```kusto T1 | union [T2], [T3], ... ``` ### Parameters * `T1, T2, T3, ...`: Tables or query results you want to combine into a single output. ### Returns The `union` operator returns all rows from the specified tables or queries. If fields overlap, they are merged. Non-overlapping fields are retained in their original form. ## Use case examples In log analysis, you can use the `union` operator to combine HTTP logs from different sources, such as web servers and security systems, to analyze trends or detect anomalies. **Query** ```kusto ['sample-http-logs'] | union ['security-logs'] | where status == '500' ``` **Output** | \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms | | ------------------- | ------- | ------ | ------------------- | ------ | -------- | ----------- | ----------------- | | 2024-10-17 12:34:56 | user123 | 500 | /api/login | GET | London | UK | 345 | | 2024-10-17 12:35:10 | user456 | 500 | /api/update-profile | POST | Berlin | Germany | 123 | This query combines two datasets (HTTP logs and security logs) and filters the combined data to show only those entries where the HTTP status code is 500. When working with OpenTelemetry traces, you can use the `union` operator to combine tracing information from different services for a unified view of system performance. 
**Query** ```kusto ['otel-demo-traces'] | union ['otel-backend-traces'] | where ['service.name'] == 'frontend' and status_code == 'error' ``` **Output** | \_time | trace\_id | span\_id | \['service.name'] | kind | status\_code | | ------------------- | ---------- | -------- | ----------------- | ------ | ------------ | | 2024-10-17 12:36:10 | trace-1234 | span-567 | frontend | server | error | | 2024-10-17 12:38:20 | trace-7890 | span-345 | frontend | client | error | This query combines traces from two different datasets and filters them to show only errors occurring in the `frontend` service. For security logs, the `union` operator is useful to combine logs from different sources, such as intrusion detection systems (IDS) and firewall logs. **Query** ```kusto ['sample-http-logs'] | union ['security-logs'] | where ['geo.country'] == 'Germany' ``` **Output** | \_time | id | status | uri | method | geo.city | geo.country | req\_duration\_ms | | ------------------- | ------- | ------ | ---------------- | ------ | -------- | ----------- | ----------------- | | 2024-10-17 12:34:56 | user789 | 200 | /api/login | GET | Berlin | Germany | 245 | | 2024-10-17 12:40:22 | user456 | 404 | /api/nonexistent | GET | Munich | Germany | 532 | This query combines web and security logs, then filters the results to show only those records where the request originated from Germany. ## Other examples ### Basic union This example combines all rows from `github-push-event` and `github-pull-request-event` without any transformation or filtering. ```kusto ['github-push-event'] | union ['github-pull-request-event'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-push-event%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%22%7D) ### Filter after union This example combines the datasets, and then filters the data to only include rows where the `method` is `GET`. 
```kusto ['sample-http-logs'] | union ['github-issues-event'] | where method == "GET" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-issues-event%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%22%7D) ### Aggregate after union This example combines the datasets and summarizes the data, counting the occurrences of each combination of `content_type` and `actor`. ```kusto ['sample-http-logs'] | union ['github-pull-request-event'] | summarize Count = count() by content_type, actor ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20content_type%2C%20actor%22%7D) ### Filter and project specific data from combined log sources This query combines GitHub pull request event logs and GitHub push events, filters by actions made by `github-actions[bot]`, and displays key event details such as `_time`, `repo`, `['id']`, `commits`, and `head`. ```kusto ['github-pull-request-event'] | union ['github-push-event'] | where actor == "github-actions[bot]" | project _time, repo, ['id'], commits, head ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27github-pull-request-event%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20where%20actor%20%3D%3D%20%5C%22github-actions%5Bbot%5D%5C%22%5Cn%7C%20project%20_time%2C%20repo%2C%20%5B%27id%27%5D%2C%20commits%2C%20head%22%7D) ### Union with field removal This example removes the `content_type` and `commits` fields from the datasets `sample-http-logs` and `github-push-event` before combining them.
```kusto ['sample-http-logs'] | union ['github-push-event'] | project-away content_type, commits ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20project-away%20content_type%2C%20commits%22%7D) ### Union with order by After the union, the result is ordered by the `type` field. ```kusto ['sample-http-logs'] | union hn | order by type ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20hn%5Cn%7C%20order%20by%20type%22%7D) ### Union with joint conditions This example performs a union and then filters the resulting dataset for rows where `content_type` contains the letter `a` and `['geo.city']` is `Seattle`. ```kusto ['sample-http-logs'] | union ['github-pull-request-event'] | where content_type contains "a" and ['geo.city'] == "Seattle" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-pull-request-event%27%5D%5Cn%7C%20where%20content_type%20contains%20%5C%22a%5C%22%20and%20%5B%27geo.city%27%5D%20%20%3D%3D%20%5C%22Seattle%5C%22%22%7D) ### Union and count unique values After the union, the query calculates the number of unique `geo.city` and `repo` entries in the combined dataset.
```kusto ['sample-http-logs'] | union ['github-push-event'] | summarize UniqueNames = dcount(['geo.city']), UniqueData = dcount(repo) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%20%22%5B%27sample-http-logs%27%5D%5Cn%7C%20union%20%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20UniqueNames%20%3D%20dcount%28%5B%27geo.city%27%5D%29%2C%20UniqueData%20%3D%20dcount%28repo%29%22%7D) ## Best practices for the union operator To maximize the effectiveness of the union operator in APL, here are some best practices to consider: * Before using the `union` operator, ensure that the fields being merged have compatible data types. * Use `project` or `project-away` to include or exclude specific fields. This can improve performance and the clarity of your results, especially when you only need a subset of the available data. # where This page explains how to use the where operator in APL. The `where` operator in APL is used to filter rows based on specified conditions. You can use the `where` operator to return only the records that meet the criteria you define. It’s a foundational operator in querying datasets, helping you focus on specific data by applying conditions to filter out unwanted rows. This is useful when working with large datasets, logs, traces, or security events, allowing you to extract meaningful information quickly. ## For users of other query languages If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL. In Splunk SPL, the `where` operator filters events based on boolean expressions. APL’s `where` operator functions similarly, allowing you to filter rows that satisfy a condition. ```sql Splunk example index=main | where status="200" ``` ```kusto APL equivalent ['sample-http-logs'] | where status == '200' ``` In ANSI SQL, the `WHERE` clause filters rows in a `SELECT` query based on a condition. 
APL’s `where` operator behaves similarly, but the syntax reflects APL’s specific dataset structures. ```sql SQL example SELECT * FROM sample_http_logs WHERE status = '200' ``` ```kusto APL equivalent ['sample-http-logs'] | where status == '200' ``` ## Usage ### Syntax ```kusto | where condition ``` ### Parameters * `condition`: A Boolean expression that specifies the filtering condition. The `where` operator returns only the rows that satisfy this condition. ### Returns The `where` operator returns a filtered dataset containing only the rows where the condition evaluates to true. ## Use case examples In this use case, you filter HTTP logs to focus on records where the HTTP status is 404 (Not Found). **Query** ```kusto ['sample-http-logs'] | where status == '404' ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20status%20%3D%3D%20'404'%22%7D) **Output** | \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country | | ------------------- | ----- | ------ | ------ | -------------- | ----------------- | -------- | ----------- | | 2024-10-17 10:20:00 | 12345 | 404 | GET | /notfound.html | 120 | Seattle | US | This query filters out all HTTP requests except those that resulted in a 404 error, making it easy to investigate pages that were not found. Here, you filter OpenTelemetry traces to retrieve spans where the `duration` exceeded 500 milliseconds. 
**Query** ```kusto ['otel-demo-traces'] | where duration > 500ms ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20where%20duration%20%3E%20500ms%22%7D) **Output** | \_time | span\_id | trace\_id | duration | service.name | kind | status\_code | | ------------------- | -------- | --------- | -------- | ------------ | ------ | ------------ | | 2024-10-17 11:15:00 | abc123 | xyz789 | 520ms | frontend | server | OK | This query helps identify spans with durations longer than 500 milliseconds, which might indicate performance issues. In this security use case, you filter logs to find requests from users in a specific country, such as Germany. **Query** ```kusto ['sample-http-logs'] | where ['geo.country'] == 'Germany' ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20'Germany'%22%7D) **Output** | \_time | id | status | method | uri | req\_duration\_ms | geo.city | geo.country | | ------------------- | ----- | ------ | ------ | ------ | ----------------- | -------- | ----------- | | 2024-10-17 09:45:00 | 54321 | 200 | POST | /login | 100 | Berlin | Germany | This query helps filter logs to investigate activity originating from a specific country, useful for security and compliance. ## where \* has The `* has` pattern in APL is a dynamic and powerful tool within the `where` operator. It offers you the flexibility to search for specific substrings across all fields in a dataset without the need to specify each field name individually. This becomes especially advantageous when dealing with datasets that have numerous or dynamically named fields. `where * has` is an expensive operation because it searches all fields. For a more efficient query, explicitly list the fields in which you want to search. For example: `where firstName has "miguel" or lastName has "miguel"`. 
### Basic where \* has usage Find events where any field contains a specific substring. ```kusto ['sample-http-logs'] | where * has "GET" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### Combine multiple substrings Find events where any field contains one of multiple substrings. ```kusto ['sample-http-logs'] | where * has "GET" or * has "text" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20or%20%2A%20has%20%5C%22text%5C%22%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### Use \* has with other operators Find events where any field contains a substring, and another specific field equals a certain value. ```kusto ['sample-http-logs'] | where * has "css" and req_duration_ms == 1 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22css%5C%22%20and%20req_duration_ms%20%3D%3D%201%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### Advanced chaining Filter data based on several conditions, including fields containing certain substrings, then summarize by another specific criterion. 
```kusto ['sample-http-logs'] | where * has "GET" and * has "css" | summarize Count=count() by method, content_type, server_datacenter ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22GET%5C%22%20and%20%2A%20has%20%5C%22css%5C%22%5Cn%7C%20summarize%20Count%3Dcount%28%29%20by%20method%2C%20content_type%2C%20server_datacenter%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### Use with aggregations Find the average of a specific field for events where any field contains a certain substring. ```kusto ['sample-http-logs'] | where * has "Japan" | summarize avg(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22Japan%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ### String case transformation The `has` operator is case insensitive. Use `has` if you’re unsure about the case of the substring in the dataset. For the case-sensitive operator, use `has_cs`. ```kusto ['sample-http-logs'] | where * has "mexico" | summarize avg(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20%2A%20has%20%5C%22mexico%5C%22%5Cn%7C%20summarize%20avg%28req_duration_ms%29%22%7D\&queryOptions=%7B%22quickRange%22%3A%2230d%22%7D) ## List of related operators * [**count**](/apl/tabular-operators/count-operator): Use `count` to return the number of records that match specific criteria. * [**distinct**](/apl/tabular-operators/distinct-operator): Use `distinct` to return unique values in a dataset, complementing filtering. * [**take**](/apl/tabular-operators/take-operator): Use `take` to return a specific number of records, typically in combination with `where` for pagination. 
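The `where` and `take` combination mentioned in the list above can be sketched as follows; the filter runs first, so the row limit applies only to matching records:

```kusto
['sample-http-logs']
| where status == '404'
| take 50
```

Because `take` returns rows without sorting, place it after `where` when you want a capped sample of a filtered subset.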
# Sample queries Explore how to use APL in Axiom’s Query tab to run queries using Tabular Operators, Scalar Functions, and Aggregation Functions. In this tutorial, you’ll explore how to use APL in Axiom’s Query tab to run queries using Tabular Operators, Scalar Functions, and Aggregation Functions. ## Prerequisites * Sign up and log in to your [Axiom account](https://app.axiom.co/) * Ingest data into your dataset, or run queries in the [Play Sandbox](https://axiom.co/play) ## Overview of APL Every query starts with a dataset name embedded in **square brackets**, followed by a tabular operator statement. The query’s tabular expression statements produce the results of the query. The pipe (`|`) delimiter separates the statements, passing the output of each operator or function to the next one in the chain. ## Commonly used Operators To run queries on each function or operator in this tutorial, click the **Run in Playground** button. [summarize](/apl/tabular-operators/summarize-operator): Produces a table that aggregates the content of the dataset. The following query returns the count of events by **time**: ```kusto ['github-push-event'] | summarize count() by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) You can use the [aggregation functions](/apl/aggregation-function/statistical-functions) with the **summarize operator** to produce different columns.
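For example, you can assign a name to the aggregation expression to control the resulting column name. This sketch applies `avg` to the request duration field of the sample HTTP logs:

```kusto
['sample-http-logs']
| summarize avg_duration = avg(req_duration_ms) by bin_auto(_time)
```

The result contains an `avg_duration` column alongside the time bins.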
## Top 10 GitHub push events by maximum push id ```kusto ['github-push-event'] | summarize max_if = maxif(push_id, true) by size | top 10 by max_if desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-push-event%27%5D%5Cn%7C%20summarize%20max_if%20%3D%20maxif%28push_id%2C%20true%29%20by%20size%5Cn%7C%20top%2010%20by%20max_if%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Distinct city count by server datacenter ```kusto ['sample-http-logs'] | summarize cities = dcount(['geo.city']) by server_datacenter ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20cities%20%3D%20dcount%28%5B%27geo.city%27%5D%29%20by%20server_datacenter%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) The result of a summarize operation has: * A row for every combination of `by` values * A column for each field named in the `by` clause * A column for each aggregation expression [where](/apl/tabular-operators/where-operator): Filters the dataset to the events that meet a **condition**. The following query filters the data by **method** and **content\_type**: ```kusto ['sample-http-logs'] | where method == "GET" and content_type == "application/octet-stream" | project method , content_type ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20method%20%3D%3D%20%5C%22GET%5C%22%20and%20content_type%20%3D%3D%20%5C%22application%2Foctet-stream%5C%22%5Cn%7C%20project%20method%20%2C%20content_type%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) [count](/apl/tabular-operators/count-operator): Returns the number of events from the input dataset.
```kusto ['sample-http-logs'] | count ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20count%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) [Summarize](/apl/tabular-operators/summarize-operator) count by time bins in sample HTTP logs ```kusto ['sample-http-logs'] | summarize count() by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) [project](/apl/tabular-operators/project-operator): Selects a subset of columns. ```kusto ['sample-http-logs'] | project content_type, ['geo.country'], method, resp_body_size_bytes, resp_header_size_bytes ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20content_type%2C%20%5B%27geo.country%27%5D%2C%20method%2C%20resp_body_size_bytes%2C%20resp_header_size_bytes%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) [take](/apl/tabular-operators/take-operator): Returns up to the specified number of rows. ```kusto ['sample-http-logs'] | take 100 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20take%20100%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) The **limit** operator is an alias to the **take** operator. 
```kusto ['sample-http-logs'] | limit 10 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20limit%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Scalar Functions #### [parse\_json()](/apl/scalar-functions/string-functions#parse-json) The following query parses a string and returns its JSON elements: ```kusto ['sample-http-logs'] | project parsed_json = parse_json( "config_jsonified_metrics") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20parsed_json%20%3D%20parse_json%28%20%5C%22config_jsonified_metrics%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [replace\_string()](/apl/scalar-functions/string-functions#replace-string): Replaces all string matches with another string. ```kusto ['sample-http-logs'] | extend replaced_string = replace_string( "creator", "method", "machala" ) | project replaced_string ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20replaced_string%20%3D%20replace_string%28%20%5C%22creator%5C%22%2C%20%5C%22method%5C%22%2C%20%5C%22machala%5C%22%20%29%5Cn%7C%20project%20replaced_string%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [split()](/apl/scalar-functions/string-functions#split): Splits a given string according to a given delimiter and returns a string array.
```kusto ['sample-http-logs'] | project split_str = split("method_content_metrics", "_") | take 20 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20split_str%20%3D%20split%28%5C%22method_content_metrics%5C%22%2C%20%5C%22_%5C%22%29%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [strcat\_delim()](/apl/scalar-functions/string-functions#strcat-delim): Concatenates a string array into a string with a given delimiter. ```kusto ['sample-http-logs'] | project strcat = strcat_delim(":", ['geo.city'], resp_body_size_bytes) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20strcat%20%3D%20strcat_delim%28%5C%22%3A%5C%22%2C%20%5B%27geo.city%27%5D%2C%20resp_body_size_bytes%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [indexof()](/apl/scalar-functions/string-functions#indexof): Reports the zero-based index of the first occurrence of a specified string within the input string. 
```kusto ['sample-http-logs'] | extend based_index = indexof( ['geo.country'], content_type, 45, 60, resp_body_size_bytes ), specified_time = bin(resp_header_size_bytes, 30) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20based_index%20%3D%20%20indexof%28%20%5B%27geo.country%27%5D%2C%20content_type%2C%2045%2C%2060%2C%20resp_body_size_bytes%20%29%2C%20specified_time%20%3D%20bin%28resp_header_size_bytes%2C%2030%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Regex Examples ```kusto ['sample-http-logs'] | project remove_cutset = trim_start_regex("[^a-zA-Z]", content_type ) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20project%20remove_cutset%20%3D%20trim_start_regex%28%5C%22%5B%5Ea-zA-Z%5D%5C%22%2C%20content_type%20%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Finding logs from a specific city ```kusto ['sample-http-logs'] | where tostring(['geo.city']) matches regex "^Camaquã$" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28%5B%27geo.city%27%5D%29%20matches%20regex%20%5C%22%5ECamaqu%C3%A3%24%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Identifying logs from a specific user agent ```kusto ['sample-http-logs'] | where tostring(user_agent) matches regex "Mozilla/5.0" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28user_agent%29%20matches%20regex%20%5C%22Mozilla%2F5.0%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Finding logs with response body size in a certain range ```kusto ['sample-http-logs'] | where toint(resp_body_size_bytes) >= 4000 and toint(resp_body_size_bytes) <=
5000 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_body_size_bytes%29%20%3E%3D%204000%20and%20toint%28resp_body_size_bytes%29%20%3C%3D%205000%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Finding logs with user agents containing Windows NT ```kusto ['sample-http-logs'] | where tostring(user_agent) matches regex @"Windows NT [\d\.]+" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?qid=m8yNkSVVjGq-s0z19c) ## Finding logs with specific response header size ```kusto ['sample-http-logs'] | where toint(resp_header_size_bytes) == 31 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toint%28resp_header_size_bytes%29%20%3D%3D%2031%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Finding logs with specific request duration ```kusto ['sample-http-logs'] | where toreal(req_duration_ms) < 1 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20toreal%28req_duration_ms%29%20%3C%201%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Finding logs where TLS is enabled and method is POST ```kusto ['sample-http-logs'] | where tostring(is_tls) == "true" and tostring(method) == "POST" ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20tostring%28is_tls%29%20%3D%3D%20%5C%22true%5C%22%20and%20tostring%28method%29%20%3D%3D%20%5C%22POST%5C%22%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Array functions #### [array\_concat()](/apl/scalar-functions/array-functions#array_concat): Concatenates a number of dynamic arrays to a single array. 
```kusto ['sample-http-logs'] | extend concatenate = array_concat( dynamic([5,4,3,87,45,2,3,45])) | project concatenate ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20concatenate%20%3D%20array_concat%28%20dynamic%28%5B5%2C4%2C3%2C87%2C45%2C2%2C3%2C45%5D%29%29%5Cn%7C%20project%20concatenate%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [array\_sum()](/apl/scalar-functions/array-functions#array-sum): Calculates the sum of elements in a dynamic array. ```kusto ['sample-http-logs'] | extend summary_array=dynamic([1,2,3,4]) | project summary_array=array_sum(summary_array) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20summary_array%3Ddynamic%28%5B1%2C2%2C3%2C4%5D%29%5Cn%7C%20project%20summary_array%3Darray_sum%28summary_array%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Conversion functions #### [todatetime()](/apl/scalar-functions/conversion-functions#todatetime): Converts input to datetime scalar. ```kusto ['sample-http-logs'] | extend dated_time = todatetime("2026-08-16") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dated_time%20%3D%20todatetime%28%5C%222026-08-16%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) #### [dynamic\_to\_json()](/apl/scalar-functions/conversion-functions#dynamic-to-json): Converts a scalar value of type dynamic to a canonical string representation. 
```kusto
['sample-http-logs']
| extend dynamic_string = dynamic_to_json(dynamic([10,20,30,40 ]))
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20dynamic_string%20%3D%20dynamic_to_json%28dynamic%28%5B10%2C20%2C30%2C40%20%5D%29%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## String Operators

APL supports a range of [string](/apl/scalar-operators/string-operators), [logical](/apl/scalar-operators/logical-operators), and [numerical operators](/apl/scalar-operators/numerical-operators). In the query below, we use the **contains** operator to find actors whose names contain the strings **-bot** and **\[bot]**:

```kusto
['github-issue-comment-event']
| extend bot = actor contains "-bot" or actor contains "[bot]"
| where bot == true
| summarize count() by bin_auto(_time), actor
| take 20
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issue-comment-event%27%5D%5Cn%7C%20extend%20bot%20%3D%20actor%20contains%20%5C%22-bot%5C%22%20or%20actor%20contains%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20bot%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20actor%5Cn%7C%20take%2020%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

```kusto
['sample-http-logs']
| extend user_status = status contains "200" , agent_flow = user_agent contains "(Windows NT 6.4; AppleWebKit/537.36 Chrome/41.0.2225.0 Safari/537.36"
| where user_status == true
| summarize count() by bin_auto(_time), status
| take 15
```

[Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20user_status%20%3D%20status%20contains%20%5C%22200%5C%22%20%2C%20agent_flow%20%3D%20user_agent%20contains%20%5C%22%28Windows%20NT%206.4%3B%20AppleWebKit%2F537.36%20Chrome%2F41.0.2225.0%20Safari%2F537.36%5C%22%5Cn%7C%20where%20user_status%20%3D%3D%20true%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%2C%20status%5Cn%7C%20take%2015%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## Hash Functions

* [hash\_md5()](/apl/scalar-functions/hash-functions#hash-md5): Returns an MD5 hash value for the input value.
* [hash\_sha256()](/apl/scalar-functions/hash-functions#hash-sha256): Returns a SHA-256 hash value for the input value.
* [hash\_sha1()](/apl/scalar-functions/hash-functions#hash-sha1): Returns a SHA-1 hash value for the input value.

```kusto
['sample-http-logs']
| extend sha_256 = hash_sha256( "resp_header_size_bytes" ), sha_1 = hash_sha1( content_type), md5 = hash_md5( method), sha512 = hash_sha512( "resp_header_size_bytes" )
| project sha_256, sha_1, md5, sha512
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20sha_256%20%3D%20hash_sha256%28%20%5C%22resp_header_size_bytes%5C%22%20%29%2C%20sha_1%20%3D%20hash_sha1%28%20content_type%29%2C%20md5%20%3D%20hash_md5%28%20method%29%2C%20sha512%20%3D%20hash_sha512%28%20%5C%22resp_header_size_bytes%5C%22%20%29%5Cn%7C%20project%20sha_256%2C%20sha_1%2C%20md5%2C%20sha512%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## List all unique groups

```kusto
['sample-http-logs']
| distinct ['id'], is_tls
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20distinct%20%5B'id'%5D%2C%20is_tls%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## Count of all events per datacenter
```kusto ['sample-http-logs'] | summarize Count = count() by server_datacenter | order by Count desc ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20summarize%20Count%20%3D%20count%28%29%20by%20server_datacenter%5Cn%7C%20order%20by%20Count%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Change the time clause ```kusto ['github-issues-event'] | where _time == ago(1m) | summarize count(), sum(['milestone.number']) by _time=bin(_time, 1m) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20_time%20%3D%3D%20ago%281m%29%5Cn%7C%20summarize%20count%28%29%2C%20sum%28%5B%27milestone.number%27%5D%29%20by%20_time%3Dbin%28_time%2C%201m%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Rounding functions * [floor()](/apl/scalar-functions/rounding-functions#floor): Calculates the largest integer less than, or equal to, the specified numeric expression. * [ceiling()](/apl/scalar-functions/rounding-functions#ceiling): Calculates the smallest integer greater than, or equal to, the specified numeric expression. * [bin()](/apl/scalar-functions/rounding-functions#bin): Rounds values down to an integer multiple of a given bin size. 
```kusto ['sample-http-logs'] | extend largest_integer_less = floor( resp_header_size_bytes ), smallest_integer_greater = ceiling( req_duration_ms ), integer_multiple = bin( resp_body_size_bytes, 5 ) | project largest_integer_less, smallest_integer_greater, integer_multiple ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20largest_integer_less%20%3D%20floor%28%20resp_header_size_bytes%20%29%2C%20smallest_integer_greater%20%3D%20ceiling%28%20req_duration_ms%20%29%2C%20integer_multiple%20%3D%20bin%28%20resp_body_size_bytes%2C%205%20%29%5Cn%7C%20project%20largest_integer_less%2C%20smallest_integer_greater%2C%20integer_multiple%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Truncate decimals using round function ```kusto ['sample-http-logs'] | project rounded_value = round(req_duration_ms, 2) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20rounded_value%20%3D%20round%28req_duration_ms%2C%202%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Truncate decimals using floor function ```kusto ['sample-http-logs'] | project floor_value = floor(resp_body_size_bytes), ceiling_value = ceiling(req_duration_ms) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20project%20floor_value%20%3D%20floor%28resp_body_size_bytes%29%2C%20ceiling_value%20%3D%20ceiling%28req_duration_ms%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## HTTP 5xx responses (day wise) for the last 7 days - one bar per day ```kusto ['sample-http-logs'] | where _time > ago(7d) | where req_duration_ms >= 5 and req_duration_ms < 6 | summarize count(), histogram(resp_header_size_bytes, 20) by bin(_time, 1d) | order by _time desc ``` [Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20where%20_time%20%3E%20ago\(7d\)%20%7C%20where%20req_duration_ms%20%3E%3D%205%20and%20req_duration_ms%20%3C%206%20%7C%20summarize%20count\(\)%2C%20histogram\(resp_header_size_bytes%2C%2020\)%20by%20bin\(_time%2C%201d\)%20%7C%20order%20by%20_time%20desc%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%227d%22%7D%7D) ## Implement a remapper on remote address logs ```kusto ['sample-http-logs'] | extend RemappedStatus = case(req_duration_ms >= 0.57, "new data", resp_body_size_bytes >= 1000, "size bytes", resp_header_size_bytes == 40, "header values", "doesntmatch") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20extend%20RemappedStatus%20%3D%20case%28req_duration_ms%20%3E%3D%200.57%2C%20%5C%22new%20data%5C%22%2C%20resp_body_size_bytes%20%3E%3D%201000%2C%20%5C%22size%20bytes%5C%22%2C%20resp_header_size_bytes%20%3D%3D%2040%2C%20%5C%22header%20values%5C%22%2C%20%5C%22doesntmatch%5C%22%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Advanced aggregations In this section, you will learn how to run queries using different functions and operators. 
```kusto ['sample-http-logs'] | extend prospect = ['geo.city'] contains "Okayama" or uri contains "/api/v1/messages/back" | extend possibility = server_datacenter contains "GRU" or status contains "301" | summarize count(), topk( user_agent, 6 ) by bin(_time, 10d), ['geo.country'] | take 4 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20prospect%20%3D%20%5B%27geo.city%27%5D%20contains%20%5C%22Okayama%5C%22%20or%20uri%20contains%20%5C%22%2Fapi%2Fv1%2Fmessages%2Fback%5C%22%5Cn%7C%20extend%20possibility%20%3D%20server_datacenter%20contains%20%5C%22GRU%5C%22%20or%20status%20contains%20%5C%22301%5C%22%5Cn%7C%20summarize%20count%28%29%2C%20topk%28%20user_agent%2C%206%20%29%20by%20bin%28_time%2C%2010d%29%2C%20%5B%27geo.country%27%5D%5Cn%7C%20take%204%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Searching map fields ```kusto ['otel-demo-traces'] | where isnotnull( ['attributes.custom']) | extend extra = tostring(['attributes.custom']) | search extra:"0PUK6V6EV0" | project _time, trace_id, name, ['attributes.custom'] ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%5Cn%7C%20where%20isnotnull%28%20%5B'attributes.custom'%5D%29%5Cn%7C%20extend%20extra%20%3D%20tostring%28%5B'attributes.custom'%5D%29%5Cn%7C%20search%20extra%3A%5C%220PUK6V6EV0%5C%22%5Cn%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Configure Processing rules ```kusto ['sample-http-logs'] | where _sysTime > ago(1d) | summarize count() by method ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20where%20_sysTime%20%3E%20ago%281d%29%5Cn%7C%20summarize%20count%28%29%20by%20method%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%221d%22%7D%7D) ## 
Return different values based on the evaluation of a condition ```kusto ['sample-http-logs'] | extend MemoryUsageStatus = iff(req_duration_ms > 10000, "Highest", "Normal") ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20MemoryUsageStatus%20%3D%20iff%28req_duration_ms%20%3E%2010000%2C%20%27Highest%27%2C%20%27Normal%27%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Working with different operators ```kusto ['hn'] | extend superman = text contains "superman" or title contains "superman" | extend batman = text contains "batman" or title contains "batman" | extend hero = case( superman and batman, "both", superman, "superman ", // spaces change the color batman, "batman ", "none") | where (superman or batman) and not (batman and superman) | summarize count(), topk(type, 3) by bin(_time, 30d), hero | take 10 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27hn%27%5D%5Cn%7C%20extend%20superman%20%3D%20text%20contains%20%5C%22superman%5C%22%20or%20title%20contains%20%5C%22superman%5C%22%5Cn%7C%20extend%20batman%20%3D%20text%20contains%20%5C%22batman%5C%22%20or%20title%20contains%20%5C%22batman%5C%22%5Cn%7C%20extend%20hero%20%3D%20case%28%5Cn%20%20%20%20superman%20and%20batman%2C%20%5C%22both%5C%22%2C%5Cn%20%20%20%20superman%2C%20%5C%22superman%20%20%20%5C%22%2C%20%2F%2F%20spaces%20change%20the%20color%5Cn%20%20%20%20batman%2C%20%5C%22batman%20%20%20%20%20%20%20%5C%22%2C%5Cn%20%20%20%20%5C%22none%5C%22%29%5Cn%7C%20where%20%28superman%20or%20batman%29%20and%20not%20%28batman%20and%20superman%29%5Cn%7C%20summarize%20count%28%29%2C%20topk%28type%2C%203%29%20by%20bin%28_time%2C%2030d%29%2C%20hero%5Cn%7C%20take%2010%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ```kusto ['sample-http-logs'] | summarize flow = dcount( content_type) by ['geo.country'] | take 50 ``` [Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20summarize%20flow%20%3D%20dcount%28%20content_type%29%20by%20%5B%27geo.country%27%5D%5Cn%7C%20take%2050%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Get the JSON into a property bag using parse-json ```kusto example | where isnotnull(log) | extend parsed_log = parse_json(log) | project service, parsed_log.level, parsed_log.message ``` ## Get average response using project keep function ```kusto ['sample-http-logs'] | where ['geo.country'] == "United States" or ['id'] == 'b2b1f597-0385-4fed-a911-140facb757ef' | extend systematic_view = ceiling( resp_header_size_bytes ) | extend resp_avg = cos( resp_body_size_bytes ) | project-away systematic_view | project-keep resp_avg | take 5 ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%5Cn%7C%20where%20%5B'geo.country'%5D%20%3D%3D%20%5C%22United%20States%5C%22%20or%20%5B'id'%5D%20%3D%3D%20%5C%22b2b1f597-0385-4fed-a911-140facb757ef%5C%22%5Cn%7C%20extend%20systematic_view%20%3D%20ceiling%28%20resp_header_size_bytes%20%29%5Cn%7C%20extend%20resp_avg%20%3D%20cos%28%20resp_body_size_bytes%20%29%5Cn%7C%20project-away%20systematic_view%5Cn%7C%20project-keep%20resp_avg%5Cn%7C%20take%205%22%7D) ## Combine multiple percentiles into a single chart in APL ```kusto ['sample-http-logs'] | summarize percentiles_array(req_duration_ms, 50, 75, 90) by bin_auto(_time) ``` [Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20summarize%20percentiles_array\(req_duration_ms%2C%2050%2C%2075%2C%2090\)%20by%20bin_auto\(_time\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D) ## Combine mathematical functions ```kusto ['sample-http-logs'] | extend tangent = tan( req_duration_ms ), cosine = cos( resp_header_size_bytes ), absolute_input = abs( 
req_duration_ms ), sine = sin( resp_header_size_bytes ), power_factor = pow( req_duration_ms, 4)
| extend angle_pi = degrees( resp_body_size_bytes ), pie = pi()
| project tangent, cosine, absolute_input, angle_pi, pie, sine, power_factor
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%5Cn%7C%20extend%20tangent%20%3D%20tan%28%20req_duration_ms%20%29%2C%20cosine%20%3D%20cos%28%20resp_header_size_bytes%20%29%2C%20absolute_input%20%3D%20abs%28%20req_duration_ms%20%29%2C%20sine%20%3D%20sin%28%20resp_header_size_bytes%20%29%2C%20power_factor%20%3D%20pow%28%20req_duration_ms%2C%204%29%5Cn%7C%20extend%20angle_pi%20%3D%20degrees%28%20resp_body_size_bytes%20%29%2C%20pie%20%3D%20pi%28%29%5Cn%7C%20project%20tangent%2C%20cosine%2C%20absolute_input%2C%20angle_pi%2C%20pie%2C%20sine%2C%20power_factor%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

```kusto
['github-issues-event']
| where actor !endswith "[bot]"
| where repo startswith "kubernetes/"
| where action == "opened"
| summarize count() by bin_auto(_time)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27github-issues-event%27%5D%5Cn%7C%20where%20actor%20%21endswith%20%5C%22%5Bbot%5D%5C%22%5Cn%7C%20where%20repo%20startswith%20%5C%22kubernetes%2F%5C%22%5Cn%7C%20where%20action%20%3D%3D%20%5C%22opened%5C%22%5Cn%7C%20summarize%20count%28%29%20by%20bin_auto%28_time%29%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## Change global configuration attributes

```kusto
['sample-http-logs']
| extend status = coalesce(status, "info")
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20extend%20status%20%3D%20coalesce\(status%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## Set default value on event field

```kusto
['sample-http-logs']
| project 
status = case(
    isnotnull(status) and status != "", content_type, // use the content_type if it’s not null and not an empty string
    "info" // default value
)
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project%20status%20%3D%20case\(isnotnull\(status\)%20and%20status%20!%3D%20%5C%22%5C%22%2C%20content_type%2C%20%5C%22info%5C%22\)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)

## Extract nested payment amount from custom attributes map field

```kusto
['otel-demo-traces']
| extend amount = ['attributes.custom']['app.payment.amount']
| where isnotnull( amount)
| project _time, trace_id, name, amount, ['attributes.custom']
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20amount%20%3D%20%5B'attributes.custom'%5D%5B'app.payment.amount'%5D%20%7C%20where%20isnotnull\(%20amount\)%20%7C%20project%20_time%2C%20trace_id%2C%20name%2C%20amount%2C%20%5B'attributes.custom'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)

## Filtering GitHub issues by label identifier

```kusto
['github-issues-event']
| extend data = tostring(labels)
| where labels contains "d73a4a"
```

[Run in Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20extend%20data%20%3D%20tostring\(labels\)%20%7C%20where%20labels%20contains%20'd73a4a'%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)

## Aggregate trace counts by HTTP method attribute in custom map

```kusto
['otel-demo-traces']
| extend httpFlavor = tostring(['attributes.custom'])
| summarize Count=count() by ['attributes.http.method']
```

[Run in 
Playground](https://play.axiom.co/axiom-play-qf1k/explorer?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20httpFlavor%20%3D%20tostring\(%5B'attributes.custom'%5D\)%20%7C%20summarize%20Count%3Dcount\(\)%20by%20%5B'attributes.http.method'%5D%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)

# Connect Axiom with Cloudflare Logpush

Axiom gives you an all-at-once view of key Cloudflare Logpush metrics and logs, out of the box, with our dynamic Cloudflare Logpush dashboard.

Cloudflare Logpush is a feature that allows you to push HTTP request logs and other Cloudflare-generated logs directly to your desired storage, analytics, and monitoring solutions like Axiom. The integration with Axiom provides real-time insights into web traffic and operational issues, helping you monitor and troubleshoot effectively.

## What’s Cloudflare Logpush?

Cloudflare Logpush enables Cloudflare users to automatically export their logs in JSON format to a variety of endpoints. This feature is incredibly useful for analytics, auditing, debugging, and monitoring the performance and security of websites. Types of logs you can export include HTTP request logs, firewall events, and more.

## Installing Cloudflare Logpush app

### Prerequisites

* An active Cloudflare Enterprise account
* API token or global API key

You can create a token that has access to a single zone, a single account, or a mix of these, depending on your needs. For account access, the token must have these permissions:

* Logs: Edit
* Account settings: Read

For zones, only edit permission is required for logs.

## Steps

* Log in to Cloudflare, go to your Cloudflare dashboard, and then select the Enterprise zone (domain) you want to enable Logpush for.
* Optionally, set filters and fields. You can filter logs by field (like Client IP, User Agent, etc.) and set the type of logs you want (for example, HTTP requests, firewall events).
* In Axiom, click **Settings**, select **Apps**, and install the Cloudflare Logpush app with the token you created from the profile settings in Cloudflare.
* You see your available accounts and zones. Select the Cloudflare datasets you want to subscribe to.
* The installation uses the Cloudflare API to create Logpush jobs for each selected dataset.
* After the installation completes, you can find the installed Logpush jobs in Cloudflare, both for zone-scoped and account-scoped Logpush jobs.
* In Axiom, you can see your Cloudflare Logpush dashboard.

Using Axiom with Cloudflare Logpush offers a powerful solution for real-time monitoring, observability, and analytics. Axiom can help you gain deep insights into your app’s performance, errors, and bottlenecks.

### Benefits of using the Axiom Cloudflare Logpush Dashboard

* Real-time visibility into web performance: One of the most crucial features is the ability to see how your website or app is performing in real time. The dashboard can show everything from page load times to error rates, giving you immediate insights that can help in timely decision-making.
* Actionable insights for troubleshooting: The dashboard doesn’t just provide raw data; it provides insights. Whether it’s an error that needs immediate fixing or performance metrics that point to a problem in your app, having this information readily available makes it easier to identify problems and resolve them swiftly.
* DNS metrics: Understanding the DNS requests, DNS queries, and DNS cache hits from your app is vital to track request spikes or get the total number of queries in your system.
* Centralized logging and error tracing: With logs coming in from various parts of your app stack, centralizing them within Axiom makes it easier to correlate events across different layers of your infrastructure. This is crucial for troubleshooting complex issues that may span multiple services or components.

## Supported Cloudflare Logpush Datasets

Axiom supports all the zone-scoped and account-scoped Cloudflare datasets.

Zone-scoped:

* DNS logs
* Firewall events
* HTTP requests
* NEL reports
* Spectrum events

Account-scoped:

* Access requests
* Audit logs
* CASB Findings
* Device posture results
* DNS Firewall Logs
* Gateway DNS
* Gateway HTTP
* Gateway Network
* Magic IDS Detections
* Network Analytics Logs
* Workers Trace Events
* Zero Trust Network Session Logs

# Connect Axiom with Cloudflare Workers

Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with our dynamic Cloudflare Workers dashboard.

Axiom’s Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status. The data in the Axiom dashboard gives you better insights into the state of your Cloudflare Workers so you can easily monitor bad requests, popular URLs, cumulative execution time, successful requests, and more. The app is part of Axiom’s unified logging and observability platform, so you can easily track Cloudflare Workers edge requests alongside a comprehensive view of other resources in your Cloudflare Workers environments.

## What is Cloudflare Workers

Cloudflare Workers is a serverless computing platform developed by Cloudflare. 
The Workers platform allows developers to deploy and run JavaScript code directly at the network edge in more than 200 data centers worldwide. This serverless architecture enables high performance, low latency, and efficient scaling for web apps and APIs.

## Sending Cloudflare Worker logs to Axiom

The Axiom Cloudflare Workers plugin is available on [GitHub](https://github.com/axiomhq/axiom-cloudflare-workers).

1. Copy the contents of [src/worker.js](https://github.com/axiomhq/axiom-cloudflare-workers/blob/main/src/worker.js) into a new worker on Cloudflare.
2. Update the authentication variables to the corresponding dataset and token:

   ```js
   const axiomDataset = "my-dataset" // Your Axiom dataset
   const axiomToken = "xapt-xxx" // Your Axiom API token
   ```

   * The dataset is where your Cloudflare worker logs are stored. Create a dataset from the settings page in the Axiom UI.
   * The Axiom token is your API token with ingest and query permissions.
3. Add triggers for the worker, for example, a route trigger:

   * Navigate to the worker and click on the Triggers tab.
   * Scroll down to Routes and click Add Route.
   * Enter a route, for example, \*.example.com, choose the related zone, then click Save.

## View Cloudflare Workers Logs

When requests are made to the routes you set up, the worker is triggered and the logs are delivered to your Axiom dataset.

# Connect Axiom with Grafana

Learn how to extend the functionality of Grafana by installing the Axiom data source plugin.

## What is a Grafana data source plugin?

Grafana is an open-source tool for time-series analytics, visualization, and alerting. It’s frequently used in DevOps and IT Operations roles to provide real-time information on system health and performance. Data sources in Grafana are the actual databases or services where the data is stored. 
Grafana has a variety of data source plugins that connect Grafana to different types of databases or services. This enables Grafana to query those sources and display that data on its dashboards. The data sources can be anything from traditional SQL databases to time-series databases, or metrics and logs from Axiom.

A Grafana data source plugin extends the functionality of Grafana by allowing it to interact with a specific type of data source. These plugins enable users to extract data from a variety of different sources, not just those that come supported by default in Grafana.

## Prerequisites

* [Create an Axiom account](https://app.axiom.co/).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to create, read, update, and delete datasets.

## Install the Axiom Grafana data source plugin on Grafana Cloud

* In Grafana, click Administration > Plugins in the side navigation menu to view installed plugins.
* In the filter bar, search for the Axiom plugin.
* Click on the plugin logo.
* Click Install.

When the update is complete, a confirmation message is displayed, indicating that the installation was successful. The Axiom Grafana plugin can also be installed from the [Grafana Plugins page](https://grafana.com/grafana/plugins/axiomhq-axiom-datasource/).

## Install the Axiom Grafana data source plugin on local Grafana

The Axiom data source plugin for Grafana is [open source on GitHub](https://github.com/axiomhq/axiom-grafana). It can be installed via the Grafana CLI, or via Docker.
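For the Docker route, a minimal `docker-compose.yml` might look like the following sketch. The service name, image tag, and port mapping are assumptions rather than required values; only the `GF_INSTALL_PLUGINS` variable and the plugin ID `axiomhq-axiom-datasource` come from this guide:

```yaml
# Minimal sketch: run Grafana with the Axiom data source plugin preinstalled.
# Service name, image tag, and host port are assumptions; adjust to your setup.
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      # Grafana installs the listed plugins on startup
      - GF_INSTALL_PLUGINS=axiomhq-axiom-datasource
```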
### Install the Axiom Grafana Plugin using Grafana CLI

```bash
grafana-cli plugins install axiomhq-axiom-datasource
```

### Install via Docker

* Add the plugin to your `docker-compose.yml` or `Dockerfile`
* Set the environment variable `GF_INSTALL_PLUGINS` to include the plugin

Example: `GF_INSTALL_PLUGINS="axiomhq-axiom-datasource"`

## Configuration

* Add a new data source in Grafana
* Select the Axiom data source type.
* Enter the previously generated API token.
* Save and test the data source.

## Build Queries with Query Editor

The Axiom data source plugin provides a custom query editor to build and visualize your Axiom event data. After configuring the Axiom data source, start building visualizations from metrics and logs stored in Axiom.

* Create a new panel in Grafana by clicking on Add visualization.
* Select the Axiom data source.
* Use the query editor to choose the desired metrics, dimensions, and filters.

## Benefits of the Axiom Grafana data source plugin

The Axiom Grafana data source plugin allows users to display and interact with their Axiom data directly from within Grafana. By doing so, it provides several advantages:

1. **Unified visualization:** The Axiom Grafana data source plugin allows users to utilize Grafana’s powerful visualization tools with Axiom’s data. This enables users to create, explore, and share dashboards which visually represent their Axiom logs and metrics.
2. **Rich Querying Capability:** Grafana has a powerful and flexible interface for building data queries. With the Axiom plugin, you can leverage this capability to build complex queries against your Axiom data.
3. **Customizable Alerting:** Grafana’s alerting feature allows you to set alerts based on your queries’ results, and set up custom alerts based on specific conditions in your Axiom log data.
4. 
**Sharing and Collaboration:** Grafana’s features for sharing and collaboration can help teams work together more effectively. Share Axiom data visualizations with others, collaborate on dashboards, and discuss insights directly in Grafana.

# Apps

Enrich your Axiom organization with dedicated apps.

This section walks you through a catalogue of dedicated apps that enrich your Axiom organization. To use standard APIs and other data shippers like the Elasticsearch Bulk API, FluentBit log processor or Fluentd log collector, go to [Send data](/send-data/ingest) instead.

# Enrich Axiom experience with AWS Lambda

This page explains how to enrich your Axiom experience with AWS Lambda.

Use the Axiom Lambda Extension to enrich your Axiom organization with quick filters and a dashboard. For information on how to send logs and platform events of your Lambda function to Axiom, see [Send data from AWS Lambda](/send-data/aws-lambda).

## What’s the Axiom Lambda Extension

AWS Lambda is a compute service that allows you to build applications and run your code at scale without provisioning or maintaining any servers. Use the Axiom Lambda Extension to collect Lambda logs, performance metrics, platform events, and memory usage from your Lambda functions. With the Axiom Lambda Extension, you can monitor Lambda performance, aggregate system-level metrics for your serverless applications, and optimize Lambda functions through easy-to-use automatic dashboards.

With the Axiom Lambda Extension, you can:

* Monitor your Lambda functions and invocations.
* Get full visibility into your AWS Lambda events in minutes.
* Collect metrics and logs from your Lambda-based serverless applications.
* Track and view enhanced memory usage by versions, durations, and cold starts.
* Detect and get alerts on Lambda event errors, Lambda request timeouts, and low execution time.
## Comprehensive AWS Lambda dashboards

The Axiom AWS Lambda integration comes with a pre-built dashboard where you can see and group your functions with the versions and the AWS resource that triggers them, making this the ideal starting point for getting an advanced view of the performance and health of your AWS Lambda serverless services and Lambda function events.

The AWS Lambda dashboards automatically show up in Axiom through schema detection after installing the Axiom Lambda Extension. These zero-config dashboards help you spot and troubleshoot Lambda function errors. For example, if there’s high memory usage on your functions, you can spot the unusual delay from the max execution dashboard and filter your errors by functions, durations, invocations, and versions. With your Lambda version name, you can gain and expand your views on what’s happening in your Lambda event source mapping and invocation type.

## Monitor Lambda functions and usage in Axiom

Real-time visibility into your function logs is important because any delay between sending your Lambda request and its execution adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum and minimum execution time, and all invocations by function.

The Axiom Lambda Extension gives you full visibility into the most important metrics and logs coming from your Lambda function out of the box, without any further configuration required.

## Track cold start on your Lambda function

A cold start occurs when there’s a delay between your invocation and runtime created during the initialization process. During this period, there’s no available function instance to respond to an invocation. 
With the Axiom built-in Serverless AWS Lambda dashboard, you can track and see the effect of cold starts on each of your Lambda functions. This data lets you know when to take actionable steps, such as using provisioned concurrency or reducing function dependencies.

## Optimize slow-performing Lambda queries

Grouping logs with Lambda invocations and execution time by function provides insights into your event request and response patterns. You can extend your query to view when an invocation request is rejected, and configure alerts to be notified on serverless log patterns and Lambda function payloads. With the invocation request dashboard, you can monitor request function logs and see how your Lambda serverless functions process your events and Lambda queues over time.

## Detect timeout on your Lambda function

Axiom Lambda function monitors let you identify the different points of invocation failures, cold-start delays, and AWS Lambda errors on your Lambda functions. With standard function logs like invocations by function and Lambda cold starts, monitoring your execution time can alert you to a significant spike whenever an error occurs in your Lambda function.

## Smart filters

Axiom Lambda Serverless Smart Filters let you easily filter down to specific AWS Lambda functions or serverless projects and use saved queries to get deep insights into how functions are performing with a single click.

# Connect Axiom with Netlify

Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects.
This integration gives you a better understanding of how your Jamstack apps are performing. You can easily monitor logs and metrics related to your website traffic, serverless functions, and app requests. The integration is easy to set up, and you don’t need to configure anything to get started. With Axiom’s Zero-Config Observability app, you can see all your metrics in real time, without sampling. That means you can get a complete view of your app’s performance without any gaps in data.

Axiom’s Netlify app comes with a pre-built dashboard that gives you control over your Jamstack projects. You can use this dashboard to track key metrics and make informed decisions about your app’s performance.

Overall, the Axiom Netlify app makes it easy to monitor and optimize your Jamstack apps. Note that this integration is only available for Netlify customers on enterprise-level plans where [Log Drains are supported](https://docs.netlify.com/monitor-sites/log-drains/).

## What is Netlify

Netlify is a platform for building highly performant and dynamic websites, e-commerce stores, and web apps. Netlify automatically builds your site and deploys it across its global edge network. The Netlify platform provides teams everything they need to take modern web projects from the first preview to full production.

## Sending logs to Axiom

The log events Axiom receives give you better insight into the state of your Netlify sites environment so that you can easily monitor traffic volume, website configurations, function logs, resource usage, and more.

1. Log in to your [Axiom account](https://app.axiom.co/), click **Apps** from the **Settings** menu, select the **Netlify app**, and click **Install now**.
   * This redirects you to Netlify to authorize Axiom.
   * Click **Authorize**, and then copy the integration token.
2. Log into your **Netlify Team Account**, click your site settings, and select **Log Drains**.
* In your log drain service, select **Axiom**, paste the integration token from Step 1, and then click **Connect**.

## App overview

### Traffic and function logs

With Axiom, you can instrument and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use our pre-built Netlify Dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs are produced when users collaborate and interact with your sites and websites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset. You can also drill down to your site source with our advanced query language and fork our dashboard to start building your own site monitors.

* Back in your Axiom datasets console, you’ll see all your traffic and function logs in your `netlify` dataset.

### Live stream logs

Stream your sites and app logs live, and filter them to see important information.

### Zero-config dashboard for your Netlify sites

Use our pre-built Netlify Dashboard to get an overview of all the important metrics. When ready, you can fork our dashboard and start building your own!

## Start logging Netlify Sites today

The Axiom Netlify integration allows you to monitor and log all of your sites and apps in one place. With the Axiom app, you can quickly detect site errors and get high-level insights into your Netlify projects.

* We welcome ideas, feedback, and collaboration. Join us in our [Discord Community](http://axiom.co/discord) to share them with us.

# Connect Axiom with Tailscale

This page explains how to integrate Axiom with Tailscale.

Tailscale is a secure networking solution that allows you to create and manage a private network (tailnet), securely connecting all your devices. Integrating Axiom with Tailscale allows you to stream your audit and network flow logs directly to Axiom, unlocking powerful insights and analysis.
Whether you’re conducting a security audit, optimizing performance, or ensuring compliance, Axiom’s Tailscale dashboard equips you with the tools to maintain a secure and efficient network, respond quickly to potential issues, and make informed decisions about your network configuration and usage.

## Prerequisites

* [Create an Axiom account](https://app.axiom.co/register).
* [Create a dataset in Axiom](/reference/datasets) where you send your data.
* [Create an API token in Axiom](/reference/tokens) with permissions to update the dataset you have created.

{/* list separator */}

* [Create a Tailscale account](https://login.tailscale.com/start).

## Setup

1. In Tailscale, go to the [configuration logs page](https://login.tailscale.com/admin/logs) of the admin console.
2. Add Axiom as a configuration log streaming destination in Tailscale. For more information, see the [Tailscale documentation](https://tailscale.com/kb/1255/log-streaming?q=stream#add-a-configuration-log-streaming-destination).

## Tailscale dashboard

Axiom displays the data it receives in a pre-built Tailscale dashboard that delivers immediate, actionable insights into your tailnet’s activity and health. This comprehensive overview includes:

* **Log type distribution**: Understand the balance between configuration audit logs and network flow logs over time.
* **Top actions and hosts**: Identify the most common network actions and most active devices.
* **Traffic visualization**: View physical, virtual, and exit traffic patterns for both sources and destinations.
* **User activity tracking**: Monitor actions by user display name, email, and ID for security audits and compliance.
* **Configuration log stream**: Access a detailed audit trail of all configuration changes.

With these insights, you can:

* Quickly identify unusual network activity or traffic patterns.
* Track configuration changes and user actions.
* Monitor overall network health and performance.
* Investigate specific events or users as needed.
* Understand traffic distribution across your tailnet.

# Connect Axiom with Terraform

Provision and manage Axiom resources such as datasets and monitors with Terraform.

The Axiom Terraform Provider lets you provision and manage Axiom resources (datasets, notifiers, monitors, and users) with Terraform. This means that you can programmatically create resources, access existing ones, and perform further infrastructure automation tasks.

Install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest). To see the provider in action, check out the [example](https://github.com/axiomhq/terraform-provider-axiom/blob/main/example/main.tf).

This guide explains how to install the provider and perform some common procedures such as creating new resources and accessing existing ones. For the full API reference, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/docs).

## Prerequisites

* [Sign up for a free Axiom account](https://app.axiom.co/register). All you need is an email address.
* [Create an advanced API token in Axiom](/reference/tokens#create-advanced-api-token) with the permissions to perform the actions you want to use Terraform for. For example, to use Terraform to create and update datasets, create the advanced API token with these permissions.
* [Create a Terraform account](https://app.terraform.io/signup/account).
* [Install the Terraform CLI](https://developer.hashicorp.com/terraform/cli).

## Install the provider

To install the Axiom Terraform Provider from the [Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest), follow these steps:

1. Add the following code to your Terraform configuration file. Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.
```hcl
terraform {
  required_providers {
    axiom = {
      source = "axiomhq/axiom"
    }
  }
}

provider "axiom" {
  api_token = "API_TOKEN"
}
```

2. In your terminal, go to the folder of your main Terraform configuration file, and then run the command `terraform init`.

## Create new resources

### Create dataset

To create a dataset in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name` and `description` fields.

```hcl
resource "axiom_dataset" "test_dataset" {
  name        = "test_dataset"
  description = "This is a test dataset created by Terraform."
}
```

### Create notifier

To create a Slack notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `SLACK_URL` with the webhook URL from your Slack instance. For more information on obtaining this URL, see the [Slack documentation](https://api.slack.com/messaging/webhooks).

```hcl
resource "axiom_notifier" "test_slack_notifier" {
  name = "test_slack_notifier"
  properties = {
    slack = {
      slack_url = "SLACK_URL"
    }
  }
}
```

To create a Discord notifier in Axiom using the provider, add the following code to your Terraform configuration file.

* Replace `DISCORD_CHANNEL` with the webhook URL from your Discord instance. For more information on obtaining this URL, see the [Discord documentation](https://discord.com/developers/resources/webhook).
* Replace `DISCORD_TOKEN` with your Discord API token. For more information on obtaining this token, see the [Discord documentation](https://discord.com/developers/topics/oauth2).

```hcl
resource "axiom_notifier" "test_discord_notifier" {
  name = "test_discord_notifier"
  properties = {
    discord = {
      discord_channel = "DISCORD_CHANNEL"
      discord_token   = "DISCORD_TOKEN"
    }
  }
}
```

To create an email notifier in Axiom using the provider, add the following code to your Terraform configuration file. Replace `EMAIL1` and `EMAIL2` with the email addresses you want to notify.
```hcl
resource "axiom_notifier" "test_email_notifier" {
  name = "test_email_notifier"
  properties = {
    email = {
      emails = ["EMAIL1", "EMAIL2"]
    }
  }
}
```

For more information on the types of notifiers you can create, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/notifier).

### Create monitor

To create a monitor in Axiom using the provider, add the following code to your Terraform configuration file and customize it:

```hcl
resource "axiom_monitor" "test_monitor" {
  depends_on = [axiom_dataset.test_dataset, axiom_notifier.test_slack_notifier]

  name             = "test_monitor"
  description      = "This is a test monitor created by Terraform."
  apl_query        = "['test_dataset'] | summarize count() by bin_auto(_time)"
  interval_minutes = 5
  operator         = "Above"
  range_minutes    = 5
  threshold        = 1
  notifier_ids = [
    axiom_notifier.test_slack_notifier.id
  ]
  alert_on_no_data = false
  notify_by_group  = false
}
```

This example creates a monitor using the dataset `test_dataset` and the notifier `test_slack_notifier`. These are resources you have created and accessed in the sections above.

* Customize the `name` and the `description` fields.
* In the `apl_query` field, specify the APL query for the monitor.

For more information on these fields, see the [documentation in the Terraform Registry](https://registry.terraform.io/providers/axiomhq/axiom/latest/resources/monitor).

### Create user

To create a user in Axiom using the provider, add the following code to your Terraform configuration file. Customize the `name`, `email`, and `role` fields.

```hcl
resource "axiom_user" "test_user" {
  name  = "test_user"
  email = "test@abc.com"
  role  = "user"
}
```

## Access existing resources

### Access existing dataset

To access an existing dataset, follow these steps:

1. Determine the ID of the Axiom dataset by sending a GET request to the [`datasets` endpoint of the Axiom API](/restapi/endpoints/getDatasets).
2.
Add the following code to your Terraform configuration file. Replace `DATASET_ID` with the ID of the Axiom dataset.

```hcl
data "axiom_dataset" "test_dataset" {
  id = "DATASET_ID"
}
```

### Access existing notifier

To access an existing notifier, follow these steps:

1. Determine the ID of the Axiom notifier by sending a GET request to the `notifiers` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `NOTIFIER_ID` with the ID of the Axiom notifier.

```hcl
data "axiom_notifier" "test_slack_notifier" {
  id = "NOTIFIER_ID"
}
```

### Access existing monitor

To access an existing monitor, follow these steps:

1. Determine the ID of the Axiom monitor by sending a GET request to the `monitors` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `MONITOR_ID` with the ID of the Axiom monitor.

```hcl
data "axiom_monitor" "test_monitor" {
  id = "MONITOR_ID"
}
```

### Access existing user

To access an existing user, follow these steps:

1. Determine the ID of the Axiom user by sending a GET request to the `users` endpoint of the Axiom API.
2. Add the following code to your Terraform configuration file. Replace `USER_ID` with the ID of the Axiom user.

```hcl
data "axiom_user" "test_user" {
  id = "USER_ID"
}
```

# Connect Axiom with Vercel

Easily monitor data from requests, functions, and web vitals in one place to get the deepest observability experience for your Vercel projects.

Connect Axiom with Vercel to get the deepest observability experience for your Vercel projects. Easily monitor data from requests, functions, and web vitals in one place. 100% live and 100% of your data, no sampling.

Axiom’s Vercel app ships with a pre-built dashboard and pre-installed monitors so you can be in complete control of your projects with minimal effort. If you use the Axiom Vercel integration, [annotations](/query-data/annotate-charts) are automatically created for deployments.

## What is Vercel?
Vercel is a platform for frontend frameworks and static sites, built to integrate with your headless content, commerce, or database. Vercel provides a frictionless developer experience to take care of the hard things: deploying instantly, scaling automatically, and serving personalized content around the globe. Vercel makes it easy for frontend teams to develop, preview, and ship delightful user experiences, where performance is the default.

## Send logs to Axiom

Install the [Axiom Vercel app](https://vercel.com/integrations/axiom) and you’ll be streaming logs and web vitals within minutes!

## App Overview

### Request and function logs

For both requests and serverless functions, Axiom automatically installs a [log drain](https://vercel.com/blog/log-drains) in your Vercel account to capture data live.

As users interact with your website, various logs are produced. Axiom captures all these logs and ingests them into the `vercel` dataset. You can stream and analyze these logs live, or use our pre-built Vercel Dashboard to get an overview of all the important metrics. When you’re ready, you can fork our dashboard and start building your own!

For function logs, if you call `console.log`, `console.warn`, or `console.error` in your function, the output is also captured and made available as part of the log. You can use our extended query language, APL, to easily search these logs.

## Web vitals

Axiom supports capturing and analyzing Web Vitals data directly from your user’s browser without any sampling and with more data than is available elsewhere. It’s perfect to pair with Vercel’s built-in analytics when you want to get really deep into a specific problem or debug issues with a specific audience (user agent, location, region, etc.).

Web Vitals are currently only supported for Next.js websites. Expanded support is coming soon.

### Installation

Perform the following steps to install Web Vitals:

1.
In your Vercel project, run `npm install --save next-axiom`.
2. In `next.config.js`, wrap your Next.js config in `withAxiom` as follows:

```js
const { withAxiom } = require('next-axiom');

module.exports = withAxiom({
  // ... your existing config
});
```

This proxies the Axiom ingest call to improve deliverability.

3. For Web Vitals, go to `app/layout.tsx` and add the `AxiomWebVitals` component:

```js
import { AxiomWebVitals } from 'next-axiom';

export default function RootLayout() {
  return (
    <html>
      ...
      <AxiomWebVitals />
      ...
    </html>
  );
}
```

Web Vitals are sent only from production deployments.

4. Deploy your site and watch data coming into your Axiom dashboard.

To send logs from different parts of your app, use the provided logging functions. For example:

```js
log.info('Payment completed', { userID: '123', amount: '25USD' });
```

### Client Components

For Client Components, replace the `log` prop usage with the `useLogger` hook:

```js
'use client';
import { useLogger } from 'next-axiom';

export default function ClientComponent() {
  const log = useLogger();
  log.debug('User logged in', { userId: 42 });
  return <div>Logged in</div>;
}
```

### Server Components

For Server Components, create a logger and make sure to call flush before returning:

```js
import { Logger } from 'next-axiom';

export default async function ServerComponent() {
  const log = new Logger();
  log.info('User logged in', { userId: 42 });
  // ... other operations ...
  await log.flush();
  return <div>Logged in</div>;
}
```

### Route Handlers

Wrapping your Route Handlers in `withAxiom` adds a logger to your request and automatically logs exceptions:

```js
import { withAxiom, AxiomRequest } from 'next-axiom';

export const GET = withAxiom((req: AxiomRequest) => {
  req.log.info('Login function called');

  // You can create intermediate loggers
  const log = req.log.with({ scope: 'user' });
  log.info('User logged in', { userId: 42 });

  return NextResponse.json({ hello: 'world' });
});
```

## Use Next.js 12 for Web Vitals

If you’re using Next.js version 12, follow the instructions below to integrate Axiom for logging and capturing Web Vitals data. In `pages/_app.js` or `pages/_app.ts`, add the following line:

```js
export { reportWebVitals } from 'next-axiom';
```

## Upgrade to Next.js 13 from Next.js 12

If you plan on upgrading to Next.js 13, you need to make specific changes to ensure compatibility:

* Upgrade the next-axiom package to version `1.0.0` or higher.
* Make sure any exported variables have the `NEXT_PUBLIC_` prefix, for example, `NEXT_PUBLIC_AXIOM_TOKEN`.
* In client components, use the `useLogger` hook instead of the `log` prop.
* For server-side components, create an instance of `Logger` and flush the logs before the component returns.
* For Web Vitals tracking, replace the previous method of capturing data: remove the `reportWebVitals()` line and instead integrate the `AxiomWebVitals` component into your layout.

## Vercel Function logs 4KB limit

The Vercel 4KB log limit refers to a restriction placed by Vercel on the size of log output generated by serverless functions running on their platform. The 4KB log limit means that each log entry produced by your function should be at most 4 kilobytes in size. If your log output is larger than 4KB, you might experience truncation or missing logs. To log above this limit, you can send your function logs using [next-axiom](https://github.com/axiomhq/next-axiom).
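To see whether a given entry would hit that limit, you can estimate its serialized size before logging it. The following is a quick sketch in plain Node.js; the entry shape and the oversized `payload` field are illustrative only, not part of any Vercel or next-axiom API:

```javascript
// Estimate the serialized size of a log entry to check it against
// Vercel's 4KB per-entry limit. The entry below is deliberately padded
// so that it exceeds the limit.
const LIMIT_BYTES = 4 * 1024;

const entry = {
  level: 'info',
  message: 'payment completed',
  metadata: { userId: 2234, payload: 'x'.repeat(5000) }, // deliberately large
};

const entryBytes = Buffer.byteLength(JSON.stringify(entry), 'utf8');
const overLimit = entryBytes > LIMIT_BYTES;
console.log(`entry is ${entryBytes} bytes; over limit: ${overLimit}`);
```

An entry like this would be truncated by Vercel’s log drain, which is exactly the case where routing logs through next-axiom instead avoids data loss.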
## Parse JSON on the message field

If you use a logging library in your Vercel project that prints JSON, your **message** field will contain a stringified and therefore escaped JSON object.

* If your Vercel logs are encoded as JSON, they look like this:

```json
{
  "level": "error",
  "message": "{ \"message\": \"user signed in\", \"metadata\": { \"userId\": 2234, \"signInType\": \"sso-google\" }}",
  "request": {
    "host": "www.axiom.co",
    "id": "iad1:iad1::sgh2r-1655985890301-f7025aa764a9",
    "ip": "199.16.157.13",
    "method": "GET",
    "path": "/sign-in/google",
    "scheme": "https",
    "statusCode": 500,
    "teamName": "AxiomHQ"
  },
  "vercel": {
    "deploymentId": "dpl_7UcdgdgNsdgbcPY3Lg6RoXPfA6xbo8",
    "deploymentURL": "axiom-bdsgvweie6au-axiomhq.vercel.app",
    "projectId": "prj_TxvF2SOZdgdgwJ2OBLnZH2QVw7f1Ih7",
    "projectName": "axiom-co",
    "region": "iad1",
    "route": "/signin/[id]",
    "source": "lambda-log"
  }
}
```

* The **JSON** data in your **message** field would be:

```json
{
  "message": "user signed in",
  "metadata": {
    "userId": 2234,
    "signInType": "sso-google"
  }
}
```

You can **parse** the JSON using the [parse\_json function](/apl/scalar-functions/string-functions#parse-json\(\)) and run queries against the **values** in the **message** field.
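The escaping is easiest to see outside APL first. This small Node.js sketch mirrors the sample event above (the `logEvent` object is illustrative) and shows that the nested fields only become addressable after the stringified message is parsed, which is what `parse_json` does server-side:

```javascript
// The message field arrives as an escaped, stringified JSON object.
// Parsing it recovers the nested fields for querying.
const logEvent = {
  level: 'error',
  message: '{ "message": "user signed in", "metadata": { "userId": 2234, "signInType": "sso-google" }}',
};

const parsed = JSON.parse(logEvent.message);
console.log(parsed.message);         // user signed in
console.log(parsed.metadata.userId); // 2234
```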
### Example

```kusto
['vercel']
| extend parsed = parse_json(message)
```

* You can select the fields to **insert** into new columns using the [project operator](/apl/tabular-operators/project-operator):

```kusto
['vercel']
| extend parsed = parse_json('{"message":"user signed in", "metadata": { "userId": 2234, "SignInType": "sso-google" }}')
| project parsed["message"]
```

### More Examples

* If you have **null values** in your data, you can use the **isnotnull()** function:

```kusto
['vercel']
| extend parsed = parse_json(message)
| where isnotnull(parsed)
| summarize count() by parsed["message"], parsed["metadata"]["userId"]
```

* Check out our [APL documentation on how to use more functions](/apl/scalar-functions/string-functions) and run your own queries against your Vercel logs.

## Migrate from Vercel app to next-axiom

In May 2024, Vercel [introduced higher costs](https://axiom.co/blog/changes-to-vercel-log-drains) for using Vercel Log Drains. Because the Axiom Vercel app depends on Log Drains, using the next-axiom library can be the cheaper option to analyze telemetry data for higher-volume projects.

To migrate from the Axiom Vercel app to the next-axiom library, follow these steps:

1. Delete the existing log drain from your Vercel project.
2. Delete `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT` from the environment variables of your Vercel project. For more information, see the [Vercel documentation](https://vercel.com/projects/environment-variables).
3. [Create a new dataset in Axiom](/reference/datasets), and [create a new advanced API token](/reference/tokens) with ingest permissions for that dataset.
4. Add the following environment variables to your Vercel project:
   * `NEXT_PUBLIC_AXIOM_DATASET` is the name of the Axiom dataset where you want to send data.
   * `NEXT_PUBLIC_AXIOM_TOKEN` is the Axiom API token you have generated.
5.
In your terminal, go to the root folder of your Next.js app, and then run `npm install --save next-axiom` to install the latest version of next-axiom.
6. In the `next.config.ts` file, wrap your Next.js configuration in `withAxiom`:

```js
const { withAxiom } = require('next-axiom');

module.exports = withAxiom({
  // Your existing configuration
});
```

For more configuration options, see the [documentation in the next-axiom GitHub repository](https://github.com/axiomhq/next-axiom).

## Send logs from Vercel preview deployments

To send logs from Vercel preview deployments to Axiom, enable preview deployments for the environment variable `NEXT_PUBLIC_AXIOM_INGEST_ENDPOINT`. For more information, see the [Vercel documentation](https://vercel.com/docs/projects/environment-variables/managing-environment-variables).

# Configure dashboard elements

This section explains how to configure dashboard elements.

When you create a chart, click the **View options** icon to access the following options.

## Values

Specify how to treat missing or undefined values:

* **Auto:** This option automatically decides the best way to represent missing or undefined values in the data series based on the chart type and the rest of the data.
* **Ignore:** This option ignores any missing or undefined values in the data series. This means that the chart only displays the known, defined values.
* **Join adjacent values:** This option connects adjacent data points in the data series, effectively filling in any gaps caused by missing values. The benefit of joining adjacent values is that it can provide a smoother, more continuous visualization of your data.
* **Fill with zeros:** This option replaces any missing or undefined values in the data series with zero. This can be useful if you want to emphasize that the data is missing or undefined, as it causes a drop to zero in your chart.

## Variant

Specify the chart type.
**Area:** An area chart displays the area between the data line and the axes, often filled with a color or pattern. Stacked charts provide the capability to design and implement intricate query dashboards while integrating advanced visualizations, enriching your logging experience over time.

**Bars:** A bar chart represents data in rectangular bars. The length of each bar is proportional to the value it represents. Bar charts can be used to compare discrete quantities, or when you have categorical data.

**Line:** A line chart connects individual data points into a continuous line, which is useful for showing logs over time. Line charts are often used for time series data.

## Y-Axis

Specify the scale of the vertical axis.

**Linear:** A linear scale maintains a consistent scale where equal distances represent equal changes in value. This is the most common scale type and is useful for most types of data.

**Log:** A logarithmic (or log) scale represents values in terms of their order of magnitude. Each unit of distance on a log scale represents a tenfold increase in value. Log scales make it easy to see backend errors and compare values across a wide range.

## Annotations

Specify the types of annotations to display in the chart:

* Show all annotations
* Hide all annotations
* Selectively determine the annotation types to display

# Create dashboard elements

This section explains how to create dashboard elements.

To create new dashboard elements:

1. [Create a dashboard](/dashboards/create) or open an existing dashboard.
2. Click **Add element** in the top right corner.
3. Choose the dashboard element from the list.
4. For charts, select one of the following:
   * Click **Simple Query Builder** to create your chart using a [visual query builder](#create-chart-using-visual-query-builder).
   * Click **Advanced Query Language** to create your chart using the Axiom Processing Language (APL).
Create a chart in the same way you create a chart in the APL query builder of the [Query tab](/query-data/explore#create-a-query-using-apl).
5. Optional: [Configure chart options](/dashboard-elements/configure).
6. Optional: Set a custom time range that is different from the dashboard’s time range.
7. Click **Save**.

The new element appears in your dashboard. At the bottom, click **Save** to save your changes to the dashboard.

## Create chart using visual query builder

Use the query builder to create or edit queries for the selected dataset. This component is a visual query builder that eases the process of building visualizations and segments of your data. This guide walks you through the individual sections of the query builder.

### Time range

Every query has a start and end time, and the time range component allows quick selection of common time ranges as well as the ability to input specific start and end timestamps:

* Use the **Quick Range** items to quickly select popular ranges.
* Use the **Custom Start/End Date** inputs to select specific times.
* Use the **Resolution** items to choose between various time bucket resolutions.

### Against

When a time series visualization is selected, such as `count`, the **Against** menu is enabled and it’s possible to select a historical time to compare the results of your time range to.

For example, to compare the last hour’s average response time to the same time yesterday, select `1 hr` in the time range menu, and then select `-1D` from the **Against** menu.

The dotted line represents results from the base date, and the totals table includes the comparative totals. When you add `field` to the `group by` clause, the **time range against** values are attached to each event.
### Visualizations

Axiom provides powerful visualizations that display the output of running aggregate functions across your dataset. The Visualization menu allows you to add these visualizations and, where required, input their arguments.

You can select a visualization to add it to the query. If a visualization requires an argument (such as the field and/or other parameters), the menu allows you to select eligible fields and input those arguments. Press `Enter` to complete the addition.

Click the visualization in the query builder to edit it at any time.

[Learn about supported visualizations](/query-data/visualizations)

### Filters

Use the filter menu to attach filter clauses to your search.

Axiom supports AND/OR operators at the top level as well as one level deep. This means you can create filters that would read as `status == 200 AND (method == get OR method == head) AND (user-agent contains Mozilla or user-agent contains Webkit)`.

Filters are divided up by the field type they operate on, but some may apply to more than one field type.

#### List of filters

*String Fields*

* `==`
* `!=`
* `exists`
* `not-exists`
* `starts-with`
* `not-starts-with`
* `ends-with`
* `not-ends-with`
* `contains`
* `not-contains`
* `regexp`
* `not-regexp`

*Number Fields*

* `==`
* `!=`
* `exists`
* `not-exists`
* `>`
* `>=`
* `<`
* `<=`

*Boolean Fields*

* `==`
* `!=`
* `exists`
* `not-exists`

*Array Fields*

* `contains`
* `not-contains`
* `exists`
* `not-exists`

#### Special fields

Axiom creates the following two fields automatically for a new dataset:

* `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
* `_sysTime` is the time when you ingested the data.

In most cases, you can use `_time` and `_sysTime` interchangeably.
The difference between them can be useful if you experience clock skews on your event-producing systems.

### Group by (segmentation)

When visualizing data, it can be useful to segment data into specific groups to more clearly understand how the data behaves. The Group By component enables you to add one or more fields to group events by.

### Other options

#### Order

By default, Axiom automatically chooses the best ordering for results. However, you can manually set the desired order through this menu.

#### Limit

By default, Axiom chooses a reasonable limit for the query that has been passed in. However, you can control that limit manually through this component.

## Change element’s position

To change an element’s position on the dashboard, drag the title bar of the chart.