In Axiom, you can use two query languages:
- Axiom Processing Language (APL) lets you query datasets with events, OTel logs, and OTel traces. For more information, see Introduction to APL.
- Metrics Processing Language (MPL) lets you query OTel metrics. This page explains how to use MPL.
MPL is a metric-focused query language that combines the simplicity of APL with the expressive power of PromQL. It enables effective querying, transformation, and aggregation of metric data, supporting diverse observability use cases.
If you use PromQL, your existing expressions can be translated to MPL for quick onboarding and greater flexibility. For more information, see Migrate PromQL queries to Axiom.
Support for MPL is currently in public preview. For more information, see Feature states.
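As a rough sketch of how a PromQL expression maps onto MPL (the metric and tag names here are illustrative), a query such as `sum by (method) (rate(http_requests_total[5m]))` could be written using the query elements described below:

```
`otel-demo-metrics`:`http_requests_total`[1h..]
| align to 5m using prom::rate
| group by method using sum
```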
Limitations
The current implementation of MPL comes with the following limitations:
- A query can reference only one dataset.
Concepts
- Dataset: A group of related metrics.
- Metric: Two-dimensional time series data with a metric name and a set of tags.
- Tag: Key-value pair identifying a series.
- Series: A unique combination of a metric and tag set.
Query structure
A typical MPL query contains the following:
- Source: Defines dataset, metric, and optional time range
- Filter: Applies conditions to series via tags
- Transformation: One or more of the following:
  - Map: Maps each data point to a new value.
  - Align: Aggregates the data over time to align it to a given time interval.
  - Group: Aggregates the data across tag values.
  - Bucket: A two-dimensional transformation that aggregates along both the time and tag dimensions.
Example:
`otel-demo-metrics`:`go.memory.used`[1h..]
| where service == "frontend"
| align to 5m using avg
This example queries the go.memory.used metric in the otel-demo-metrics dataset from one hour ago until the current time. It filters results to the frontend service and averages the values over 5-minute time windows.
Elements of queries
The following explains each element of an MPL query.
To learn more about the language features of MPL, see Language features.
Source
Specify the dataset, the metric, and optional time bounds.
Syntax:
<dataset>:<metric>[<time range>][ as <alias>]
- dataset: Name of the dataset.
- metric: Name of the metric.
- time range: Optional. The time range of the query. For more information, see Time ranges.
- alias: Optional. Renames the metric for later use.
Examples:
`otel-demo-metrics`:`go.memory.used`[1h..]
`otel-demo-metrics`:`go.memory.used`[2h..5m]
`otel-demo-metrics`:`go.memory.used`[2025-03-01T13:00:00Z..+1h] as mem_usage
Filter
Use where to filter series based on tag values.
Syntax:
| where <filter-expression>
A filter expression can be one of the following:
- <tag> <operator> <value>: A single tag filter.
- <filter-expression> and <filter-expression>: Logical AND of two expressions.
- <filter-expression> or <filter-expression>: Logical OR of two expressions.
- not <filter-expression>: Negation of an expression.
- (<filter-expression>): Parentheses to control order of evaluation.
Available operators for single tag filters:
- Equality: ==, !=
- Comparisons: <, <=, >, >=
The value must be one of the supported data types. For more information, see Data types.
Examples:
| where environment == "production" and status_code >= 200 and status_code < 300
| where (environment == "production" or environment == "staging") and not status_code == 500
Map
Use map to transform individual values.
Available functions:
| Function | Description |
|---|---|
| `rate` | Computes the per-second rate of change for a metric. |
| `increase` | Calculates the increase between each data point and the previous one. |
| `min(arg)` | Returns the minimum of the argument and the value. |
| `max(arg)` | Returns the maximum of the argument and the value. |
| `abs` | Returns the absolute value of each data point. |
| `fill::prev` | Fills missing values using the previous non-null value. |
| `fill::const(arg)` | Fills missing values with a constant. |
| `interpolate::linear` | Fills missing values using linear interpolation. |
| `+`, `-`, `*`, `/` | Performs the respective mathematical operation on each value. |
Examples:
// Calculate rate per second for the metric
| map rate
// Add 5 to each value
| map + 5
// Fill empty values with the latest value
| map fill::prev
// Fill empty values with zeros
| map fill::const(0)
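Because each pipe is a separate stage, map operations can be chained. A sketch that converts a cumulative byte counter into a bits-per-second rate (assuming the underlying metric counts bytes):

```
// Per-second rate of change, then bytes to bits
| map rate
| map * 8
```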
filter:: functions
Use filter:: functions to remove data points that don’t match a condition. Data points that don’t match are removed from the series entirely. Unlike where, which filters series based on tag values, filter:: operates on the numeric values of data points within a series.
| Function | Description |
|---|---|
| `filter::eq(v)` | Keeps only data points equal to v. |
| `filter::neq(v)` | Keeps only data points not equal to v. |
| `filter::gt(v)` | Keeps only data points greater than v. |
| `filter::gte(v)` | Keeps only data points greater than or equal to v. |
| `filter::lt(v)` | Keeps only data points less than v. |
| `filter::lte(v)` | Keeps only data points less than or equal to v. |
Example:
// Remove data points where latency exceeds 400ms
| map filter::lt(0.4)
is:: functions
Use is:: functions to test data points against a condition. Matching data points are set to 1.0 and non-matching data points are set to 0.0. The series retains all its data points.
Use is:: instead of filter:: when you need to preserve the shape of the time series, for example in SLO calculations where gaps in data would produce incorrect results.
| Function | Description |
|---|---|
| `is::eq(v)` | Sets data points equal to v to 1.0, all others to 0.0. |
| `is::neq(v)` | Sets data points not equal to v to 1.0, all others to 0.0. |
| `is::gt(v)` | Sets data points greater than v to 1.0, all others to 0.0. |
| `is::gte(v)` | Sets data points greater than or equal to v to 1.0, all others to 0.0. |
| `is::lt(v)` | Sets data points less than v to 1.0, all others to 0.0. |
| `is::lte(v)` | Sets data points less than or equal to v to 1.0, all others to 0.0. |
Example:
// Set to 1.0 where latency is within SLO (below 400ms), 0.0 otherwise
| map is::lt(0.4)
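For the SLO use case mentioned above, combining is:: with align averages the 0.0/1.0 values into the fraction of data points that meet the objective. A sketch, assuming latency is recorded in seconds:

```
// Fraction of data points per 5-minute window with latency below 400ms
| map is::lt(0.4)
| align to 5m using avg
```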
Align
Use align to aggregate over time windows. You can specify the time window and the aggregation function to apply.
Syntax:
| align to <time_window> using <aggregation_function>
Available aggregation functions:
| Function | Description |
|---|---|
| `avg` | Averages values in each interval. |
| `count` | Counts non-null values per interval. |
| `max` | Takes the maximum value per interval. |
| `min` | Takes the minimum value per interval. |
| `prom::rate` | PromQL-style rate calculation. |
| `sum` | Sums values in each interval. |
| `last` | Takes the last value in each interval. |
Examples:
// Calculate the average over 5-minute time windows
| align to 5m using avg
// Count the data points in the last hour
| align to 1h using count
Group
Use group to combine series across tag values.
Syntax:
| group [by <tag1>, <tag2>] using <aggregation_function>
If you don’t specify tags, Axiom aggregates all series into one group.
Available aggregation functions:
| Function | Description |
|---|---|
| `avg` | Averages values across the grouped series. |
| `sum` | Sums values across the grouped series. |
| `min` | Takes the minimum value across the grouped series. |
| `max` | Takes the maximum value across the grouped series. |
| `count` | Counts non-null values across the grouped series. |
Examples:
// Calculate the number of series
| group using count
// Sum all series into a single total
| group using sum
// Group data by the `service` and `namespace` tags using the `sum` aggregation
| group by service, namespace using sum
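The elements covered so far compose into a typical pipeline: source, filter, align, then group. A sketch against the example dataset (metric and tag names are illustrative):

```
`otel-demo-metrics`:`http.server.request.duration`[1h..]
| where environment == "production"
| align to 1m using avg
| group by service using sum
```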
Bucket
Use bucket to aggregate over time and tag dimensions simultaneously.
Syntax:
| bucket [by <tags>] to <window> using <function>
Available functions:
| Function | Description |
|---|---|
| `histogram(specs)` | Aggregates non-histogram series into buckets. `specs` is one or more quantile values between 0 and 1, or aggregation functions (`count`, `avg`, `sum`, `min`, `max`). |
| `interpolate_cumulative_histogram(mode, specs)` | Aggregates cumulative-temporality histogram series. `mode` is `rate` or `increase`. `specs` is one or more quantile values or aggregation functions. |
| `interpolate_delta_histogram(specs)` | Aggregates delta-temporality histogram series. `specs` is one or more quantile values or aggregation functions. |
interpolate_cumulative_histogram works on histogram metrics using cumulative temporality. interpolate_delta_histogram works on histogram metrics using delta temporality.
Examples:
// Bucket over the `service` and `endpoint` tags using the histogram aggregation
| bucket by service, endpoint to 5m using histogram(max)
// Compute the 50th and 99th percentiles of request duration for histogram data stored using cumulative temporality
| bucket by service to 1m using interpolate_cumulative_histogram(rate, 0.50, 0.99)
// Compute the 50th and 99th percentiles of request duration for histogram data stored using delta temporality
| bucket by service to 1m using interpolate_delta_histogram(0.50, 0.99)
Other operations
Compute
Use compute to combine the results of multiple subqueries in a single query block.
Syntax:
(
<subquery1>,
<subquery2>
)
| compute <name> using <operator>
Available operators:
| Operator | Description |
|---|---|
| `+` | Adds subquery results. |
| `-` | Subtracts one subquery from another. |
| `*` | Multiplies subquery results. |
| `/` | Divides one subquery by another. |
| `min` | Minimum across result series. |
| `max` | Maximum across result series. |
| `avg` | Average across result series. |
Example:
// Compute the ratio of 5xx requests to all requests, per method and route, over 5-minute windows
(
`otel-demo-metrics`:`http.server.request.duration`
| where status_code >= 500
| map rate
| align to 5m using avg
| group by method, route using sum,
`otel-demo-metrics`:`http.server.request.duration`
| map rate
| align to 5m using avg
| group by method, route using sum
)
| compute error_rate using /
Language features
Data types
- Strings: "string"
- Integers: 42
- Floats: 3.14
- Booleans: true, false
- Regex: #/.*metrics.*/
Identifier naming rules
Identifiers represent fields, metrics, datasets, function names, and other named entities in your query.
Valid identifier names are case-sensitive and follow these rules:
- Start with an ASCII letter.
- Followed by zero or more ASCII letters, digits, or underscores (_).
Quote identifiers
Quote an identifier in your MPL query if any of the following is true:
- The identifier name doesn’t match the rules for valid identifier names.
- The identifier name is identical to one of the reserved keywords of the MPL query language. For example, `by` or `where`.
If any of the above is true, you must quote the identifier by enclosing it in backticks (`). For example, `my-field`.
If none of the above is true, you don’t need to quote the identifier in your MPL query. For example, myfield. In this case, quoting the identifier name is optional.
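For example, a tag whose name contains a dot, such as a hypothetical service.name, doesn't match the identifier rules and must be enclosed in backticks:

```
`otel-demo-metrics`:`go.memory.used`[1h..]
| where `service.name` == "frontend"
```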
Time ranges
Syntax:
Define time ranges with the following:
- Start time: Inclusive beginning of the time range.
- End time: Optional, exclusive end of the time range. If you don’t specify the end time, Axiom uses the current time.
Separate the start and end times with `..`.
Time can be defined in one of the following ways:
- Relative time. The time unit can be one of the following:
  - ms: milliseconds (rounded to seconds)
  - s: seconds
  - m: minutes
  - h: hours
  - d: days
  - w: weeks
  - M: months
  - y: years
Examples: -1h, +5m
- Unix epoch timestamp in seconds. For example: 1723982394
- An RFC3339 timestamp. For example: 2025-03-01T13:00:00Z
Examples:
// One hour ago until the current time
[1h..]
// One hour after a Unix timestamp
[1747077736..+1h]
// One hour before a Unix timestamp
[-1h..1747077736]
// One hour before an RFC3339 date
[-1h..2025-03-01T13:00:00Z]