Metrics are now generally available. Axiom stores and queries logs, traces, metrics, and events in one platform, and with this release, metrics is production-ready.
To understand why we built MetricsDB the way we did, it's worth looking at what the metrics market asks teams to accept today.
- Metrics is GA: Production-ready ingest, query, dashboards, and monitors. MetricsDB handles hyper-cardinality by design, with no active time series tax.
- Logs, traces, metrics, and events in one platform: All your machine data, queryable from one place.
- Unified pricing: Just like logs, traces and events, pay only for what you use, with ingest priced per gigabyte.
- MPL ships as a public preview: An LLM-friendly query language for metrics, accelerated by customer demand during preview. Same pipeline model as APL, purpose-built for time-series operations.
- Agent-native from day one: Metrics are queryable through Axiom's MCP server and a dedicated metrics skill.
The metrics tax
Most metrics systems charge by active time series. The more dimensions you track, the more you pay. This creates a tax on visibility: teams pre-aggregate data, drop labels, and limit cardinality to control costs. Self-hosted Prometheus stacks add their own overhead: capacity planning, cluster management, and pipeline engineering that scales with data volume, not with insight.
The result is that most teams decide what to throw away before they know what they’ll need. Every label you drop to save money is a question you can’t answer later.
MetricsDB: the same architectural bet
Axiom's event store already proved that object storage, ephemeral compute, and schema-free ingest change the economics of machine data at scale. It’s powering petabyte-scale workloads for the fastest-growing companies on the planet. MetricsDB applies that same approach to time-series metrics.
Metrics write to object storage in a compressed columnar format. Queries fan out to ephemeral compute that scales with the needs of the query. There are no indexer clusters to size, no storage tiers to manage, no active time series limits that force pre-aggregation.
This is what makes hyper-cardinality a design principle rather than a cost problem. Track every container, every GPU, every service instance. The cost model doesn't punish you for it.
All four signals, unified
Metrics completes Axiom's machine data coverage. Logs, traces, metrics, and events are all queryable from the same platform, integrated with Dashboards and Monitors.
When something breaks, you examine error rate spikes in your metrics, exception details in your logs, and request flows in your traces. One platform. No switching tools.
MPL: a code-first query language for metrics
During the preview, we heard the same request from multiple teams independently. Engineers who had learned Axiom’s APL for logs and traces felt limited by the visual builder for metrics. Teams building agent-driven workflows needed API access and something code-first.
MPL is the Metrics Processing Language. It ships today in public preview alongside the metrics GA release, accelerated by that demand.
MPL queries are pipelines. Each step transforms the result of the previous one, separated by `|`:
```
`otel-demo-metrics`:`http.server.request.duration`
| where service == "frontend"
| align to 5m using avg
```

This queries `http.server.request.duration` from the `otel-demo-metrics` dataset, keeps only series from the `frontend` service, and averages each series into 5-minute windows.
Why a new language? PromQL nests function calls inside-out, which gets hard to read and awkward for agents to compose incrementally. APL is built for event data: rows you filter, extend, and summarize. Time-series metrics have different query patterns: alignment to time windows, rate computation, histogram interpolation, series grouping. MPL handles these as first-class pipeline operations while keeping the linear, pipe-delimited structure that makes APL approachable. If you already write APL, picking up MPL is a matter of learning the metrics-specific operations, not a new language.
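For a concrete contrast, here is a rough PromQL counterpart of the query earlier in this post, as a sketch only: the flattened metric name and the `service` label are assumptions about how the OTel metric maps into Prometheus, and `avg_over_time` produces a moving average rather than discrete 5-minute buckets:

```promql
# Hypothetical Prometheus naming; window and aggregation nest inside-out
avg_over_time(http_server_request_duration{service="frontend"}[5m])
```

The MPL version reads left to right, one transformation per step, which is also what makes it straightforward for an agent to extend a query incrementally.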
If your team currently uses Prometheus, we've published a migration guide with side-by-side comparisons of common patterns, as well as an agent-skill to make the process easy.
Metrics your agents can query
Axiom is built with a clear philosophy: the future of working with machine data is not purely human. Engineering teams are already using AI agents to investigate incidents, answer questions about system state, and take action on what they find. For agents to do this well, they need the same data engineers rely on, accessible programmatically.
With this release, your entire machine data estate is agent-queryable: logs, traces, metrics, and events, all through Axiom's MCP server and a dedicated metrics skill. Agents discover metrics, filter by tags, write MPL, and act on what they find.
MPL being code-first means it’s agent-first. Where a human investigates one incident at a time, an agent with full-fidelity metrics can continuously scan for anomalies, correlate across signal types, and surface problems before they escalate. The new-language friction that humans might feel is already a non-issue for agents. They write MPL the same way they write APL.
Pricing you already understand
We're billing for metrics exactly the way we bill for logs, traces, and events. Ingest starts at $0.12/GB, with volume discounts as you grow. There's no active time series count to track; cost is as simple as the number of bytes you send over the wire. Metrics shares the same generous allowances that come with Axiom Cloud, and you pay only for what you use in query and storage. With customizable retention, you have complete control over your costs.
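At that rate, the bill is a straight multiplication of gigabytes ingested. A minimal sketch, using the listed starting price only and ignoring volume discounts and plan allowances (check your plan for those):

```python
# Estimate monthly metrics ingest cost at the listed $0.12/GB starting rate.
# Volume discounts and Axiom Cloud allowances are ignored in this sketch.
PRICE_PER_GB = 0.12

def monthly_ingest_cost(gb_per_month: float) -> float:
    """Cost scales with bytes over the wire, not with active series count."""
    return gb_per_month * PRICE_PER_GB

# 500 GB of metrics a month costs the same whether it carries
# 10 series or 10 million series.
print(f"${monthly_ingest_cost(500):.2f}")  # → $60.00
```

The point of the sketch is the shape of the formula: cardinality does not appear in it.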
Get started
If your team is paying the active time series tax, managing Prometheus infrastructure, or building agent workflows that need programmatic access to metrics, Axiom Metrics is ready today, with no infrastructure to provision. Point your OpenTelemetry collector at an HTTP endpoint and start querying.
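As a sketch of what that collector change looks like, here is a minimal OpenTelemetry Collector exporter fragment. The endpoint, header names, and dataset name are assumptions to verify against Axiom's OTLP ingest documentation:

```yaml
# Hypothetical collector config: endpoint, headers, and dataset name
# are assumptions; confirm against Axiom's OTLP ingest docs.
exporters:
  otlphttp:
    endpoint: https://api.axiom.co
    headers:
      Authorization: Bearer ${env:AXIOM_API_TOKEN}
      X-Axiom-Dataset: my-metrics   # hypothetical dataset name
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```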
MPL is in public preview and will move to GA as we incorporate feedback.
Questions? Connect with our team at support@axiom.co. We’re here to help.
Learn more
- Metrics overview — Architecture and capabilities of MetricsDB
- Query metrics using MPL — Full language reference with examples
- Sample MPL queries — Real-world queries for common observability patterns
- Migrate PromQL queries to Axiom — Side-by-side translation guide
- Axiom MCP server — Query metrics (and your other machine data) with AI agents
- Metrics agent skill — A purpose-built skill for querying metrics from agents
- Try out Metrics on your own data — If you use Claude Code, this quickstart guide will get metrics data from your Claude Code usage into Axiom in a matter of minutes