September 26, 2023

#product, #engineering

It’s time to stop self-managing your log infrastructure


Author
Dominic Chapman

Head of Product

The landscape of logs, traces, and other event data has changed and expanded rapidly in the past few years, and it will only keep growing. Event data volumes are increasing exponentially, and the data itself has been evolving too, with OTel and other deliberately structured event formats.

You’ve been keeping up by scaling your existing logging infrastructure, either on-prem or in the public cloud. But if you’re still managing it yourself, and using technologies built for an earlier era, you’re making tradeoffs, incurring costs, and expending effort that’s no longer necessary. Self-managing your log infrastructure made sense in the 2010s, but not today.

Cost equations have changed

Self-managed log infrastructure incurs costs on several fronts that are unnecessary in a cloud-native era:

  • Provisioning your own machines, whether in a datacenter or your own public cloud account.
  • Managing incidents and upgrades yourself.
  • Expensive licenses for commercial products built on legacy architectures (and legacy business models).
  • Developer time necessary to support license-free open source tools.
  • Resources squandered by the inefficient architectures of both commercial and open source legacy tools, which were “lift and shift” ports of on-prem versions (designed to run on dedicated machines) into the cloud, rather than being rearchitected as cloud-native for much higher efficiency.

Even when run in the cloud, self-managed resources aren’t truly elastic. It’s easy to end up underprovisioned during a traffic spike, or to overprovision for safety and pay for resources you never use.

Event data has evolved

The other change is the evolution of events into self-describing structures. Many microservices now emit JSON directly. OpenTelemetry’s output isn’t necessarily JSON, but OTel provides versioned JSON schemas that can be used to parse OTel data.
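
As a hypothetical illustration, compare a classic free-text log line with the self-describing JSON event a modern service might emit for the same occurrence (the field names here are invented for the example):

```
# Legacy free-text log line: the structure has to be guessed with regexes
Sep 26 14:02:11 api-7 payments[312]: charge failed for user 4821 (card_declined) in 183ms

# Self-describing JSON event: the same information, with explicit fields
{
  "timestamp": "2023-09-26T14:02:11Z",
  "service": "payments",
  "host": "api-7",
  "level": "error",
  "message": "charge failed",
  "user_id": 4821,
  "reason": "card_declined",
  "duration_ms": 183
}
```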

Legacy technologies still expect messy log events and don’t take advantage of the new structures, which would make them more efficient in compute and storage and much easier to work with. Specialist observability tools, on the other hand, only work with clean new data, which leaves you keeping legacy tools around to handle the messy data that will be with us for a long time.

Axiom is more efficient than legacy products

We architected Axiom from the ground up to be cloud-native and make efficient use of cloud resources, which we understand down to the byte-and-bit level. We rethought each stage of log management — ingest, storage, query — separately, and focused on minimizing Axiom’s use of both storage and compute resources.

  • Storage: Axiom’s compression is optimized for today’s actual event data: a mix of highly structured, self-describing clean data and messy, highly variable legacy output. Rather than applying one general algorithm, we minimize the storage needed to keep each type of event at full fidelity. We capture structure where it exists, making it available for easy queries, while offering full-text search where it’s needed. For full flexibility, our APL query language lets you transform events and extract virtual fields to fit the needs of each specific query at query time (see the sketch after this list).
  • Compute: Axiom queries run on AWS Lambda functions rather than on a classic query-container architecture. Processes spin up as needed, then spin down instead of sitting idle. Not only does this reduce costs substantially, it gives Axiom true elasticity as demand rises and falls.
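
For illustration, here’s a sketch of the kind of query-time field extraction this enables. The dataset name ['http-logs'] and the field names are hypothetical, and APL’s full set of operators and functions is documented separately; this only shows the shape of a query that derives virtual fields from a raw JSON payload:

```
// Hypothetical dataset and field names, shown for illustration only.
// Parse a JSON payload stored in a raw string field at query time,
// derive a virtual duration field, and aggregate on it.
['http-logs']
| extend body = parse_json(raw_message)
| extend duration_ms = todouble(body.duration_ms)
| summarize avg(duration_ms) by bin(_time, 5m)
```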

Axiom reduces the number of machines sitting around doing nothing, and uses them more efficiently when it does spin them up. Since you don’t have to manage those machines, the details don’t need to concern you; what matters is that Axiom passes the savings, in both cost and environmental impact, on to you.

Axiom is truly cloud-native for today’s cloud

Many cloud-based products for event data management still carry the architectures of their on-premises predecessors, and even early SaaS products were designed for earlier generations of the cloud. They don’t take advantage of modern object storage, container orchestration, serverless execution, and more. We founded Axiom because, as users of previous-generation logging tools trying to deploy them in the cloud, we were frustrated by how inefficient and hard to manage they were. Couldn’t someone rearchitect log management to be truly cloud-native? Eventually we decided that someone is us.

Axiom beats self-managed even on your own storage space

We knew while designing Axiom that many organizations would want Axiom’s efficient and powerful ingest, compression and query, but would have reasons to maintain data sovereignty.

No problem! You can still use Axiom’s better compression with your own S3-compatible object storage. Axiom manages the control plane, which not only improves efficiency but also removes the overhead of managing, monitoring, and upgrading the software yourself. We provide the tools you need to access Axiom-format event data in your own storage.

We conceived Axiom after years of self-managing our own event data, so we understand these needs firsthand. We recognized our own need for a new approach to event data, unencumbered by legacy constraints, then designed and built Axiom to meet yours, too.

Our new pricing starts as low as $25 per month, and it’s free for personal projects. No surprise bills, ever. Contact us today to get started: sales@axiom.co

Get started with Axiom

Learn how to start ingesting, streaming, and querying data with Axiom in less than 10 minutes.