Limits and requirements

Axiom applies certain limits and requirements to guarantee good service across the platform. Some of these limits depend on your pricing plan, and some of them are applied system-wide. This reference article explains all limits and requirements applied by Axiom.

Limits are necessary to prevent potential issues that could arise from the ingestion of excessively large events or data structures that are too complex. Limits help maintain system performance, allow for effective data processing, and manage resources effectively.

Pricing-based limits

The table below summarizes the limits applied to each pricing plan. For more details on pricing and contact information, see the Axiom pricing page.

|                        | Personal                    | Team                                                                 | Enterprise |
| ---------------------- | --------------------------- | -------------------------------------------------------------------- | ---------- |
| Ingest (included)      | 500 GB / month              | 1 TB / month                                                         | Custom     |
| Ingest (maximum)       | 500 GB / month              | 50 TB / month                                                        | Custom     |
| Query-hours (included) | 10 GB-hours / month         | 100 GB-hours / month                                                 | Custom     |
| Retention              | 30 days                     | 95 days                                                              | Custom     |
| Fields per dataset     | 256                         | 1024                                                                 | Custom     |
| Notifiers              | Email, Discord              | Email, Discord, Opsgenie, PagerDuty, Slack, Webhook, Microsoft Teams |            |
| Endpoints              | 1 (Honeycomb, Loki, Splunk) | 5 (Honeycomb, Loki, Splunk, Syslog)                                  | Custom     |

If you're on the Team plan and you exceed the maximum ingest and query-hours quota outlined above, additional charges apply based on your usage above the quota. For more information, see the Axiom pricing page.

All plans include unlimited bandwidth, API access, and data sources subject to the Fair Use Policy.

Restrictions on datasets and fields

Axiom restricts the number of datasets and the number of fields in your datasets. The number of datasets and fields you can use is based on your pricing plan and explained in the table above.

If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events is within the allowed limits. To reduce the number of fields in a dataset, trim the dataset and vacuum its fields.
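As an illustration, nested objects in an event typically expand into multiple flattened field names. The sketch below is an assumption about that mapping, not Axiom's exact behavior; the dotted-name convention and the `count_fields` helper are hypothetical. It estimates how many fields an event would create so you can compare the result against your plan's limit:

```python
def count_fields(event: dict, prefix: str = "") -> int:
    """Count the flattened field names an event would create.

    Assumes nested objects map to dotted field names, for example
    {"http": {"status": 200}} yielding "http.status". This is a
    simplified illustration; Axiom's actual field mapping may differ.
    """
    total = 0
    for key, value in event.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested objects instead of counting them as one field.
            total += count_fields(value, name)
        else:
            total += 1
    return total

event = {"method": "GET", "http": {"status": 200, "path": "/"}}
print(count_fields(event))  # → 3
```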

System-wide limits

The following limits are applied to all accounts, irrespective of the pricing plan.

Limits on ingested data

The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.

| Limit                     | Value     |
| ------------------------- | --------- |
| Maximum event size        | 1 MB      |
| Maximum events in a batch | 10,000    |
| Maximum field name length | 200 bytes |
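To catch violations before sending data, you can validate a batch client-side. The helper below is a hedged sketch: the constants mirror the table above, but treating 1 MB as 1,000,000 bytes and measuring event size as serialized JSON are both assumptions, and `check_batch` is a hypothetical helper, not part of any Axiom SDK:

```python
import json

MAX_EVENT_BYTES = 1_000_000    # 1 MB per event (assuming decimal megabytes)
MAX_BATCH_EVENTS = 10_000      # maximum events in a batch
MAX_FIELD_NAME_BYTES = 200     # maximum field name length in bytes

def check_batch(events: list[dict]) -> list[str]:
    """Return a list of limit violations found in a batch of events."""
    problems = []
    if len(events) > MAX_BATCH_EVENTS:
        problems.append(f"batch has {len(events)} events (max {MAX_BATCH_EVENTS})")
    for i, event in enumerate(events):
        # Approximate the event size by its JSON serialization.
        size = len(json.dumps(event).encode("utf-8"))
        if size > MAX_EVENT_BYTES:
            problems.append(f"event {i} is {size} bytes (max {MAX_EVENT_BYTES})")
        for name in event:
            if len(name.encode("utf-8")) > MAX_FIELD_NAME_BYTES:
                problems.append(f"event {i} has a field name longer than {MAX_FIELD_NAME_BYTES} bytes")
    return problems

print(check_batch([{"status": 200}]))  # → []
```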

Requirements of the timestamp field

The most important field requirement concerns the timestamp.

All events stored in Axiom must have a _time timestamp field. If the data you ingest doesn't have a _time field, Axiom assigns the time of the data ingest to the events. To specify the timestamp yourself, include a _time field in the ingested data.

If you include the _time field in the ingested data, follow these requirements:

  • Specify the timestamp in the _time field.
  • Use a valid time format in the _time field. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix epoch, RFC 3339, and ISO 8601.
  • Encode the _time field in UTF-8.
  • Don't use the _time field for any other purpose.
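Putting these requirements together, an event with an explicit _time field might be built like this. This is a minimal sketch: the field names other than `_time` are arbitrary examples, and RFC 3339 is only one of the accepted formats:

```python
from datetime import datetime, timezone
import json

# Build an event with an explicit _time field in RFC 3339 format.
# Without _time, Axiom would assign the ingest time instead.
event = {
    "_time": datetime.now(timezone.utc).isoformat(),  # e.g. "2024-01-01T12:00:00+00:00"
    "level": "info",
    "message": "user signed in",
}
print(json.dumps(event))
```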

Temporary account-specific limits

If you send a large amount of data in a short time with a high frequency of API requests, Axiom may temporarily restrict or disable your ability to send data. This protects the platform from abuse and guarantees consistent, high-quality service for all customers. If this happens, reconsider your approach to data collection. For example, send your data in larger batches to reduce the total number of API requests. This adjustment both streamlines Axiom's operations and improves the efficiency of your data ingest. If you often experience these temporary restrictions and have a good reason for changing these limits, contact Support.
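One way to follow this advice is to group events into larger batches client-side before sending them. The sketch below is illustrative: `batches` is a hypothetical helper, and the default of 10,000 matches the per-batch event limit above:

```python
from typing import Iterator

def batches(events: list[dict], size: int = 10_000) -> Iterator[list[dict]]:
    """Yield successive batches so each request carries many events."""
    for start in range(0, len(events), size):
        yield events[start:start + size]

# 25,000 events become 3 requests instead of 25,000 single-event requests.
events = [{"n": i} for i in range(25_000)]
sizes = [len(batch) for batch in batches(events)]
print(sizes)  # → [10000, 10000, 5000]
```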
