Limits and requirements
Axiom applies certain limits and requirements to guarantee good service across the platform. Some of these limits depend on your pricing plan, and others apply system-wide. This reference article explains all limits and requirements applied by Axiom.
Limits are necessary to prevent potential issues that could arise from the ingestion of excessively large events or data structures that are too complex. Limits help maintain system performance, allow for effective data processing, and manage resources effectively.
Pricing-based limits
The table below summarizes the limits applied to each pricing plan. For more details on pricing and contact information, see the Axiom pricing page.
| | Personal | Axiom Cloud | Bring Your Own Cloud |
|---|---|---|---|
| Always Free storage | 25 GB | 100 GB | * |
| Always Free data loading | 500 GB / month | 1,000 GB / month | * |
| Always Free query compute | 10 GB-hours / month | 100 GB-hours / month | * |
| Maximum data loading | 500 GB / month | – | – |
| Maximum data retention | 30 days | Custom | Custom |
| Datasets | 2 | 100 † | 2,500 † |
| Fields per dataset | 256 | 1,024 † | 4,096 † |
| Users | 1 | 1,000 † | 50,000 † |
| Monitors | 3 | 500 † | 20,000 † |
| Notifiers | Email, Discord | All supported | All supported |
| Supported deployment regions | US | US, EU | Not applicable |
* For the Bring Your Own Cloud (BYOC) plan, Axiom doesn’t charge anything for data loading, query compute, and storage. These costs are billed by your cloud provider.
† Soft limit that can be increased upon request.
If you’re on the Axiom Cloud plan and you exceed the Always Free allowances outlined above, additional charges apply based on your usage above the allowance. For more information, see the Axiom pricing page.
All plans include unlimited bandwidth, API access, and data sources subject to the Fair Use Policy.
To see how much of your allowance each dataset uses, go to Settings > Usage.
Optimize storage costs
The amount of storage you use depends on the following:
- The amount of data you ingest to Axiom.
- The data retention period you specify.
The data retention period defines how long Axiom stores your data. After this period, Axiom trims the data and it doesn’t count towards your storage costs. You can define a custom data retention period for each dataset. For more information, see Specify data retention period.
Restrictions on datasets and fields
Axiom restricts the number of datasets and the number of fields in your datasets. The number of datasets and fields you can use is based on your pricing plan and explained in the table above.
If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events is within the allowed limits. To reduce the number of fields in a dataset, trim the dataset and vacuum its fields.
System-wide limits
The following limits are applied to all accounts, irrespective of the pricing plan.
Limits on ingested data
The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.
| Limit | Value |
|---|---|
| Maximum event size | 1 MB |
| Maximum events in a batch | 10,000 |
| Maximum field name length | 200 bytes |
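As an illustration, a client could validate a batch against these limits before sending. This is a sketch, not part of any Axiom SDK; the `validate_batch` helper is hypothetical:

```python
import json

# System-wide ingest limits from the table above
MAX_EVENT_SIZE = 1_000_000   # 1 MB per event
MAX_BATCH_EVENTS = 10_000    # events per batch
MAX_FIELD_NAME_LEN = 200     # bytes per field name

def validate_batch(events):
    """Raise ValueError if a batch would exceed Axiom's ingest limits."""
    if len(events) > MAX_BATCH_EVENTS:
        raise ValueError(f"batch has {len(events)} events (max {MAX_BATCH_EVENTS})")
    for event in events:
        if len(json.dumps(event).encode("utf-8")) > MAX_EVENT_SIZE:
            raise ValueError("event exceeds 1 MB when serialized")
        for name in event:
            if len(name.encode("utf-8")) > MAX_FIELD_NAME_LEN:
                raise ValueError(f"field name {name!r} exceeds 200 bytes")

validate_batch([{"message": "ok", "_time": "2024-01-01T00:00:00Z"}])  # passes
```

Checking limits client-side avoids a round trip for batches that the API would reject anyway.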
Requirements for timestamp field
The most important field requirement concerns the timestamp.
All events stored in Axiom must have a `_time` timestamp field. If the data you ingest doesn't have a `_time` field, Axiom assigns the time of the data ingest to the events. To specify the timestamp yourself, include a `_time` field in the ingested data.
If you include the `_time` field in the ingested data, follow these requirements:
- Timestamps are specified in the `_time` field.
- The `_time` field contains timestamps in a valid time format. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix Epoch, RFC3339, and ISO 8601.
- The `_time` field is UTF-8 encoded.
- The `_time` field is not used for any other purpose.
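For example, the following events all satisfy the timestamp requirements. One uses an RFC3339 string, one a Unix epoch value, and one omits `_time` entirely, in which case Axiom assigns the ingest time. The field values are illustrative:

```python
import json
import time

# RFC3339 timestamp string
event_rfc3339 = {"_time": "2024-05-01T12:00:00Z", "message": "user signed in"}

# Unix epoch timestamp (seconds since 1970-01-01 UTC)
event_epoch = {"_time": int(time.time()), "message": "user signed out"}

# No _time field: Axiom assigns the time of ingest to this event
event_no_time = {"message": "no explicit timestamp"}

print(json.dumps([event_rfc3339, event_epoch, event_no_time]))
```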
Requirements for log level fields
The Stream and Query tabs allow you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors. As a prerequisite, specify the log level in the data you send to Axiom.
For OpenTelemetry logs, specify the log level in one of the following fields:
- `severity`
- `severityNumber`
- `severityText`
For AWS Lambda logs, specify the log level in one of the following fields:
- `record.error`
- `record.level`
- `record.severity`
- `type`
For logs from other sources, specify the log level in one of the following fields:
- `level`
- `@level`
- `severity`
- `@severity`
- `status.code`
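For instance, events from a generic source could carry their severity in the `level` field so that the Stream and Query tabs can highlight them. The field values below are illustrative:

```python
import json

# Events from a generic (non-OpenTelemetry, non-Lambda) source,
# using the `level` field to carry log severity
events = [
    {"_time": "2024-05-01T12:00:00Z", "level": "info",  "message": "service started"},
    {"_time": "2024-05-01T12:00:05Z", "level": "warn",  "message": "cache miss rate high"},
    {"_time": "2024-05-01T12:00:09Z", "level": "error", "message": "upstream timed out"},
]
print(json.dumps(events, indent=2))
```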
Temporary account-specific limits
If you send a large amount of data in a short amount of time and with a high frequency of API requests, we may temporarily restrict or disable your ability to send data to Axiom. This is to prevent abuse of our platform and to guarantee consistent and high-quality service to all customers. In this case, we kindly ask you to reconsider your approach to data collection. For example, to reduce the total number of API requests, try sending your data in larger batches. This adjustment both streamlines our operations and improves the efficiency of your data ingest. If you often experience these temporary restrictions and have a good reason for changing these limits, please contact Support.
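As a sketch of that advice, events can be grouped into larger batches so that each API request carries many events instead of one. The `send_batch` callable below is a hypothetical stand-in for whatever performs the actual API request:

```python
def chunk(events, batch_size=1_000):
    """Yield successive slices of `events`, each at most `batch_size` long."""
    for i in range(0, len(events), batch_size):
        yield events[i:i + batch_size]

def ingest(events, send_batch):
    # One API request per batch instead of one request per event
    for batch in chunk(events):
        send_batch(batch)

# Usage: 2,500 events become 3 API requests instead of 2,500.
sent = []
ingest([{"message": f"event {i}"} for i in range(2500)], sent.append)
print(len(sent))  # 3
```

Keeping batches under the system-wide limit of 10,000 events per batch, a larger batch size means fewer requests for the same volume of data.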