Send Vector logs to Axiom
Vector is a lightweight, ultra-fast tool for building observability pipelines. It has built-in support for shipping logs to Axiom through the axiom sink.
Installation
Vector can be installed on Linux, Windows, and macOS. Visit the Vector documentation to install Vector.
Timestamp field
Logs and metrics sent via Vector should use @timestamp as the timestamp field name, not _time, like: {"@timestamp":"2022-04-14T21:30:30.658Z", ...}. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix Epoch, RFC3339, and ISO 8601.
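If your events arrive with a differently named timestamp field, you can rename it with a remap transform before the sink. This is a minimal sketch, assuming a source named my_source_id (like those defined later in this guide) that emits a _time field; point your Axiom sink's inputs at "rename_time" if you use it:
[transforms.rename_time]
type = "remap"
inputs = ["my_source_id"]
source = '''
# Move _time to @timestamp if it is present
if exists(._time) { ."@timestamp" = del(._time) }
'''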
Configuration
Sending data to Axiom with Vector is straightforward using the axiom sink; you only need your dataset name and an API token.
In the example below, we configure Vector to collect metrics from a StatsD aggregator and send them to the Axiom sink. First, create a Vector configuration file, vector.toml:
Common Configuration
[sources.my_source_id]
type = "statsd"
address = "0.0.0.0:8125" # where Vector listens for StatsD metrics
mode = "udp" # the statsd source requires a mode: tcp, udp, or unix
[sinks.axiom]
inputs = ["my_source_id"]
type = "axiom"
token = "xaat-1234"
dataset = "vector-dev"
Send Kubernetes logs to Axiom
Send Kubernetes logs to Axiom using the kubernetes_logs source.
[sources.my_source_id]
type = "kubernetes_logs"
auto_partial_merge = true
ignore_older_secs = 600
read_from = "beginning"
self_node_name = "${VECTOR_SELF_NODE_NAME}"
exclude_paths_glob_patterns = [ "**/exclude/**" ]
extra_field_selector = "metadata.name!=pod-name-to-exclude"
extra_label_selector = "my_custom_label!=my_value"
extra_namespace_label_selector = "my_custom_label!=my_value"
max_read_bytes = 2_048
max_line_bytes = 32_768
fingerprint_lines = 1
glob_minimum_cooldown_ms = 60_000
delay_deletion_ms = 60_000
data_dir = "/var/lib/vector"
timezone = "local"
[sinks.axiom]
type = "axiom"
inputs = ["my_source_id"]
token = "xaat-1234"
dataset = "vector-dev"
- DATASET is the name of your dataset. Logs sent from Vector are stored in a dataset on Axiom. See Creating a dataset for more information.
- TOKEN is the API token used to ingest or query data in your dataset. You can generate an API token from the settings page on the Axiom dashboard. See Creating an API token for more information.
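The kubernetes_logs source attaches rich pod metadata to each event. If some of it is noise for your use case, a remap transform can drop fields before shipping. A sketch that removes pod labels (adjust the field list to your needs, and change the sink's inputs to ["trim_k8s_metadata"] if you add it):
[transforms.trim_k8s_metadata]
type = "remap"
inputs = ["my_source_id"]
source = '''
# Remove pod labels attached by the kubernetes_logs source
del(.kubernetes.pod_labels)
'''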
Send Docker logs to Axiom
To send Docker logs using the Axiom sink, you need to create a configuration file, e.g., vector.toml, with the following content:
# Define the Docker logs source
[sources.docker_logs]
type = "docker_logs"
docker_host = "unix:///var/run/docker.sock"
# Define the Axiom sink
[sinks.axiom]
type = "axiom"
inputs = ["docker_logs"]
dataset = "your_dataset_name" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
Replace your_dataset_name with the name of the dataset you want to send logs to in Axiom, and your_api_token with your Axiom API token.
- Run Vector with the configuration file you just created:
vector --config /path/to/vector.toml
Now, Vector will collect logs from Docker and forward them to Axiom using the Axiom sink. You can view and analyze your logs in your dataset.
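By default, the docker_logs source collects logs from all running containers. If you only want some of them, the source supports include/exclude filters. A sketch, assuming a container named my-app (the filter is a name-prefix match):
[sources.docker_logs]
type = "docker_logs"
docker_host = "unix:///var/run/docker.sock"
include_containers = ["my-app"] # only collect from containers whose names start with my-app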
Send AWS S3 logs to Axiom
To send AWS S3 logs using the Axiom sink, you need to create a configuration file, e.g., vector.toml, with the following content:
[sources.my_s3_source]
type = "aws_s3"
region = "us-west-2" # replace with the AWS region of your bucket
# The aws_s3 source consumes objects via S3 event notifications delivered to an SQS queue
[sources.my_s3_source.sqs]
queue_url = "https://sqs.us-west-2.amazonaws.com/111111111111/my-queue" # replace with your queue URL
[sinks.axiom]
type = "axiom"
inputs = ["my_s3_source"]
dataset = "your_dataset_name" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
Finally, run Vector with the configuration file using vector --config ./vector.toml. This will start Vector, which will begin reading logs from your S3 bucket (as new-object notifications arrive on the SQS queue) and sending them to the specified Axiom dataset.
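Objects in S3 are often gzip-compressed (S3 server access logs and load balancer logs, for example). The aws_s3 source can decompress them via its compression option; the default "auto" typically detects this from object metadata, so treat the explicit setting below as a sketch:
[sources.my_s3_source]
type = "aws_s3"
region = "us-west-2"
compression = "gzip" # force gzip decompression of fetched objects
[sources.my_s3_source.sqs]
queue_url = "https://sqs.us-west-2.amazonaws.com/111111111111/my-queue"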
Send Kafka logs to Axiom
To send Kafka logs using the Axiom sink, you need to create a configuration file, e.g., vector.toml, with the following code:
[sources.my_kafka_source]
type = "kafka" # must be: kafka
bootstrap_servers = "10.14.22.123:9092" # your Kafka bootstrap servers
group_id = "my_group_id" # your Kafka consumer group ID
topics = ["my_topic"] # the Kafka topics to consume from
auto_offset_reset = "earliest" # start reading from the beginning
[sinks.axiom]
type = "axiom"
inputs = ["my_kafka_source"] # connect the Axiom sink to your Kafka source
dataset = "your_dataset_name" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
Finally, you can start Vector with your configuration file: vector --config /path/to/your/vector.toml
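If the messages on your topic are JSON, Vector can parse them into structured events at the source rather than shipping raw strings. A sketch using the source-level decoding option:
[sources.my_kafka_source]
type = "kafka"
bootstrap_servers = "10.14.22.123:9092"
group_id = "my_group_id"
topics = ["my_topic"]
decoding.codec = "json" # parse each message body as a JSON object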
Send NGINX metrics to Axiom
To send NGINX metrics to the Axiom sink with Vector, you first need to enable NGINX to emit metrics, then use Vector to capture and forward them. Here is a step-by-step guide:
Step 1: Enable NGINX Metrics
You'll need to configure NGINX to expose metrics. This typically involves enabling the ngx_http_stub_status_module in your NGINX configuration. Here's how you can do that:
- Open your NGINX configuration file (often located at /etc/nginx/nginx.conf) and add the following to your server block:
location /metrics {
stub_status;
allow 127.0.0.1; # only allow requests from localhost
deny all; # deny all other hosts
}
- Restart or reload NGINX to apply the changes:
sudo systemctl restart nginx
This will expose basic NGINX metrics at the /metrics endpoint on your server.
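You can verify the endpoint from the server itself (the allow rule above only permits requests from localhost):
curl http://127.0.0.1/metrics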
Step 2: Configure Vector
You'll need to configure Vector to scrape the NGINX metrics and send them to Axiom. Create a new configuration file (let's call it vector.toml) and add the following:
[sources.nginx_metrics]
type = "nginx_metrics" # must be: nginx_metrics
endpoints = ["http://localhost/metrics"] # the endpoint where NGINX metrics are exposed
[sinks.axiom]
type = "axiom" # must be: axiom
inputs = ["nginx_metrics"] # use the metrics from the NGINX source
dataset = "your_dataset_name" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
Finally, you can start Vector with your configuration file: vector --config /path/to/your/vector.toml
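By default, the nginx_metrics source scrapes its endpoints every 15 seconds; you can tune this with scrape_interval_secs. A sketch:
[sources.nginx_metrics]
type = "nginx_metrics"
endpoints = ["http://localhost/metrics"]
scrape_interval_secs = 30 # scrape every 30 seconds instead of the default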
Send Syslog logs to Axiom
To send Syslog logs using the Axiom sink, you need to create a configuration file, e.g., vector.toml, with the following code:
[sources.my_source_id]
type = "syslog"
address = "0.0.0.0:6514"
max_length = 102_400 # maximum bytes per message
mode = "tcp"
[sinks.axiom]
type = "axiom"
inputs = ["my_source_id"] # required
dataset = "your_dataset_name" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
Send Prometheus metrics to Axiom
To send Prometheus scrape metrics using the Axiom sink, you need to create a configuration file, e.g., vector.toml, with the following code:
# Define the Prometheus source that scrapes metrics
[sources.my_prometheus_source]
type = "prometheus_scrape" # scrape metrics from a Prometheus endpoint
endpoints = ["http://localhost:9090/metrics"] # replace with your Prometheus endpoint
# Define the Axiom sink where metrics will be sent
[sinks.axiom]
type = "axiom" # must be: axiom
inputs = ["my_prometheus_source"] # connect the Axiom sink to your Prometheus source
dataset = "your_prometheus_dataset" # replace with the name of your Axiom dataset
token = "your_api_token" # replace with your Axiom API token
You can check out advanced configuration options for batching, buffering, and encoding in the Vector documentation.
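As an illustration of those options, here is a sketch adding batching and a disk buffer to the axiom sink above (the values are assumptions; tune them for your workload):
[sinks.axiom.batch]
max_events = 1000 # flush after 1,000 events...
timeout_secs = 5 # ...or after 5 seconds, whichever comes first
[sinks.axiom.buffer]
type = "disk" # buffer to disk so events survive restarts
max_size = 536870912 # 512 MiB; disk buffers require a generous minimum size
when_full = "block" # apply backpressure upstream instead of dropping events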