# AxonIQ Insights Configuration

This page covers the core Insights properties, including data storage, security, DuckDB, and Event Lake tuning.
## Base configuration

| Parameter | Default | Description |
|---|---|---|
| | | Directory where Insights stores its database files (the SQLite configuration database and DuckDB data). Can be an absolute or relative path. |
| `insights.authorization.enabled` | `true` | Whether to enable authorization for accessing Insights. When disabled, all endpoints are accessible without authentication. Useful for development, but it should always be enabled in production. |
| `insights.authorization.default-password` | | Initial password for the default admin user. |
| | | Whether to enable the Model Context Protocol (MCP) endpoint. When enabled, AI tools and other MCP-compatible clients can connect to Insights. |
| | | Whether to restrict WebSocket connections by origin. Enable this when Insights is accessed from a different domain than the one where it is hosted. |
| | (empty) | Comma-separated list of allowed origins for WebSocket connections. |
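As a sketch of what a development setup might look like: the snippet below enables the MCP endpoint and restricts WebSocket origins. The property keys shown here are illustrative placeholders, not confirmed names, and the origin URL is an example value only.

```properties
# Hypothetical property names -- verify against your Insights release.
insights.mcp.enabled=true
insights.websocket.restrict-origins=true
# Placeholder origin for an Insights UI served from a different domain.
insights.websocket.allowed-origins=https://dashboard.example.com
```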
## Security recommendations

For production deployments:

- Set `insights.authorization.enabled=true` (the default).
- Change the default admin password immediately after first login, or set `insights.authorization.default-password` to a strong password before starting Insights.
- Configure OAuth2 / OIDC for single sign-on.
- Enable SSL/TLS on the JDBC endpoint if exposing the PostgreSQL wire protocol.
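The first two recommendations map directly onto the documented authorization properties. A minimal sketch, where the password value is only a placeholder to be replaced with your own strong secret:

```properties
# Keep authorization enabled (this is the default).
insights.authorization.enabled=true
# Placeholder value -- replace with a strong, secret password before first start.
insights.authorization.default-password=change-me-now
```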
## DuckDB configuration

| Parameter | Default | Description |
|---|---|---|
| | | Directory where DuckDB extension files are located. DuckDB loads additional functionality (for example, the DuckLake extension) from this directory. |
## Event Lake configuration

The Event Lake is Insights' storage engine. It receives events from Axon Server, batches them, and writes them as Parquet files. These settings control batching, compression, file management, and time-travel behavior.
### Batching and flushing

| Parameter | Default | Description |
|---|---|---|
| | | Number of events to batch together before writing to the Event Lake. Larger batches improve write throughput at the cost of higher memory usage and increased latency before events become queryable. |
| | | Maximum time to wait before flushing a partial batch. Ensures events become queryable within this interval even if the batch size has not been reached. |
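To make the tradeoff concrete: with a batch size of 10,000 events and a flush interval of 5 seconds, a quiet stream still becomes queryable within roughly 5 seconds, while a busy stream flushes as soon as 10,000 events accumulate. The keys and values below are illustrative, not confirmed property names:

```properties
# Hypothetical keys and example values -- check your Insights reference.
insights.eventlake.batch-size=10000
insights.eventlake.flush-interval=5s
```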
### Storage and compression

| Parameter | Default | Description |
|---|---|---|
| | | Connection string for the Event Lake catalog database. The catalog tracks metadata about all Parquet files and tables. |
| | | Compression algorithm for Parquet files. |
| | | Target size for Event Lake data files. The file merge process combines smaller files until they reach this size. |
| | | Number of rows per Parquet row group. Affects query performance: smaller row groups improve predicate pushdown for selective queries, while larger row groups reduce metadata overhead. |
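As a rough illustration of the row-group tradeoff: with 100,000 rows per group, a 1,000,000-row file holds 10 row groups, so a predicate that matches only one group lets the reader skip the other nine. The keys and values below are placeholders, not confirmed property names or supported values:

```properties
# Hypothetical keys and example values -- for illustration only.
insights.eventlake.compression=zstd
insights.eventlake.target-file-size=512MB
insights.eventlake.row-group-size=100000
```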
### Connection pool

| Parameter | Default | Description |
|---|---|---|
| | | Size of the DuckDB connection pool for accessing the Event Lake. Increase this if you expect many concurrent queries. |
### File maintenance

Insights runs background tasks to merge small Parquet files, clean up obsolete snapshots, and remove orphaned data.

| Parameter | Default | Description |
|---|---|---|
| | | Interval at which the file merge task runs. Merges small Parquet files into larger ones, up to the configured target file size. |
| | | Interval at which the file clean task runs. Removes data files that are no longer referenced by any snapshot. |
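A sketch of tuning the maintenance cadence: running the merge task more frequently keeps the number of small files low (which helps query planning) at the cost of extra background I/O. The keys and intervals below are illustrative placeholders:

```properties
# Hypothetical keys and example intervals -- verify against your Insights release.
insights.eventlake.file-merge-interval=10m
insights.eventlake.file-clean-interval=1h
```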