AxonIQ Insights Configuration

This page covers the core Insights properties, including data storage, security, DuckDB, and Event Lake tuning.

Base configuration

insights.dbdir (default: data)
Directory where Insights stores its database files (the SQLite configuration database and DuckDB data). Can be an absolute or relative path.

insights.authorization.enabled (default: true)
Whether to enable authorization for accessing Insights. When disabled, all endpoints are accessible without authentication. Useful for development, but should always be enabled in production.

insights.authorization.default-password (default: admin)
Initial password for the default admin user, created on first startup. Change this immediately in production environments.

insights.mcp.enabled (default: true)
Whether to enable the Model Context Protocol (MCP) endpoint. When enabled, AI tools and MCP-compatible clients can connect to Insights at /mcp/.

insights.set-web-socket-allowed-origins (default: false)
Whether to restrict WebSocket connections by origin. Enable this when Insights is accessed from a different domain than where it is hosted.

insights.web-socket-allowed-origins (default: empty)
Comma-separated list of allowed origins for WebSocket connections (for example, http://localhost:3000,https://app.example.com). Only used when insights.set-web-socket-allowed-origins is true.
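As a concrete starting point, the base settings above can be combined in a single properties file. This is a sketch, not a recommended configuration: the paths and origins are examples, and the ${INSIGHTS_ADMIN_PASSWORD} placeholder assumes Spring-style property resolution from an environment variable.

```properties
# Illustrative base configuration for Insights (example values)
insights.dbdir=/var/lib/insights/data
insights.authorization.enabled=true
# Assumes Spring-style placeholder resolution; supply the password via an environment variable
insights.authorization.default-password=${INSIGHTS_ADMIN_PASSWORD}
insights.mcp.enabled=true
insights.set-web-socket-allowed-origins=true
insights.web-socket-allowed-origins=http://localhost:3000,https://app.example.com
```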

Security recommendations

For production deployments:

  • Set insights.authorization.enabled=true (the default).

  • Change the default admin password immediately after first login, or set insights.authorization.default-password to a strong password before starting Insights.

  • Configure OAuth2 / OIDC for single sign-on.

  • Enable SSL/TLS on the JDBC endpoint if exposing the PostgreSQL wire protocol.

DuckDB configuration

insights.duckdb.duckdbExtensionDir (default: .)
Directory where DuckDB extension files are located. DuckDB loads additional functionality (for example, the DuckLake extension) from this directory.
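For example, to keep extension files out of the working directory, the extension directory can be pointed at a dedicated path (the path shown is illustrative):

```properties
# Load DuckDB extensions (for example, DuckLake) from a dedicated directory
insights.duckdb.duckdbExtensionDir=/opt/insights/duckdb-extensions
```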

Event Lake configuration

The Eventlake is Insights' storage engine. It receives events from Axon Server, batches them, and writes them as Parquet files. These settings control batching, compression, file management, and time travel behavior.

Batching and flushing

insights.eventlake.batchSize (default: 5000)
Number of events to batch together before writing to the Event Lake. Larger batches improve write throughput at the cost of higher memory usage and increased latency before events become queryable.

insights.eventlake.flushInterval (default: 10s)
Maximum time to wait before flushing a partial batch. Ensures events become queryable within this interval even if the batch size has not been reached.
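As an illustration of the trade-off described above, a throughput-oriented ingest configuration might raise the batch size while keeping a bounded flush interval. The values are examples, not recommendations:

```properties
# Throughput-oriented batching: larger batches, events still queryable within 30s
insights.eventlake.batchSize=20000
insights.eventlake.flushInterval=30s
```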

Storage and compression

insights.eventlake.catalogConnectionString (default: metadata/analytics_catalog.sqlite)
Connection string for the Event Lake catalog database. The catalog tracks metadata about all Parquet files and tables.

insights.eventlake.parquetCompression (default: snappy)
Compression algorithm for Parquet files. Supported values: snappy, gzip, brotli, zstd, uncompressed. Snappy offers a good balance of speed and compression ratio. Use zstd for better compression at the cost of slightly higher CPU usage.

insights.eventlake.targetFileSize (default: 1GB)
Target size for Event Lake data files. The file merge process combines smaller files until they reach this size.

insights.eventlake.parquetRowGroupSize (default: 250000)
Number of rows per Parquet row group. Affects query performance: smaller row groups improve predicate pushdown for selective queries, while larger row groups reduce metadata overhead.
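Following the guidance above, a storage-oriented configuration might trade some CPU for a smaller footprint by switching to zstd and merging into larger files. A sketch with illustrative values:

```properties
# Favor storage footprint: zstd compression, larger merged files
insights.eventlake.parquetCompression=zstd
insights.eventlake.targetFileSize=2GB
```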

Connection pool

insights.eventlake.connectionPoolSize (default: 10)
Size of the DuckDB connection pool for accessing the Event Lake. Increase this if you expect many concurrent queries.
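For a deployment that serves many concurrent dashboard or ad-hoc queries, the pool can be enlarged; the value below is illustrative:

```properties
# Allow more concurrent Event Lake queries
insights.eventlake.connectionPoolSize=25
```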

File maintenance

Insights runs background tasks to merge small Parquet files, clean up obsolete snapshots, and remove orphaned data.

insights.eventlake.fileMergeInterval (default: 5m)
Interval at which the file merge task runs. Merges small Parquet files into larger ones, up to targetFileSize, to reduce the number of files and improve query performance.

insights.eventlake.fileCleanInterval (default: 15m)
Interval at which the file clean task runs. Removes data files that are no longer referenced by any snapshot.
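On write-heavy systems that produce many small files, the maintenance tasks can be run more frequently to keep the file count low. A sketch with illustrative intervals:

```properties
# Merge and clean more often on write-heavy workloads
insights.eventlake.fileMergeInterval=2m
insights.eventlake.fileCleanInterval=10m
```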