Axon Server
This section is split into 4 sub-sections.
Docker image
AxonIQ provides ready-to-use Axon Server images. There are two types of images available: one with Axon Server running as the user root, and one with Axon Server running as the user axonserver. Both images are based on Eclipse Temurin.
- The root image of version 2024.2.3 is available as axoniq/axonserver:2024.2.3 and is based on eclipse-temurin:17-focal. This image is particularly useful for running in Docker Desktop, as it will not have any trouble creating files and directories as user root.
- The axonserver image of version 2024.2.3 is available as axoniq/axonserver:2024.2.3-nonroot and is based on the same Eclipse Temurin image. This image is more secure, and useful in Kubernetes and OpenShift clusters. Take care to declare the user and group ID, both of which are 1001 and are named axonserver. Doing this ensures that any mounted volumes are writable by the user running Axon Server.
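When bind-mounting host directories into the nonroot image, the directories must be writable by that user and group ID. A minimal preparation sketch, in which the ./axonserver path and the use of sudo are assumptions rather than part of the official images:

```shell
# Create host directories to bind-mount into the nonroot container.
mkdir -p ./axonserver/config ./axonserver/data ./axonserver/events ./axonserver/log

# Make them writable for UID/GID 1001, the axonserver user inside the image.
# chown needs elevated rights; with Docker named volumes this step is not needed.
sudo chown -R 1001:1001 ./axonserver || true
```

With Docker named volumes instead of bind mounts, Docker initializes ownership from the image, so the chown step can be skipped.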
The images export the following volumes:

- /axonserver/config: here you can add configuration files, such as an additional axonserver.properties and the license file. However, you can also opt to use, for instance, Kubernetes or Docker Compose secrets. Note that Axon Server assumes it can write to the directory configured with axoniq.axonserver.enterprise.licenseDirectory, so you don't have to put the license on all nodes.
- /axonserver/data: the ControlDB, the PID file, and a copy of the application logs are written here.
- /axonserver/events: the Event Store is created in this volume, with a single directory per context.
- /axonserver/log: the Replication Logs are created in this volume, with a single directory per Replication Group.
- /axonserver/exts: in this volume you can place Extension JAR files, such as the LDAP and OAuth2 extensions.
- /axonserver/plugins: Axon Server places all uploaded plugins in this volume.
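To persist these volumes when running the image directly with Docker, you can map each one to a named volume. A sketch in which the container and volume names are illustrative:

```shell
# Run Axon Server with named volumes for its persistent data.
docker run -d --name axonserver \
  -p 8024:8024 -p 8124:8124 \
  -v axonserver-config:/axonserver/config \
  -v axonserver-data:/axonserver/data \
  -v axonserver-events:/axonserver/events \
  -v axonserver-log:/axonserver/log \
  axoniq/axonserver:2024.2.3
```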
Building your own image
A starter Dockerfile is included below, which can be tailored to your requirements. The starter file builds the image in a few steps:

- The image is based on eclipse-temurin:17-focal.
- The directories that will become our volumes are created first.
- The executable JAR named axonserver.jar and a common set of properties are copied in.
- The volume mount points and exposed ports are marked, and finally the command to start Axon Server EE is specified.
FROM eclipse-temurin:17-focal
RUN mkdir -p /axonserver/config /axonserver/data /axonserver/events /axonserver/log /axonserver/exts /axonserver/plugins
COPY axonserver.jar axonserver.properties /axonserver/
WORKDIR /axonserver
VOLUME [ "/axonserver/config", "/axonserver/data", "/axonserver/events", "/axonserver/log", "/axonserver/exts", "/axonserver/plugins" ]
EXPOSE 8024/tcp 8124/tcp 8224/tcp
ENTRYPOINT [ "java", "-jar", "./axonserver.jar" ]
For the common properties (axonserver.properties), the minimum set shown below ensures that the volumes are used and logs are generated.
Again, these can be tailored to the deployment requirements.
axoniq.axonserver.event.storage=./events
axoniq.axonserver.snapshot.storage=./events
axoniq.axonserver.replication.log-storage-folder=./log
axoniq.axonserver.enterprise.licenseDirectory=./config
axoniq.axonserver.controldb-path=./data
axoniq.axonserver.pid-file-location=./data
logging.file=./data/axonserver.log
logging.file.max-history=10
logging.file.max-size=10MB
Place the Dockerfile, the Axon Server jar file (axonserver.jar), and the axonserver.properties file in the current directory.
Assuming we are building version 2024.2.3, the image can be constructed using the following command:
$ docker build --tag my-repository/axonserver:2024.2.3 .
This completes the construction of the Docker image. The image can be pushed to your repository, or you can keep it local if you only want to run it on your development machine. The next step is to run it, either using Docker Compose or Kubernetes.
If you want to run the Docker image as a standalone instance of Axon Server and have it initialized automatically, you can start it with the axoniq.axonserver.standalone property set through the environment, for instance:
$ docker run -dit -e axoniq.axonserver.standalone=true -p 8024:8024 -p 8124:8124 my-repository/axonserver:2024.2.3
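Once the container is up, you can check that Axon Server responds over HTTP; the actuator info endpoint queried here is the same one used by the Kubernetes probes later in this section, and the port is the one mapped in the command above:

```shell
# Query the REST API on the mapped HTTP port after startup.
curl -s http://localhost:8024/actuator/info
```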
Production considerations
- Security: consider enabling TLS and Access Control to secure your setup.
- root vs nonroot: the image above runs Axon Server as the root user. This may not be desirable, and you might want to change it to a different user.
- axonserver-cli: the axonserver-cli provides useful functionality to interact with a running Axon Server. We recommend including it in the Docker image, so it is available in case you need to troubleshoot issues in production.
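Including the CLI can be as simple as one extra line in the starter Dockerfile, assuming you place the CLI jar next to the other build inputs; the file name and invocation below are illustrative:

```dockerfile
# Ship the CLI alongside the server for in-container troubleshooting.
COPY axonserver-cli.jar /axonserver/
```

You can then invoke it inside a running container, for example with docker exec -it <container> java -jar axonserver-cli.jar, followed by the command and options you need.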
Docker Compose
Axon Server is meant to run in a distributed manner, as a cluster of multiple interconnected Axon Server nodes.
The installation process assumes that Docker Compose will be used to run a 3-node Axon Server cluster, that is, 3 services running the same container image we built above. Let us designate these services as "axonserver-1", "axonserver-2", and "axonserver-3". The default Docker Compose file uses the image provided by AxonIQ on Docker Hub, axoniq/axonserver:2024.2.3. If you wish to use your own Docker image, replace each occurrence of "axoniq/axonserver:2024.2.3" with your own image and tag.
Each container instance uses separate volumes for "data", "events", and "log". An environment variable tells Axon Server the location of the license file. We use "secrets" to inject the license file and tokens, as well as the cluster/context definitions using the autocluster mechanism.
The complete docker-compose file is depicted below.
version: '3.3'
services:
  axonserver-1:
    image: axoniq/axonserver:2024.2.3
    hostname: axonserver-1
    volumes:
      - axonserver-data1:/axonserver/data
      - axonserver-events1:/axonserver/events
      - axonserver-log1:/axonserver/log
    secrets:
      - source: axoniq-license
        target: /axonserver/config/axoniq.license
      - source: axonserver-properties
        target: /axonserver/config/axonserver.properties
    environment:
      - AXONIQ_LICENSE=/axonserver/config/axoniq.license
    ports:
      - '8024:8024'
      - '8124:8124'
    networks:
      - axon-demo

  axonserver-2:
    image: axoniq/axonserver:2024.2.3
    hostname: axonserver-2
    volumes:
      - axonserver-data2:/axonserver/data
      - axonserver-events2:/axonserver/events
      - axonserver-log2:/axonserver/log
    secrets:
      - source: axoniq-license
        target: /axonserver/config/axoniq.license
      - source: axonserver-properties
        target: /axonserver/config/axonserver.properties
    environment:
      - AXONIQ_LICENSE=/axonserver/config/axoniq.license
    ports:
      - '8025:8024'
      - '8125:8124'
    networks:
      - axon-demo

  axonserver-3:
    image: axoniq/axonserver:2024.2.3
    hostname: axonserver-3
    volumes:
      - axonserver-data3:/axonserver/data
      - axonserver-events3:/axonserver/events
      - axonserver-log3:/axonserver/log
    secrets:
      - source: axoniq-license
        target: /axonserver/config/axoniq.license
      - source: axonserver-properties
        target: /axonserver/config/axonserver.properties
    environment:
      - AXONIQ_LICENSE=/axonserver/config/axoniq.license
    ports:
      - '8026:8024'
      - '8126:8124'
    networks:
      - axon-demo

volumes:
  axonserver-data1:
  axonserver-events1:
  axonserver-log1:
  axonserver-data2:
  axonserver-events2:
  axonserver-log2:
  axonserver-data3:
  axonserver-events3:
  axonserver-log3:

networks:
  axon-demo:

secrets:
  axonserver-properties:
    file: ./axonserver.properties
  axoniq-license:
    file: ./axoniq.license
The axonserver.properties file referred to in the secrets definition section is depicted below.
It uses the autocluster mechanism to set up the cluster on startup, contacting the first node to initiate the procedure.
axoniq.axonserver.autocluster.first=axonserver-1
axoniq.axonserver.autocluster.contexts=_admin,default
Starting Axon Server using the docker-compose command is depicted below.
$ docker-compose up
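After startup you can verify that all three nodes respond; each node's HTTP port is mapped to a different host port in the compose file (8024, 8025, and 8026):

```shell
# Query each node's REST API through its mapped HTTP port.
for port in 8024 8025 8026; do
  curl -s "http://localhost:${port}/actuator/info"
  echo
done
```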
Kubernetes
For example purposes only
The examples below show only one of the ways you could deploy Axon Server to Kubernetes. As discussed in this blog article, there are many aspects that you need to plan for carefully ahead of time. A complete set of examples can be found in the "Running Axon Server" GitHub repository; we especially recommend the "Singleton StatefulSet" approach. Although the complexity of deploying any application to Kubernetes can be overwhelming, we strongly recommend that you study this subject carefully. The examples we provide are not necessarily the best approach for your particular situation, so be careful about copying them without further modification, if only because they generate self-signed certificates with a one-year validity.
Creating the Secrets and ConfigMap
An important thing to consider is the use of a "nonroot" image. Docker mounts volumes as owned by the mount location's owner, while Kubernetes uses a special security context, defaulting to "root". Since a "nonroot" image runs Axon Server under its own user, that user has no rights on the mounted volume other than "read". The context can be specified, but only through the user or group ID, not through their names as we did in the image, because those names do not exist in the Kubernetes management context. So we have to adjust the image to use a specific numeric value (here 1001), and then use that value in the security context of the StatefulSet, which we shall see below.
We need to supply a license/token file (for client applications) and cluster/context definitions via an axonserver.properties file. Unlike Docker Compose, Kubernetes mounts Secrets and ConfigMaps as directories rather than files, so we need to split the license and configuration into two separate locations. For the license secret we use a new location, "/axonserver/license/axoniq.license", and adjust the environment variable to match. For the system token we use "/axonserver/security/token.txt", and for the properties file we use a ConfigMap that we mount on top of the "/axonserver/config" directory.
These can be created using "kubectl" directly from their respective files, as depicted below. It is recommended to create a dedicated namespace before creating the secrets and the config map.
$ kubectl create secret generic axonserver-license --from-file=./axoniq.license -n ${axonserver-ns}
secret/axonserver-license created
$ kubectl create secret generic axonserver-token --from-file=./axoniq.token -n ${axonserver-ns}
secret/axonserver-token created
$ kubectl create configmap axonserver-properties --from-file=./axonserver.properties -n ${axonserver-ns}
configmap/axonserver-properties created
In the descriptor we now have to declare the secret, add a volume for it, and mount the secret on the volume. Then a list of volumes has to be added to link the actual license and properties.
Deploying Axon Server
The complete spec for the Axon Server StatefulSet is given below. It includes the security context, the volume mounts, the readiness and liveness probes, and finally the volumes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 1001
      containers:
      - name: axonserver
        image: axoniq/axonserver:2024.2.3-nonroot
        imagePullPolicy: IfNotPresent
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
        env:
        - name: AXONIQ_LICENSE
          value: "/axonserver/license/axoniq.license"
        volumeMounts:
        - name: data
          mountPath: /axonserver/data
        - name: events
          mountPath: /axonserver/events
        - name: log
          mountPath: /axonserver/log
        - name: config
          mountPath: /axonserver/config
          readOnly: true
        - name: system-token
          mountPath: /axonserver/security
          readOnly: true
        - name: license
          mountPath: /axonserver/license
          readOnly: true
        readinessProbe:
          httpGet:
            path: /actuator/info
            port: 8024
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 1
          failureThreshold: 30
        livenessProbe:
          httpGet:
            path: /actuator/info
            port: 8024
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: config
        configMap:
          name: axonserver-properties
      - name: system-token
        secret:
          secretName: axonserver-token
      - name: license
        secret:
          secretName: axonserver-license
  volumeClaimTemplates:
  - metadata:
      name: events
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: log
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
The StatefulSet can be applied using the following command (assuming that the StatefulSet spec is stored in the file "axonserver-sts.yml").
$ kubectl apply -f axonserver-sts.yml -n ${axonserver-ns}
statefulset.apps/axonserver created
The next step is to create the two services required for Axon Server: axonserver-gui on port 8024 (HTTP) and axonserver on port 8124 (gRPC).
---
apiVersion: v1
kind: Service
metadata:
  name: axonserver-gui
  labels:
    app: axonserver
spec:
  ports:
  - name: gui
    port: 8024
    targetPort: 8024
  selector:
    app: axonserver
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  ports:
  - name: grpc
    port: 8124
    targetPort: 8124
  clusterIP: None
  selector:
    app: axonserver
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: axonserver
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  ingressClassName: nginx
  rules:
  - host: axonserver
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: axonserver-gui
            port:
              number: 8024
---
The services use an Ingress to allow incoming traffic and can be deployed with the following command (assuming that the Service specs are stored in the file "axonserver-ing.yml").
$ kubectl apply -f axonserver-ing.yml -n ${axonserver-ns}
service/axonserver-gui created
service/axonserver created
ingress.networking.k8s.io/axonserver created
To run an Axon Server cluster in Kubernetes, it is recommended to create a separate StatefulSet for each node, rather than scaling a single StatefulSet to multiple instances. Scaling a single StatefulSet makes it hard to control individual nodes in the cluster.
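A per-node sketch could look like the following, repeated for axonserver-2 and axonserver-3 with the names changed; combined with the autocluster properties shown in the Docker Compose section, the nodes can then find each other. The names here are assumptions following that section's convention, and the omitted parts mirror the single-node spec above:

```yaml
# One StatefulSet per cluster node; replicas stays at 1.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver-1
spec:
  serviceName: axonserver-1
  replicas: 1
  selector:
    matchLabels:
      app: axonserver-1
  # template and volumeClaimTemplates as in the single-node spec above,
  # with the labels changed to app: axonserver-1
```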