Axon Server EE
This section is split into 3 sub-sections.

Construction of the Image

Axon does not provide a public image for Axon Server EE. A starter Dockerfile is included below, which can be tailored to your requirements. It works for OpenShift and Kubernetes as well as Docker and Docker Compose.
The starter file builds the image in two stages:
  • The final image is based on a compact image from Google’s “distroless” range at the gcr.io registry, in this case “gcr.io/distroless/java:11”.
  • The first stage creates a user and group named “axonserveree” so that Axon Server EE can run as a non-root user. It also creates the directories that will become our volumes (under /axonserveree), and finally sets their ownership.
  • The second stage begins by copying the account (in the form of the “passwd” and “group” files) and the home directory with its volume mount points, carefully keeping ownership set to the new user.
  • The last steps copy the executable jar (axonserver.jar, the Enterprise Edition version) and a common set of properties. They mark the volume mount points and exposed ports, and finally specify the command to start Axon Server EE.
FROM busybox as source
RUN addgroup -S axonserveree \
    && adduser -S -h /axonserveree -D axonserveree \
    && mkdir -p /axonserveree/config /axonserveree/data \
                /axonserveree/events /axonserveree/log \
    && chown -R axonserveree:axonserveree /axonserveree

FROM gcr.io/distroless/java:11

COPY --from=source /etc/passwd /etc/group /etc/
COPY --from=source --chown=axonserveree /axonserveree /axonserveree

COPY --chown=axonserveree axonserver.jar axonserver-cli.jar axonserver.properties /axonserveree/

USER axonserveree
WORKDIR /axonserveree

VOLUME [ "/axonserveree/config", "/axonserveree/data", "/axonserveree/events", "/axonserveree/log" ]
EXPOSE 8024/tcp 8124/tcp 8224/tcp

ENTRYPOINT [ "java", "-jar", "axonserver.jar" ]
For the common properties (axonserver.properties), the minimum set below ensures that the volumes are used and logs are generated. Again, these can be tailored to the deployment requirements.
axoniq.axonserver.event.storage=/axonserveree/events
axoniq.axonserver.snapshot.storage=/axonserveree/events
axoniq.axonserver.replication.log-storage-folder=/axonserveree/log
axoniq.axonserver.controldb-path=/axonserveree/data
axoniq.axonserver.pid-file-location=/axonserveree/data

logging.file=/axonserveree/data/axonserver.log
logging.file.max-history=10
logging.file.max-size=10MB
Place the Dockerfile, the Axon Server EE jar file (axonserver.jar), the Axon Server EE CLI jar file (axonserver-cli.jar) and the axonserver.properties file together in one directory. The image can then be built using the following command.
$ docker build --tag ${TAG} .
The ${TAG} can be any tag that you would like to give to the Axon Server EE Docker image. This completes the construction of the image. It can be pushed to your registry, or kept local if you only want to run it on your development machine. The next step is to run it using either Docker Compose or Kubernetes.
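If the image needs to be available to other machines (for example a Kubernetes cluster), it can be pushed to a registry. A minimal sketch, assuming a private registry at "registry.example.com" (a placeholder) and the "axonserver-ee:running" tag used later in this section:

```shell
# Build the image, then re-tag and push it to a private registry.
# "registry.example.com" is a placeholder; substitute your own
# registry host and repository naming convention.
docker build --tag axonserver-ee:running .
docker tag axonserver-ee:running registry.example.com/axonserver-ee:running
docker push registry.example.com/axonserver-ee:running
```

Skip the tag/push steps if you only run the image locally.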

Docker Compose

Axon Server EE is meant to run in a distributed manner, i.e. as a cluster of multiple interconnected Axon Server EE nodes.
The installation process assumes that Docker Compose will be used to run a 3-node Axon Server EE cluster, i.e. three services running the same container image we built above. Let us designate these services as "axonserver-1", "axonserver-2" and "axonserver-3", and tag the image we constructed above as "axonserver-ee:running".
Each container instance uses separate volumes for “data”, “events”, and “log”. We use "secrets" to inject the license file and tokens, as well as the cluster/context definitions via the autocluster mechanism. An environment variable tells Axon Server where to find the license file.
The complete docker-compose file is depicted below.
version: '3.3'
services:
  axonserver-1:
    image: axonserver-ee:running
    hostname: axonserver-1
    volumes:
      - axonserver-data1:/axonserveree/data
      - axonserver-events1:/axonserveree/events
      - axonserver-log1:/axonserveree/log
    secrets:
      - source: axoniq-license
        target: /axonserveree/config/axoniq.license
      - source: axonserver-properties
        target: /axonserveree/config/axonserver.properties
      - source: axonserver-token
        target: /axonserveree/config/axonserver.token
    environment:
      - AXONIQ_LICENSE=/axonserveree/config/axoniq.license
    ports:
      - '8024:8024'
      - '8124:8124'
      - '8224:8224'
    networks:
      - axon-demo

  axonserver-2:
    image: axonserver-ee:running
    hostname: axonserver-2
    volumes:
      - axonserver-data2:/axonserveree/data
      - axonserver-events2:/axonserveree/events
      - axonserver-log2:/axonserveree/log
    secrets:
      - source: axoniq-license
        target: /axonserveree/config/axoniq.license
      - source: axonserver-properties
        target: /axonserveree/config/axonserver.properties
      - source: axonserver-token
        target: /axonserveree/config/axonserver.token
    environment:
      - AXONIQ_LICENSE=/axonserveree/config/axoniq.license
    ports:
      - '8025:8024'
      - '8125:8124'
      - '8225:8224'
    networks:
      - axon-demo

  axonserver-3:
    image: axonserver-ee:running
    hostname: axonserver-3
    volumes:
      - axonserver-data3:/axonserveree/data
      - axonserver-events3:/axonserveree/events
      - axonserver-log3:/axonserveree/log
    secrets:
      - source: axoniq-license
        target: /axonserveree/config/axoniq.license
      - source: axonserver-properties
        target: /axonserveree/config/axonserver.properties
      - source: axonserver-token
        target: /axonserveree/config/axonserver.token
    environment:
      - AXONIQ_LICENSE=/axonserveree/config/axoniq.license
    ports:
      - '8026:8024'
      - '8126:8124'
      - '8226:8224'
    networks:
      - axon-demo

volumes:
  axonserver-data1:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/data1
      o: bind
  axonserver-events1:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/events1
      o: bind
  axonserver-log1:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/log1
      o: bind
  axonserver-data2:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/data2
      o: bind
  axonserver-events2:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/events2
      o: bind
  axonserver-log2:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/log2
      o: bind
  axonserver-data3:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/data3
      o: bind
  axonserver-events3:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/events3
      o: bind
  axonserver-log3:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/log3
      o: bind

networks:
  axon-demo:

secrets:
  axonserver-properties:
    file: ./axonserver.properties
  axoniq-license:
    file: ./axoniq.license
  axonserver-token:
    file: ./axonserver.token
The “axonserver-token” secret is used to allow the CLI to talk with the nodes. The access control section details the generation of these tokens. A similar approach can be used to configure more secrets for certificates, and so enable SSL.
The "axonserver.properties" file referred to in the secrets definition section is depicted below.
axoniq.axonserver.autocluster.first=axonserver-1
axoniq.axonserver.autocluster.contexts=_admin,default
axoniq.axonserver.accesscontrol.enabled=true
axoniq.axonserver.accesscontrol.internal-token=${generated_token}
axoniq.axonserver.accesscontrol.systemtokenfile=/axonserveree/config/axonserver.token
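The compose file above declares its volumes with the "local" driver and bind-mount options pointing at directories under ${PWD}. Docker does not create these host directories itself, so as a small preparatory sketch they can be created up front:

```shell
# Create the per-node bind-mount directories referenced by the
# driver_opts entries ("device: ${PWD}/data1" and so on) before
# starting the cluster; Docker will fail to mount them otherwise.
for i in 1 2 3; do
  mkdir -p "data$i" "events$i" "log$i"
done
```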
Starting Axon Server EE using the docker-compose command is depicted below.
$ docker-compose up
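Once the containers are up, each node's HTTP port (8024, 8025 and 8026, as mapped above) can be probed to check that the nodes started. This is a sketch; with access control enabled, Axon Server expects the token in the "AxonIQ-Access-Token" header — verify the header name against your Axon Server version.

```shell
# Probe the /actuator/info endpoint (also used by the Kubernetes
# probes later in this section) on each node's mapped HTTP port.
# Assumes the CLI token was saved to ./axonserver.token.
for port in 8024 8025 8026; do
  curl -s -H "AxonIQ-Access-Token: $(cat axonserver.token)" \
       "http://localhost:$port/actuator/info"
  echo
done
```

The cluster overview in the GUI (http://localhost:8024) should show all three nodes once autoclustering completes.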

Kubernetes

The deployment of Axon Server EE on Kubernetes follows essentially the same principle as for Axon Server SE, i.e. using StatefulSets. However, to cater to the distributed deployment topology of Axon Server EE, some changes are needed.
The Dockerfile we built above needs a change. Docker mounts volumes as owned by the mount location’s owner, while Kubernetes uses a special security context, defaulting to root. Since our EE image runs Axon Server under its own user (“axonserveree”), it would have no rights on the mounted volumes other than “read”. The context can be specified, but only through the user’s or group’s numeric ID, not by name as we did in the image, because that name does not exist in the Kubernetes management context. So we adjust the first stage to use a specific numeric ID (here 1001), and then use that value in the security context of the StatefulSet, which we shall see below.
The change is depicted below. As before, create the image using docker build and give it a tag (e.g. "axonserver-ee:running").
FROM busybox as source
RUN addgroup -S -g 1001 axonserveree \
    && adduser -S -u 1001 -h /axonserveree -D axonserveree \
    && mkdir -p /axonserveree/config /axonserveree/data \
                /axonserveree/events /axonserveree/log \
    && chown -R axonserveree:axonserveree /axonserveree
We need to supply the license, a token file (for client applications), and cluster/context definitions via an axonserver.properties file. Unlike Docker Compose, Kubernetes mounts Secrets and ConfigMaps as directories rather than files, so we need to split the license and the configuration into separate locations. For the license secret we use a new location, “/axonserveree/license/axoniq.license”, and adjust the AXONIQ_LICENSE environment variable to match. For the system token we use the “/axonserveree/security” directory, and for the properties file we use a ConfigMap that we mount on top of the “/axonserveree/config” directory.
These can be created using "kubectl" directly from the respective files, as depicted below. It is recommended to create a dedicated namespace before creating the secrets and the config map.
$ kubectl create secret generic axonserveree-license --from-file=./axoniq.license -n ${axonserveree-ns}
secret/axonserveree-license created
$ kubectl create secret generic axonserveree-token --from-file=./axoniq.token -n ${axonserveree-ns}
secret/axonserveree-token created
$ kubectl create configmap axonserveree-properties --from-file=./axonserver.properties -n ${axonserveree-ns}
configmap/axonserveree-properties created
$
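Before deploying the StatefulSet, it can be worth confirming that the objects exist in the namespace; a small sketch using standard kubectl commands ("${axonserveree-ns}" is the namespace placeholder used throughout this section):

```shell
# List the secrets and the config map just created; each should
# appear with a recent AGE in the dedicated namespace.
kubectl get secret axonserveree-license axonserveree-token -n ${axonserveree-ns}
kubectl get configmap axonserveree-properties -n ${axonserveree-ns}
```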
In the descriptor we now have to declare the secrets and the ConfigMap, add volumes for them, and mount them at the locations chosen above.
The complete spec for the Axon Server EE StatefulSet is given below. It includes the security context, the volume mounts, the readiness and liveness probes, and finally the volumes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserveree
  labels:
    app: axonserveree
spec:
  serviceName: axonserveree
  replicas: 1
  selector:
    matchLabels:
      app: axonserveree
  template:
    metadata:
      labels:
        app: axonserveree
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: axonserveree
          image: axonserver-ee:running
          imagePullPolicy: IfNotPresent
          ports:
            - name: grpc
              containerPort: 8124
              protocol: TCP
            - name: gui
              containerPort: 8024
              protocol: TCP
          env:
            - name: AXONIQ_LICENSE
              value: "/axonserveree/license/axoniq.license"
          volumeMounts:
            - name: data
              mountPath: /axonserveree/data
            - name: events
              mountPath: /axonserveree/events
            - name: log
              mountPath: /axonserveree/log
            - name: config
              mountPath: /axonserveree/config
              readOnly: true
            - name: system-token
              mountPath: /axonserveree/security
              readOnly: true
            - name: license
              mountPath: /axonserveree/license
              readOnly: true
          readinessProbe:
            httpGet:
              path: /actuator/info
              port: 8024
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 1
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /actuator/info
              port: 8024
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
      volumes:
        - name: config
          configMap:
            name: axonserveree-properties
        - name: system-token
          secret:
            secretName: axonserveree-token
        - name: license
          secret:
            secretName: axonserveree-license
  volumeClaimTemplates:
    - metadata:
        name: events
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: log
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
The StatefulSet can be applied using the following command (assuming that the StatefulSet spec is stored in the file "axonserveree-sts.yml").
$ kubectl apply -f axonserveree-sts.yml -n ${axonserveree-ns}
statefulset.apps/axonserveree created
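Before scaling out, you may want to confirm that the initial replica comes up cleanly; a sketch using standard kubectl commands:

```shell
# "rollout status" blocks until the StatefulSet's current replicas
# are ready, so it doubles as a readiness check for the first node.
kubectl rollout status sts/axonserveree -n ${axonserveree-ns}
kubectl get pods -l app=axonserveree -n ${axonserveree-ns}
```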
The next step is to create the two services required for Axon Server EE, i.e. "axonserveree-gui" on port 8024 (HTTP) and "axonserveree" on port 8124 (gRPC).
---
apiVersion: v1
kind: Service
metadata:
  name: axonserveree-gui
  labels:
    app: axonserveree
spec:
  ports:
    - name: gui
      port: 8024
      targetPort: 8024
  selector:
    app: axonserveree
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: axonserveree
  labels:
    app: axonserveree
spec:
  ports:
    - name: grpc
      port: 8124
      targetPort: 8124
  clusterIP: None
  selector:
    app: axonserveree
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: axonserveree
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  rules:
    - host: axonserveree
      http:
        paths:
          - backend:
              serviceName: axonserveree-gui
              servicePort: 8024
---
An Ingress exposes the HTTP service to incoming traffic. The services and Ingress can be deployed with the following command (assuming the spec is stored in the file "axonserveree-ing.yml").
$ kubectl apply -f axonserveree-ing.yml -n ${axonserveree-ns}
service/axonserveree-gui created
service/axonserveree created
ingress.networking.k8s.io/axonserveree created
The final step is to scale out the cluster. The simplest approach, and most often the correct one, is to set the replica count to the desired cluster size and let Kubernetes take care of deploying the instances. This gives us several nodes that Kubernetes can dynamically manage and migrate as needed, while keeping their names and storage fixed. Each pod gets a numeric suffix starting at 0, so a replica count of 3 gives us “axonserveree-0” through “axonserveree-2”.
$ kubectl scale sts axonserveree -n ${axonserveree-ns} --replicas=3
statefulset.apps/axonserveree scaled
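A sketch of how the result can be verified, using standard kubectl commands (pod names follow the StatefulSet ordinal convention):

```shell
# After scaling, three ordinal pods should report Running.
kubectl get pods -l app=axonserveree -n ${axonserveree-ns}

# To inspect the cluster overview in the GUI, port-forward one pod
# (local port 8024 is an arbitrary choice) and browse to
# http://localhost:8024.
kubectl port-forward axonserveree-0 8024:8024 -n ${axonserveree-ns}
```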
This completes a basic setup of Axon Server EE on Kubernetes. The entire setup can be tailored further based on your requirements and usage of Kubernetes.