This chapter contains a checklist and guidelines to take into consideration when preparing for production-level performance. By now, you have probably used the test fixtures to test your command handling logic and sagas. The production environment isn't as forgiving as a test environment, though: aggregates tend to live longer and be used more frequently and concurrently. For extra performance and stability, you're better off tweaking the configuration to suit your specific needs.
If you have generated the tables automatically using your JPA implementation (e.g. Hibernate), you probably do not have all the right indexes set on your tables. Different usages of the Event Store require different indexes to be set for optimal performance. This list suggests the indexes that should be added for the different types of queries used by the default event storage engines.
Normal operational use (storing and loading events):
Table 'DomainEventEntry', columns aggregateIdentifier and sequenceNumber (unique index)
Table 'DomainEventEntry', column eventIdentifier (unique index)
Sagas:
Table 'AssociationValueEntry', columns sagaType, associationKey and associationValue
Table 'AssociationValueEntry', columns sagaId and sagaType
The default column lengths generated by e.g. Hibernate may work, but won't be optimal. A UUID, for example, will always have the same length. Instead of a variable length column of 255 characters, you could use a fixed length column of 36 characters for the aggregate identifier.
The 'timestamp' column in the DomainEventEntry table only stores ISO 8601 timestamps. If all times are stored in the UTC timezone, they need a column length of 24 characters. If you use another timezone, this may be up to 28 characters. Using variable length columns is generally not necessary, since timestamps always have the same length.
It is highly recommended to store all timestamps in UTC format. In countries with Daylight Savings Time, storing timestamps in local time may result in sorting errors for events generated around and during the timezone switch. This does not occur when UTC is used. Some servers are configured to always use UTC. Alternatively, you should configure the Event Store to convert timestamps to UTC before storing them.
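The sorting problem is easy to demonstrate with `java.time`. Around the autumn DST switch, two distinct instants map to the same local wall-clock time, so timestamps stored in local time can no longer be ordered reliably, while their UTC representations still sort correctly. The zone and dates below are just an illustration:

```java
import java.time.*;

public class UtcOrderingDemo {
    static final ZoneId ZONE = ZoneId.of("Europe/Amsterdam");

    // Render an instant the way a local-time timestamp column would store it
    static String localTimestamp(Instant instant) {
        return instant.atZone(ZONE).toLocalDateTime().toString();
    }

    public static void main(String[] args) {
        // 2023-10-29: clocks in Europe/Amsterdam fall back from 03:00 CEST to 02:00 CET
        Instant before = Instant.parse("2023-10-29T00:30:00Z"); // 02:30 local, before the switch
        Instant after = Instant.parse("2023-10-29T01:30:00Z");  // 02:30 local, after the switch

        System.out.println(localTimestamp(before)); // 2023-10-29T02:30
        System.out.println(localTimestamp(after));  // 2023-10-29T02:30 — same local time, order is lost
        System.out.println(before);                 // 2023-10-29T00:30:00Z
        System.out.println(after);                  // 2023-10-29T01:30:00Z — UTC strings still sort correctly
    }
}
```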
The 'type' column in the DomainEventEntry stores the Type Identifiers of aggregates. Generally, these are the 'simple name' of the aggregate. Even the infamous 'AbstractDependencyInjectionSpringContextTests' in Spring only counts 45 characters. Here again, a shorter (but variable) length field should suffice.
When using a relational database as an event store, Axon relies on an auto-increment value to allow tracking processors to read all events, roughly in the order they were inserted. "Roughly", because "insert-order" and "commit-order" are different things.
While auto-increment values are (generally) generated at insert-time, these values only become visible at commit-time. This means another process may observe these sequence numbers arriving in a different order. While Axon has mechanisms to ensure eventually all events are handled, even when they become visible in a different order, there are limitations and performance aspects to consider.
When a tracking processor reads events, it uses the "global sequence" to track its progress. When events become available in a different order than they were inserted, Axon will encounter a "gap". Axon will remember these gaps to verify whether data has become available since the last read. These gaps may be the result of events becoming visible in a different order, but also of a transaction being rolled back. It is highly recommended to ensure that no gaps exist merely because of an over-eagerly incremented sequence number. The mechanism for checking gaps is convenient, but comes with a performance impact.
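Conceptually, the bookkeeping can be sketched as follows. This is not Axon's actual implementation (Axon tracks gaps inside its tracking tokens); it is just a minimal illustration of remembering skipped global sequence numbers and re-checking them later:

```java
import java.util.*;

public class GapAwareTracker {
    private long lastIndex = 0;                            // highest global sequence seen so far
    private final SortedSet<Long> gaps = new TreeSet<>();  // skipped indices, possibly still in-flight

    public void advanceTo(long index) {
        if (gaps.remove(index)) {
            return; // a previously skipped index has now become visible
        }
        for (long i = lastIndex + 1; i < index; i++) {
            gaps.add(i); // remember every skipped index as a gap to re-check on the next read
        }
        lastIndex = Math.max(lastIndex, index);
    }

    public SortedSet<Long> openGaps() {
        return gaps;
    }
}
```

Every open gap must be re-checked on each read, which is why a sequence number that eagerly skips values (for example due to rollbacks or large increments) directly translates into extra work for the processor.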
When using a JPA based Event Storage Engine, Axon relies on the JPA implementation to create the table structure. While this will work, it is unlikely to provide the configuration that performs best for the database engine in use. That is because Axon uses default settings on the mapped entities.
To override these settings, create a file called /META-INF/orm.xml on the classpath, which looks as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="1.0" xmlns="http://java.sun.com/xml/ns/persistence/orm">
    <mapped-superclass access="FIELD" metadata-complete="false"
                       class="org.axonframework.eventsourcing.eventstore.AbstractSequencedDomainEventEntry">
        <attributes>
            <id name="globalIndex">
                <generated-value strategy="SEQUENCE" generator="myGenerator"/>
                <sequence-generator name="myGenerator" sequence-name="mySequence"/>
            </id>
        </attributes>
    </mapped-superclass>
</entity-mappings>
```
It is important to specify metadata-complete="false", as this indicates that this file should be used to override existing annotations, instead of replacing them. For the best results, ensure that the DomainEventEntry table uses its own sequence. This can be ensured by specifying a different sequence generator for that entity only.
The MongoEventStorageEngine has an @PostConstruct annotated method, called ensureIndexes, which will generate the indexes required for correct operation. That means that, when running in a container that automatically calls @PostConstruct handlers, the required unique index on "Aggregate Identifier" and "Event Sequence Number" is created when the Event Store is created.
Note that there is always a balance between query optimization and update speed. Load testing is ultimately the best way to discover which indices provide the best performance.
Normal operational use:
An index is automatically created on "aggregateIdentifier", "type" and "sequenceNumber" in the domain events (default name: "domainevents") collection
Additionally, a non-unique index on "timestamp" and "sequenceNumber" is configured on the domain events (default name: "domainevents") collection, for the Tracking Event Processors
A (unique) index on "aggregateIdentifier" and "sequenceNumber" is automatically created in the snapshot events (default name: "snapshotevents") collection
Put a (unique) index on the "sagaIdentifier" in the saga (default name: "sagas") collection
Put an index on the "sagaType", "associations.key" and "associations.value" properties in the saga (default name: "sagas") collection
A well designed command handling module should pose no problems when implementing caching. Especially when using Event Sourcing, loading an aggregate from an Event Store is an expensive operation. With a properly configured cache in place, loading an aggregate can be converted into a pure in-memory process.
Here are a few guidelines that help you get the most out of your caching solution:
Make sure the Unit Of Work never needs to perform a rollback for functional reasons.
A rollback means that an aggregate has reached an invalid state. Axon will automatically invalidate the cache entries involved. The next request will force the aggregate to be reconstructed from its events. If you use exceptions as a potential (functional) return value, you can configure a RollbackConfiguration on your Command Bus. By default, the Unit of Work will be rolled back on runtime exceptions for command handlers, and on all exceptions for event handlers.
All commands for a single aggregate must arrive on the machine that has the aggregate in its cache.
This means that commands should be consistently routed to the same machine, for as long as that machine is "healthy". Routing commands consistently prevents the cache from going stale. A hit on a stale cache will cause a command to be executed and fail at the moment events are stored in the event store. By default, Axon's Distributed Command Bus components will use consistent hashing to route Commands.
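The routing idea can be sketched with a simple hash ring. This is a toy version, not Axon's Distributed Command Bus implementation: each node appears at several points on the ring, and an aggregate identifier is always routed to the same node for as long as the set of nodes is stable:

```java
import java.util.*;

public class ConsistentHashRouter {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public ConsistentHashRouter(Collection<String> nodes) {
        for (String node : nodes) {
            // place each node at several points on the ring to spread the load
            for (int replica = 0; replica < 10; replica++) {
                ring.put(Objects.hash(node, replica), node);
            }
        }
    }

    public String route(String aggregateId) {
        // walk clockwise from the identifier's hash to the first node on the ring
        Map.Entry<Integer, String> entry = ring.ceilingEntry(aggregateId.hashCode());
        return entry != null ? entry.getValue() : ring.firstEntry().getValue();
    }
}
```

The appeal of this structure is that when a node leaves the ring, only the keys that mapped to that node move elsewhere; all other aggregates stay on their warm caches.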
Configure a sensible time to live / time to idle
By default, caches have a tendency to have a relatively short time to live, a matter of minutes. For a command handling component with consistent routing, a longer time-to-idle and time-to-live is usually better. This prevents the need to re-initialize an aggregate based on its events, just because its cache entry expired. The time-to-live of your cache should match the expected lifetime of your aggregate.
Cache data in-memory
For true optimization, caches should keep data in memory (and preferably on-heap) for the best performance. This prevents the need to (de)serialize aggregates, as is required when storing them to disk or even off-heap.
Snapshotting removes the need to reload and replay large numbers of events. A single snapshot represents the entire aggregate state at a certain moment in time. The process of snapshotting itself, however, also takes processing time. Therefore, there should be a balance in the time spent building snapshots and the time it saves by preventing a number of events being read back in.
There is no default behavior for all types of applications. Some will specify a number of events after which a snapshot will be created, while other applications require a time-based snapshotting interval. Whatever way you choose for your application, make sure snapshotting is in place if you have long-living aggregates.
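The count-based approach can be illustrated as follows. Axon ships a trigger along these lines; the class below is a self-contained stand-in for the idea, not the framework's own implementation:

```java
public class EventCountSnapshotTrigger {
    private final int threshold;
    private long eventsSinceLastSnapshot = 0;

    public EventCountSnapshotTrigger(int threshold) {
        this.threshold = threshold;
    }

    // Returns true when enough events have been applied to warrant a new snapshot
    public boolean onEventApplied() {
        if (++eventsSinceLastSnapshot >= threshold) {
            eventsSinceLastSnapshot = 0;
            return true;
        }
        return false;
    }
}
```

A time-based variant would compare the timestamp of the last snapshot against a configured interval instead of counting events; which one fits depends on how your aggregates accumulate events.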
See Using Snapshot Events for more about snapshotting.
Sometimes a snapshot becomes obsolete (the aggregate structure has changed since it was snapshotted). In those cases it is convenient to filter snapshots. This is where the snapshotFilter comes into play. It is a Java Predicate which decides, based on DomainEventData, whether a snapshot should be taken into account for processing. If none is provided, an implementation which always returns true is used.
The snapshotFilter has to be provided to the JdbcEventStorageEngine:

```java
JdbcEventStorageEngine storageEngine = new JdbcEventStorageEngine(
        serializer,
        upcasterChain,
        persistenceExceptionResolver,
        serializer,
        snapshotFilter, // <--
        batchSize,
        connectionProvider,
        transactionManager,
        dataType,
        eventSchema,
        maxGapOffset,
        lowestGlobalSequence);
```
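A filter could, for instance, reject snapshots written with an outdated payload revision. The sketch below uses a hypothetical stand-in record instead of Axon's DomainEventData, so that it is self-contained; in a real application the predicate would inspect the actual snapshot entry:

```java
import java.util.function.Predicate;

public class SnapshotFilterSketch {
    // Stand-in for the relevant part of a snapshot entry (hypothetical)
    public record SnapshotData(String payloadRevision) {}

    // Accept only snapshots written with the current aggregate revision;
    // obsolete snapshots are ignored and the aggregate is rebuilt from events
    public static Predicate<SnapshotData> currentRevisionOnly(String currentRevision) {
        return snapshot -> currentRevision.equals(snapshot.payloadRevision());
    }
}
```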
XStream is very configurable and extensible. If you just use a plain XStreamSerializer, there are some quick wins ready to pick up. XStream allows you to configure aliases for package names and event class names. Aliases are typically much shorter (especially if you have long package names), making the serialized form of an event smaller. And since we're talking XML, each character removed from the XML is twice the profit (once for the start tag and once for the end tag).
A more advanced topic in XStream is creating custom converters. The default reflection based converters are simple, but do not generate the most compact XML. Always look carefully at the generated XML and see if all the information there is really needed to reconstruct the original instance.
Avoid the use of upcasters when possible. XStream allows aliases to be used for fields, when they have changed name. Imagine revision 0 of an event, that used a field called "clientId". The business prefers the term "customer", so revision 1 was created with a field called "customerId". This can be configured completely in XStream, using field aliases. You need to configure two aliases, in the following order: alias "customerId" to "clientId" and then alias "customerId" to "customerId". This will tell XStream that if it encounters a field called "customerId", it will call the corresponding XML element "customerId" (the second alias overrides the first). But if XStream encounters an XML element called "clientId", it is a known alias and will be resolved to field name "customerId". Check out the XStream documentation for more information.
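The alias configuration described above could look as follows. OrderCreatedEvent and the package name are placeholders; the calls themselves (alias, aliasPackage, aliasField) are standard XStream API:

```java
XStream xstream = new XStream();

// Shorter element names for an event class and a package
xstream.alias("order-created", OrderCreatedEvent.class);
xstream.aliasPackage("app", "com.example.application");

// Field rename from revision 0 ("clientId") to revision 1 ("customerId").
// Order matters: the alias registered last for the field wins when writing,
// while both element names remain readable when deserializing old events.
xstream.aliasField("clientId", OrderCreatedEvent.class, "customerId");
xstream.aliasField("customerId", OrderCreatedEvent.class, "customerId");
```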
For ultimate performance, you're probably better off without reflection-based mechanisms altogether. In that case, it is probably wisest to create a custom serialization mechanism. The DataInputStream and DataOutputStream allow you to easily write the contents of events to an output stream and read them back. The ByteArrayOutputStream and ByteArrayInputStream allow writing to and reading from byte arrays.
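A minimal hand-rolled (de)serializer for a hypothetical event, using exactly these stream classes, could look like this:

```java
import java.io.*;

public class OrderPlacedSerializer {
    // Hypothetical event with two fields, for illustration only
    public record OrderPlaced(String orderId, int quantity) {}

    public static byte[] serialize(OrderPlaced event) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(event.orderId());   // length-prefixed string, no XML overhead
            out.writeInt(event.quantity());  // fixed 4 bytes
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
        return bytes.toByteArray();
    }

    public static OrderPlaced deserialize(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return new OrderPlaced(in.readUTF(), in.readInt());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The price of this approach is that every schema change must be handled by hand, so it is best reserved for events where serialization is a measured bottleneck.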
Especially in distributed systems, Event Messages need to be serialized on multiple occasions. Axon's components are aware of this and have support for SerializationAware messages. If a SerializationAware message is detected, its methods are used to serialize an object, instead of simply passing the payload to a serializer. This allows for performance optimizations.
When you serialize messages yourself, and want to benefit from the SerializationAware optimization, use the MessageSerializer class to serialize the payload and meta data of messages. All optimization logic is implemented in that class. See the JavaDoc of MessageSerializer for more details.
When using Event Sourcing, serialized events can stick around for a long time. Therefore, consider carefully the format to which they are serialized. Consider configuring a separate serializer for events, carefully optimized for the way they are stored. The JSON format generated by Jackson is generally more suitable for the long term than XStream's XML format. For more information on how to configure your event Serializer to something different, check out the documentation about Serializers.
The Axon Framework uses an IdentifierFactory to generate all identifiers, whether they are for Events or Commands. The default IdentifierFactory uses randomly generated java.util.UUID based identifiers. Although they are very safe to use, the process of generating them doesn't excel in performance. IdentifierFactory is an abstract factory that uses Java's ServiceLoader mechanism (available since Java 6) to find the implementation to use. This means you can create your own implementation of the factory and put the name of the implementation in a file called "/META-INF/services/org.axonframework.common.IdentifierFactory". Java's ServiceLoader mechanism will detect that file and attempt to create an instance of the class named inside.
There are a few requirements for the IdentifierFactory implementation. The implementation must
have its fully qualified class name as the contents of the /META-INF/services/org.axonframework.common.IdentifierFactory file on the classpath,
have an accessible zero-argument constructor,
extend the IdentifierFactory class,
be accessible by the context classloader of the application or by the classloader that loaded the IdentifierFactory class, and must
be thread-safe.
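A cheaper factory could combine a per-process prefix with a counter. In a real application the class below would extend org.axonframework.common.IdentifierFactory and override its identifier-generating method; it is shown standalone here so the sketch is self-contained, and the identifier scheme is an assumption, not a framework convention:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class FastIdentifierFactory {
    // Per-process random prefix, so identifiers from different nodes are unlikely to collide
    private static final String NODE_PREFIX = Long.toHexString(ThreadLocalRandom.current().nextLong());
    // Thread-safe counter: much cheaper than generating a random UUID per identifier
    private static final AtomicLong SEQUENCE = new AtomicLong();

    public String generateIdentifier() {
        return NODE_PREFIX + "-" + Long.toHexString(SEQUENCE.incrementAndGet());
    }
}
```

Remember that the trade-off is real: the default UUIDs are collision-resistant by construction, while a scheme like this relies on the uniqueness of the prefix across nodes.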