Event Processing
Event processing in Axon Framework is centered around PooledStreamingEventProcessor.
That processor is the main tuning surface when you want to increase throughput, control batching, or adapt to multiple application instances.
Pooled streaming processors
Pooled streaming event processors process events asynchronously and claim segments from a token store. They are a good fit when you want higher throughput than subscribing processors and when event handling can be parallelized across segments.
The main tuning knobs are:
- initialSegmentCount(int) to decide how many segments are created when the token store is empty
- batchSize(int) to decide how many events are handled in one transaction
- tokenClaimInterval(long) to control how often the coordinator retries token claims
- claimExtensionThreshold(long) to decide when a running work package refreshes its claim
- maxClaimedSegments(int) or maxSegmentProvider(…) to limit how much work one instance can own
- enableCoordinatorClaimExtension() when work packages spend a long time handling batches
In practice, most tuning starts with batchSize, initialSegmentCount, and the executor sizing.
The defaults are conservative and work well for small systems, but high-throughput installations usually need larger batches and more careful claim management.
Configuration
Use the EventProcessingConfigurer to register pooled streaming processors and apply shared defaults:
public void configureEventProcessing(MessagingConfigurer configurer) {
    configurer.eventProcessing(eventProcessing -> eventProcessing
            .pooledStreaming(pooled -> pooled
                    .defaults((configuration, processorConfig) -> processorConfig
                            .initialSegmentCount(8)
                            .batchSize(50)
                            .tokenClaimInterval(5000)
                            .claimExtensionThreshold(5000))
                    .defaultProcessor("order-processor", components ->
                            components.declarative("OrderUpdated", cfg -> orderProjection))));
}
Spring Boot users typically configure pooled streaming processors by declaring an EventProcessorDefinition bean.
For example:
@Configuration
public class NotificationAutomationConfiguration {

    @Bean
    EventProcessorDefinition notificationProcessor() {
        return EventProcessorDefinition
                .pooledStreaming("notification-processor")
                .assigningHandlers(descriptor -> descriptor.beanName().endsWith("Notification")
                        && descriptor.beanType().getPackageName().endsWith("automation"))
                .customized(config -> config
                        .initialSegmentCount(4)
                        .batchSize(100));
    }
}
This Spring Boot style keeps the processor definition explicit while still allowing the framework to bind the rest of the configuration from properties and defaults.
Sequencing policy
PooledStreamingEventProcessor does not serialize all events globally.
It groups work according to the configured SequencingPolicy.
When two events resolve to the same sequence identifier, they are handled sequentially relative to each other.
When they resolve to different identifiers, they can still be processed in parallel by different work packages.
By default, Axon configures a HierarchicalSequencingPolicy.
It first applies SequentialPerAggregatePolicy and falls back to SequentialPolicy when no aggregate identifier is available in the ProcessingContext.
In that fallback case, events are processed sequentially within a single segment, so parallelism is effectively disabled.
For the full sequencing behavior and configuration options, see Sequential processing.
That means your sequencing policy directly affects throughput:
- narrow sequence identifiers preserve concurrency
- broad sequence identifiers reduce parallelism
- Optional.empty() leaves the event unsequenced and available for concurrent processing
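To make the effect of the sequence identifier concrete, here is a framework-free sketch of how a sequencing policy partitions events. It is illustrative only and does not use Axon's actual SequencingPolicy API: events with the same identifier land in the same bucket and must be handled in order, while distinct buckets can be handled in parallel, and unsequenced events never block each other.

```java
import java.util.*;

// Framework-free sketch: how a sequencing policy gates parallelism.
// The event shape (a plain Map) and the policy below are illustrative.
public class SequencingSketch {

    // Stand-in sequencing policy: returns the sequence identifier for an
    // event, or Optional.empty() to leave the event unsequenced.
    static Optional<Object> sequenceIdentifier(Map<String, String> event) {
        return Optional.ofNullable(event.get("aggregateId"));
    }

    // Groups events by sequence identifier. Events in the same bucket must be
    // handled in order; distinct buckets may be handled concurrently.
    static Map<Object, List<Map<String, String>>> partition(List<Map<String, String>> events) {
        Map<Object, List<Map<String, String>>> buckets = new LinkedHashMap<>();
        for (Map<String, String> event : events) {
            // Unsequenced events each get a unique key, so they never block each other.
            Object key = sequenceIdentifier(event).orElseGet(Object::new);
            buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(event);
        }
        return buckets;
    }
}
```

The narrower the identifier (here, one per aggregate), the more buckets exist and the more work can proceed concurrently; a constant identifier would collapse everything into a single ordered bucket.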
If you migrate toward Dynamic Consistency Boundaries, revisit any event sequencing policy that still assumes aggregate identifiers are the concurrency boundary. The right policy is the one that matches the actual consistency boundary you want to preserve.
For a deeper discussion of sequencing policy options and defaults, see Sequencing Policy Migration.
Token store
Pooled streaming processors rely on a durable TokenStore to remember progress and coordinate segment ownership across instances.
If no token store is configured, Axon Framework falls back to an InMemoryTokenStore, which is fine for tests and demos but not for production.
The practical rule is simple:
- use a durable token store in production
- keep the token store close to the data that your event handlers update
- size connection pooling so that token claims and event handling do not contend for connections
For the full token store configuration options, see Token store.
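The coordination contract a token store provides can be sketched without the framework: each segment has at most one owner, and a claim must be extended before it times out or another node may take it over. The class below models only that contract (claim, extend, timeout); it is not Axon's TokenStore API, and real implementations are durable rather than in-memory.

```java
import java.util.*;

// Framework-free sketch of token-claim semantics: one owner per segment,
// and a claim must be refreshed before it expires or another node can
// steal it. Times are passed in explicitly to keep the sketch testable.
public class ClaimSketch {
    private final Map<Integer, String> owners = new HashMap<>();
    private final Map<Integer, Long> deadlines = new HashMap<>();
    private final long claimTimeoutMillis;

    ClaimSketch(long claimTimeoutMillis) {
        this.claimTimeoutMillis = claimTimeoutMillis;
    }

    // Claiming succeeds if the segment is free, already ours, or its previous claim expired.
    synchronized boolean claim(int segment, String node, long now) {
        String owner = owners.get(segment);
        boolean expired = owner != null && deadlines.get(segment) < now;
        if (owner == null || owner.equals(node) || expired) {
            owners.put(segment, node);
            deadlines.put(segment, now + claimTimeoutMillis);
            return true;
        }
        return false;
    }

    // Extending refreshes the deadline; only the current owner of a live claim may extend.
    // This is the behavior that claimExtensionThreshold governs in a real processor.
    synchronized boolean extend(int segment, String node, long now) {
        if (node.equals(owners.get(segment)) && deadlines.get(segment) >= now) {
            deadlines.put(segment, now + claimTimeoutMillis);
            return true;
        }
        return false;
    }
}
```

This also shows why claim extension matters: a work package that spends longer than the claim timeout on one batch will lose its segment unless something refreshes the claim in the meantime.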
DCB impact
Dynamic Consistency Boundaries change the shape of the event stream that a processor sees. In an aggregate-based store, sequencing and token management are still centered on aggregate-oriented ordering. With DCB support, the concurrency boundary is defined by tags and criteria instead, so event handler sequencing and processor parallelism need to be chosen with that model in mind.
In other words, the same tuning knobs still exist, but the right values depend on the storage model:
- aggregate-based stores usually follow the classic per-aggregate ordering assumptions
- DCB-based stores require an explicit policy that matches the business boundary you want to preserve
- if you use DCB, revisit both sequencing and segment sizing instead of assuming aggregate-style defaults will carry over
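As a framework-free illustration of the DCB mindset, the sketch below keys the sequence identifier on a tag that marks the consistency boundary rather than on an aggregate identifier. The event shape and tag names are hypothetical, and this is not Axon's tagging API; it only shows the shape of the decision.

```java
import java.util.*;

// Illustrative only: with DCB, events carry tags rather than one owning
// aggregate. A sequencing choice can key on whichever tag marks the
// consistency boundary to preserve. Tag names here are hypothetical.
public class TagSequencing {

    // Sequence on the "course" tag: all events touching one course are
    // ordered relative to each other, while events for different courses
    // (or events without the tag) remain free to run in parallel.
    static Optional<Object> sequenceIdentifier(Map<String, String> tags) {
        return Optional.ofNullable(tags.get("course"));
    }
}
```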
Operating guidance
- Increase batchSize when event handling is fast and transactional rollback is acceptable.
- Keep batchSize small when the handler performs slow I/O or when rollback would be expensive.
- Increase initialSegmentCount when you expect multiple application instances to share the load.
- Use a durable TokenStore; the in-memory store is only appropriate for tests and simple demos.
- Set the executor sizes so coordinator and worker threads do not starve each other.
- Enable coordinator claim extension when workers process large batches and token claims need to be refreshed more aggressively. See token claim configuration.
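On executor sizing specifically, the registration method names vary by framework version, so the sketch below only captures the sizing reasoning: a single scheduled thread is typically enough for the coordinator, while worker threads beyond the number of segments this instance can claim would sit idle. The `workerThreads` helper is a hypothetical name introduced here for illustration.

```java
import java.util.concurrent.*;

// Sketch of executor sizing for a pooled streaming processor (illustrative;
// how these executors are handed to the processor depends on the version).
public class ExecutorSizing {

    // Workers beyond the claimable segment count sit idle, and more threads
    // than cores just adds contention, so take the minimum of both.
    static int workerThreads(int maxClaimedSegments, int availableCores) {
        return Math.min(maxClaimedSegments, availableCores);
    }

    public static void main(String[] args) {
        // One thread suffices for coordination work (claiming and extending tokens).
        ScheduledExecutorService coordinator = Executors.newScheduledThreadPool(1);
        ScheduledExecutorService workers = Executors.newScheduledThreadPool(
                workerThreads(8, Runtime.getRuntime().availableProcessors()));
        coordinator.shutdown();
        workers.shutdown();
    }
}
```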