Wyn Enterprise Administration Guide

Schedule Service - Load Balancing

With release V5.1, Wyn Enterprise introduced an Akka-based Schedule Service to solve scheduling problems in multi-server, multi-worker deployments. However, the Schedule Service could only be deployed as a single instance, which created a single point of failure and concentrated all scheduling load on one service. To address these issues, Wyn Enterprise supports deploying multiple instances of the Schedule Service as of release V6.1.

Note: Multiple-instance deployment of the Schedule Service is possible in independent process mode only.

To synchronize state data across instances, Wyn Enterprise uses the Redis datastore as a distributed memory service. Redis reduces the complexity and increases the stability of the system.


Each node of the Schedule Service dynamically schedules tasks based on the global task states stored in Redis. When a node goes offline, the task state associated with that node is cleared automatically. The diagram below illustrates the topology of the Schedule Service in Wyn Enterprise.

[Diagram: Schedule Service Topology in Wyn Enterprise]

Redis Storage Structure

Using the Redis datastore, Wyn Enterprise shards the state data: the status data of each Schedule Service instance is saved in a separate data structure. To query data atomically and to perform complex operations, Wyn uses Lua scripts with Redis.

[Diagram: Redis Storage]
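The kind of atomic operation that the Lua scripts provide can be illustrated with a small sketch. The key layout, field names, and claim logic below are illustrative assumptions, not Wyn's actual schema; in a real deployment the compare-and-claim sequence runs as a single Lua script (via Redis EVAL), so no other node can interleave between the read and the write.

```python
# Sketch of atomic task claiming against sharded per-node state.
# Hypothetical key layout (an assumption, not Wyn's real schema):
#   schedule:{node_id}:tasks  -> set of task ids owned by that node
#   schedule:task:{task_id}   -> owning node id, absent if unclaimed
# In Redis, this whole function would be one Lua script run via EVAL,
# which makes the read-check-write sequence atomic.

store = {}  # stands in for the Redis keyspace

def claim_task(node_id: str, task_id: str) -> bool:
    """Claim a task for a node; fail if another node already owns it."""
    owner_key = f"schedule:task:{task_id}"
    if owner_key in store:          # already claimed by some node
        return False
    store[owner_key] = node_id
    store.setdefault(f"schedule:{node_id}:tasks", set()).add(task_id)
    return True

def clear_node(node_id: str) -> None:
    """Drop all state for an offline node (what cleanup/expiry achieves)."""
    for task_id in store.pop(f"schedule:{node_id}:tasks", set()):
        store.pop(f"schedule:task:{task_id}", None)

assert claim_task("node-a", "task-1") is True
assert claim_task("node-b", "task-1") is False   # double claim prevented
clear_node("node-a")
assert claim_task("node-b", "task-1") is True    # freed after cleanup
```

Because the check and the write execute as one unit, two Schedule Service nodes can never both believe they own the same task.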

Data Cleaning

Data stored in Redis is given a short expiration time and is refreshed regularly by a dedicated service, WatchDog. To delete the data manually, you can also call the Redis delete command (DEL). A few situations in which data cleaning occurs are described below:

  1. A Schedule Service node suddenly goes offline and the process exits. In this case, the WatchDog service has no time to perform any action, and the data associated with that node remains in Redis until its maximum expiration time elapses.
  2. A Schedule Service node restarts its Akka system. In this case, the WatchDog service observes the Akka system restart event and clears all of the node's data stored in Redis.
  3. A Schedule Service node shuts down for some reason and the process exits, but the service is able to receive the exit event. In this case, the WatchDog service observes the shutdown and clears all of the node's data stored in Redis.
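The expiration-plus-refresh pattern behind WatchDog can be sketched as follows. This is a simplified simulation under assumed names (the TTL value, key names, and in-memory store are illustrative, not Wyn's implementation): a live node's watchdog keeps extending the TTL of its keys, while a crashed node's keys simply expire on their own, covering situation 1 above.

```python
import time

TTL_SECONDS = 0.2  # illustrative short TTL; Wyn's actual value differs

class FakeRedis:
    """Minimal in-memory stand-in for Redis SET/EXPIRE/GET with TTLs."""
    def __init__(self):
        self._data = {}   # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._data[key] = (value, time.monotonic() + ttl)

    def refresh(self, key, ttl):
        # What a WatchDog-style service does on each heartbeat.
        if key in self._data:
            value, _ = self._data[key]
            self._data[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[1] < time.monotonic():
            self._data.pop(key, None)   # lazily expire
            return None
        return entry[0]

redis = FakeRedis()
redis.set("schedule:node-a:state", "running", TTL_SECONDS)
redis.set("schedule:node-b:state", "running", TTL_SECONDS)

# node-a's watchdog keeps refreshing; node-b "crashes" and stops.
for _ in range(3):
    time.sleep(TTL_SECONDS / 2)
    redis.refresh("schedule:node-a:state", TTL_SECONDS)

assert redis.get("schedule:node-a:state") == "running"  # kept alive
assert redis.get("schedule:node-b:state") is None       # expired
```

The short TTL guarantees that stale state disappears even in the worst case where no cleanup code ever runs.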

Abstraction Packages

The Gces.Scheduler.Extensions.Abstractions package has been introduced to abstract:

  1. The structure and logic of the state data store, and
  2. The configuration options of the Quartz service.

The assembly of the Gces.Scheduler.Extensions.Abstractions package is described in the table below:

Name | Implementation | Description
IWorkerTaskCounterFactory | MemoryWorkerTaskCounterFactory (Default) | Factory class responsible for creating task counters.
IWorkerTaskCounter | MemoryWorkerTaskCounter (Default) | Task counter; its main responsibility is to count how many tasks each worker is assigned.
IMetricsManagerFactory | MemoryMetricsManagerFactory (Default) | Factory class responsible for creating the task metrics manager.
IMetricsManager | MemoryMetricsManager (Default) | Task metrics manager; its main responsibility is to collect the status of tasks and provide queries.
IQuartzManager | SingleQuartzManager (Default) | Provides Quartz service configuration management.
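As a rough illustration of the IWorkerTaskCounter role described above (the class and method names below are assumptions based on the table's description, not the actual Gces API), an in-memory counter tracks how many tasks each worker holds so the scheduler can pick the least-loaded worker:

```python
from collections import Counter

class MemoryWorkerTaskCounterSketch:
    """Illustrative stand-in for MemoryWorkerTaskCounter: counts the
    tasks assigned to each worker (method names are assumed)."""
    def __init__(self):
        self._counts = Counter()

    def task_assigned(self, worker_id: str) -> None:
        self._counts[worker_id] += 1

    def task_finished(self, worker_id: str) -> None:
        if self._counts[worker_id] > 0:
            self._counts[worker_id] -= 1

    def least_loaded(self, workers) -> str:
        # The scheduler can use the counts to balance load.
        return min(workers, key=lambda w: self._counts[w])

counter = MemoryWorkerTaskCounterSketch()
for worker in ["w1", "w1", "w2"]:
    counter.task_assigned(worker)
assert counter.least_loaded(["w1", "w2"]) == "w2"  # w1 holds 2, w2 holds 1
counter.task_finished("w1")
counter.task_finished("w1")
assert counter.least_loaded(["w1", "w2"]) == "w1"  # w1 drained to 0
```

The distributed variant (RedisWorkerTaskCounter) would keep these counts in Redis instead of process memory, so all Schedule Service nodes see the same numbers.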

For multi-instance deployment, the Gces.Scheduler.Extensions.Distribute package is provided as an implementation of the Gces.Scheduler.Extensions.Abstractions package for distributed environments.

The assembly of the Gces.Scheduler.Extensions.Distribute package is described in the table below:

Name | Implementation | Description
RedisDb | - | Provides a unified entry for Redis operations.
QuartzStorage | - | Provides a unified entry for the four database operations supported by the Quartz service.
IWorkerTaskCounterFactory | RedisWorkerTaskCounterFactory | Distributed implementation of IWorkerTaskCounterFactory.
IWorkerTaskCounter | RedisWorkerTaskCounter | Distributed implementation of IWorkerTaskCounter.
IMetricsManagerFactory | RedisMetricsManagerFactory | Distributed implementation of IMetricsManagerFactory.
IMetricsManager | RedisMetricsManager | Distributed implementation of IMetricsManager.
IQuartzManager | DistributedQuartzManager | Provides distributed Quartz service configuration management.
WatchDog | - | Responsible for refreshing expiration times and for deleting and cleaning Redis data.

Configuration Changes

Scheduler Service

"ScheduleConfig": {
"Mode": "OutProcess, Multiple",
"LogLevel": "DEBUG",
"HostName": "localhost",
"Port": 42003,
<span style="color: green;">"SeedNodes": [ "akka.tcp://ScheduleCluster@localhost:42003", "akka.tcp://ScheduleCluster@localhost:42004" ],
"HeartbeatInterval": 10,
"AcceptableHeartbeatPause": 30,
"threshold": 10.0,
"MultipleConfig": {
 "DistributedQuartz": {
 "StorageType": "SqlServer",
 "ConnectionString": "Server=.;Database=quartz;User Id=sa;Password=xA123456;",
 "ClusterCheckinInterval": 2000
 "DistributedMemoryCache": {
 "ConfigString": "Server=localhost"


"ScheduleConfig": {
"Mode": "Proxy",
"LogLevel": "DEBUG",
"LocalHost": "localhost",
"LocalPort": 42009,
"ServerHost": "localhost",
 "ServerPort": 42003,
"SeedNodes": [ "akka.tcp://ScheduleCluster@localhost:42003", "akka.tcp://ScheduleCluster@localhost:42004" ],
"HeartbeatInterval": 10,
"AcceptableHeartbeatPause": 30,
"threshold": 10.0


"WorkerConfig": {
"LogLevel": "DEBUG",
"LocalPort": 0,
"ServerHost": "localhost",
 "ServerPort": 42003,
 "SeedNodes": [ "akka.tcp://ScheduleCluster@localhost:42003", "akka.tcp://ScheduleCluster@localhost:42004" ],
"HeartbeatInterval": 10,
"AcceptableHeartbeatPause": 30,
"threshold": 12.0,
"Identities": [ "cot" ]

Note: A few limitations of the above solution:
1. The Schedule Service can run multiple instances only when deployed as an independent process.
2. When Schedule Service instances are deployed on separate machines, Wyn does not provide a clock synchronization service; you need to keep the machine clocks synchronized yourself.