2.11.0 - March 2025
Major changes
Backend Horizontal Scaling
The Olympe Platform now supports horizontal scaling of backend processes. You can run as many instances of the same service application as needed: remote action calls and interactions with data sources are automatically distributed among the instances, which improves the performance and availability of the services. This is particularly useful when you have a high volume of requests or need to handle a large number of concurrent users.
Up to 2.10, when a second instance of a service application was started, both processes received the same request. This is no longer the case: requests are now always distributed among the instances.
If your process runs on Olympe Cloud or in an equivalent Kubernetes environment, increase the number of replicas of the service application to scale it horizontally. For example, to run 3 replicas of the service application, set the following property:
olympe:
  serviceApps:
    <backend-name>:
      replicas: 3
This will create 3 instances of the service application, and the requests will be distributed among them.
Each additional replica of a backend counts as an extra backend in the environment. This means that the environment's CPU and memory quotas are spread among all backend instances.
Auto-scaling
You can also set up the backend to auto-scale according to the load (based on both CPU and memory usage):
olympe:
  serviceApps:
    <backend-name>:
      autoscaling:
        enabled: true
This default configuration will create a minimum of 1 instance and a maximum of 3 instances of the service application. The number of instances will be adjusted automatically based on the load with the following criteria:
- If the CPU usage is above 75% for 1 minute, a new replica will be created.
- If the memory usage is above 85% for 1 minute, a new replica will be created.
- When CPU and memory usage are below those values and stable for 5 minutes, the auto-scaler will remove extra replicas until it reaches the minimum number of replicas.
That configuration can be changed by setting the minReplicas and maxReplicas properties:
olympe:
  serviceApps:
    <backend-name>:
      autoscaling:
        enabled: true
        minReplicas: 2
        maxReplicas: 4
The CPU and memory usage thresholds can also be changed by setting the cpuAverageUtilization and memoryAverageUtilization properties:
olympe:
  serviceApps:
    <backend-name>:
      autoscaling:
        cpuAverageUtilization: 50 # Create a new replica if CPU usage stays above 50% for 1 minute
        memoryAverageUtilization: 75 # Create a new replica if memory usage stays above 75% for 1 minute
Users can set up custom rules to define how the auto-scaler should behave according to the load. The configuration follows the Kubernetes Horizontal Pod Autoscaler (HPA) format:
- For threshold customization (when a scale up/down should be triggered), the configuration must be set under the autoscaling.metrics field.
- For behavior customization (how the scale up/down process should be operated), the configuration must be set under the autoscaling.behavior field.
Example:
olympe:
  serviceApps:
    <backend-name>:
      autoscaling:
        enabled: true
        metrics:
          - type: Pods
            pods:
              metric:
                name: requests_per_second
              target:
                type: AverageValue
                averageValue: 1000
        behavior:
          scaleDown:
            stabilizationWindowSeconds: 300
For more information, please refer to the Kubernetes Autoscale documentation.
When activating the auto-scaler, we recommend setting your environment size to custom so that you keep full control over the CPU and memory limits of your backends and ensure the other existing backends are not affected by the auto-scaling in terms of available resources.
Code API changes for horizontal scaling of backends
Since the Olympe Platform now supports horizontal scaling of backend processes, the Service class has been updated to support multiple instances of the same service application. This means that when you create a new service, it may be running on several instances of the same service application.
If your service handles subscriptions, you need to send data to the backend process that handled the initial subscription request. The Service API has therefore been improved to handle this case:
We introduced the notion of Publisher, which publishes messages directly to the instance of a service application that initially handled a subscription request. Once you get a publisher object, use it to keep sending messages to that specific backend process.
Conversely, the original static method Service.publish() publishes a message to a service without caring about which instance handles it: the message is distributed to the first available instance of the service application.
Aligned with that change, the static method Service.observe() now returns both an Observable and a Publisher object. As soon as you subscribe to the observable, the subscription is initiated: one available backend process takes that request into account. The publisher can then be used to send future messages to that same backend, in the context of that specific subscription.
import {Service} from 'olympe';

const {observable, publisher} = Service.observe($, 'myService');

// Listen to data coming from the subscription to `myService`
observable.subscribe((data) => {
    console.log('Received data from subscription to "myService":', data);
});

// Publish a message to the backend process handling that subscription
publisher.publish('stateful message').catch((e) => {
    console.error('Error publishing message:', e);
});

// Publish a message to the first available backend process handling that service
Service.publish('myService', 'stateless message').catch((e) => {
    console.error('Error publishing message:', e);
});
When setting up a subscription service with multiple replicas to handle a higher load, you might need to synchronise the memory of these backend processes if the service is stateful. We therefore introduced the multicast() method on an instance of an opened Service:
import {Service, ServiceRequestType} from 'olympe';

const myService = new Service('myService', $);
myService.listen().subscribe((request) => {
    // Handle data published by a subscriber (e.g. using a publisher)
    if (request.getRequestType() === ServiceRequestType.PUBLISH) {
        // Share the message with the other replicas running this service
        myService.multicast(request.body()).catch((e) => {
            console.error('Error sharing message:', e);
        });
    }
});
Finally, if you need to send multiple messages in a row to the same backend process, outside the context of a subscription, you can directly invoke the static method Service.getPublisher(). This is typically used when you need to send multiple messages quickly to a service and want to ensure they are processed in the right order.
import {Service} from 'olympe';

const values = ['message1', 'message2', 'message3'];
const publisher = Service.getPublisher('myService');

// Await each publication so the messages are processed in order
for (const message of values) {
    try {
        await publisher.publish(message);
    } catch (e) {
        console.error('Error publishing message:', e);
    }
}
Breaking changes
- As mentioned above, the method Service.observe() now returns both an Observable and a Publisher object.
- As the Olympe runtime uses an AMQP client to connect to the bus shared by both web and node builds, webpack needs the appropriate build configuration: webpack.config.js should now include the fallback options to avoid trying to resolve node modules required by the AMQP client in the browser. This must be added to the resolve section of your webpack browser configurations:
  resolve: {
    alias: { ... },
    fallback: {
      AMQPClient: false,
      buffer: false,
      net: false,
      tls: false,
    }
  }
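For reference, here is a minimal sketch of where that section sits in a browser webpack configuration. Only the resolve.fallback entries are prescribed by this release; everything else is illustrative:
// webpack.config.js (browser build) – minimal sketch
module.exports = {
    // ...your existing entry, output, loaders and plugins...
    resolve: {
        alias: {
            // ...your existing aliases...
        },
        fallback: {
            // Prevent webpack from trying to bundle node-only modules
            // pulled in by the AMQP client when targeting the browser.
            AMQPClient: false,
            buffer: false,
            net: false,
            tls: false,
        },
    },
};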
Bug fixes and minor improvements
- Remote action execution is now included in the graceful shutdown: with the introduction of automatic horizontal scaling of backends, we improved the graceful shutdown of backend processes to ensure that all pending remote actions are executed before the process is stopped. This ensures that no actions are lost during shutdown.
- There is no longer a limit on the number of results displayed when using the global search bar in DRAW.
- We increased the number of rows displayed in the data set editor: the options are now 50, 100, 200 and 500 rows. The default value is 200 rows.
- New context debug methods are available on the context objects of each brick (e.g. $ in the update method parameters for coded bricks); see the sketch after this list:
  - $.printUserDefinedData() prints the values defined by the user on this context (e.g. using the Set in application context brick), including parents with the global flag.
  - $.printData() prints all the values contained in this context, including parents with the global flag.
  - $.printHierarchy() prints the whole context hierarchy starting from the current context. The returned object contains the name alongside the parent and children references, which can be dug into.
  All the logs are written to the contextDebug logger, whose visibility level can be adjusted using oConfig and the standard loglevel number (key: loglevel.contextDebug). The methods print at the LOG: 2 level.
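As an illustration, a coded brick can call these methods from its update method. The sketch below assumes the usual coded-brick structure; the brick class, its tag and its single input/output are hypothetical, and only the three $.print* calls come from this release:
import { Brick, registerBrick } from 'olympe';

// Hypothetical pass-through brick used only to illustrate the new debug methods
class ContextDebugExample extends Brick {
    update($, [input], [setOutput]) {
        $.printUserDefinedData(); // values explicitly set by the user on this context
        $.printData();            // all values held by this context (incl. parents with global flag)
        $.printHierarchy();       // the whole context hierarchy from the current context
        setOutput(input);
    }
}

registerBrick('HypotheticalBrickTag', ContextDebugExample);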
Bricks
- New brick Get UUID in the Core library: provides the ability to generate a UUID (Universally Unique Identifier) in your Olympe application.
Use v2.11.0
- Olympe DRAW v2.11.0 / CODE v9.11.0 / Extensions v2.11.0 / Orchestrator v7.5.2 / Toolkit v1.2.2
To check whether your Olympe environment is on v2.11.0, click on the top-right logo in DRAW; you'll see the current version you are using.
CODE update
If you are using coded bricks, please update your package.json file with the following dependencies:
// Dev dependencies:
"@olympeio/toolkit": "1.2.2",
"@olympeio/draw": "2.11.0",
// Dependencies
"@olympeio/runtime-web": "9.11.0",
"@olympeio/runtime-node": "9.11.0",
// Olympe Extensions
"@olympeio/core": "2.11.0"
"@olympeio-extensions/...": "2.11.0"