
How do I configure a load balancer in front of MATLAB Production Server?

11 views (last 30 days)
How do I configure a load balancer (for example, nginx) to distribute requests evenly to multiple MATLAB Production Server instances?

Accepted Answer

Nicole Bonfatti
Nicole Bonfatti on 30 Jun 2022
Edited: Nicole Bonfatti on 14 Jul 2022
For most uses, this is very straightforward. Put the MATLAB Production Server instance URLs into an upstream block in your nginx config, and then reference that upstream block as the proxy_pass argument. Nginx will then distribute requests in round-robin fashion. Similar capabilities exist in other load balancers as well.
upstream prodserverpool {
    server prodserver:9910;
    server prodserver2:9910;
}

server {
    listen 9910;
    listen [::]:9910;
    server_name myserver;

    location / {
        proxy_pass http://prodserverpool;
    }
}
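For reference, here is a minimal client sketch (Python, using the requests package) that calls a deployed function synchronously through the load-balanced endpoint above. The archive name myarchive and function name myfunction are placeholders for your own deployed CTF archive, and the JSON request/response shapes follow the MATLAB Production Server RESTful API; verify the details against the documentation for your release.
import requests

# The nginx front end from the config above, not an individual server instance.
LB_URL = "http://myserver:9910"

# Standard MPS JSON payload: number of outputs requested and the input arguments.
payload = {"nargout": 1, "rhs": [2, 3]}

# POST to /<archiveName>/<functionName>; nginx picks a backend round-robin.
resp = requests.post(f"{LB_URL}/myarchive/myfunction", json=payload)
resp.raise_for_status()

# The JSON response carries the function outputs in the "lhs" array.
print(resp.json()["lhs"])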
However, if you want to support asynchronous requests, there's a bit more to consider. You will need to ensure the backend that each client communicates with stays the same across requests. This is called "session stickiness", and there are several ways to implement this in nginx.
  1. Using the IP hash. This is easy to implement, but if many of your clients appear to come from the same IP address (for example, behind a NAT), it could lead to one backend instance being overloaded. To implement this, just add ip_hash; to your upstream block.
  2. Using a session cookie (requires the commercial version of nginx). This requires your client to store and include a session cookie.
  3. Using the generic hash method with a URL parameter. Since asynchronous requests already include the client URL parameter, I recommend using that.
upstream prodserverpool {
    # 1. Using the IP hash:
    #ip_hash;

    # 2. Using a session cookie:
    #sticky cookie srv_id expires=1h domain=.example.com path=/;

    # 3. Using the generic hash with the client URL parameter:
    hash $arg_client;

    server prodserver:9910;
    server prodserver2:9910;
}

server {
    listen 9910;
    listen [::]:9910;
    server_name myserver;

    location / {
        proxy_pass http://prodserverpool;
    }
}
Other load balancers may implement this slightly differently, but the concept is the same.
Next, for either the session-cookie or URL-parameter hash method, you'll need to make sure your client code supports it. For the cookie method, that means retaining and resending the session cookie returned from the first request; for the hash method, it means using a consistent client URL parameter on every request to the load balancer, as in the sketch below.
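As a hedged illustration of the URL-parameter approach, the following Python sketch submits an asynchronous request with a fixed client value and passes that same parameter on every follow-up call, so the hash $arg_client directive in the upstream block keeps the whole exchange on one backend. The archive and function names are placeholders, and the exact response fields and request states are based on the MATLAB Production Server asynchronous RESTful API; check the documentation for your release before relying on them.
import time
import requests

LB_URL = "http://myserver:9910"   # the nginx front end
CLIENT_ID = "client-42"           # keep this value constant across requests

# Submit the request asynchronously. The client query parameter is what nginx
# hashes on, so it must be present and identical on every call.
submit = requests.post(
    f"{LB_URL}/myarchive/myfunction",
    params={"mode": "async", "client": CLIENT_ID},
    json={"nargout": 1, "rhs": [2, 3]},
)
submit.raise_for_status()

# The response body describes the queued request; "self" (assumed field name)
# is the URI of that request relative to the server root.
request_uri = submit.json()["self"]

# Poll through the same load-balanced endpoint. The client parameter is
# repeated here only so the load balancer hashes it consistently.
while True:
    info = requests.get(f"{LB_URL}{request_uri}/info",
                        params={"client": CLIENT_ID}).json()
    if info["state"] in ("READY", "ERROR", "CANCELLED"):
        break
    time.sleep(1)

# Fetch the result once the request has finished processing.
result = requests.get(f"{LB_URL}{request_uri}/result",
                      params={"client": CLIENT_ID})
print(result.json())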
For more information on using nginx as a load balancer, see the nginx load balancer documentation.
If you are using Amazon AWS Elastic Load Balancer, you need to use a cookie to set affinity. See the ELB sticky sessions documentation.
If you are using Azure Load Balancer, it supports source IP affinity. See the Azure Load balancer distribution mode documentation.
If you are using our Kubernetes Reference Architecture, it uses nginx for ingress, but (as of the time this was written) only supports cookie-based affinity. See the nginx ingress sticky sessions documentation.

More Answers (0)

Version

R2022a
