The information herein is intended solely to aid understanding of slave server operation and to support assessing its feasibility and suitability for a specific situation. Because slave server operation, performance, and behavior are highly dependent on the runtime environment, user behavior, network latency, etc., the information provided here should be treated as approximate and is in no sense a guarantee of slave server operation. Each slave server installation and setup is unique, and it is the responsibility of the using organization to verify that the setup operates properly and in accordance with that organization’s needs.
As a means of addressing the inherent network latency experienced by SystemWeaver users located geographically far from the master server, Systemite offers a slave server solution. This document describes the slave server concept and its operation, along with performance data, to assist you in estimating what to expect from your solution.
The Concept of the SW Slave Server
The slave server can be thought of as a super client account that passes requests to a running master server via one “super” fiber connection. Users located geographically far from the master connect to a local slave server, yet still access the real-time data stored by the master server.
A slave server is a cached data source that is configured to fetch data from the master server (either at start-up or on demand by user) and send change events to the master server so that the slave server data is always mirrored by the master server. To allow this to happen, there are one or more channels for Write operations and one channel for Read operations. From a user perspective, working through a slave server is just like working through a standard SystemWeaver installation; users can work simultaneously on the same data and see changes to data in real time.
A slave server solution can consist of one slave server or many depending on the locations of users. Regardless of the number of slave servers connecting to the master server, they will all behave similarly.
Figure 1 below shows an example overview of the SystemWeaver Slave-Master setup, with two slave servers located in different geographic locations and a master server. On the right side, the master server (SWServer) is connected to the database. It should be located closest to the largest number of users (swClients); the location of the data’s owner may also come into play when determining placement. Users in Geographic Areas X and Y connect to their respective slave server (SWSlaveServer) to view (read) and add/edit (write) information in the database. Read operations incur no network latency, since the user is accessing locally cached data. Write operations are sent to the database, and their latency depends on a number of factors discussed below.
Figure 1 – SystemWeaver Slave-Master Overview
SW Slave Server Operation
When setting up a slave server, settings are configured in the swSlaveServer.ini. Among other things, the data-loading option and number of write channels need to be defined. Each slave server has its own swSlaveServer.ini file.
Load All Data Configuration
The data-loading configuration (LoadAllData=true/false) defines when data is loaded to the slave server. There are two options to choose from:
- Loading data on demand: Upon startup, the slave server quickly connects to the master server and then data is loaded into the slave server memory on demand by the users. When a piece of data has been loaded to the slave server, it can be accessed locally from that point forward.
- Load all data: Upon startup, all data is loaded from the master server into the slave server memory so that all data from the master server is locally available to the slave server users from the start.
Systemite recommends loading all data upon startup. Although this entails a longer startup time than the on-demand option, it helps to reduce the latency factor for users.
Write Channel Count
The number of write channels needs to be defined as well (WriteChannelCount=X). There are three options:
- 0: 1 channel for all (Read and Write) operations
- 1: 1 channel for Read operations and 1 channel for Write operations
- 2-N: 1 channel for Read operations and multiple channels for Write operations
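Taken together, the two settings might look roughly like this in swSlaveServer.ini. This is an illustrative sketch: only LoadAllData and WriteChannelCount appear in this document, and the section name shown here is an assumption.

```ini
; swSlaveServer.ini — illustrative sketch; the [Settings] section name
; is an assumption, only the two keys below are documented here
[Settings]
; Load the full master data set at startup (recommended by Systemite)
LoadAllData=true
; 1 channel for Read operations plus 3 parallel Write channels
WriteChannelCount=3
```

Each slave server has its own copy of this file, so the two options can be tuned per location.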
Factors That Can Impact Performance
Just as with a standard installation, there are a number of factors that can cause users logging in via a slave server to experience performance problems that are unrelated to the slave server service itself.
- Network limitations (connection latency and the stability of the line)
- Database size
- Model size
- Number of users
- Type of write operations
Slave Server Recommendations for Set-up and Use
When contemplating the implementation of the slave server solution, we suggest you consider the following:
Whenever possible, invest in a high-speed line connection for lower network latency for write performance.
Figure 3 – Comparison between different network latencies
There are known scenarios in which the network connection can go down. In such cases, the server will automatically shut down, enabling server administrators to review the cause and restart the service.
Large Write Operations
Large write operations, e.g., importing data into SystemWeaver, should be executed on the master server whenever possible. Even in the best-case scenario, the time to execute each individual write operation equals the write operation time plus the network latency.
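The arithmetic behind this recommendation can be sketched as follows. The operation counts and timings below are hypothetical examples, not measured SystemWeaver values:

```python
# Rough best-case estimate of a large import through a slave server.
# All numbers are hypothetical examples, not measured values.

def import_time_ms(n_operations: int, op_time_ms: float, latency_ms: float) -> float:
    """Best case: each write costs its own execution time plus one
    network round-trip of added latency."""
    return n_operations * (op_time_ms + latency_ms)

# Importing 10,000 items at 5 ms per write operation:
on_master = import_time_ms(10_000, 5.0, 0.0)      # no added latency: 50,000 ms
via_slave = import_time_ms(10_000, 5.0, 200.0)    # 200 ms latency: 2,050,000 ms
```

Even a modest 200 ms latency turns a sub-minute import into one of over half an hour, which is why large imports belong on the master.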
Optimal Number of Write Channels
When deciding how many write channels to make available to slave server clients, consider the following:
- Do you have any network bandwidth limitations? (if you increase the number of channels, you are increasing the bandwidth consumption)
- How many users will simultaneously be using the slave server? (you will never need more channels than there are users)
- What is the user profile of the majority of the slave server users, i.e., do they mostly read or write?
- Do you have enough internal memory allocated on the slave server for multiple write channels?
Figure 4 shows the performance results in ms for different write channel counts using a network latency of 200 ms.
Ch=X: Number of write channels
Simulate Latency: Simulated latency of 200 ms
Figure 4 – Comparison of different write channel counts
The figure indicates that performance is essentially the same when between 1 and 10 clients are executing write operations. Once more than 10 clients are actively writing, adding parallel write channels improves performance significantly, bringing write times down toward the inherent network latency. The gain is clearest when increasing from 1 to 2 write channels, smaller from 2 to 3, and almost insignificant beyond 4 write channels.
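One simplified way to reason about this shape of curve is a basic queueing model. This model is our assumption for illustration, not SystemWeaver’s actual scheduling algorithm: if N clients each issue one write and C channels carry them in parallel, a client waits roughly ceil(N / C) latency round-trips.

```python
import math

# Simplified queueing model of parallel write channels; an assumed
# illustration, not SystemWeaver's actual scheduling algorithm.

def avg_write_time_ms(n_clients: int, channels: int, latency_ms: float = 200.0) -> float:
    """Each channel serializes its share of the writes, so a client
    waits roughly ceil(N / C) network round-trips."""
    return math.ceil(n_clients / channels) * latency_ms

# 20 clients writing simultaneously over a 200 ms line:
for c in (1, 2, 3, 4):
    print(c, avg_write_time_ms(20, c))
# Diminishing returns: 1 -> 4000 ms, 2 -> 2000 ms, 3 -> 1400 ms, 4 -> 1000 ms
```

The model reproduces the qualitative pattern in Figure 4: the biggest gain comes from the first extra channel, and further channels help less and less.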