wiki/info/landscape/paris2024/olympic-setup.md
... ...
@@ -183,7 +183,7 @@ The connection IDs will be shown, e.g., ``st-soft-aws_A``. Such a connection can
On both laptops there is a script ``/usr/local/bin/tunnels`` which establishes SSH tunnels using the ``autossh`` tool. The ``autossh`` processes are forked into the background using the ``-f`` option. It is important to then also pass the ``-M`` option, specifying the port to use for sending heartbeats; in our experience, if it is omitted, only one of several ``autossh`` processes survives. We have also learned that passing ``-M 0`` can further stabilize the connection: if ``-M`` is used with a real port, port collisions may result, and the delayed release of those heartbeat ports on reconnect can become an issue, which ``-M 0`` avoids altogether. The ``-M 0`` option is particularly helpful when tunnelling to ``sapsailing.com``, which is provided through a network load balancer (NLB).
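An invocation in the spirit of that script could look like the following sketch; the concrete forward and the keep-alive values are illustrative assumptions, not the actual contents of ``/usr/local/bin/tunnels``:

```shell
#!/bin/sh
# Hypothetical sketch of one tunnel from /usr/local/bin/tunnels.
# -f: fork into the background once the connection is up
# -N: no remote command, port forwarding only
# -M 0: disable autossh's own heartbeat port, avoiding heartbeat-port
#       collisions between several concurrent autossh processes
autossh -f -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -L 22443:sapsailing.com:443 \
    paris-ssh.sapsailing.com
```

Note that with ``-M 0``, ``autossh`` only restarts ``ssh`` when the ``ssh`` process exits, so SSH-level keep-alives such as ``ServerAliveInterval``/``ServerAliveCountMax`` are needed to make a hanging connection fail and trigger the restart.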
-During regular operations we assume that we have an Internet connection that allows us to reach our jump host ``paris-ssh.sapsailing.com`` through SSH, establishing various port forwards. We also expect TracTrac to have their primary server available. Furthermore, we assume both our laptops to be in service. ``sap-p1-1`` then runs the master server instance, ``sap-p1-2`` runs a local replica. The master on ``sap-p1-1`` replicates the central security service at ``security-service.sapsailing.com`` using the RabbitMQ installation on ``rabbit.internal.sapsailing.com`` in the AWS region `eu-west-1`. The port forwarding through `paris-ssh.sapsailing.com` (in `eu-west-3`) to the internal RabbitMQ address (in eu-west-1) works through VPC peering. The RabbitMQ instance used for outbound replication, both, into the cloud and for the on-site replica, is `rabbit-eu-west-3.sapsailing.com`. The replica on ``sap-p1-2`` obtains its replication stream from there, and for the HTTP connection for "reverse replication" it uses a direct connection to ``sap-p1-1``. The outside world, in particular all "S-paris2024-m" master security groups in all regions supported, access the on-site master through a reverse port forward on our jump host ``paris-ssh.sapsailing.com:8888`` which under regular operations points to ``sap-p1-1:8888`` where the master process runs.
+During regular operations we assume that we have an Internet connection that allows us to reach our jump host ``paris-ssh.sapsailing.com`` through SSH, establishing various port forwards. We also expect TracTrac to have their primary server available. Furthermore, we assume both our laptops to be in service. ``sap-p1-1`` then runs the master server instance, while ``sap-p1-2`` runs a secondary master to which we can switch in seconds. This comes at the expense of having to synchronise the two devices, using crontabs and ${GIT_ROOT}/java/target/compareServers. Both masters replicate the central security service at ``security-service.sapsailing.com`` using the RabbitMQ installation on ``rabbit.internal.sapsailing.com`` in the AWS region `eu-west-1`. The port forwarding through `paris-ssh.sapsailing.com` (in `eu-west-3`) to the internal RabbitMQ address (in `eu-west-1`) works through VPC peering. The RabbitMQ instance used for outbound replication, both into the cloud and for the on-site replica, is `rabbit-eu-west-3.sapsailing.com`. The outside world, in particular all "S-paris2024-m" master security groups in all supported regions, accesses the on-site master through a reverse port forward on our jump host, ``paris-ssh.sapsailing.com:8888``, which under regular operations points to ``sap-p1-1:8888``, where the master process runs.
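The reverse port forward could be established from the laptop side roughly as follows; this is a sketch under the assumption that ``sap-p1-1`` opens the forward itself, and the flags mirror the tunnel conventions described above:

```shell
# Hypothetical sketch: expose the on-site master (localhost:8888 on sap-p1-1)
# as paris-ssh.sapsailing.com:8888 via a reverse port forward.
# For the forward to bind to a public interface on the jump host, its
# sshd_config must allow it via GatewayPorts yes (or clientspecified).
autossh -f -M 0 -N \
    -R 0.0.0.0:8888:localhost:8888 \
    paris-ssh.sapsailing.com
```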
On both laptops we establish a port forward from ``localhost:22443`` to ``sapsailing.com:443``. Together with the ``/etc/hosts`` entry that maps ``www.sapsailing.com`` to ``localhost``, requests to ``www.sapsailing.com:22443`` will end up on the archive server.
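A minimal sketch of the two pieces involved (the ``ssh`` flags and the use of the jump host are assumptions; the forward may equally be one of the tunnels set up by the ``tunnels`` script):

```shell
# /etc/hosts entry on the laptop, pointing the public name at the local forward:
#   127.0.0.1   www.sapsailing.com

# Forward localhost:22443 to sapsailing.com:443. Thanks to the hosts alias,
# https://www.sapsailing.com:22443/ then reaches the archive server with a
# Host header and TLS SNI that match the server's certificate.
ssh -f -N -L 22443:sapsailing.com:443 paris-ssh.sapsailing.com
```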