wiki/info/landscape/amazon-ec2.md
... ...
@@ -454,9 +454,9 @@ This script has a couple of arguments and options. The most important are the ar
Ideally, we would have only a single checked-out Git copy across all instances: the one under the wiki user on the central. However, some crontabs need to reference specific users' files, so the crontabs contain the strings PATH_OF_GIT_HOME_DIR_TO_REPLACE and PATH_OF_HOME_DIR_TO_REPLACE as placeholders for the paths they describe, which the build-crontab-and-cp-files script replaces with the correct paths.
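
The substitution itself amounts to something like the sketch below; the argument handling and the default paths are illustrative assumptions, not the script's actual interface.

```bash
#!/usr/bin/env bash
# Illustrative only: substitute the placeholder strings in a crontab template with
# the paths that apply to the user the crontab is being installed for.
set -euo pipefail

template="$1"                     # a per-user crontab file kept in Git (layout assumed)
git_home="${2:-/home/wiki/git}"   # assumed path of the checked-out Git copy
user_home="${3:-/home/wiki}"      # assumed home directory of the target user

sed -e "s|PATH_OF_GIT_HOME_DIR_TO_REPLACE|${git_home}|g" \
    -e "s|PATH_OF_HOME_DIR_TO_REPLACE|${user_home}|g" \
    "$template" | crontab -
```
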
Have a look at the script itself for more details on the options and arguments.
-## Disposable reverse proxy automation
+## Reverse proxy automation
-### Spinning around (spinning up and spinning down)
+### Spinning around (spinning up and spinning down disposable reverse proxies)
Within the admin console -> Advanced -> landscape, one can launch a new disposable, with the option to customise the region, name and availability zone. The default AZ is the one with the fewest reverse proxies (as of the last refresh). Users can also rotate the httpd logs here. The automated launch process uses the AMI with the tag key
`image-type` and corresponding value `disposable-reverse-proxy`. The security group of the disposables is also selected by tag: the key is `reverse-proxy-sg`. This security group allows HTTP (port 80) from the private network and SSH (port 22) from anywhere.
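
With the AWS CLI, the tag-based selection could look roughly like the sketch below; the real launch is driven from the admin console, and the region, AZ, instance type and Name tag here are placeholder assumptions.

```bash
#!/usr/bin/env bash
# Rough sketch: look up the disposable AMI and the reverse-proxy security group by
# their tags, then launch an instance into a chosen availability zone.
set -euo pipefail

region="eu-west-1"   # assumption
az="eu-west-1a"      # assumption; the automation normally picks the AZ with the fewest proxies

ami_id=$(aws ec2 describe-images --region "$region" --owners self \
  --filters "Name=tag:image-type,Values=disposable-reverse-proxy" \
  --query 'Images[0].ImageId' --output text)

sg_id=$(aws ec2 describe-security-groups --region "$region" \
  --filters "Name=tag-key,Values=reverse-proxy-sg" \
  --query 'SecurityGroups[0].GroupId' --output text)

aws ec2 run-instances --region "$region" --image-id "$ami_id" \
  --instance-type t3.micro --security-group-ids "$sg_id" \
  --placement "AvailabilityZone=$az" \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=disposable-reverse-proxy}]'
```
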
... ...
@@ -489,6 +489,27 @@ to the healthcheck, is used to get the private IPs of the other instances in the
We then use the same nmap/CIDR method to check which of the discovered instances are in the same AZ as the archive. Finally, we use the internal-server-status of the instances in that AZ to check whether they are healthy. If there are no healthy instances in the "correct" AZ, we return healthy; otherwise, unhealthy.
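
As a sketch of that decision, assuming the archive's AZ corresponds to a known subnet CIDR and that internal-server-status is served over plain HTTP on port 80 (both assumptions for illustration):

```bash
#!/usr/bin/env bash
# Sketch of the healthcheck decision: peers in the archive's AZ are found via the
# nmap/CIDR trick, probed for health, and we report healthy only if none of them is.
set -euo pipefail

archive_az_cidr="10.0.1.0/24"   # assumed CIDR of the subnet in the archive's AZ
healthy_in_az=0

for ip in "$@"; do              # private IPs of the instances discovered earlier
  # List every address in the CIDR and see whether this peer is one of them.
  if nmap -sL -n "$archive_az_cidr" | grep -Fqw "$ip"; then
    # Peer sits in the archive's AZ; ask its (assumed) status endpoint whether it is healthy.
    if curl -fsS --max-time 3 "http://${ip}/internal-server-status" >/dev/null; then
      healthy_in_az=1
    fi
  fi
done

# Healthy only when no healthy instance exists in the "correct" AZ.
if [ "$healthy_in_az" -eq 0 ]; then echo healthy; else echo unhealthy; exit 1; fi
```
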
+### Httpd configuration Git automation
+
+Because the httpd configuration changes over time, and the central and the disposables need different setups, we use version control with post-receive hooks to keep everything synchronised and easy to manage. We deliberately do not store the httpd configuration in the main Git repository, because the post-receive hook automation would allow
+anyone with Git access to influence the production landscape. We have a larger set of contributors than landscape managers
+and want to maintain this distinction.
+
+The setup involves a repo on the central reverse proxy, under the httpdConf user. The httpdConf user also has a checked-out copy, which the post-receive hook uses for branch manipulation. The repo has three branches: a shared configuration branch, a central configuration
+branch and a disposable configuration branch. The shared configuration branch stores configuration that the central and the disposables have in common. Pushes to different branches trigger different parts of the post-receive hook:
+
+1. Any push to the central or disposable branch triggers the sync-repo-and-execute-cmd script on instances tagged with CentralReverseProxy or DisposableProxy respectively, so that those instances pull the changes.
+
+2. Any push to the "shared" configuration branch is merged into both of the other branches (using the checked-out workspace), and everything is pushed. That push then propagates to the central and the disposables via method 1 above; see the sketch after this list.
+
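
For orientation, here is a heavily simplified sketch of what such a post-receive hook might look like. The branch names, the workspace path and the way sync-repo-and-execute-cmd is invoked are assumptions for illustration, not the production hook.

```bash
#!/usr/bin/env bash
# Hypothetical post-receive hook: git feeds "<oldrev> <newrev> <refname>" lines on stdin.
set -euo pipefail

while read -r _oldrev _newrev ref; do
  case "${ref#refs/heads/}" in
    central)
      # Case 1: have every instance tagged CentralReverseProxy pull and reload.
      sync-repo-and-execute-cmd CentralReverseProxy "git pull && sudo systemctl reload httpd"
      ;;
    disposable)
      sync-repo-and-execute-cmd DisposableProxy "git pull && sudo systemctl reload httpd"
      ;;
    shared)
      # Case 2: merge the shared branch into both other branches in the checked-out
      # workspace and push; those pushes re-trigger the cases above.
      (
        unset GIT_DIR                    # the hook environment would otherwise confuse git
        cd /home/httpdConf/httpd-conf    # assumed location of the checked-out copy
        git checkout central    && git merge --no-edit shared
        git checkout disposable && git merge --no-edit shared
        git push origin central disposable
      )
      ;;
  esac
done
```
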
+If you wish to make persistent changes to the httpd configuration, you must ALWAYS pull the latest changes before committing. If you commit and push changes to the disposable branch, only the disposables will pull them; if you commit and push changes to the central branch, only the central proxy will pull them.
+If you want to make alterations to the "shared" configuration of the disposables and central, you have two options:
+
+1. Fetch the latest changes for all the branches. Test the changes locally, without committing. Run `httpd -t` (to check the config syntax). Reload and confirm that all is well. Check out the main branch. Commit the changes and push. Afterwards, make sure you check out the correct branch again and that it has the latest changes.
+
+2. Fetch the latest changes for the branches in the httpdConf user's checked-out copy. Make the edits in that workspace, in the correct branch. Commit and push. (httpdConf is currently a user on the central proxy.) A walk-through of this option follows below.
+
+After pushing, you should automatically end up back on the correct branch.
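
A walk-through of option 2 might look like the following; the hostname, workspace path, branch name and file name are all assumptions.

```bash
# Illustrative walk-through of option 2.
ssh httpdConf@central-reverse-proxy        # hostname is a placeholder
cd ~/httpd-conf                            # the checked-out copy used by the hook
git fetch --all
git checkout shared && git pull
"$EDITOR" conf.d/some-shared-snippet.conf  # make the shared edit
git add -A && git commit -m "Describe the shared change"
git push                                   # the post-receive hook merges and propagates it
```
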
### Automating archive failover
... ...
@@ -515,8 +536,7 @@ write and quit, to install the cronjob.
If you want to quickly run this script, consider installing it in /usr/local/bin, via `ln -s TARGET_PATH LINK_NAME`.
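
For example (both paths here are made up for illustration):

```bash
sudo ln -s /home/wiki/git/scripts/example-script.sh /usr/local/bin/example-script
```
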
-<!--TODO: Update the above section with build_crontab.-->
-
+You can use the build_crontab_and_setup_files script (see below) to apply these changes for you.
## Automated SSH Key Management
By default, AWS adds the public key of the key pair used when launching an EC2 instance to the default user's `.ssh/authorized_keys` file. For a typical Amazon Linux machine, the default user is `ec2-user`; for Ubuntu, it is the `ubuntu` user. The problem with this approach is that other users with landscape management permissions cannot reach the instance over SSH. In the past we worked around this by deploying those landscape-managing users' public SSH keys into the root user's `.ssh/authorized_keys` file already in the Amazon Machine Image (AMI) off which the instances were launched. The problem with that, however, is that we have been slow to adjust for changes in the set of users permitted to manage the landscape.
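
When preparing the AMI, that old workaround boiled down to something like the following; the key material and user names are placeholders.

```bash
# Illustrative only: append the landscape managers' public keys to root's
# authorized_keys on the instance the AMI is snapshotted from.
cat >> /root/.ssh/authorized_keys <<'EOF'
ssh-ed25519 AAAAC3...example-key-1 alice@example.org
ssh-ed25519 AAAAC3...example-key-2 bob@example.org
EOF
chmod 600 /root/.ssh/authorized_keys
```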