configuration/environments_scripts/repo/usr/local/bin/imageupgrade_functions.sh
... ...
@@ -96,8 +96,6 @@ build_crontab_and_setup_files() {
96 96
97 97
setup_keys() {
98 98
#1: Environment type.
99
- SEPARATOR="@."
100
- ACTUAL_SYMBOL="@@"
101 99
TEMP_KEY_DIR=$(mktemp -d /root/keysXXXXX)
102 100
REGION=$(TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" --silent -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
103 101
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" --silent http://169.254.169.254/latest/meta-data/placement/region)
configuration/environments_scripts/reverse_proxy/setup-disposable-reverse-proxy.sh
... ...
@@ -30,7 +30,7 @@ scp -o StrictHostKeyChecking=no -p "root@sapsailing.com:/home/wiki/gitwiki/confi
30 30
. imageupgrade_functions.sh
31 31
setup_keys "${IMAGE_TYPE}"
32 32
setup_cloud_cfg_and_root_login
33
-# setup symbolic links and crontab
33
+# setup files and crontab for the required users, both dependent on the environment type.
34 34
build_crontab_and_setup_files "${IMAGE_TYPE}" "${GIT_COPY_USER}" "${RELATIVE_PATH_TO_GIT}"
35 35
# setup mail
36 36
setup_mail_sending
wiki/info/landscape/amazon-ec2-backup-strategy.md
... ...
@@ -126,6 +126,15 @@ You can also display any text files by replacing `ls` by `cat-file`.
126 126
this is dummy content to test the backup
127 127
</pre>
128 128
129
+# AWS Backup
130
+
131
+We also back up certain volumes using AWS Backup, which creates snapshots in accordance with a "plan". We have two backup plans: MongoDB-Live-Replica-Set and
132
+WeeklySailingInfrastructure. Each plan has rules defining frequency, tags, retention time and the transition to cold storage. In addition, there is the "resource assignment", which dictates which tags a volume must carry in order to be backed up and which IAM role is used to create the snapshots.
133
+
134
+For the MongoDB-Live-Replica-Set plan, we have a daily rule that backs up volumes carrying the tag key DailySailingBackup with the value Yes. Currently, these are the "Hidden MongoDB Live Replica encrypted" volume and the central reverse proxy's (Webserver) /home volume.
135
+
136
+For the WeeklySailingInfrastructure plan, volumes tagged with the key WeeklySailingInfrastructureBackup and the value Yes are backed up (believe it or not) on a weekly basis.
137
+
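To make a volume eligible for one of these plans it only needs the matching tag. A minimal sketch using the AWS CLI (the volume ID is a placeholder; the plans themselves are maintained in AWS Backup, not by these commands):

<pre>
# Tag a volume (placeholder ID) so the MongoDB-Live-Replica-Set plan's daily
# rule picks it up through its resource assignment.
aws ec2 create-tags --resources vol-0123456789abcdef0 \
    --tags Key=DailySailingBackup,Value=Yes

# Or tag it for the weekly WeeklySailingInfrastructure plan instead:
aws ec2 create-tags --resources vol-0123456789abcdef0 \
    --tags Key=WeeklySailingInfrastructureBackup,Value=Yes
</pre>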
129 138
# Restore
130 139
131 140
Depending on what has crashed or where data got lost you need to look at different places to restore content and functionality.
wiki/info/landscape/amazon-ec2.md
... ...
@@ -273,6 +273,43 @@ In all of the following sub-sections the text will assume that you have provided
273 273
274 274
In several of the scenarios, both AdminConsole and REST API, you will have the option to provide security bearer tokens that are used to authenticate requests to processes running the SAP Sailing Analytics. If you omit those, the credentials of the session used to authenticate your sailing user will be used. (Note that for local test set-ups disconnected from the standard security realm used by all of the sapsailing.com-deployed processes, these credentials may not be accepted by the processes you're trying to control. In this case, please provide explicit bearer tokens instead.) We distinguish between the credentials required to replicate the information shared across the landscape, usually from ``security-service.sapsailing.com``, and those used by a replica in one of your application replica sets to authenticate for credentials to replicate the application replica set's master.
275 275
276
+There is now a single point of truth for the various SSH and AWS keys, and possibly others in the future. It can be found at `/root/key_vault` on the central reverse proxy. There you will find directories for the different environments' key setups, named consistently with the environment types under `${GIT_HOME}/configuration/environments_scripts` (the directory names are the environment types). Use the `setup_keys` function in `imageupgrade_functions.sh` to set up the keys; it takes a single parameter, the environment type.
277
+
278
+The structure of the vault is important for the script to work correctly and should look like the example below; an explanation follows.
279
+```
280
+key_vault
281
+├── aws_credentials
282
+│   └── disposable-reverse-proxy-automation
283
+├── central_reverse_proxy
284
+│   ├── httpdConf
285
+│   │   ├── aws
286
+│   │   │   └── credentials -> ../../../aws_credentials/disposable-reverse-proxy-automation
287
+│   │   └── ssh
288
+│   │       ├── authorized_keys
289
+│   │       │   ├── id_ed25519.pub@root@central_reverse_proxy -> ../../../root/ssh/id_ed25519.pub
290
+│   │       │   └── id_ed25519.pub@root@reverse_proxy -> ../../../../reverse_proxy/root/ssh/id_ed25519.pub
291
+│   │       ├── id_ed25519
292
+│   │       └── id_ed25519.pub
293
+│   ├── root
294
+│   │   └── ssh
295
+│   │       ├── authorized_keys
296
+│   │       │   └── id_ed25519.pub@httpdConf@central_reverse_proxy -> ../../../httpdConf/ssh/id_ed25519.pub
297
+│   │       ├── id_ed25519
298
+│   │       └── id_ed25519.pub
299
+```
300
+1. At the top level there is the `aws_credentials` directory, which stores the credentials for specific AWS users.
301
+2. Next to it, there are directories named after the environment types (matching the directory names in `${GIT_HOME}/configuration/environments_scripts`).
302
+3. Nested within these are directories for each user that requires keys for the given environment type.
303
+4. For each user there are the optional directories `ssh` and `aws` (the naming is important).
304
+5. The `aws` directory should contain only credentials files, each a symbolic link into the `aws_credentials` directory.
305
+6. When `setup_keys` is run, the contents of the `aws` directory are copied to the respective user's `.aws` directory in their home directory on the instance the script runs on. The config file is created with the correct region, but only for the default profile.
306
+7. The `ssh` directory contains the user's SSH keys, named after the key type.
307
+8. The `ssh` directory also contains an `authorized_keys` directory, which holds links to keys elsewhere in the vault that should be authorized to access the user. In the example above, the symbolic link named `id_ed25519.pub@httpdConf@central_reverse_proxy` means that the referenced key will end up in the authorized keys
308
+for root, so the id_ed25519 key of the httpdConf user on the central reverse proxy will be able to log in as the root user.
309
+9. The names of these links don't matter, but by convention we use the format shown in the tree above (`key_type@user@env_type`), with @ as the separator.
310
+10. The script copies across the keys in the `ssh` directory (ignoring symbolic links and subdirectories).
311
+11. The script appends every public key linked in the `authorized_keys` directory to the respective user's `authorized_keys` file. A short usage sketch follows below.
312
+
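As a usage sketch, the following shows how a key might be added to the vault and authorized, and how `setup_keys` is then invoked. The reverse_proxy paths and the key shown are only illustrative examples following the tree above; the exact copy and append behaviour is whatever `setup_keys` in `imageupgrade_functions.sh` implements.

```
# On the central reverse proxy, working inside the vault.
cd /root/key_vault

# Create an SSH key pair for the reverse_proxy environment's root user
# (directory layout and file naming as in the tree above).
mkdir -p reverse_proxy/root/ssh
ssh-keygen -t ed25519 -N "" -f reverse_proxy/root/ssh/id_ed25519

# Authorize that key for the central reverse proxy's root user by linking its
# public key into root's authorized_keys directory, following the
# key_type@user@env_type naming convention.
ln -s ../../../../reverse_proxy/root/ssh/id_ed25519.pub \
    central_reverse_proxy/root/ssh/authorized_keys/id_ed25519.pub@root@reverse_proxy

# On a freshly set-up instance, the keys are then deployed by sourcing the
# functions and passing the environment type as the single parameter:
. imageupgrade_functions.sh
setup_keys central_reverse_proxy
```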
276 313
### Creating a New Application Replica Set
277 314
278 315
In the Application Replica Sets table click the "Add" button and provide the replica set name. You may already now press the OK button and will receive a new application replica set with a master process running on a new dedicated host, and a single replica process running on a new instance launched by the application replica set's auto-scaling group.
... ...
@@ -358,6 +395,11 @@ You can also manually trigger the upgrade of the AMI used by an auto-scaling gro
358 395
359 396
In the "Amazon Machine Images (AMIs)" table each row offers an action icon for removing the image. Use this with great care. After confirming the pop-up dialog shown, the AMI as well as its volume snapshots will be removed unrecoverably.
360 397
398
+### Creating a Mailing List for Landscape Managers
399
+
400
+We now have a script that automatically creates a mailing list of all the landscape managers; the list is stored in `/var/cache` and updated via a cronjob. The script has to write atomically so that the mailing list is never missing any email addresses if the notify-operators script is called midway through a write. A minimal sketch of this pattern is shown below.
401
+
402
+
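The atomic update boils down to building the new list in a temporary file on the same filesystem and then renaming it over the old one. A minimal sketch, assuming a hypothetical helper that produces the addresses and an illustrative file name under `/var/cache`:

```
# Build the new list in a temporary file on the same filesystem ...
TMP=$(mktemp /var/cache/landscape-managers.XXXXXX)   # illustrative file name
list_landscape_manager_addresses > "$TMP"            # hypothetical helper
chmod 644 "$TMP"
# ... then atomically replace the old list, so notify-operators either sees
# the complete old file or the complete new one, never a partial write.
mv -f "$TMP" /var/cache/landscape-managers
```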
361 403
## Automated SSH Key Management
362 404
363 405
AWS by default adds the public key of the key pair used when launching an EC2 instance to the default user's `.ssh/authorized_keys` file. For a typical Amazon Linux machine, the default user is the `root` user. For Ubuntu, it's the `ec2-user` or `ubuntu` user. The problem with this approach is that other users with landscape management permissions could not get at this instance with an SSH connection. In the past we worked around this problem by deploying those landscape-managing users' public SSH keys into the root user's `.ssh/authorized_keys` file already in the Amazon Machine Image (AMI) off which the instances were launched. The problem with this, however, is obviously that we have been slow to adjust for changes in the set of users permitted to manage the landscape.