configuration/environments_scripts/central_mongo_setup/files/usr/local/bin/imageupgrade.sh
... ...
@@ -1,24 +0,0 @@
-#!/bin/bash
-
-# Upgrades the AWS EC2 MongoDB instance that this script is assumed to be executed on.
-# The steps are as follows:
-
-. imageupgrade_functions.sh
-
-run_git_pull_root() {
- echo "Pulling git to /root/code" >>/var/log/sailing.err
- cd /root/code
- git pull
-}
-
-clean_mongo_pid() {
- rm -f /var/run/mongodb/mongod.pid
-}
-
-LOGON_USER_HOME=/home/ec2-user
-
-run_yum_update
-build_crontab_and_setup_files central_mongo_setup root code
-clean_startup_logs
-clean_mongo_pid
-finalize
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/imageupgrade.sh
... ...
@@ -1,24 +0,0 @@
-#!/bin/bash
-
-# Upgrades the AWS EC2 MongoDB instance that this script is assumed to be executed on.
-# The steps are as follows:
-
-. imageupgrade_functions.sh
-
-run_git_pull_root() {
- echo "Pulling git to /root/code" >>/var/log/sailing.err
- cd /root/code
- git pull
-}
-
-clean_mongo_pid() {
- rm -f /var/run/mongodb/mongod.pid
-}
-
-LOGON_USER_HOME=/home/ec2-user
-
-run_yum_update
-build_crontab_and_setup_files mongo_instance_setup root code
-clean_startup_logs
-clean_mongo_pid
-finalize
configuration/environments_scripts/repo/usr/local/bin/ephemeralvolume
... ...
@@ -6,8 +6,8 @@
METADATA=$( /bin/ec2-metadata -d | sed -e 's/^user-data: //' )
echo "Metadata: ${METADATA}"
if echo "${METADATA}" | grep -q "^image-upgrade$"; then
- echo "Image upgrade; not trying to mount/format ephemeral volume; calling imageupgrade.sh instead..."
- imageupgrade.sh
+ echo "Image upgrade; not trying to mount/format ephemeral volume; calling imageupgrade instead..."
+ imageupgrade
else
echo "No image upgrade; looking for ephemeral volume and trying to format with xfs..."
. imageupgrade_functions.sh
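The user-data check in the hunk above can be sketched as a small testable function. The `user-data: <value>` output format of `/bin/ec2-metadata -d` and the `sed`/`grep` combination are taken from the script itself; the function wrapper is purely illustrative.

```shell
#!/bin/bash
# Sketch of the check performed above, factored into a function so it can be
# exercised without an EC2 instance. The real script pipes the output of
# /bin/ec2-metadata -d through the same sed/grep combination.
is_image_upgrade() {
    # $1: raw "ec2-metadata -d" output, e.g. "user-data: image-upgrade"
    local metadata
    metadata=$(printf '%s\n' "$1" | sed -e 's/^user-data: //')
    printf '%s\n' "$metadata" | grep -q '^image-upgrade$'
}

if is_image_upgrade "user-data: image-upgrade"; then
    echo "would call imageupgrade"
fi
```

Note the anchored pattern `^image-upgrade$`: user data such as `no-shutdown` or a value merely containing the substring would not trigger the upgrade path.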
configuration/environments_scripts/sailing_server/files/usr/local/bin/imageupgrade.sh
... ...
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# Upgrades the AWS EC2 instance that this script is assumed to be executed on.
-# The steps are as follows:
-
-. `dirname $0`/imageupgrade_functions.sh
-
-run_yum_update
-download_and_install_latest_sap_jvm_8
-clean_logrotate_target
-clean_httpd_logs
-clean_servers_dir
-clean_startup_logs
-build_crontab_and_setup_files sailing_server sailing code
-finalize
configuration/environments_scripts/sailing_server/files/usr/local/bin/sailing
... ...
@@ -35,7 +35,7 @@ start_servers() {
chmod 755 /usr/local/bin/cp_root_mail_properties
if which $EC2_METADATA_CMD && $EC2_METADATA_CMD -d | sed "s/user-data\: //g" | grep "^image-upgrade$"; then
echo "Found image-upgrade in EC2 user data; upgrading image, then probably shutting down for AMI creation depending on the no-shutdown user data string..." >>/var/log/sailing.err
- /usr/local/bin/imageupgrade.sh
+ /usr/local/bin/imageupgrade
else
echo "No image-upgrade request found in EC2 user data $($EC2_METADATA_CMD -d); proceeding with regular server launch..." >>/var/log/sailing.err
echo "Servers to launch: ${JAVA_START_INSTANCES}" >>/var/log/sailing.err
configuration/mongo_instance_setup/imageupgrade.sh
... ...
@@ -1 +0,0 @@
-../environments_scripts/mongo_instance_setup/files/usr/local/bin/imageupgrade.sh
\ No newline at end of file
wiki/info/landscape/amazon-ec2.md
... ...
@@ -581,9 +581,10 @@ write and quit, to install the cronjob.
If you want to quickly run this script, consider installing it in /usr/local/bin, via `ln -s TARGET_PATH LINK_NAME`.

You can use the `build_crontab_and_setup_files` function (see below) to get these changes.
+
## Automated SSH Key Management

-AWS by default adds the public key of the key pair used when launching an EC2 instance to the default user's `.ssh/authorized_keys` file. For a typical Amazon Linux machine, the default user is the `root` user. For Ubuntu, it's the `ec2-user` or `ubuntu` user. The problem with this approach is that other users with landscape management permissions could not get at this instance with an SSH connection. In the past we worked around this problem by deploying those landscape-managing users' public SSH keys into the root user's `.ssh/authorized_keys` file already in the Amazon Machine Image (AMI) off which the instances were launched. The problem with this, however, is obviously that we have been slow to adjust for changes in the set of users permitted to manage the landscape.
+AWS by default adds the public key of the key pair used when launching an EC2 instance to the default user's `.ssh/authorized_keys` file. For a typical Amazon Linux machine, the default user is `ec2-user`; for Ubuntu, it's `ubuntu`; for Debian, it's `admin`. The problem with this approach is that other users with landscape management permissions could not access such an instance over SSH. In the past we worked around this problem by deploying the landscape-managing users' public SSH keys into the root user's `.ssh/authorized_keys` file already in the Amazon Machine Image (AMI) off which the instances were launched. The obvious problem with this, however, is that we have been slow to adjust for changes in the set of users permitted to manage the landscape.

We decided early 2021 to change this so that things would be based on our own user and security sub-system (see [here](/wiki/info/security/security.md)). We introduced `LANDSCAPE` as a secured object type, with a special permission `MANAGE` and a special object identifier `AWS` such that the permission `LANDSCAPE:MANAGE:AWS` would permit users to manage all aspects of the AWS landscape, given they can present a valid AWS access key/secret. To keep the EC2 instances' SSH public key infrastructure in line, we made the instances poll the SSH public keys of those users with permissions, once per minute, updating the default user's `.ssh/authorized_keys` file accordingly.

... ...
@@ -591,14 +592,16 @@ The REST end point `/landscape/api/landscape/get_time_point_of_last_change_in_ss

With this, the three REST API end points `/landscape/api/landscape/get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers`, `/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS`, and `/landscape/api/landscape/get_ssh_keys_owned_by_user?username[]=...` allow clients to efficiently find out whether the set of users with AWS landscape management permission and/or their set of SSH key pairs may have changed, and if so, poll the actual changes which requires a bit more computational effort.

-Two new scripts and a crontab file are provided under the configuration/ folder:
-- `update_authorized_keys_for_landscape_managers_if_changed`
-- `update_authorized_keys_for_landscape_managers`
-- `crontab` (found within configuration for historical reasons, but we should be using those in configuration/crontabs)
+Two new scripts and a crontab snippet are provided under the configuration/ folder:
+- `environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed`
+- `environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers`
+- `crontabs/crontab-update-authorized-keys@HOME_DIR`
+
+These files are intended to be linked from the specific ``environments_scripts/`` sub-folders that get deployed to a server for a given environment. The crontab snippet should be referenced through a symbolic link whose name encodes the home directory in which to update ``.ssh/authorized_keys``, such as ``crontab-update-authorized-keys@HOME_DIR=_root``; each '_' gets replaced by a '/' while compiling the ``crontab`` file from the snippets, so ``_root`` stands for ``/root``.
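The `HOME_DIR` naming convention can be sketched as follows. The snippet name and the '_'-to-'/' replacement are the convention described above; the function wrapper is just for illustration and is not part of the actual compilation scripts.

```shell
#!/bin/bash
# Sketch of the HOME_DIR naming convention: derive the target home directory
# from a crontab snippet link name such as
#   crontab-update-authorized-keys@HOME_DIR=_root
# Every '_' in the value stands for a '/'. This helper is illustrative only;
# the real substitution happens while compiling the crontab file.
home_dir_from_snippet_name() {
    local name="$1"
    local value="${name#*@HOME_DIR=}"   # strip everything up to "@HOME_DIR="
    printf '%s\n' "${value//_//}"       # replace each '_' with '/'
}

home_dir_from_snippet_name "crontab-update-authorized-keys@HOME_DIR=_root"
# prints: /root
```

With this scheme, a snippet link named `...@HOME_DIR=_home_ec2-user` would target `/home/ec2-user`.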

The first makes a call to `/landscape/api/landscape/get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers` (currently coded to `https://security-service.sapsailing.com` in the crontab file). If no previous time stamp for the last change exists under `/var/run/last_change_aws_landscape_managers_ssh_keys`, or the time stamp received in the response is newer, the `update_authorized_keys_for_landscape_managers` script is invoked with the bearer token provided in `/root/ssh-key-reader.token` as argument, granting the script READ access to the user list and their SSH key pairs. That script first asks for `/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS` and then uses `/landscape/api/landscape/get_ssh_keys_owned_by_user?username[]=...` to obtain the actual SSH public key information for the landscape managers. The original `/root/.ssh/authorized_keys` file is copied to `/root/.ssh/authorized_keys.org` once; it is then used to restore the single public SSH key inserted by AWS, after which all public keys received for the landscape-managing users are appended.
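The change-detection step can be sketched roughly as below. The stamp-file path is the one named above; the function itself, and the assumption that the end point returns a plain numeric time stamp, are hypothetical simplifications, not the actual script.

```shell
#!/bin/bash
# Rough sketch of the change-detection step. Assumes (hypothetically) that the
# REST end point returns a plain numeric time stamp; the real wire format may
# differ. Compares a remote time stamp with the one cached in a local file.
STAMP_FILE=/var/run/last_change_aws_landscape_managers_ssh_keys

ssh_keys_changed_since_last_run() {
    # $1: time stamp reported by the server, $2: path to the local stamp file
    local remote="$1" stored
    stored=$(cat "$2" 2>/dev/null || echo 0)   # no stamp file yet => treat as 0
    [ "$remote" -gt "$stored" ]
}

# In the real flow, $remote would come from a call against the
# get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers end
# point, and a positive result would trigger
# update_authorized_keys_for_landscape_managers with the bearer token from
# /root/ssh-key-reader.token, after which the stamp file is updated.
```

Treating a missing stamp file as time stamp 0 makes the very first run always refresh the keys, which matches the behavior described above.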

-The `crontab` file which is used during image-upgrade (see `configuration/imageupgrade.sh`) has a randomized sleeping period within a one minute duration after which it calls the `update_authorized_keys_for_landscape_managers_if_changed` script which transitively invokes `update_authorized_keys_for_landscape_managers` in case of changes possible.
+The `crontab-update-authorized-keys@HOME_DIR` snippet sleeps for a randomized period of up to one minute and then calls the `update_authorized_keys_for_landscape_managers_if_changed` script, which in turn invokes `update_authorized_keys_for_landscape_managers` if changes may have occurred.

## Legacy Documentation for Manual Operations