a7bd6a2bdec053170f4eb9e9cc8584416aaeaaa3
configuration/archive_instance_setup/mountnvmeswap
| ... | ... | @@ -1,14 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Script to deploy on an instance that has an ephemeral volume as /dev/nvme0n1 (adjust env var PARTITION if different) |
|
| 4 | -# Ensures the partition is xfs-formatted, any existing partition contents will be overwritten if formatted otherwise. |
|
| 5 | -# An existing xfs partition will be left alone. |
|
| 6 | -PARTITION=/dev/nvme0n1 |
|
| 7 | -FSTYPE=$(blkid -p $PARTITION -s TYPE -o value) |
|
| 8 | -if [ "$FSTYPE" = "" ]; then |
|
| 9 | - echo FSTYPE was empty, creating swap partition |
|
| 10 | - mkswap $PARTITION |
|
| 11 | - swapon -a $PARTITION |
|
| 12 | -else |
|
| 13 | - echo FSTYPE was "$FSTYPE", not touching |
|
| 14 | -fi |
configuration/archive_instance_setup/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/repo/usr/local/bin/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/archive_instance_setup/mountnvmeswap.service
| ... | ... | @@ -1,11 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=An unformatted /dev/nvme0n1 is turned into swap space |
|
| 3 | -Requires=-.mount |
|
| 4 | -After=-.mount |
|
| 5 | - |
|
| 6 | -[Install] |
|
| 7 | - |
|
| 8 | -[Service] |
|
| 9 | -Type=oneshot |
|
| 10 | -RemainAfterExit=true |
|
| 11 | -ExecStart=/usr/local/bin/mountnvmeswap |
configuration/archive_instance_setup/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/repo/etc/systemd/system/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/aws-automation/getLatestImageOfType.sh
| ... | ... | @@ -1,3 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -imageType="$1" |
|
| 3 | -aws ec2 describe-images --filter Name=tag:image-type,Values=${imageType} | jq --raw-output '.Images | sort_by(.CreationDate) | .[].ImageId' | tail -n 1 |
configuration/aws-automation/getLatestImageOfType.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/repo/usr/local/bin/getLatestImageOfType.sh |
|
| ... | ... | \ No newline at end of file |
configuration/cp_root_mail_properties
| ... | ... | @@ -1,5 +1,10 @@ |
| 1 | 1 | #!/bin/bash |
| 2 | -RELATIVE_SERVER_DIR=$1 |
|
| 2 | +if [ $# = 0 ]; then |
|
| 3 | + PWD=$( pwd ) |
|
| 4 | + RELATIVE_SERVER_DIR=$( basename ${PWD} ) |
|
| 5 | +else |
|
| 6 | + RELATIVE_SERVER_DIR=$1 |
|
| 7 | +fi |
|
| 3 | 8 | MAIL_PROPERTIES=mail.properties |
| 4 | 9 | SECRETS=secrets |
| 5 | 10 | ROOT_MAIL_PROPERTIES=/root/${MAIL_PROPERTIES} |
configuration/crontab
| ... | ... | @@ -1,2 +1,3 @@ |
| 1 | 1 | * * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com /root 2>&1 >>/var/log/sailing.err |
| 2 | -* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; switchoverArchive.sh /etc/httpd/conf.d/000-macros.conf 2 9 |
|
| 2 | +# NOTICE: Please reference the customised crontab fragments at $GIT_HOME/configuration/crontabs, or use |
|
| 3 | +# the build-crontab script. This file is kept for continuity, but is deprecated. |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/README
| ... | ... | @@ -0,0 +1,7 @@ |
| 1 | +This is the crontab repo; it contains one-line crontab fragments for all the different environments. |
|
| 2 | +These files are concatenated by the build_crontab script. Any time the crontab should contain |
|
| 3 | +a user's home directory, instead write PATH_OF_HOME_DIR_TO_REPLACE; if the crontab should |
|
| 4 | +contain the path to the git directory, instead write PATH_OF_GIT_HOME_DIR_TO_REPLACE. |
|
| 5 | +These are replaced by the build-crontab script. |
|
| 6 | + |
|
| 7 | +Note: these files are symbolically linked to, so beware of the ramifications of changes. |
|
| ... | ... | \ No newline at end of file |
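The placeholder scheme this README describes can be exercised in isolation. A minimal sketch of the substitution that build-crontab performs; the paths and the crontab line here are illustrative only, not the real repo layout:

```shell
# Substitute the two placeholder tokens the way build-crontab does.
# GIT_PATH and HOME_DIR are hypothetical example values.
GIT_PATH=/home/alice/git
HOME_DIR=/home/alice
line='* * * * * PATH_OF_GIT_HOME_DIR_TO_REPLACE/bin/job >PATH_OF_HOME_DIR_TO_REPLACE/job.out'
echo "$line" \
  | sed -e "s|PATH_OF_GIT_HOME_DIR_TO_REPLACE|${GIT_PATH}|g" \
        -e "s|PATH_OF_HOME_DIR_TO_REPLACE|${HOME_DIR}|g"
# → * * * * * /home/alice/git/bin/job >/home/alice/job.out
```

Using `|` as the sed delimiter avoids having to escape the slashes in the substituted paths.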
configuration/crontabs/crontab-docker-registry-gc
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +0 7 2 * * export PATH=/bin:/usr/bin:/usr/local/bin; docker exec registry-registry-1 registry garbage-collect /etc/docker/registry/config.yml |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-download-new-archived-trac-trac-events
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +15 12 * * * PATH_OF_GIT_HOME_DIR_TO_REPLACE/configuration/downloadNewArchivedTracTracEvents.sh PATH_OF_HOME_DIR_TO_REPLACE/static/TracTracTracks "PATH_OF_GIT_HOME_DIR_TO_REPLACE" >PATH_OF_HOME_DIR_TO_REPLACE/downloadNewArchivedTracTracEvents.out 2>PATH_OF_HOME_DIR_TO_REPLACE/downloadNewArchivedTracTracEvents.err |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-mail-events-on-my
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +0 10 1 * * export PATH=/bin:/usr/bin:/usr/local/bin; mail-events-on-my >/dev/null 2>/dev/null |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-manage2sail-example
| ... | ... | @@ -0,0 +1,5 @@ |
| 1 | +# If you'd like to receive e-mail notifications about new Manage2Sail results for an event, |
|
| 2 | +# adjust the following accordingly, making sure you also update the mailing list file |
|
| 3 | +# referenced by the notify... script. Adjust the Manage2Sail event ID in the script |
|
| 4 | +# to point to the event you'd like to observe. |
|
| 5 | +#* * * * * PATH_OF_HOME_DIR_TO_REPLACE/bin/notifyAbout49erEuros2023Updates 2>PATH_OF_HOME_DIR_TO_REPLACE/notifyAbout49erEuros2023Updates.err >PATH_OF_HOME_DIR_TO_REPLACE/notifyAbout49erEuros2023Updates.out |
configuration/crontabs/crontab-mongo-health-check
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +* * * * * PATH_OF_GIT_HOME_DIR_TO_REPLACE/configuration/notify-unhealthy-mongodb 2>PATH_OF_HOME_DIR_TO_REPLACE/notify-unhealthy-mongodb.err >PATH_OF_HOME_DIR_TO_REPLACE/notify-unhealthy-mongodb.out |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-switchoverArchive
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); switchoverArchive.sh /etc/httpd/conf.d/000-macros.conf 2 9 |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +*/10 * * * * export PATH=/bin:/usr/bin:/usr/local/bin; syncgit PATH_OF_GIT_HOME_DIR_TO_REPLACE |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com PATH_OF_HOME_DIR_TO_REPLACE >>/var/log/sailing.err 2>&1 |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-update-landscape-managers-mailing-list
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +*/10 * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_landscape_managers_mailing_list.sh $(cat /root/ssh-key-reader.token) /var/cache |
|
| ... | ... | \ No newline at end of file |
configuration/crontabs/crontab-update-trac-trac-urls
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +10 12 * * * PATH_OF_GIT_HOME_DIR_TO_REPLACE/configuration/update-tractrac-urls-to-archive.sh "PATH_OF_GIT_HOME_DIR_TO_REPLACE" >PATH_OF_HOME_DIR_TO_REPLACE/update-tractrac-urls-to-archive.out 2>PATH_OF_HOME_DIR_TO_REPLACE/update-tractrac-urls-to-archive.err |
|
| ... | ... | \ No newline at end of file |
configuration/downloadNewArchivedTracTracEvents.sh
| ... | ... | @@ -1,9 +1,14 @@ |
| 1 | 1 | #!/bin/bash |
| 2 | -GIT_ROOT=/home/wiki/gitwiki |
|
| 2 | + |
|
| 3 | 3 | # Downloads all TracTrac event data based on ${GIT_ROOT}/configuration/tractrac-json-urls |
| 4 | 4 | # into the target directory (specified as $1) for those event URLs whose specific folder |
| 5 | 5 | # does not yet exist in the target directory. |
| 6 | 6 | TARGET_DIR="${1}" |
| 7 | +if [[ $# -lt 2 ]]; then |
|
| 8 | + GIT_ROOT=/home/wiki/gitwiki |
|
| 9 | +else |
|
| 10 | + GIT_ROOT="${2}" |
|
| 11 | +fi |
|
| 7 | 12 | JSON_URLS_FILE="${GIT_ROOT}/configuration/tractrac-json-urls" |
| 8 | 13 | for i in `cat "${JSON_URLS_FILE}"`; do |
| 9 | 14 | EVENT_DB="$( basename $( dirname ${i} ) )" |
configuration/environments/archive-server
| ... | ... | @@ -1,4 +1,4 @@ |
| 1 | -MEMORY=400g |
|
| 1 | +MEMORY=500g |
|
| 2 | 2 | SERVER_NAME=ARCHIVE |
| 3 | 3 | REPLICATION_CHANNEL=sapsailinganalytics-archive |
| 4 | 4 | MONGODB_URI="mongodb://dbserver.internal.sapsailing.com:10201/winddb?replicaSet=archive&retryWrites=true&readPreference=secondaryPreferred" |
configuration/environments/dev-server
| ... | ... | @@ -1,13 +1,4 @@ |
| 1 | 1 | SERVER_NAME=DEV |
| 2 | 2 | REPLICATION_HOST=rabbit.internal.sapsailing.com |
| 3 | 3 | REPLICATION_CHANNEL=sapsailinganalytics-dev |
| 4 | -TELNET_PORT=14888 |
|
| 5 | -SERVER_PORT=8888 |
|
| 6 | -MONGODB_HOST=dbserver.internal.sapsailing.com |
|
| 7 | -MONGODB_PORT=10200 |
|
| 8 | -EXPEDITION_PORT=2010 |
|
| 9 | -REPLICATE_ON_START=False |
|
| 10 | -REPLICATE_MASTER_SERVLET_HOST= |
|
| 11 | -REPLICATE_MASTER_SERVLET_PORT= |
|
| 12 | -REPLICATE_MASTER_QUEUE_HOST= |
|
| 13 | -REPLICATE_MASTER_QUEUE_PORT= |
|
| 4 | +MONGODB_URI="mongodb://dbserver.internal.sapsailing.com:10202/dev?replicaSet=slow&retryWrites=true&readPreference=nearest" |
configuration/environments_scripts/README
| ... | ... | @@ -0,0 +1,16 @@ |
| 1 | +The environments_scripts directory contains one subdirectory per environment type, each holding scripts and files useful to that environment. Each environment type also contains a "users" folder, with one subfolder per user on the system. |
|
| 2 | +The build-crontab script uses the contents of these subfolders to create and install a customised crontab for each user, concatenating the one-line crontab fragments into a single combined crontab. |
|
| 3 | +Any time the crontab should contain a user's home directory, instead write PATH_OF_HOME_DIR_TO_REPLACE; if the crontab should |
|
| 4 | +contain the path to the git directory, instead write PATH_OF_GIT_HOME_DIR_TO_REPLACE. |
|
| 5 | +These are replaced by the build-crontab script with the correct paths. |
|
| 6 | + |
|
| 7 | +environments_scripts (directory) |
|
| 8 | +| |
|
| 9 | +|_environment_type (directory) |
|
| 10 | + | |
|
| 11 | + |_usefulScripts |
|
| 12 | + |_users (directory) |
|
| 13 | + | |
|
| 14 | + |_user1 (directory) |
|
| 15 | + | |
|
| 16 | + |_symbolicLinks (to configuration/crontabs) |
configuration/environments_scripts/build-crontab
| ... | ... | @@ -0,0 +1,56 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Purpose: The first parameter is an environment type. This script iterates over that environment type's "users" subfolders, concatenating all the symbolically |
|
| 4 | +# linked crontab fragments (referencing locations within configuration/crontabs) into one file per user. It also takes the name of the user holding a copy of the git repo, |
|
| 5 | +# as well as the path of the git dir relative to that user's home. The file created for each user goes into that user's |
|
| 6 | +# home dir and is installed as that user's crontab. Note: these crontabs contain placeholder tokens, which are replaced by the path to the git home dir and the matching |
|
| 7 | +# user's home dir. |
|
| 8 | +# Useful files are also copied across from the "files" dir within each environment type. |
|
| 9 | + |
|
| 10 | +if [[ $# -ne 3 && $# -ne 4 && $# -ne 5 ]]; then |
|
| 11 | + echo "$0 [-s] [-n] <ENVIRONMENT_TYPE> <USER_WITH_COPY_OF_REPO> <RELATIVE_PATH_OF_GIT_DIR_WITHIN_USER>" |
|
| 12 | + echo "" |
|
| 13 | + echo "Where USER_WITH_COPY_OF_REPO is a user that contains a checked out copy of the main git." |
|
| 14 | + echo "And where RELATIVE_PATH_OF_GIT_DIR_WITHIN_USER is the path to the git repo from the USER_WITH_COPY_OF_REPO's home directory." |
|
| 15 | + echo "Use the s(imple) flag to only build the crontab and not copy any files across. The n(o install) flag can be used to set up the crontabs but not install them." |
|
| 16 | + exit 2 |
|
| 17 | +fi |
|
| 18 | +INSTALL_CRONTAB="true" |
|
| 19 | +options='sn' |
|
| 20 | +while getopts $options option |
|
| 21 | +do |
|
| 22 | + case $option in |
|
| 23 | + n) INSTALL_CRONTAB="false";; |
|
| 24 | + s) ONLY_CRONTAB="true";; |
|
| 25 | + \?) echo "Invalid option" |
|
| 26 | + exit 4;; |
|
| 27 | + esac |
|
| 28 | +done |
|
| 29 | +shift $((OPTIND-1)) # shift the arguments along so -s is no longer $1 |
|
| 30 | +ENV_TYPE="$1" |
|
| 31 | +GIT_USER="$2" |
|
| 32 | +RELATIVE_GIT_DIR_NAME="$3" |
|
| 33 | +cd "$(dirname "$0")/${ENV_TYPE}" |
|
| 34 | +if [[ -d "users" ]]; then |
|
| 35 | + cd "users" |
|
| 36 | + GIT_PATH="$(eval echo $(printf "~%q" "$GIT_USER"))/${RELATIVE_GIT_DIR_NAME}" # The path to the git repo that contains the files needed. |
|
| 37 | + for dir in $(ls -d */ ); do |
|
| 38 | + USERNAME="${dir%/}" # Dirname is the username; the trailing slash is stripped. |
|
| 39 | + HOME_DIR=$(eval echo $(printf "~%q" "$USERNAME")) # The path to the home dir of the user whose cronjob will be installed. |
|
| 40 | + > $HOME_DIR/crontab |
|
| 41 | + for crontab in $(ls ${USERNAME}/crontab*); do |
|
| 42 | + cat "$crontab" >> $HOME_DIR/crontab |
|
| 43 | + echo "" >> $HOME_DIR/crontab # Adds a newline |
|
| 44 | + done |
|
| 45 | + sed -i "s|PATH_OF_GIT_HOME_DIR_TO_REPLACE|${GIT_PATH}|g" $HOME_DIR/crontab # Sets correct path to the git repo within the crontab. |
|
| 46 | + sed -i "s|PATH_OF_HOME_DIR_TO_REPLACE|${HOME_DIR}|g" $HOME_DIR/crontab # Sets the correct path to the home dir of the user whose crontab will be installed. |
|
| 47 | + if [[ "$INSTALL_CRONTAB" == "true" ]]; then |
|
| 48 | + crontab -u ${USERNAME} $HOME_DIR/crontab # Install the crontab in the given user's home dir. |
|
| 49 | + fi |
|
| 50 | + done |
|
| 51 | + cd .. # exits users folder, which is essential for the next commands |
|
| 52 | +fi |
|
| 53 | +if [[ "$ONLY_CRONTAB" != "true" && -d "files" ]]; then |
|
| 54 | + cd "files" |
|
| 55 | + \cp -rL * / # copies all files across, dereferencing any symbolic links. The backslash bypasses any cp -i alias. |
|
| 56 | +fi |
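The `eval echo $(printf "~%q" ...)` idiom that build-crontab uses to find each user's home directory can be tried standalone. A sketch, using "root" as the example user since plain `echo ~$USERNAME` would not tilde-expand a name held in a variable:

```shell
# Resolve a user's home directory when the username is in a variable.
# %q-quoting the name before eval keeps shell metacharacters in the
# username from being interpreted; "root" is just an example here.
USERNAME=root
HOME_DIR=$(eval echo "$(printf "~%q" "$USERNAME")")
echo "$HOME_DIR"   # e.g. /root on most Linux systems
```

If the username does not exist, the unexpanded string (e.g. `~nosuchuser`) is echoed back, so callers may want to check that the result starts with `/`.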
configuration/environments_scripts/build_server/files/etc/sysconfig/hudson
| ... | ... | @@ -0,0 +1,88 @@ |
| 1 | +## Path: Development/Hudson |
|
| 2 | +## Description: Configuration for the Hudson continuous build server |
|
| 3 | +## Type: string |
|
| 4 | +## Default: "/var/lib/hudson" |
|
| 5 | +## ServiceRestart: hudson |
|
| 6 | +# |
|
| 7 | +# Directory where Hudson store its configuration and working |
|
| 8 | +# files (checkouts, build reports, artifacts, ...). |
|
| 9 | +# |
|
| 10 | +HUDSON_HOME="/home/hudson/repo" |
|
| 11 | + |
|
| 12 | +## Type: string |
|
| 13 | +## Default: "" |
|
| 14 | +## ServiceRestart: hudson |
|
| 15 | +# |
|
| 16 | +# Java executable to run Hudson |
|
| 17 | +# When left empty, we'll try to find the suitable Java. |
|
| 18 | +# |
|
| 19 | + |
|
| 20 | +HUDSON_JAVA_CMD="/opt/sapjvm_8/bin/java" |
|
| 21 | + |
|
| 22 | +## Type: string |
|
| 23 | +## Default: "hudson" |
|
| 24 | +## ServiceRestart: hudson |
|
| 25 | +# |
|
| 26 | +# Unix user account that runs the Hudson daemon |
|
| 27 | +# Be careful when you change this, as you need to update |
|
| 28 | +# permissions of $HUDSON_HOME and /var/log/hudson. |
|
| 29 | +# |
|
| 30 | +HUDSON_USER="hudson" |
|
| 31 | + |
|
| 32 | +## Type: string |
|
| 33 | +## Default: "-Djava.awt.headless=true" |
|
| 34 | +## ServiceRestart: hudson |
|
| 35 | +# |
|
| 36 | +# Options to pass to java when running Hudson. |
|
| 37 | +# |
|
| 38 | +HUDSON_JAVA_OPTIONS="-Djava.awt.headless=true -Xmx2G -Dhudson.slaves.ChannelPinger.pingInterval=60 -Dhudson.slaves.ChannelPinger.pingIntervalSeconds=60 -Dhudson.slaves.ChannelPinger.pingTimeoutSeconds=60" |
|
| 39 | + |
|
| 40 | +## Type: integer(0:65535) |
|
| 41 | +## Default: 8080 |
|
| 42 | +## ServiceRestart: hudson |
|
| 43 | +# |
|
| 44 | +# Port Hudson is listening on. |
|
| 45 | +# |
|
| 46 | +HUDSON_PORT="8080" |
|
| 47 | + |
|
| 48 | +## Type: integer(1:9) |
|
| 49 | +## Default: 5 |
|
| 50 | +## ServiceRestart: hudson |
|
| 51 | +# |
|
| 52 | +# Debug level for logs -- the higher the value, the more verbose. |
|
| 53 | +# 5 is INFO. |
|
| 54 | +# |
|
| 55 | +HUDSON_DEBUG_LEVEL="5" |
|
| 56 | + |
|
| 57 | +## Type: yesno |
|
| 58 | +## Default: no |
|
| 59 | +## ServiceRestart: hudson |
|
| 60 | +# |
|
| 61 | +# Whether to enable access logging or not. |
|
| 62 | +# |
|
| 63 | +HUDSON_ENABLE_ACCESS_LOG="no" |
|
| 64 | + |
|
| 65 | +## Type: integer |
|
| 66 | +## Default: 100 |
|
| 67 | +## ServiceRestart: hudson |
|
| 68 | +# |
|
| 69 | +# Maximum number of HTTP worker threads. |
|
| 70 | +# |
|
| 71 | +HUDSON_HANDLER_MAX="100" |
|
| 72 | + |
|
| 73 | +## Type: integer |
|
| 74 | +## Default: 20 |
|
| 75 | +## ServiceRestart: hudson |
|
| 76 | +# |
|
| 77 | +# Maximum number of idle HTTP worker threads. |
|
| 78 | +# |
|
| 79 | +HUDSON_HANDLER_IDLE="20" |
|
| 80 | + |
|
| 81 | +## Type: string |
|
| 82 | +## Default: "" |
|
| 83 | +## ServiceRestart: hudson |
|
| 84 | +# |
|
| 85 | +# Pass arbitrary arguments to Hudson. |
|
| 86 | +# Full option list: java -jar hudson.war --help |
|
| 87 | +# |
|
| 88 | +HUDSON_ARGS="" |
configuration/environments_scripts/build_server/files/etc/systemd/system/hudson.service
| ... | ... | @@ -0,0 +1,13 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=The Hudson start-up / shut-down service |
|
| 3 | +Requires=-.mount mongod.service |
|
| 4 | +After=-.mount mongod.service |
|
| 5 | + |
|
| 6 | +[Install] |
|
| 7 | +RequiredBy=multi-user.target |
|
| 8 | + |
|
| 9 | +[Service] |
|
| 10 | +Type=oneshot |
|
| 11 | +RemainAfterExit=true |
|
| 12 | +ExecStart=/etc/init.d/hudson start |
|
| 13 | +ExecStop=/etc/init.d/hudson stop |
configuration/environments_scripts/build_server/files/etc/systemd/system/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/etc/systemd/system/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/build_server/files/usr/local/bin/getLatestImageOfType.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/getLatestImageOfType.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/build_server/files/usr/local/bin/hudson
| ... | ... | @@ -0,0 +1,129 @@ |
| 1 | +#!/bin/sh |
|
| 2 | +# Check for missing binaries (stale symlinks should not happen) |
|
| 3 | +HUDSON_WAR="/usr/lib/hudson/hudson.war" |
|
| 4 | +test -r "$HUDSON_WAR" || { echo "$HUDSON_WAR not installed"; |
|
| 5 | + if [ "$1" = "stop" ]; then exit 0; |
|
| 6 | + else exit 5; fi; } |
|
| 7 | + |
|
| 8 | +# Check for existence of needed config file and read it |
|
| 9 | +HUDSON_CONFIG=/etc/sysconfig/hudson |
|
| 10 | +test -e "$HUDSON_CONFIG" || { echo "$HUDSON_CONFIG not existing"; |
|
| 11 | + if [ "$1" = "stop" ]; then exit 0; |
|
| 12 | + else exit 6; fi; } |
|
| 13 | +test -r "$HUDSON_CONFIG" || { echo "$HUDSON_CONFIG not readable. Perhaps you forgot 'sudo'?"; |
|
| 14 | + if [ "$1" = "stop" ]; then exit 0; |
|
| 15 | + else exit 6; fi; } |
|
| 16 | + |
|
| 17 | +HUDSON_PID_FILE="/var/run/hudson.pid" |
|
| 18 | +HUDSON_USER="hudson" |
|
| 19 | +HUDSON_GROUP="hudson" |
|
| 20 | + |
|
| 21 | +# Source function library. |
|
| 22 | +. /etc/init.d/functions |
|
| 23 | + |
|
| 24 | +# Read config |
|
| 25 | +[ -f "$HUDSON_CONFIG" ] && . "$HUDSON_CONFIG" |
|
| 26 | + |
|
| 27 | +# Set up environment accordingly to the configuration settings |
|
| 28 | +[ -n "$HUDSON_HOME" ] || { echo "HUDSON_HOME not configured in $HUDSON_CONFIG"; |
|
| 29 | + if [ "$1" = "stop" ]; then exit 0; |
|
| 30 | + else exit 6; fi; } |
|
| 31 | +[ -d "$HUDSON_HOME" ] || { echo "HUDSON_HOME directory does not exist: $HUDSON_HOME"; |
|
| 32 | + if [ "$1" = "stop" ]; then exit 0; |
|
| 33 | + else exit 1; fi; } |
|
| 34 | +export HUDSON_HOME |
|
| 35 | + |
|
| 36 | +# Search usable Java. We do this because various reports indicated |
|
| 37 | +# that /usr/bin/java may not always point to Java 1.5 |
|
| 38 | +# see http://www.nabble.com/guinea-pigs-wanted-----Hudson-RPM-for-RedHat-Linux-td25673707.html |
|
| 39 | +for candidate in /usr/lib/jvm/java-1.6.0/bin/java /usr/lib/jvm/jre-1.6.0/bin/java /usr/lib/jvm/java-1.5.0/bin/java /usr/lib/jvm/jre-1.5.0/bin/java /usr/bin/java |
|
| 40 | +do |
|
| 41 | + [ -x "$HUDSON_JAVA_CMD" ] && break |
|
| 42 | + HUDSON_JAVA_CMD="$candidate" |
|
| 43 | +done |
|
| 44 | + |
|
| 45 | +JAVA_CMD="$HUDSON_JAVA_CMD $HUDSON_JAVA_OPTIONS -DHUDSON_HOME=$HUDSON_HOME -jar $HUDSON_WAR" |
|
| 46 | +PARAMS="--logfile=/var/log/hudson/hudson.log --daemon" |
|
| 47 | +[ -n "$HUDSON_PORT" ] && PARAMS="$PARAMS --httpPort=$HUDSON_PORT" |
|
| 48 | +[ -n "$HUDSON_DEBUG_LEVEL" ] && PARAMS="$PARAMS --debug=$HUDSON_DEBUG_LEVEL" |
|
| 49 | +[ -n "$HUDSON_HANDLER_STARTUP" ] && PARAMS="$PARAMS --handlerCountStartup=$HUDSON_HANDLER_STARTUP" |
|
| 50 | +[ -n "$HUDSON_HANDLER_MAX" ] && PARAMS="$PARAMS --handlerCountMax=$HUDSON_HANDLER_MAX" |
|
| 51 | +[ -n "$HUDSON_HANDLER_IDLE" ] && PARAMS="$PARAMS --handlerCountMaxIdle=$HUDSON_HANDLER_IDLE" |
|
| 52 | +[ -n "$HUDSON_ARGS" ] && PARAMS="$PARAMS $HUDSON_ARGS" |
|
| 53 | + |
|
| 54 | +if [ "$HUDSON_ENABLE_ACCESS_LOG" = "yes" ]; then |
|
| 55 | + PARAMS="$PARAMS --accessLoggerClassName=winstone.accesslog.SimpleAccessLogger --simpleAccessLogger.format=combined --simpleAccessLogger.file=/var/log/hudson/access_log" |
|
| 56 | +fi |
|
| 57 | + |
|
| 58 | +RETVAL=0 |
|
| 59 | + |
|
| 60 | +case "$1" in |
|
| 61 | + start) |
|
| 62 | + echo -n "Starting Hudson " |
|
| 63 | + daemon --user "$HUDSON_USER" --pidfile "$HUDSON_PID_FILE" "$JAVA_CMD" "$PARAMS" &> /var/tmp/hudson.log & |
|
| 64 | + RETVAL=$? |
|
| 65 | + if [ $RETVAL = 0 ]; then |
|
| 66 | + success |
|
| 67 | + echo > "$HUDSON_PID_FILE" # just in case we fail to find it |
|
| 68 | + MY_SESSION_ID=`/bin/ps h -o sess -p $$` |
|
| 69 | + # get PID |
|
| 70 | + /bin/ps hww -u "$HUDSON_USER" -o sess,ppid,pid,cmd | \ |
|
| 71 | + while read sess ppid pid cmd; do |
|
| 72 | + [ "$ppid" = 1 ] || continue |
|
| 73 | + # this test doesn't work because Hudson sets a new Session ID |
|
| 74 | + # [ "$sess" = "$MY_SESSION_ID" ] || continue |
|
| 75 | + echo "$cmd" | grep $HUDSON_WAR > /dev/null |
|
| 76 | + [ $? = 0 ] || continue |
|
| 77 | + # found a PID |
|
| 78 | + echo $pid > "$HUDSON_PID_FILE" |
|
| 79 | + done |
|
| 80 | + else |
|
| 81 | + failure |
|
| 82 | + fi |
|
| 83 | + echo |
|
| 84 | + ;; |
|
| 85 | + stop) |
|
| 86 | + echo -n "Shutting down Hudson " |
|
| 87 | + killproc hudson |
|
| 88 | + RETVAL=$? |
|
| 89 | + echo |
|
| 90 | + ;; |
|
| 91 | + try-restart|condrestart) |
|
| 92 | + if test "$1" = "condrestart"; then |
|
| 93 | + echo "${attn} Use try-restart ${done}(LSB)${attn} rather than condrestart ${warn}(RH)${norm}" |
|
| 94 | + fi |
|
| 95 | + $0 status |
|
| 96 | + if test $? = 0; then |
|
| 97 | + $0 restart |
|
| 98 | + else |
|
| 99 | + : # Not running is not a failure. |
|
| 100 | + fi |
|
| 101 | + ;; |
|
| 102 | + restart) |
|
| 103 | + $0 stop |
|
| 104 | + $0 start |
|
| 105 | + ;; |
|
| 106 | + force-reload) |
|
| 107 | + echo -n "Reload service Hudson " |
|
| 108 | + $0 try-restart |
|
| 109 | + ;; |
|
| 110 | + reload) |
|
| 111 | + $0 restart |
|
| 112 | + ;; |
|
| 113 | + status) |
|
| 114 | + status hudson |
|
| 115 | + RETVAL=$? |
|
| 116 | + ;; |
|
| 117 | + probe) |
|
| 118 | + ## Optional: Probe for the necessity of a reload, print out the |
|
| 119 | + ## argument to this init script which is required for a reload. |
|
| 120 | + ## Note: probe is not (yet) part of LSB (as of 1.9) |
|
| 121 | + |
|
| 122 | + test "$HUDSON_CONFIG" -nt "$HUDSON_PID_FILE" && echo reload |
|
| 123 | + ;; |
|
| 124 | + *) |
|
| 125 | + echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload|probe}" |
|
| 126 | + exit 1 |
|
| 127 | + ;; |
|
| 128 | +esac |
|
| 129 | +exit $RETVAL |
configuration/environments_scripts/build_server/files/usr/local/bin/launchhudsonslave
| ... | ... | @@ -0,0 +1,31 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# Enable sudo for user hudson for this script by adding the following to /etc/sudoers.d/hudsoncanlaunchec2instances: |
|
| 3 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave |
|
| 4 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave-java11 |
|
| 5 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/getLatestImageOfType.sh |
|
| 6 | +AWS=/bin/aws |
|
| 7 | +REGION=eu-west-1 |
|
| 8 | +HUDSON_SLAVE_AMI_ID=$( /usr/local/bin/getLatestImageOfType.sh hudson-slave ) |
|
| 9 | +echo Launching instance from AMI ${HUDSON_SLAVE_AMI_ID} ... |
|
| 10 | +instanceid=`$AWS ec2 run-instances --image-id $HUDSON_SLAVE_AMI_ID --count 1 --instance-type c5d.4xlarge --key-name Axel --security-groups "Sailing Analytics App" --region $REGION --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Hudson Build Instance}]' --instance-initiated-shutdown-behavior terminate | tee /tmp/slavelaunch.out | jq .Instances[0].InstanceId | sed -e 's/"//g'` |
|
| 11 | +if [ "$instanceid" = "" ]; then |
|
| 12 | + echo Error launching instance |
|
| 13 | + exit 1 |
|
| 14 | +else |
|
| 15 | + echo Instance ID is $instanceid |
|
| 16 | + while [ "`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].State.Name`" != "\"running\"" ]; do |
|
| 17 | + echo Instance $instanceid not running yet\; trying again... |
|
| 18 | + sleep 5 |
|
| 19 | + done |
|
| 20 | + echo Instance $instanceid seems running now |
|
| 21 | + private_ip=`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].PrivateIpAddress | sed -e 's/"//g'` |
|
| 22 | + echo Probing for SSH on private IP $private_ip |
|
| 23 | + # Note: it's important to redirect stdin/stdout from/to /dev/null to ensure the Hudson master can properly connect stdin/stdout to the slave later |
|
| 24 | + while ! su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip mkdir -p /home/hudson/workspace/___test___\; rmdir /home/hudson/workspace/___test___ </dev/null >/dev/null 2>/dev/null"; do |
|
| 25 | + echo SSH daemon not reachable yet. Trying again in a few seconds... |
|
| 26 | + sleep 10 |
|
| 27 | + done |
|
| 28 | + echo SSH daemon reached. State should be ready to connect to now. |
|
| 29 | + su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip \"/opt/sapjvm_8/bin/java -jar slave.jar; sudo /sbin/shutdown -h now\"" |
|
| 30 | + $AWS ec2 terminate-instances --instance-ids $instanceid |
|
| 31 | +fi |
configuration/environments_scripts/build_server/files/usr/local/bin/launchhudsonslave-java11
| ... | ... | @@ -0,0 +1,31 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# Enable sudo for user hudson for this script by adding the following to /etc/sudoers.d/hudsoncanlaunchec2instances: |
|
| 3 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave |
|
| 4 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave-java11 |
|
| 5 | +# hudson ALL = (root) NOPASSWD: /usr/local/bin/getLatestImageOfType.sh |
|
| 6 | +AWS=/usr/bin/aws |
|
| 7 | +REGION=eu-west-1 |
|
| 8 | +HUDSON_SLAVE_AMI_ID=$( /usr/local/bin/getLatestImageOfType.sh hudson-slave-11 ) |
|
| 9 | +echo Launching instance from AMI ${HUDSON_SLAVE_AMI_ID} ... |
|
| 10 | +instanceid=`$AWS ec2 run-instances --image-id $HUDSON_SLAVE_AMI_ID --count 1 --instance-type c5d.4xlarge --key-name Axel --security-groups "Sailing Analytics App" --region $REGION --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Hudson Ubuntu Slave Java 11}]' --instance-initiated-shutdown-behavior terminate | tee /tmp/slavelaunch.out | jq .Instances[0].InstanceId | sed -e 's/"//g'` |
|
| 11 | +if [ "$instanceid" = "" ]; then |
|
| 12 | + echo Error launching instance |
|
| 13 | + exit 1 |
|
| 14 | +else |
|
| 15 | + echo Instance ID is $instanceid |
|
| 16 | + while [ "`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].State.Name`" != "\"running\"" ]; do |
|
| 17 | + echo Instance $instanceid not running yet\; trying again... |
|
| 18 | + sleep 5 |
|
| 19 | + done |
|
| 20 | + echo Instance $instanceid seems running now |
|
| 21 | + private_ip=`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].PrivateIpAddress | sed -e 's/"//g'` |
|
| 22 | + echo Probing for SSH on private IP $private_ip |
|
| 23 | + # Note: it's important to redirect stdin/stdout from/to /dev/null to ensure the Hudson master can properly connect stdin/stdout to the slave later |
|
| 24 | + while ! su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip mkdir -p /home/hudson/workspace/___test___\; rmdir /home/hudson/workspace/___test___ </dev/null >/dev/null 2>/dev/null"; do |
|
| 25 | + echo SSH daemon not reachable yet. Trying again in a few seconds... |
|
| 26 | + sleep 10 |
|
| 27 | + done |
|
| 28 | + echo SSH daemon reached. State should be ready to connect to now. |
|
| 29 | + su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip \"/opt/sapjvm_8/bin/java -jar slave.jar; sudo /sbin/shutdown -h now\"" |
|
| 30 | + $AWS ec2 terminate-instances --instance-ids $instanceid |
|
| 31 | +fi |
configuration/environments_scripts/build_server/files/usr/local/bin/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/build_server/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/build_server/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/build_server/users/root/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/etc/systemd/system/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/etc/systemd/system/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/home/trac/bin/notifyAbout49erEuros2023Updates
| ... | ... | @@ -0,0 +1,62 @@ |
| 1 | +#!/usr/bin/env bash |
|
| 2 | + |
|
| 3 | +# set up some constants |
|
| 4 | +## Manage2Sail |
|
| 5 | +M2SURL="http://manage2sail.com" |
|
| 6 | +M2SEVENTID="fa9f5552-68a4-4712-b84b-ece0f36af8e8" |
|
| 7 | +M2SACCESSTOKEN="bDAv8CwsTM94ujZ" |
|
| 8 | +M2SSTRING=${M2SURL}"/api/public/links/event/"${M2SEVENTID}"?accesstoken="${M2SACCESSTOKEN}"&mediaType=json" |
|
| 9 | +## Local JSON |
|
| 10 | +LOCALJSON="/tmp/euros49er2023results.json" |
|
| 11 | +LOCALJSON_OLD=$LOCALJSON.old |
|
| 12 | +## Mails |
|
| 13 | +MAILINGLIST="/home/trac/mailinglists/euros49er2023" |
|
| 14 | +## Eventname |
|
| 15 | +EVENTNAME="49er Euros 2023" |
|
| 16 | +## Misc |
|
| 17 | +DOSENDMAIL=false |
|
| 18 | + |
|
| 19 | +### ROUTINE ### |
|
| 20 | +# get current json from manage2sail |
|
| 21 | +wget -O "$LOCALJSON" "$M2SSTRING" >/dev/null 2>&1 |
|
| 22 | +if [ -f "$LOCALJSON_OLD" ]; then |
|
| 23 | + # results previously downloaded; now compare |
|
| 24 | + # sort by manage2sail Regattas.Id |
|
| 25 | + echo Comparing $LOCALJSON and $LOCALJSON_OLD |
|
| 26 | + LOCALJSON_SORTED="${LOCALJSON}.sorted" |
|
| 27 | + LOCALJSON_OLD_SORTED="${LOCALJSON_OLD}.sorted" |
|
| 28 | +  jq -r '.Regattas|sort_by(.Name)' "$LOCALJSON" >"$LOCALJSON_SORTED" |
|
| 29 | +  jq -r '.Regattas|sort_by(.Name)' "$LOCALJSON_OLD" >"$LOCALJSON_OLD_SORTED" |
|
| 30 | + # filter only necessary values |
|
| 31 | + LOCALJSON_FORMATTED="${LOCALJSON}.formatted" |
|
| 32 | + LOCALJSON_OLD_FORMATTED="${LOCALJSON_OLD}.formatted" |
|
| 33 | +  jq -r '.[]|{Final, LastPublishedRoundName, Published, Name, ClassName, Id}|select(.Published != null)' "$LOCALJSON_SORTED" >"$LOCALJSON_FORMATTED" |
|
| 34 | +  jq -r '.[]|{Final, LastPublishedRoundName, Published, Name, ClassName, Id}|select(.Published != null)' "$LOCALJSON_OLD_SORTED" >"$LOCALJSON_OLD_FORMATTED" |
|
| 35 | + # diff sorted+formatted json |
|
| 36 | + diff --brief <(sort ${LOCALJSON_OLD_FORMATTED}) <(sort ${LOCALJSON_FORMATTED}) >/dev/null 2>&1 |
|
| 37 | + comp_value=$? |
|
| 38 | + if [ $comp_value -eq 1 ]; then |
|
| 39 | + echo "Found diff between $LOCALJSON_OLD_FORMATTED and $LOCALJSON_FORMATTED |
|
| 40 | + |
|
| 41 | +`diff -q "$LOCALJSON_OLD_FORMATTED" "$LOCALJSON_FORMATTED"` |
|
| 42 | + |
|
| 43 | +Sending mail." |
|
| 44 | + DOSENDMAIL=true |
|
| 45 | + fi |
|
| 46 | +fi |
|
| 47 | +if [ "$DOSENDMAIL" = "true" ]; then |
|
| 48 | + echo "`date` Sending mail to `cat "$MAILINGLIST"`" |
|
| 49 | + echo "$LOCALJSON has changed. Consider re-importing results for $EVENTNAME. Differences: |
|
| 50 | + |
|
| 51 | +------------------ |
|
| 52 | +`diff -U 6 "$LOCALJSON_OLD_FORMATTED" "$LOCALJSON_FORMATTED"` |
|
| 53 | +------------------ |
|
| 54 | + |
|
| 55 | +To unsubscribe, change $MAILINGLIST on `hostname`." | mail -s "SAP result import update: $EVENTNAME" `cat "$MAILINGLIST" | grep -v "^#"` |
|
| 56 | +fi |
|
| 57 | + |
|
| 58 | +# move to old |
|
| 59 | +mv "$LOCALJSON" "$LOCALJSON_OLD" |
|
| 60 | +# clean up temporary files |
|
| 61 | +#rm -rf "$LOCALJSON_SORTED" "$LOCALJSON_OLD_SORTED" "$LOCALJSON_FORMATTED" "$LOCALJSON_OLD_FORMATTED" |
|
| 62 | + |
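The change-detection technique in the script above (jq-sort the payload, project the fields of interest, then diff the projections) can be sketched in isolation. The sample JSON below is invented for illustration; the real script runs the same pipeline over the downloaded manage2sail files:

```shell
#!/usr/bin/env bash
# Sketch of the sort/project/diff change detection; requires jq.
OLD=$(mktemp); NEW=$(mktemp)
echo '{"Regattas":[{"Name":"B","Published":"2023-05-01","Id":2},{"Name":"A","Published":"2023-05-02","Id":1}]}' >"$OLD"
echo '{"Regattas":[{"Name":"A","Published":"2023-05-03","Id":1},{"Name":"B","Published":"2023-05-01","Id":2}]}' >"$NEW"
# Sort regattas by name and keep only published entries with the fields of interest
project() { jq -r '.Regattas|sort_by(.Name)|.[]|{Published, Name, Id}|select(.Published != null)' "$1"; }
if diff --brief <(project "$OLD") <(project "$NEW") >/dev/null; then
  CHANGED=false
else
  CHANGED=true   # here: the Published timestamp of regatta A differs
fi
echo "Changed: $CHANGED"
rm -f "$OLD" "$NEW"
```

Because both sides are sorted the same way before diffing, mere reordering in the feed does not trigger a notification; only a change in one of the projected fields does.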
configuration/environments_scripts/central_reverse_proxy/files/home/trac/mailinglists/euros49er2023
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +axel.uhl@sap.com |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/notify-operators
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/notify-operators |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/switchoverArchive.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/switchoverArchive.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/sync-repo-and-execute-cmd.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/sync-repo-and-execute-cmd.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/syncgit |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/files/usr/local/bin/update_landscape_managers_mailing_list.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_landscape_managers_mailing_list.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/root/crontab-docker-registry-gc
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-docker-registry-gc |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/root/crontab-mail-events-on-my
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-mail-events-on-my |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/root/crontab-switchoverArchive
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-switchoverArchive |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/root/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/root/crontab-update-landscape-managers-mailing-list
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-landscape-managers-mailing-list |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/trac/crontab-manage2sail-example
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-manage2sail-example |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/trac/crontab-mongo-health-check
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-mongo-health-check |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/wiki/crontab-download-new-archived-trac-trac-events
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-download-new-archived-trac-trac-events |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/wiki/crontab-syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-syncgit |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/central_reverse_proxy/users/wiki/crontab-update-trac-trac-urls
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-trac-trac-urls |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/hudson_slave/README
| ... | ... | @@ -0,0 +1,5 @@ |
| 1 | +Deploy the .mount and .service units to /etc/systemd/system. |
|
| 2 | +Deploy the imageupgrade script to /usr/local/bin; |
|
| 3 | +furthermore, ../imageupgrade_functions.sh must also go to /usr/local/bin, ideally as a symbolic link to the file in git. |
|
| 4 | + |
|
| 5 | +See the script in wiki/howto/development/ci.md that shows how to set up such an instance from scratch. |
configuration/environments_scripts/hudson_slave/files/etc/systemd/system/imageupgrade.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/etc/systemd/system/imageupgrade.service |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/hudson_slave/files/etc/systemd/system/mounthudsonworkspace.service
| ... | ... | @@ -0,0 +1,12 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=Mount Hudson Workspace |
|
| 3 | +After=ephemeral-data.mount |
|
| 4 | + |
|
| 5 | +[Service] |
|
| 6 | +Type=oneshot |
|
| 7 | +ExecStart=/usr/local/bin/mounthudsonworkspace |
|
| 8 | +RemainAfterExit=true |
|
| 9 | +StandardOutput=journal |
|
| 10 | + |
|
| 11 | +[Install] |
|
| 12 | +WantedBy=multi-user.target |
configuration/environments_scripts/hudson_slave/files/usr/local/bin/imageupgrade
| ... | ... | @@ -0,0 +1,21 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Script to deploy on a Hudson build slave instance. If the EC2 user data contains |
|
| 4 | +# the line "image-upgrade", this runs package updates, installs the latest SAP JVM 8, |
|
| 5 | +# pulls git, and finalizes (functions sourced from imageupgrade_functions.sh). |
|
| 6 | +. imageupgrade_functions.sh |
|
| 7 | + |
|
| 8 | +get_ec2_user_data() { |
|
| 9 | +  /usr/bin/ec2metadata --user-data |
|
| 10 | +} |
|
| 11 | + |
|
| 12 | +METADATA=$( get_ec2_user_data ) |
|
| 13 | +echo "Metadata: ${METADATA}" |
|
| 14 | +if echo "${METADATA}" | grep -q "^image-upgrade$"; then |
|
| 15 | + echo "Image upgrade..." |
|
| 16 | + LOGON_USER_HOME=/home/ubuntu |
|
| 17 | + run_apt_update_upgrade |
|
| 18 | + download_and_install_latest_sap_jvm_8 |
|
| 19 | + run_git_pull |
|
| 20 | + finalize |
|
| 21 | +fi |
configuration/environments_scripts/hudson_slave/files/usr/local/bin/mounthudsonworkspace
| ... | ... | @@ -0,0 +1,12 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# If /dev/nvme1n1 exists, formats it with ext4 and mounts it at /home/hudson/workspace; |
|
| 3 | +# otherwise bind-mounts /ephemeral/data (expected to be mounted already) to /home/hudson/workspace. |
|
| 4 | +# The directory is then chown/chgrp'ed to hudson. |
|
| 5 | +if [ -e /dev/nvme1n1 ]; then |
|
| 6 | + mkfs.ext4 /dev/nvme1n1 |
|
| 7 | + mount /dev/nvme1n1 /home/hudson/workspace |
|
| 8 | +else |
|
| 9 | + mount -o bind /ephemeral/data /home/hudson/workspace |
|
| 10 | +fi |
|
| 11 | +chgrp hudson /home/hudson/workspace |
|
| 12 | +chown hudson /home/hudson/workspace |
configuration/environments_scripts/mongo_instance_setup/README
| ... | ... | @@ -0,0 +1,16 @@ |
| 1 | +Deploy the .mount and .service units to /etc/systemd/system. |
|
| 2 | +Deploy the ephemeralvolume and patch-mongo-replicaset-name-from-ec2-metadata scripts to /usr/local/bin; |
|
| 3 | +../imageupgrade_functions.sh must also go to /usr/local/bin. |
|
| 4 | +Deploy mongod.conf to /etc and make sure /root has a+r and a+x permissions, because |
|
| 5 | +otherwise the mongod user won't be able to read through the symbolic link. |
|
| 6 | +Link the mongodb logrotate configuration to /etc/logrotate.d. |
|
| 7 | +Link crontab-mongo, in configuration/crontabs/environments, to /root/crontab and run "crontab crontab" as root. |
|
| 8 | + |
|
| 9 | +Run with optional EC2 user data, e.g., as follows: |
|
| 10 | + |
|
| 11 | + REPLICA_SET_NAME=archive |
|
| 12 | + REPLICA_SET_PRIMARY=dbserver.internal.sapsailing.com:10201 |
|
| 13 | + |
|
| 14 | +This will automatically patch /etc/mongod.conf such that the replSetName property |
|
| 15 | +is set to the value of REPLICA_SET_NAME. Then, the instance will be added to |
|
| 16 | +the REPLICA_SET_PRIMARY's replica set. |
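The patching described above boils down to evaluating the user-data lines as shell assignments and rewriting the replSetName entry with sed. A minimal sketch against a scratch copy of the config (values hard-coded here; the real scripts obtain them via ec2-metadata):

```shell
#!/usr/bin/env bash
# Sketch: evaluate user-data-style assignments, then patch replSetName.
CONF=$(mktemp)
printf 'replication:\n  replSetName: "live"\n' >"$CONF"
USER_DATA='REPLICA_SET_NAME=archive
REPLICA_SET_PRIMARY=dbserver.internal.sapsailing.com:10201'
eval "$USER_DATA"
if [ -n "$REPLICA_SET_NAME" ]; then
  # Same substitution as patch-mongo-replicaset-name-from-ec2-metadata
  sed -i -e "s/replSetName: .*$/replSetName: $REPLICA_SET_NAME/" "$CONF"
fi
PATCHED=$(grep replSetName "$CONF")
echo "$PATCHED"
rm -f "$CONF"
```

Note that the sed pattern also swallows the surrounding quotes, so the patched line reads `replSetName: archive` without quotes, which mongod accepts.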
configuration/environments_scripts/mongo_instance_setup/files/etc/logrotate.d/mongodb
| ... | ... | @@ -0,0 +1,9 @@ |
| 1 | +compress |
|
| 2 | +/var/log/mongodb/mongod.log |
|
| 3 | +{ |
|
| 4 | + rotate 5 |
|
| 5 | + weekly |
|
| 6 | + postrotate |
|
| 7 | + /usr/bin/killall -SIGUSR1 mongod |
|
| 8 | + endscript |
|
| 9 | +} |
configuration/environments_scripts/mongo_instance_setup/files/etc/mongod.conf
| ... | ... | @@ -0,0 +1,48 @@ |
| 1 | +# mongod.conf |
|
| 2 | + |
|
| 3 | +# for documentation of all options, see: |
|
| 4 | +# http://docs.mongodb.org/manual/reference/configuration-options/ |
|
| 5 | + |
|
| 6 | +# where to write logging data. |
|
| 7 | +systemLog: |
|
| 8 | + destination: file |
|
| 9 | + logAppend: true |
|
| 10 | + path: /var/log/mongodb/mongod.log |
|
| 11 | + |
|
| 12 | +# Where and how to store data. |
|
| 13 | +storage: |
|
| 14 | + dbPath: /var/lib/mongo |
|
| 15 | + journal: |
|
| 16 | + enabled: true |
|
| 17 | + directoryPerDB: true |
|
| 18 | +# engine: |
|
| 19 | +# mmapv1: |
|
| 20 | +# wiredTiger: |
|
| 21 | + |
|
| 22 | +# how the process runs |
|
| 23 | +processManagement: |
|
| 24 | + fork: true # fork and run in background |
|
| 25 | + pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile |
|
| 26 | + timeZoneInfo: /usr/share/zoneinfo |
|
| 27 | + |
|
| 28 | +# network interfaces |
|
| 29 | +net: |
|
| 30 | + port: 27017 |
|
| 31 | +# bindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces. |
|
| 32 | +# bindIp: 172.31.33.146 |
|
| 33 | + bindIp: 0.0.0.0 |
|
| 34 | + |
|
| 35 | +#security: |
|
| 36 | + |
|
| 37 | +#operationProfiling: |
|
| 38 | + |
|
| 39 | +replication: |
|
| 40 | + replSetName: "live" |
|
| 41 | + |
|
| 42 | +#sharding: |
|
| 43 | + |
|
| 44 | +## Enterprise-Only Options |
|
| 45 | + |
|
| 46 | +#auditLog: |
|
| 47 | + |
|
| 48 | +#snmp: |
configuration/environments_scripts/mongo_instance_setup/files/etc/systemd/system/chownvarlibmongo.service
| ... | ... | @@ -0,0 +1,14 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=Ensures all files under /var/lib/mongo are owned by mongod user/group |
|
| 3 | +Requires=ephemeralvolume.service |
|
| 4 | +After=ephemeralvolume.service |
|
| 5 | +Before=mongod.service |
|
| 6 | + |
|
| 7 | +[Install] |
|
| 8 | +RequiredBy=mongod.service |
|
| 9 | + |
|
| 10 | +[Service] |
|
| 11 | +Type=oneshot |
|
| 12 | +RemainAfterExit=true |
|
| 13 | +ExecStart=/bin/chown -R mongod /var/lib/mongo/ |
|
| 14 | +ExecStart=/bin/chgrp -R mongod /var/lib/mongo/ |
configuration/environments_scripts/mongo_instance_setup/files/etc/systemd/system/ephemeralvolume.service
| ... | ... | @@ -0,0 +1,11 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=Ensures /dev/nvme0n1 or /dev/xvdb is XFS-formatted |
|
| 3 | +Requires=-.mount cloud-init.service network.service |
|
| 4 | +After=-.mount cloud-init.service network.service |
|
| 5 | + |
|
| 6 | +[Install] |
|
| 7 | + |
|
| 8 | +[Service] |
|
| 9 | +Type=oneshot |
|
| 10 | +RemainAfterExit=true |
|
| 11 | +ExecStart=/usr/local/bin/ephemeralvolume |
configuration/environments_scripts/mongo_instance_setup/files/etc/systemd/system/mongo-replica-set.service
| ... | ... | @@ -0,0 +1,16 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=If REPLICA_SET_NAME EC2 user data is provided, add this node to the replica set of REPLICA_SET_PRIMARY |
|
| 3 | +Requires=mongod.service |
|
| 4 | +After=mongod.service |
|
| 5 | +Requires=cloud-init.service |
|
| 6 | +After=cloud-init.service |
|
| 7 | + |
|
| 8 | +[Install] |
|
| 9 | +WantedBy=multi-user.target |
|
| 10 | + |
|
| 11 | +[Service] |
|
| 12 | +Type=oneshot |
|
| 13 | +RemainAfterExit=true |
|
| 14 | +ExecStart=/usr/local/bin/add-as-replica |
|
| 15 | +ExecStop=/usr/local/bin/remove-as-replica |
|
| 16 | +TimeoutStopSec=120s |
configuration/environments_scripts/mongo_instance_setup/files/etc/systemd/system/patch-mongo-replicaset-name-from-ec2-metadata.service
| ... | ... | @@ -0,0 +1,15 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=Check EC2 metadata for MongoDB Replica Set Name and patch /etc/mongod.conf accordingly |
|
| 3 | +Requires=ephemeralvolume.service |
|
| 4 | +After=ephemeralvolume.service |
|
| 5 | +Requires=cloud-init.service |
|
| 6 | +After=cloud-init.service |
|
| 7 | +Before=mongod.service |
|
| 8 | + |
|
| 9 | +[Install] |
|
| 10 | +RequiredBy=mongod.service |
|
| 11 | + |
|
| 12 | +[Service] |
|
| 13 | +Type=oneshot |
|
| 14 | +RemainAfterExit=true |
|
| 15 | +ExecStart=/usr/local/bin/patch-mongo-replicaset-name-from-ec2-metadata |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/add-as-replica
| ... | ... | @@ -0,0 +1,25 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +user_data=$( ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 3 | +if echo "${user_data}" | grep -q "^image-upgrade$"; then |
|
| 4 | + echo "Image upgrade... didn't expect to get this far because ephemeralvolume should have triggered upgrade and shutdown. Not registering MongoDB replica" |
|
| 5 | +else |
|
| 6 | + eval ${user_data} |
|
| 7 | + if [ -z "$REPLICA_SET_NAME" ]; then |
|
| 8 | + REPLICA_SET_NAME=live |
|
| 9 | + fi |
|
| 10 | + if [ -z "$REPLICA_SET_PRIMARY" ]; then |
|
| 11 | + REPLICA_SET_PRIMARY=mongo0.internal.sapsailing.com:27017 |
|
| 12 | + fi |
|
| 13 | + if [ -z "$REPLICA_SET_PRIORITY" ]; then |
|
| 14 | + REPLICA_SET_PRIORITY=1 |
|
| 15 | + fi |
|
| 16 | + if [ -z "$REPLICA_SET_VOTES" ]; then |
|
| 17 | + REPLICA_SET_VOTES=0 |
|
| 18 | + fi |
|
| 19 | +  if [ \! -z "$REPLICA_SET_PRIMARY" ]; then |
|
| 20 | + IP=$(ec2-metadata -o | sed -e 's/^local-ipv4: //') |
|
| 21 | + echo "rs.add({host: \"$IP:27017\", priority: $REPLICA_SET_PRIORITY, votes: $REPLICA_SET_VOTES})" | mongo "mongodb://$REPLICA_SET_PRIMARY/?replicaSet=$REPLICA_SET_NAME&retryWrites=true" |
|
| 22 | + else |
|
| 23 | + echo "rs.initiate()" | mongo |
|
| 24 | + fi |
|
| 25 | +fi |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/ephemeralvolume
| ... | ... | @@ -0,0 +1,33 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Script to deploy on an instance that has an ephemeral volume as /dev/nvme0n1 (adjust env var PARTITION if different) |
|
| 4 | +# Ensures the partition is xfs-formatted; any existing partition contents will be overwritten if formatted otherwise. |
|
| 5 | +# An existing xfs partition will be left alone. |
|
| 6 | + |
|
| 7 | +METADATA=$( /bin/ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 8 | +echo "Metadata: ${METADATA}" |
|
| 9 | +if echo "${METADATA}" | grep -q "^image-upgrade$"; then |
|
| 10 | + echo "Image upgrade; not trying to mount/format ephemeral volume; calling imageupgrade.sh instead..." |
|
| 11 | + imageupgrade.sh |
|
| 12 | +else |
|
| 13 | + echo "No image upgrade; looking for ephemeral volume and trying to format with xfs..." |
|
| 14 | + PARTITION=/dev/nvme0n1 |
|
| 15 | + if [ \! -e $PARTITION ]; then |
|
| 16 | + PARTITION=/dev/xvdb |
|
| 17 | + fi |
|
| 18 | + if [ \! -e $PARTITION ]; then |
|
| 19 | + echo "Neither /dev/nvme0n1 nor /dev/xvdb partition found; not formatting/mounting ephemeral volume" |
|
| 20 | + elif cat /proc/mounts | awk '{print $1;}' | grep "${PARTITION}"; then |
|
| 21 | + echo "Partition ${PARTITION} already mounted; not formatting/mounting ephemeral volume" |
|
| 22 | + else |
|
| 23 | + FSTYPE=$(blkid -p $PARTITION -s TYPE -o value) |
|
| 24 | + if [ "$FSTYPE" != "xfs" ]; then |
|
| 25 | + echo FSTYPE was "$FSTYPE" but should have been xfs. Formatting $PARTITION... |
|
| 26 | + mkfs.xfs -f $PARTITION |
|
| 27 | + else |
|
| 28 | + echo FSTYPE was "$FSTYPE" which is just right :-\) |
|
| 29 | + fi |
|
| 30 | + # mount the thing to /var/lib/mongo |
|
| 31 | + mount $PARTITION /var/lib/mongo |
|
| 32 | + fi |
|
| 33 | +fi |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/imageupgrade.sh
| ... | ... | @@ -0,0 +1,25 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Upgrades the AWS EC2 MongoDB instance that this script is assumed to be executed on. |
|
| 4 | +# Steps: yum update, git pull in /root/code, clean startup logs, update the root crontab, remove a stale mongod pid file, finalize. |
|
| 5 | + |
|
| 6 | +. imageupgrade_functions.sh |
|
| 7 | + |
|
| 8 | +run_git_pull_root() { |
|
| 9 | + echo "Pulling git to /root/code" >>/var/log/sailing.err |
|
| 10 | + cd /root/code |
|
| 11 | + git pull |
|
| 12 | +} |
|
| 13 | + |
|
| 14 | +clean_mongo_pid() { |
|
| 15 | + rm -f /var/run/mongodb/mongod.pid |
|
| 16 | +} |
|
| 17 | + |
|
| 18 | +LOGON_USER_HOME=/home/ec2-user |
|
| 19 | + |
|
| 20 | +run_yum_update |
|
| 21 | +run_git_pull_root |
|
| 22 | +clean_startup_logs |
|
| 23 | +update_root_crontab |
|
| 24 | +clean_mongo_pid |
|
| 25 | +finalize |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/imageupgrade_functions.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/imageupgrade_functions.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/patch-mongo-replicaset-name-from-ec2-metadata
| ... | ... | @@ -0,0 +1,8 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +REPLICA_SET_NAME=$(ec2-metadata | grep REPLICA_SET_NAME | sed -e 's/^user-data: //' | sed -e 's/^REPLICA_SET_NAME=//') |
|
| 3 | +echo Replica set name: $REPLICA_SET_NAME |
|
| 4 | +if [ \! -z "$REPLICA_SET_NAME" ]; then |
|
| 5 | + echo "Not empty. Patching /etc/mongod.conf..." |
|
| 6 | + sed -i -e "s/replSetName: .*$/replSetName: $REPLICA_SET_NAME/" /etc/mongod.conf |
|
| 7 | + echo "Done" |
|
| 8 | +fi |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/remove-as-replica
| ... | ... | @@ -0,0 +1,6 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +eval $( ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 3 | +if [ \! -z "$REPLICA_SET_PRIMARY" ]; then |
|
| 4 | + IP=$(ec2-metadata -o | sed -e 's/^local-ipv4: //') |
|
| 5 | + echo "rs.remove(\"$IP:27017\")" | mongo "mongodb://$REPLICA_SET_PRIMARY/?replicaSet=$REPLICA_SET_NAME&retryWrites=true" |
|
| 6 | +fi |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mongo_instance_setup/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mongo_instance_setup/users/root/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mysql_instance_setup/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mysql_instance_setup/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/mysql_instance_setup/users/ec2-user/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/rabbitmq_instance_setup/files/user/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/rabbitmq_instance_setup/files/user/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/rabbitmq_instance_setup/users/admin/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/repo/etc/init.d
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +rc.d/init.d |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/repo/etc/rc.d/init.d/mountnvmeswap.initd
| ... | ... | @@ -0,0 +1,41 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# |
|
| 3 | +# mountnvmeswap Formats and mounts a yet unpartitioned NVMe volume as swap |
|
| 4 | +# |
|
| 5 | +# chkconfig: 2345 95 10 |
|
| 6 | +# description: Formats and mounts a yet unpartitioned NVMe volume as swap |
|
| 7 | +# |
|
| 8 | +### BEGIN INIT INFO |
|
| 9 | +# Provides: mountnvmeswap |
|
| 10 | +# Required-Start: $local_fs $network |
|
| 11 | +# Should-Start: |
|
| 12 | +# Required-Stop: |
|
| 13 | +# Should-Stop: |
|
| 14 | +# Default-Start: 2 3 4 5 |
|
| 15 | +# Default-Stop: 0 1 6 |
|
| 16 | +# Short-Description: Formats and mounts a yet unpartitioned NVMe volume as swap |
|
| 17 | +# Description: Formats and mounts a yet unpartitioned NVMe volume as swap |
|
| 18 | +### END INIT INFO |
|
| 19 | + |
|
| 20 | +# Source function library. |
|
| 21 | +. /etc/init.d/functions |
|
| 22 | + |
|
| 23 | +RETVAL=0 |
|
| 24 | + |
|
| 25 | +# See how we were called. |
|
| 26 | +case "$1" in |
|
| 27 | + start) |
|
| 28 | + /usr/local/bin/mountnvmeswap |
|
| 29 | + ;; |
|
| 30 | + stop) |
|
| 31 | + ;; |
|
| 32 | + status) |
|
| 33 | + swapon -s | grep /dev/nvme0n1 |
|
| 34 | + RETVAL=$? |
|
| 35 | + ;; |
|
| 36 | + *) |
|
| 37 | + echo $"Usage: $0 {start|status|stop}" |
|
| 38 | + RETVAL=3 |
|
| 39 | +esac |
|
| 40 | + |
|
| 41 | +exit $RETVAL |
configuration/environments_scripts/repo/etc/systemd/system/imageupgrade.service
| ... | ... | @@ -0,0 +1,12 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=Check for image-upgrade EC2 user data and triggers an image upgrade if found |
|
| 3 | +Requires=-.mount cloud-init.service |
|
| 4 | +After=-.mount cloud-init.service networking.service systemd-networkd.service |
|
| 5 | + |
|
| 6 | +[Install] |
|
| 7 | +WantedBy=multi-user.target |
|
| 8 | + |
|
| 9 | +[Service] |
|
| 10 | +Type=oneshot |
|
| 11 | +RemainAfterExit=true |
|
| 12 | +ExecStart=/usr/local/bin/imageupgrade |
configuration/environments_scripts/repo/etc/systemd/system/mountnvmeswap.service
| ... | ... | @@ -0,0 +1,12 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=An unformatted /dev/nvme0n1 is turned into swap space |
|
| 3 | +Requires=-.mount |
|
| 4 | +After=-.mount |
|
| 5 | + |
|
| 6 | +[Install] |
|
| 7 | +RequiredBy=multi-user.target |
|
| 8 | + |
|
| 9 | +[Service] |
|
| 10 | +Type=oneshot |
|
| 11 | +RemainAfterExit=true |
|
| 12 | +ExecStart=/usr/local/bin/mountnvmeswap |
configuration/environments_scripts/repo/usr/local/bin/getLatestImageOfType.sh
| ... | ... | @@ -0,0 +1,3 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +imageType="$1" |
|
| 3 | +aws ec2 describe-images --filter Name=tag:image-type,Values=${imageType} | jq --raw-output '.Images | sort_by(.CreationDate) | .[].ImageId' | tail -n 1 |
configuration/environments_scripts/repo/usr/local/bin/imageupgrade_functions.sh
| ... | ... | @@ -0,0 +1,103 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Shared functions for upgrading the AWS EC2 instance they are executed on. |
|
| 4 | +# Source this file from an instance-specific imageupgrade script and call the functions needed. |
|
| 5 | + |
|
| 6 | +REBOOT_INDICATOR=/var/run/is-rebooted |
|
| 7 | +LOGON_USER_HOME=/root |
|
| 8 | + |
|
| 9 | +run_yum_update() { |
|
| 10 | + echo "Updating packages using yum" >>/var/log/sailing.err |
|
| 11 | + yum -y update |
|
| 12 | +} |
|
| 13 | + |
|
| 14 | +run_apt_update_upgrade() { |
|
| 15 | + echo "Updating packages using apt" >>/var/log/sailing.err |
|
| 16 | + apt-get -y update; apt-get -y upgrade |
|
| 17 | + apt-get -y install linux-image-cloud-amd64 |
|
| 18 | + apt-get -y autoremove |
|
| 19 | +} |
|
| 20 | + |
|
| 21 | +run_git_pull() { |
|
| 22 | + echo "Pulling git to /home/sailing/code" >>/var/log/sailing.err |
|
| 23 | + su - sailing -c "cd code; git pull" |
|
| 24 | +} |
|
| 25 | + |
|
| 26 | +download_and_install_latest_sap_jvm_8() { |
|
| 27 | + echo "Downloading and installing latest SAP JVM 8 to /opt/sapjvm_8" >>/var/log/sailing.err |
|
| 28 | + vmpath=$( curl -s --cookie eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt https://tools.hana.ondemand.com | grep additional/sapjvm-8\..*-linux-x64.zip | head -1 | sed -e 's/^.*a href="\(additional\/sapjvm-8\..*-linux-x64\.zip\)".*/\1/' ) |
|
| 29 | + if [ -n "${vmpath}" ]; then |
|
| 30 | + echo "Found VM version ${vmpath}; upgrading installation at /opt/sapjvm_8" >>/var/log/sailing.err |
|
| 31 | + if [ -z "${TMP}" ]; then |
|
| 32 | + TMP=/tmp |
|
| 33 | + fi |
|
| 34 | + echo "Downloading SAP JVM 8 as ZIP file to ${TMP}/sapjvm8-linux-x64.zip" >>/var/log/sailing.err |
|
| 35 | + curl --cookie eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt "https://tools.hana.ondemand.com/${vmpath}" > ${TMP}/sapjvm8-linux-x64.zip 2>>/var/log/sailing.err |
|
| 36 | + cd /opt |
|
| 37 | + rm -rf sapjvm_8 |
|
| 38 | + if [ -f SIGNATURE.SMF ]; then |
|
| 39 | + rm -f SIGNATURE.SMF |
|
| 40 | + fi |
|
| 41 | + unzip ${TMP}/sapjvm8-linux-x64.zip >>/var/log/sailing.err |
|
| 42 | + rm -f ${TMP}/sapjvm8-linux-x64.zip |
|
| 43 | + rm -f SIGNATURE.SMF |
|
| 44 | + else |
|
| 45 | + echo "Did not find SAP JVM 8 at tools.hana.ondemand.com; not trying to upgrade" >>/var/log/sailing.err |
|
| 46 | + fi |
|
| 47 | +} |
|
| 48 | + |
|
| 49 | +clean_logrotate_target() { |
|
| 50 | +  echo "Clearing logrotate-targets" >>/var/log/sailing.err |
|
| 51 | + rm -rf /var/log/logrotate-target/* |
|
| 52 | +} |
|
| 53 | + |
|
| 54 | +clean_httpd_logs() { |
|
| 55 | + echo "Clearing httpd logs" >>/var/log/sailing.err |
|
| 56 | + service httpd stop |
|
| 57 | + rm -rf /var/log/httpd/* |
|
| 58 | + rm -f /etc/httpd/conf.d/001-internals.conf |
|
| 59 | +} |
|
| 60 | + |
|
| 61 | +clean_startup_logs() { |
|
| 62 | + echo "Clearing bootstrap logs" >>/var/log/sailing.err |
|
| 63 | + rm -f /var/log/sailing* |
|
| 64 | + # Ensure that upon the next boot the reboot indicator is not present, indicating that it's the first boot |
|
| 65 | + rm "${REBOOT_INDICATOR}" |
|
| 66 | +} |
|
| 67 | + |
|
| 68 | +clean_servers_dir() { |
|
| 69 | + rm -rf /home/sailing/servers/* |
|
| 70 | +} |
|
| 71 | + |
|
| 72 | +#DEPRECATED |
|
| 73 | +update_root_crontab() { |
|
| 74 | + # The following assumes that /root/crontab is a symbolic link to /home/sailing/code/configuration/crontabs/<the crontab appropriate |
|
| 75 | + # to the environment or user> |
|
| 76 | + # which has previously been updated by a git pull: |
|
| 77 | + cd /root |
|
| 78 | + crontab crontab |
|
| 79 | +} |
|
| 80 | + |
|
| 81 | +clean_root_ssh_dir_and_tmp() { |
|
| 82 | + echo "Cleaning up ${LOGON_USER_HOME}/.ssh" >>/var/log/sailing.err |
|
| 83 | + rm -rf ${LOGON_USER_HOME}/.ssh/* |
|
| 84 | + rm -f /var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 85 | + rm -rf /tmp/image-upgrade-finished |
|
| 86 | +} |
|
| 87 | + |
|
| 88 | +get_ec2_user_data() { |
|
| 89 | + /opt/aws/bin/ec2-metadata -d | sed -e 's/^user-data: //' |
|
| 90 | +} |
|
| 91 | + |
|
| 92 | +finalize() { |
|
| 93 | + # Finally, shut down the node unless "no-shutdown" was provided in the user data, so that a new AMI can be constructed cleanly |
|
| 94 | + if get_ec2_user_data | grep "^no-shutdown$"; then |
|
| 95 | + echo "Shutdown disabled by no-shutdown option in user data. Remember to clean /root/.ssh when done." |
|
| 96 | + touch /tmp/image-upgrade-finished |
|
| 97 | + else |
|
| 98 | + # Only clean ${LOGON_USER_HOME}/.ssh directory and /tmp/image-upgrade-finished if the next step is shutdown / image creation |
|
| 99 | + clean_root_ssh_dir_and_tmp |
|
| 100 | + rm -f /var/log/sailing.err |
|
| 101 | + shutdown -h now & |
|
| 102 | + fi |
|
| 103 | +} |
configuration/environments_scripts/repo/usr/local/bin/mountnvmeswap
| ... | ... | @@ -0,0 +1,31 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Script to deploy on an instance that may have an ephemeral (non-EBS) NVMe volume. |
|
| 4 | +# The first unpartitioned ephemeral NVMe device without a filesystem is turned into swap space. |
|
| 5 | +# A device that already carries a filesystem will be left alone. |
|
| 6 | +EPHEMERAL_VOLUME_NAME=$( |
|
| 7 | + # List all block devices and find those named nvme... |
|
| 8 | + for i in $(lsblk | grep -o "nvme[0-9][0-9]\?n[0-9]" | sort -u); do |
|
| 9 | + # If they don't have any partitions, then... |
|
| 10 | +    if ! lsblk | grep -o "${i}p[0-9]\+" >/dev/null 2>&1; then |
|
| 11 | + # ...check whether they are EBS devices |
|
| 12 | + /sbin/ebsnvme-id -u "/dev/$i" >/dev/null |
|
| 13 | + # If not, list their name because then they must be ephemeral instance storage |
|
| 14 | + if [[ $? -ne 0 ]]; then |
|
| 15 | + echo "${i}" |
|
| 16 | + fi |
|
| 17 | + fi |
|
| 18 | + done 2>/dev/null | head -n 1 ) |
|
| 19 | +if [ -n "${EPHEMERAL_VOLUME_NAME}" ]; then |
|
| 20 | + EPHEMERAL_VOLUME=/dev/${EPHEMERAL_VOLUME_NAME} |
|
| 21 | + FSTYPE=$(blkid -p $EPHEMERAL_VOLUME -s TYPE -o value) |
|
| 22 | + if [ "$FSTYPE" = "" ]; then |
|
| 23 | + echo "FSTYPE was empty, creating swap partition" |

| 24 | + mkswap $EPHEMERAL_VOLUME |

| 25 | + swapon --priority 2 $EPHEMERAL_VOLUME |
|
| 26 | + else |
|
| 27 | + echo "FSTYPE was $FSTYPE, not touching" |
|
| 28 | + fi |
|
| 29 | +else |
|
| 30 | + echo "No ephemeral partition found. Not creating any swap partition." |
|
| 31 | +fi |
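The device-detection loop above filters `lsblk` output twice: once for NVMe device names, once to drop devices that already have partitions. A sketch of that filtering against a canned listing (the sample text is hypothetical; the real script pipes live `lsblk` output):

```shell
# Sample lsblk-style listing: nvme0n1 has a partition, nvme1n1 does not.
sample="nvme0n1 259:0 0 8G 0 disk
nvme0n1p1 259:1 0 8G 0 part /
nvme1n1 259:2 0 100G 0 disk"
# Unique NVMe device names, same regex as the script:
devices=$(echo "$sample" | grep -o "nvme[0-9][0-9]\?n[0-9]" | sort -u)
# Keep only devices with no partition suffix anywhere in the listing:
unpartitioned=""
for i in $devices; do
  if ! echo "$sample" | grep -q "${i}p[0-9]"; then
    unpartitioned="$unpartitioned $i"
  fi
done
echo "$unpartitioned"
```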
configuration/environments_scripts/repo/usr/local/bin/notify-operators
| ... | ... | @@ -0,0 +1,4 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +OPERATORS=$(cat /var/cache/landscapeManagersMailingList) |
|
| 3 | +logger -t sailing "Sending notification e-mail with subject $1 to ${OPERATORS}" |
|
| 4 | +mail -s "$1" ${OPERATORS} # This doesn't include the body, so if using programmatically, pipe a body into this script. |
configuration/environments_scripts/repo/usr/local/bin/switchoverArchive.sh
| ... | ... | @@ -0,0 +1,114 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Purpose: Script is used to switch to the failover archive if the primary is unhealthy, by altering the macros |
|
| 4 | +# file and then reloading Httpd. |
|
| 5 | +# Crontab for every minute: * * * * * /path/to/switchoverArchive.sh |
|
| 6 | +help() { |
|
| 7 | + echo "$0 PATH_TO_HTTPD_MACROS_FILE TIMEOUT_FIRST_CURL_SECONDS TIMEOUT_SECOND_CURL_SECONDS" |
|
| 8 | + echo "" |
|
| 9 | + echo "Script used to automatically update the archive location (to the failover) in httpd if the primary is down." |
|
| 10 | + echo "Pass in the path to the macros file containing the archive definitions;" |
|
| 11 | + echo "the timeout of the first curl check in seconds; and the timeout of the second curl check, also in seconds." |
|
| 12 | + echo "Make sure the combined time taken is not longer than the crontab interval." |
|
| 13 | + exit 2 |
|
| 14 | +} |
|
| 15 | +# $# is the number of arguments |
|
| 16 | +if [ $# -eq 0 ]; then |
|
| 17 | + help |
|
| 18 | +fi |
|
| 19 | +#The names of the variables in the macros file. |
|
| 20 | +ARCHIVE_IP_NAME="ARCHIVE_IP" |
|
| 21 | +ARCHIVE_FAILOVER_IP_NAME="ARCHIVE_FAILOVER_IP" |
|
| 22 | +PRODUCTION_ARCHIVE_NAME="PRODUCTION_ARCHIVE" |
|
| 23 | +ARCHIVE_PORT=8888 |
|
| 24 | +MACROS_PATH=$1 |
|
| 25 | +# The amount of time (in seconds) that must have elapsed, since the last httpd macros email, before notifying operators again. |
|
| 26 | +TIME_CHECK_SECONDS=$((15*60)) |
|
| 27 | +# Connection timeouts for curl requests (the time waited for a connection to be established). The second should be longer |
|
| 28 | +# as we want to be confident the main archive is in fact "down" before switching. |
|
| 29 | +TIMEOUT1_IN_SECONDS=$2 |
|
| 30 | +TIMEOUT2_IN_SECONDS=$3 |
|
| 31 | +CACHE_LOCATION="/var/cache/lastIncorrectMacroUnixTime" |
|
| 32 | +# The following line checks if all the strings in "search" are present at the beginning of their own line. Note: grep uses BRE by default, |
|
| 33 | +# so the plus symbol must be escaped to refer to "one or more" of the previous character. |
|
| 34 | +for i in "^Define ${PRODUCTION_ARCHIVE_NAME}\>" \ |
|
| 35 | + "^Define ${ARCHIVE_IP_NAME} [0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+$" \ |
|
| 36 | + "^Define ${ARCHIVE_FAILOVER_IP_NAME} [0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+$" |
|
| 37 | +do |
|
| 38 | + if ! grep -q "${i}" "${MACROS_PATH}"; then |
|
| 39 | + currentUnixTime=$(date +"%s") |
|
| 40 | + if [[ ! -f ${CACHE_LOCATION} || $((currentUnixTime - $(cat "${CACHE_LOCATION}") )) -gt "$TIME_CHECK_SECONDS" ]]; then |
|
| 41 | + date +"%s" > "${CACHE_LOCATION}" |
|
| 42 | + echo "Macros file does not contain proper definitions for the archive and failover IPs. Expression ${i} not matched." | notify-operators "Incorrect httpd macros" |
|
| 43 | + fi |
|
| 44 | + logger -t archive "Necessary variable assignment pattern ${i} not found in macros" |
|
| 45 | + exit 1 |
|
| 46 | + fi |
|
| 47 | +done |
|
| 48 | +# These next lines get the current ip values for the archive and failover, plus they store the value of production, |
|
| 49 | +# which is a variable pointing to either the primary or failover value. |
|
| 50 | +archiveIp="$(sed -n -e "s/^Define ${ARCHIVE_IP_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 51 | +failoverIp="$(sed -n -e "s/^Define ${ARCHIVE_FAILOVER_IP_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 52 | +productionIp="$(sed -n -e "s/^Define ${PRODUCTION_ARCHIVE_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 53 | +# Checks if the macro.conf is set as healthy or unhealthy currently. |
|
| 54 | +if [[ "${productionIp}" == "\${${ARCHIVE_IP_NAME}}" ]] |
|
| 55 | +then |
|
| 56 | + alreadyHealthy=1 |
|
| 57 | + logger -t archive "currently healthy" |
|
| 58 | +else |
|
| 59 | + alreadyHealthy=0 |
|
| 60 | + logger -t archive "currently unhealthy" |
|
| 61 | +fi |
|
| 62 | + |
|
| 63 | +setProduction() { |
|
| 64 | + # parameter $1: the name of the variable holding the IP of the archive instance to switch to |
|
| 65 | + sed -i -e "s/^Define ${PRODUCTION_ARCHIVE_NAME}\>.*$/Define ${PRODUCTION_ARCHIVE_NAME} \${${1}}/" ${MACROS_PATH} |
|
| 66 | +} |
|
| 67 | + |
|
| 68 | +# Sets the production value to point to the variable defining the main archive IP, provided it isn't already set. |
|
| 69 | +setProductionMainIfNotSet() { |
|
| 70 | + if [[ $alreadyHealthy -eq 0 ]] |
|
| 71 | + then |
|
| 72 | + # currently unhealthy |
|
| 73 | + # set production to archive |
|
| 74 | + logger -t archive "Healthy: setting production to main archive" |
|
| 75 | + setProduction ${ARCHIVE_IP_NAME} |
|
| 76 | + systemctl reload httpd |
|
| 77 | + echo "The main archive server is healthy again. Switching to it." | notify-operators "Healthy: main archive online" |
|
| 78 | + else |
|
| 79 | + # If already healthy then no reload or notification occurs. |
|
| 80 | + logger -t archive "Healthy: already set, no change needed" |
|
| 81 | + fi |
|
| 82 | +} |
|
| 83 | + |
|
| 84 | +setFailoverIfNotSet() { |
|
| 85 | + if [[ $alreadyHealthy -eq 1 ]] |
|
| 86 | + then |
|
| 87 | + # Set production to failover if not already. Separate if statement in case the curl statement |
|
| 88 | + # fails but the production is already set to point to the backup |
|
| 89 | + setProduction ${ARCHIVE_FAILOVER_IP_NAME} |
|
| 90 | + logger -t archive "Unhealthy: second check failed, switching to failover" |
|
| 91 | + systemctl reload httpd |
|
| 92 | + echo "Main archive is unhealthy. Switching to failover. Please urgently take a look at ${archiveIp}." | notify-operators "Unhealthy: main archive offline, failover in place" |
|
| 93 | + else |
|
| 94 | + logger -t archive "Unhealthy: second check still fails, failover already in use" |
|
| 95 | + fi |
|
| 96 | +} |
|
| 97 | + |
|
| 98 | +logger -t archive "begin check" |
|
| 99 | +# The --fail option makes curl exit non-zero (visible in $?) when the server responds with an error status (4xx/5xx). |

| 100 | +# -L follows redirects |

| 101 | +curl -s -L --fail --connect-timeout ${TIMEOUT1_IN_SECONDS} "http://${archiveIp}:${ARCHIVE_PORT}/gwt/status" > /dev/null |
|
| 102 | +if [[ $? -ne 0 ]] |
|
| 103 | +then |
|
| 104 | + logger -t archive "first check failed" |
|
| 105 | + curl -s -L --fail --connect-timeout ${TIMEOUT2_IN_SECONDS} "http://${archiveIp}:${ARCHIVE_PORT}/gwt/status" > /dev/null |
|
| 106 | + if [[ $? -ne 0 ]] |
|
| 107 | + then |
|
| 108 | + setFailoverIfNotSet |
|
| 109 | + else |
|
| 110 | + setProductionMainIfNotSet |
|
| 111 | + fi |
|
| 112 | +else |
|
| 113 | + setProductionMainIfNotSet |
|
| 114 | +fi |
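The switchover itself is a single sed rewrite of the `Define PRODUCTION_ARCHIVE` line. A sketch of that rewrite exercised against a throwaway macros file (the Define names mirror the script; the temp file stands in for the real httpd macros path):

```shell
# Build a minimal macros file with the three expected Define lines.
MACROS=$(mktemp)
cat > "$MACROS" <<'EOF'
Define ARCHIVE_IP 10.0.0.1
Define ARCHIVE_FAILOVER_IP 10.0.0.2
Define PRODUCTION_ARCHIVE ${ARCHIVE_IP}
EOF
# Point PRODUCTION_ARCHIVE at the failover variable, as setProduction does:
sed -i -e 's/^Define PRODUCTION_ARCHIVE\>.*$/Define PRODUCTION_ARCHIVE ${ARCHIVE_FAILOVER_IP}/' "$MACROS"
# Read the value back, as the script does to decide alreadyHealthy:
production=$(sed -n 's/^Define PRODUCTION_ARCHIVE \(.*\)/\1/p' "$MACROS" | tr -d '[:space:]')
rm -f "$MACROS"
echo "$production"
```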
configuration/environments_scripts/repo/usr/local/bin/sync-repo-and-execute-cmd.sh
| ... | ... | @@ -0,0 +1,48 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Purpose: This script goes to a given git dir (eg. httpd); fetches any new commits to |
|
| 4 | +# the repo; and - if new commits are found - merges them into the branch and runs a command. |
|
| 5 | + |
|
| 6 | +if [ $# -eq 0 ]; then |
|
| 7 | + echo "$0 PATH_TO_GIT_REPO COMMAND_TO_RUN_ON_COMPLETION_IN_REPO" |
|
| 8 | + echo "" |
|
| 9 | + echo "EXAMPLE: sync-repo-and-execute-cmd.sh \"/etc/httpd\" \"sudo service httpd reload\"" |
|
| 10 | +echo "This script automatically fetches from a git repo and, if there are new commits, merges the changes" |

| 11 | +echo "and then runs a command passed as an argument." |
|
| 12 | + exit 2 |
|
| 13 | +fi |
|
| 14 | + |
|
| 15 | +GIT_PATH=$1 |
|
| 16 | +COMMAND_ON_COMPLETION=$2 |
|
| 17 | +cd ${GIT_PATH} |
|
| 18 | +# Rev-parse gets the commit hash of given reference. |
|
| 19 | +CURRENT_HEAD=$(git rev-parse HEAD) |
|
| 20 | +git fetch |
|
| 21 | +if [[ $CURRENT_HEAD != $(git rev-parse origin/main) ]] # Checks if there are new commits |
|
| 22 | +then |
|
| 23 | + logger -t httpd "Changes found; merging now" |
|
| 24 | + cd ${GIT_PATH} && git merge origin/main |
|
| 25 | + if [[ $? -eq 0 ]]; then |
|
| 26 | + logger -t httpdMerge "Merge succeeded: different files edited." |
|
| 27 | + else |
|
| 28 | + logger -t httpdMerge "First merge unsuccessful: same file modified." |
|
| 29 | + git merge --abort # Returns to pre-merge state. |
|
| 30 | + git stash |
|
| 31 | + git merge origin/main # This should be a fast-forward merge. |
|
| 32 | + git stash apply # Keeps stash on top of stack, in case the apply fails. |
|
| 33 | + if [[ $? -eq 0 ]]; then |
|
| 34 | + logger -t httpdMerge "Second merge success: merge of httpd remote to local successful, and previous working directory changes restored." |
|
| 35 | + git stash drop # Removes successful stash from stash stack. |
|
| 36 | + else |
|
| 37 | + logger -t httpdMerge "Second merge unsuccessful: same sections modified" |
|
| 38 | + echo "Merging issue at commit $(git rev-parse HEAD). Currently at last safe commit." | notify-operators "Merge conflict on httpd instance. Manual intervention required." |
|
| 39 | + # Returns to pre-pull state and then pops |
|
| 40 | + git reset --hard "${CURRENT_HEAD}" |
|
| 41 | + git stash pop |
|
| 42 | + exit 1 |
|
| 43 | + fi |
|
| 44 | + fi |
|
| 45 | + sleep 2 |
|
| 46 | + eval "${COMMAND_ON_COMPLETION}" |
|
| 47 | +fi |
|
| 48 | + |
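The completion command arrives as a single string argument (e.g. `"sudo service httpd reload"`). A sketch of running such a string with `eval`, which honors the string's own word splitting and quoting (the sample command is a hypothetical stand-in for COMMAND_ON_COMPLETION):

```shell
# Store a multi-word command in a variable, then evaluate it.
CMD='echo hello world'
out=$(eval "$CMD")
echo "$out"
```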
configuration/environments_scripts/repo/usr/local/bin/syncgit
| ... | ... | @@ -0,0 +1,16 @@ |
| 1 | +#!/bin/sh |
|
| 2 | +ADMIN_EMAIL="axel.uhl@sap.com jan.hamann@sapsailing.com" |
|
| 3 | +if [ $# -eq 0 ]; then |
|
| 4 | + GIT_PATH="/home/wiki/gitwiki" |
|
| 5 | +else |
|
| 6 | + GIT_PATH=$1 |
|
| 7 | +fi |
|
| 8 | +cd $GIT_PATH |
|
| 9 | +git pull >/tmp/wiki-git.out 2>/tmp/wiki-git.err |
|
| 10 | +if [ "$?" != "0" ]; then |
|
| 11 | + cat /tmp/wiki-git.out /tmp/wiki-git.err | mail -s "Wiki git problem" $ADMIN_EMAIL |
|
| 12 | +fi |
|
| 13 | +git push >>/tmp/wiki-git.out 2>>/tmp/wiki-git.err |
|
| 14 | +if [ "$?" != "0" ]; then |
|
| 15 | + cat /tmp/wiki-git.out /tmp/wiki-git.err | mail -s "Wiki git problem" $ADMIN_EMAIL |
|
| 16 | +fi |
configuration/environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1,48 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +BEARER_TOKEN="$1" |
|
| 3 | +BASE_URL="$2" |
|
| 4 | +LOGON_USER_HOME="$3" |
|
| 5 | +SSH_DIR="$3/.ssh" |
|
| 6 | +EXIT_CODE=0 |
|
| 7 | +# |
|
| 8 | +curl_output=$( curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS" 2>/dev/null ) |
|
| 9 | +curl_exit_code=$? |
|
| 10 | +if [ "${curl_exit_code}" = "0" ]; then |
|
| 11 | + users=$( echo "${curl_output}" | jq -r '.[]' ) |
|
| 12 | + jq_exit_code=$? |
|
| 13 | + if [ "${jq_exit_code}" = "0" ]; then |
|
| 14 | + logger -t sailing "Users with LANDSCAPE:MANAGE:AWS permission: ${users}" |
|
| 15 | + public_keys=$( for user in ${users}; do |
|
| 16 | + ssh_key_curl_output=$(curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/landscape/api/landscape/get_ssh_keys_owned_by_user?username[]=${user}" 2>/dev/null ) |
|
| 17 | + ssh_key_curl_exit_code=$? |
|
| 18 | + if [ "${ssh_key_curl_exit_code}" = "0" ]; then |
|
| 19 | + echo "${ssh_key_curl_output}" | jq -r '.[].publicKey' |
|
| 20 | + ssh_key_jq_exit_code=$? |
|
| 21 | + if [ "${ssh_key_jq_exit_code}" != "0" ]; then |
|
| 22 | + EXIT_CODE=${ssh_key_jq_exit_code} |
|
| 23 | + logger -t sailing "Couldn't parse response of get_ssh_keys_owned_by_user; jq exit code ${ssh_key_jq_exit_code}" |
|
| 24 | + fi |
|
| 25 | + else |
|
| 26 | + EXIT_CODE=${ssh_key_curl_exit_code} |
|
| 27 | + logger -t sailing "Couldn't get response of get_ssh_keys_owned_by_user; curl exit code ${ssh_key_curl_exit_code}" |
|
| 28 | + fi |
|
| 29 | + done | sort -u ) |
|
| 30 | + logger -t sailing "Obtained public keys: ${public_keys}" |
|
| 31 | + if [ ! -f ${SSH_DIR}/authorized_keys.org ]; then |
|
| 32 | + # Create a copy of the original authorized_keys file as generated by AWS from the start-up key: |
|
| 33 | + logger -t sailing "Saving original authorized_keys file from ${SSH_DIR}" |
|
| 34 | + cp ${SSH_DIR}/authorized_keys ${SSH_DIR}/authorized_keys.org |
|
| 35 | + fi |
|
| 36 | + # Start out with the original AWS-generated authorized_keys file |
|
| 37 | + # and append the public SSH keys of all users having LANDSCAPE:MANAGE:AWS permission: |
|
| 38 | + echo "$( cat ${SSH_DIR}/authorized_keys.org ) |
|
| 39 | +${public_keys}" | sort -u >${SSH_DIR}/authorized_keys |
|
| 40 | + else |
|
| 41 | + EXIT_CODE=${jq_exit_code} |
|
| 42 | + logger -t sailing "Couldn't parse response of users_with_permission; jq exit code ${jq_exit_code}" |
|
| 43 | + fi |
|
| 44 | +else |
|
| 45 | + EXIT_CODE=${curl_exit_code} |
|
| 46 | + logger -t sailing "Couldn't get response of users_with_permission; curl exit code ${curl_exit_code}" |
|
| 47 | +fi |
|
| 48 | +exit ${EXIT_CODE} |
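The script's parsing boils down to two jq filters: one over a JSON array of usernames, one over an array of key objects. A canned-data sketch of those steps (the JSON payloads are assumptions about the REST response shapes; the real script obtains them via curl):

```shell
# Usernames arrive as a plain JSON array:
users_json='["alice","bob"]'
users=$(echo "$users_json" | jq -r '.[]')
# Keys arrive as objects with a publicKey field:
keys_json='[{"publicKey":"ssh-rsa AAAA1"},{"publicKey":"ssh-rsa AAAA2"}]'
# Extract and deduplicate, as the script does before appending to authorized_keys:
keys=$(echo "$keys_json" | jq -r '.[].publicKey' | sort -u)
echo "$users"
echo "$keys"
```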
configuration/environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1,42 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +BEARER_TOKEN="$1" |
|
| 3 | +BASE_URL="$2" |
|
| 4 | +LOGON_USER_HOME="$3" |
|
| 5 | +LAST_CHANGE_FILE=/var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 6 | +# Query the time point of the last change to the landscape managers' SSH keys, discarding curl's error output: |
|
| 7 | +curl_output=$( curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/landscape/api/landscape/get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers" 2>/dev/null ) |
|
| 8 | +curl_exit_code=$? |
|
| 9 | +if [ "${curl_exit_code}" = "0" ]; then |
|
| 10 | + last_change_millis=$( echo "${curl_output}" | jq -r '."timePointOfLastChangeOfSetOfLandscapeManagers-millis"' ) |
|
| 11 | + jq_exit_code=$? |
|
| 12 | + if [ "${jq_exit_code}" = "0" ]; then |
|
| 13 | + if [ -f "${LAST_CHANGE_FILE}" ]; then |
|
| 14 | + PREVIOUS_CHANGE=$(cat "${LAST_CHANGE_FILE}") |
|
| 15 | + if [ -z "${PREVIOUS_CHANGE}" ]; then |
|
| 16 | + PREVIOUS_CHANGE=0 |
|
| 17 | + fi |
|
| 18 | + else |
|
| 19 | + PREVIOUS_CHANGE=0 |
|
| 20 | + fi |
|
| 21 | + if [ -z "${last_change_millis}" ]; then |
|
| 22 | + logger -t sailing "Empty response from get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers; exiting" |
|
| 23 | + exit 1 |
|
| 24 | + else |
|
| 25 | + if [ ${PREVIOUS_CHANGE} -lt ${last_change_millis} ]; then |
|
| 26 | + logger -t sailing "New SSH key changes for landscape managers (${last_change_millis} newer than ${PREVIOUS_CHANGE})" |
|
| 27 | + if update_authorized_keys_for_landscape_managers "${BEARER_TOKEN}" "${BASE_URL}" "${LOGON_USER_HOME}" ; then |
|
| 28 | + logger -t sailing "Updating SSH keys for landscape managers successful; updating ${LAST_CHANGE_FILE}" |
|
| 29 | + echo ${last_change_millis} >${LAST_CHANGE_FILE} |
|
| 30 | + else |
|
| 31 | + logger -t sailing "Updating SSH keys for landscape managers failed with exit code $?; not updating ${LAST_CHANGE_FILE}" |
|
| 32 | + fi |
|
| 33 | + fi |
|
| 34 | + fi |
|
| 35 | + else |
|
| 36 | + logger -t sailing "Parsing response of get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers failed with exit code ${jq_exit_code}" |
|
| 37 | + exit ${jq_exit_code} |
|
| 38 | + fi |
|
| 39 | +else |
|
| 40 | + logger -t sailing "Getting response of get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers failed with exit code ${curl_exit_code}" |
|
| 41 | + exit ${curl_exit_code} |
|
| 42 | +fi |
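The change detection is a plain integer comparison of millisecond timestamps: the expensive key refresh only runs when the server-reported value is newer than the cached one. A sketch with a temp file standing in for LAST_CHANGE_FILE (both timestamps are made-up values):

```shell
# Cache an old timestamp, compare against a newer server-reported one.
CACHE=$(mktemp)
echo 1700000000000 > "$CACHE"
last_change_millis=1700000005000
PREVIOUS_CHANGE=$(cat "$CACHE")
[ -z "$PREVIOUS_CHANGE" ] && PREVIOUS_CHANGE=0
if [ "$PREVIOUS_CHANGE" -lt "$last_change_millis" ]; then
  needs_update=yes
else
  needs_update=no
fi
rm -f "$CACHE"
echo "$needs_update"
```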
configuration/environments_scripts/repo/usr/local/bin/update_landscape_managers_mailing_list.sh
| ... | ... | @@ -0,0 +1,17 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Purpose: Create a mailing list in PATH_TO_STORE/landscapeManagersMailingList containing the e-mail addresses of all landscape managers, |

| 4 | +# i.e. those who have full admin privileges in the AWS environment. |
|
| 5 | +BEARER_TOKEN="$1" |
|
| 6 | +PATH_TO_STORE="$2" |
|
| 7 | +NAME_TO_STORE_IN="landscapeManagersMailingList" |
|
| 8 | +BASE_URL="https://security-service.sapsailing.com" |
|
| 9 | +curl_output=$( curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS" 2>/dev/null ) |
|
| 10 | +if [[ -f "${PATH_TO_STORE}/${NAME_TO_STORE_IN}" ]]; then |
|
| 11 | + mv -f ${PATH_TO_STORE}/${NAME_TO_STORE_IN} ${PATH_TO_STORE}/${NAME_TO_STORE_IN}.bak |
|
| 12 | +fi |
|
| 13 | +touch ${PATH_TO_STORE}/${NAME_TO_STORE_IN} |
|
| 14 | +echo "$curl_output" | jq -r '.[]' | while read user; do |
|
| 15 | + email=$(curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/security/api/restsecurity/user?username=$user" 2>/dev/null| jq -r '.email' ) |
|
| 16 | + echo $email >> ${PATH_TO_STORE}/${NAME_TO_STORE_IN} |
|
| 17 | +done |
configuration/environments_scripts/reverse_proxy/files/etc/systemd/system/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/etc/systemd/system/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/imageupgrade_functions.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/imageupgrade_functions.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/notify-operators
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/notify-operators |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/switchoverArchive.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/switchoverArchive.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/sync-repo-and-execute-cmd.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/sync-repo-and-execute-cmd.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/syncgit |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/files/usr/local/bin/update_landscape_managers_mailing_list.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_landscape_managers_mailing_list.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/users/root/crontab-switchoverArchive
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-switchoverArchive |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/users/root/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/users/root/crontab-update-landscape-managers-mailing-list
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-landscape-managers-mailing-list |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/reverse_proxy/users/trac/crontab-syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-syncgit |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/files/etc/profile.d/sailing.sh
| ... | ... | @@ -0,0 +1,14 @@ |
| 1 | +# Script to be linked from /etc/profile.d |
|
| 2 | +# Appends to PATH, sets DISPLAY for VNC running on :2, exports JAVA_HOME and Amazon EC2 variables |
|
| 3 | + |
|
| 4 | +ulimit -n 100000 |
|
| 5 | +ulimit -u 40000 |
|
| 6 | + |
|
| 7 | +# SAP JVM |
|
| 8 | +export JAVA_HOME=/opt/sapjvm_8 |
|
| 9 | +# JDK 11.0.1: |
|
| 10 | +#export JAVA_HOME=/opt/jdk-11.0.1+13 |
|
| 11 | +#export JAVA_HOME=/opt/jdk1.8.0_45 |
|
| 12 | +export ANDROID_HOME=/opt/android-sdk-linux |
|
| 13 | +export PATH=$PATH:$JAVA_HOME/bin |
|
| 14 | +export DISPLAY=:2.0 |
configuration/environments_scripts/sailing_server/files/etc/systemd/system/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/etc/systemd/system/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/files/etc/systemd/system/sailing.service
| ... | ... | @@ -0,0 +1,13 @@ |
| 1 | +[Unit] |
|
| 2 | +Description=The sailing start-up service reading through EC2 userdata and acting accordingly |
|
| 3 | +Requires=-.mount mongod.service |
|
| 4 | +After=-.mount mongod.service |
|
| 5 | + |
|
| 6 | +[Install] |
|
| 7 | +RequiredBy=multi-user.target |
|
| 8 | + |
|
| 9 | +[Service] |
|
| 10 | +Type=oneshot |
|
| 11 | +RemainAfterExit=true |
|
| 12 | +ExecStart=/etc/init.d/sailing start |
|
| 13 | +ExecStop=/etc/init.d/sailing stop |
configuration/environments_scripts/sailing_server/files/usr/local/bin/imageupgrade.sh
| ... | ... | @@ -0,0 +1,16 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Upgrades the AWS EC2 instance that this script is assumed to be executed on. |
|
| 5 | +# The steps are the function calls below, defined in imageupgrade_functions.sh: |
|
| 5 | + |
|
| 6 | +. `dirname $0`/imageupgrade_functions.sh |
|
| 7 | + |
|
| 8 | +run_yum_update |
|
| 9 | +run_git_pull |
|
| 10 | +download_and_install_latest_sap_jvm_8 |
|
| 11 | +clean_logrotate_target |
|
| 12 | +clean_httpd_logs |
|
| 13 | +clean_servers_dir |
|
| 14 | +clean_startup_logs |
|
| 15 | +update_root_crontab |
|
| 16 | +finalize |
configuration/environments_scripts/sailing_server/files/usr/local/bin/imageupgrade_functions.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/imageupgrade_functions.sh |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/files/usr/local/bin/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/files/usr/local/bin/sailing
| ... | ... | @@ -0,0 +1,123 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# |
|
| 3 | +# sailing Starts sailing services |
|
| 4 | +# |
|
| 5 | +# chkconfig: 2345 95 10 |
|
| 6 | +# description: Sailing contains all sailing services |
|
| 7 | +# |
|
| 8 | + |
|
| 9 | + |
|
| 10 | +# Source function library. |
|
| 11 | +. /etc/init.d/functions |
|
| 12 | + |
|
| 13 | +RETVAL=0 |
|
| 14 | + |
|
| 15 | +SERVERS_DIR=/home/sailing/servers |
|
| 16 | +cd "${SERVERS_DIR}" |
|
| 17 | +JAVA_START_INSTANCES="$(find * -type d -prune)" |
|
| 18 | +GIT_REPOSITORY=/home/sailing/code |
|
| 19 | +if [ -x /bin/ec2-metadata ]; then |
|
| 20 | + EC2_METADATA_CMD=/bin/ec2-metadata |
|
| 21 | +elif [ -x /usr/bin/ec2-metadata ]; then |
|
| 22 | + EC2_METADATA_CMD=/usr/bin/ec2-metadata |
|
| 23 | +else |
|
| 24 | + EC2_METADATA_CMD=/opt/aws/bin/ec2-metadata |
|
| 25 | +fi |
|
| 26 | +REBOOT_INDICATOR=/var/run/is-rebooted |
|
| 27 | +SSH_KEY_READER_BEARER_TOKEN=/root/ssh-key-reader.token |
|
| 28 | + |
|
| 29 | +echo "Executing with $1 at `date`" >>/var/log/sailing.err |
|
| 30 | + |
|
| 31 | +start_servers() { |
|
| 32 | + /usr/local/bin/update_authorized_keys_for_landscape_managers $( cat ${SSH_KEY_READER_BEARER_TOKEN} ) https://security-service.sapsailing.com /root >>/var/log/sailing.err 2>&1 |
|
| 33 | + cp /home/sailing/code/configuration/cp_root_mail_properties /usr/local/bin |
|
| 34 | + chown root /usr/local/bin/cp_root_mail_properties |
|
| 35 | + chgrp root /usr/local/bin/cp_root_mail_properties |
|
| 36 | + chmod 755 /usr/local/bin/cp_root_mail_properties |
|
| 37 | + cp /home/sailing/code/configuration/cp_root_mail_properties_sudoers /etc/sudoers.d |
|
| 38 | + if which $EC2_METADATA_CMD && $EC2_METADATA_CMD -d | sed "s/user-data\: //g" | grep "^image-upgrade$"; then |
|
| 39 | + echo "Found image-upgrade in EC2 user data; upgrading image, then probably shutting down for AMI creation depending on the no-shutdown user data string..." >>/var/log/sailing.err |
|
| 40 | + $GIT_REPOSITORY/configuration/imageupgrade.sh |
|
| 41 | + else |
|
| 42 | + echo "No image-upgrade request found in EC2 user data $($EC2_METADATA_CMD -d); proceeding with regular server launch..." >>/var/log/sailing.err |
|
| 43 | + echo "Servers to launch: ${JAVA_START_INSTANCES}" >>/var/log/sailing.err |
|
| 44 | + if [ -f "${REBOOT_INDICATOR}" ]; then |
|
| 45 | + echo "This is a re-boot. No EC2 user data is evaluated for server configuration; no server configuration is performed. Only configured applications are launched." >>/var/log/sailing.err |
|
| 46 | + for conf in ${JAVA_START_INSTANCES}; do |
|
| 47 | + su - sailing -c "cd ${SERVERS_DIR}/${conf} && ./start" 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 48 | + done |
|
| 49 | + else |
|
| 50 | + echo "This is a first-time boot. EC2 user data is evaluated for potential application deployment and configuration, and applications are launched." >>/var/log/sailing.err |
|
| 51 | + echo "Initializing local MongoDB replica set \"replica\"..." |
|
| 52 | + while ! echo "rs.initiate()" | mongo; do |
|
| 53 | + echo "MongoDB not ready yet; waiting and trying again..." |
|
| 54 | + sleep 5 |
|
| 55 | + done |
|
| 56 | + FIRST_SERVER=$( eval $( ${EC2_METADATA_CMD} -d | sed -e 's/^user-data: //' ); echo $SERVER_NAME ) |
|
| 57 | + if [ "${FIRST_SERVER}" = "" ]; then |
|
| 58 | + echo "No SERVER_NAME provided; not configuring/starting any application processes" >>/var/log/sailing.err |
|
| 59 | + else |
|
| 60 | + echo "Server to configure and start: ${FIRST_SERVER}" >>/var/log/sailing.err |
|
| 61 | + configure_and_start_server "${FIRST_SERVER}" |
|
| 62 | + fi |
|
| 63 | + echo 1 >"${REBOOT_INDICATOR}" |
|
| 64 | + fi |
|
| 65 | + fi |
|
| 66 | +} |
|
| 67 | + |
|
| 68 | +# Call with the server directory name (not the full path, just a single element from ${JAVA_START_INSTANCE}) as parameter |
|
| 69 | +# Example: configure_and_start_server server |
|
| 70 | +# This is expected to be called only in case there is only one server to configure; otherwise, the same EC2 user data |
|
| 71 | +# would get applied to all application configurations which would not be a good idea. |
|
| 72 | +configure_and_start_server() { |
|
| 73 | + conf="$1" |
|
| 74 | + mkdir -p "${SERVERS_DIR}/${conf}" >/dev/null 2>/dev/null |
|
| 75 | + chown sailing "${SERVERS_DIR}/${conf}" |
|
| 76 | + chgrp sailing "${SERVERS_DIR}/${conf}" |
|
| 77 | + # If there is a secret /root/mail.properties, copy it into the default server's configuration directory: |
|
| 78 | + /usr/local/bin/cp_root_mail_properties "${conf}" |
|
| 79 | + su - sailing -c "cd ${SERVERS_DIR}/${conf} && ${GIT_REPOSITORY}/java/target/refreshInstance.sh auto-install; ./start" 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 80 | + pushd ${SERVERS_DIR}/${conf} |
|
| 81 | + ./defineReverseProxyMappings.sh 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 82 | + popd |
|
| 83 | + RETVAL=$? |
|
| 84 | + [ $RETVAL -eq 0 ] && success || failure |
|
| 85 | +} |
|
| 86 | + |
|
| 87 | +stop_servers() { |
|
| 88 | + for conf in $JAVA_START_INSTANCES; do |
|
| 89 | + echo "Stopping Java server $conf" >> /var/log/sailing.err |
|
| 90 | + su - sailing -c "cd $SERVERS_DIR/$conf && ./stop" |
|
| 91 | + RETVAL=$? |
|
| 92 | + [ $RETVAL -eq 0 ] && success || failure |
|
| 93 | + sync_logs |
|
| 94 | + done |
|
| 95 | +} |
|
| 96 | + |
|
| 97 | +sync_logs() { |
|
| 98 | + echo "Executing logrotate followed by a sync to ensure that logs are synchronized" >> /var/log/sailing.err |
|
| 99 | + logrotate -f /etc/logrotate.conf |
|
| 100 | + sync |
|
| 101 | +} |
|
| 102 | + |
|
| 103 | +# See how we were called. |
|
| 104 | +case "$1" in |
|
| 105 | + start) |
|
| 106 | + start_servers |
|
| 107 | + /usr/sbin/update-motd |
|
| 108 | + touch /var/lock/subsys/sailing |
|
| 109 | + ;; |
|
| 110 | + stop) |
|
| 111 | + stop_servers |
|
| 112 | + rm -f /var/lock/subsys/sailing |
|
| 113 | + ;; |
|
| 114 | + status) |
|
| 115 | + status java |
|
| 116 | + RETVAL=$? |
|
| 117 | + ;; |
|
| 118 | + *) |
|
| 119 | + echo $"Usage: $0 {start|status|stop}" |
|
| 120 | + RETVAL=3 |
|
| 121 | +esac |
|
| 122 | + |
|
| 123 | +exit $RETVAL |
configuration/environments_scripts/sailing_server/files/usr/local/bin/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/files/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../../repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/environments_scripts/sailing_server/users/root/crontab-update-authorized-keys
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../../../../crontabs/crontab-update-authorized-keys |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_instance_setup/hudson
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/build_server/files/usr/local/bin/hudson |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_instance_setup/hudson.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/build_server/files/etc/systemd/system/hudson.service |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_instance_setup/setup-hudson-server.sh
| ... | ... | @@ -0,0 +1,92 @@ |
| 1 | +#!/bin/bash |
|
| 2 | + |
|
| 3 | +# Usage: Launch an Amazon EC2 instance from an Amazon Linux 2 AMI with |
|
| 4 | +# 100GB of root partition size and the "Sailing Analytics App" security group |
|
| 5 | +# using an SSH key for which you have a working private key available. |
|
| 6 | +# Then, run this script on your local computer, using the external IP address |
|
| 7 | +# of the instance you just launched in AWS as only argument. This will then |
|
| 8 | +# turn the instance into an application server for the SAP Sailing Analytics |
|
| 9 | +# application. When the script is done you may log in to look around and check |
|
| 10 | +# things. When done, shut down the instance (Stop, not Terminate) and create |
|
| 11 | +# an image off of it, naming it, e.g., "SAP Sailing Analytics 2.0" and |
|
| 12 | +# also tagging its root volume snapshot as, e.g., "SAP Sailing Analytics 2.0 (Root)". |
|
| 13 | +# If you want to use the resulting image in production, also tag it with |
|
| 14 | +# tag key "image-type" and tag value "sailing-analytics-server". |
|
| 15 | +if [ $# != 0 ]; then |
|
| 16 | + SERVER=$1 |
|
| 17 | + $(dirname $0)/../sailing_server_setup/setup-sailing-server.sh ${SERVER} |
|
| 18 | + scp "${0}" ec2-user@${SERVER}: |
|
| 19 | + ssh -A ec2-user@${SERVER} ./$( basename "${0}" ) |
|
| 20 | +else |
|
| 21 | + if ec2-metadata | grep -q instance-id; then |
|
| 22 | + echo "Running on an AWS EC2 instance as user ${USER} / $(whoami), starting setup..." |
|
| 23 | + # Install secrets |
|
| 24 | + scp root@sapsailing.com:dev-secrets /tmp |
|
| 25 | + scp root@sapsailing.com:hudson-aws-credentials /tmp |
|
| 26 | + sudo mv /tmp/dev-secrets /root/secrets |
|
| 27 | + sudo mkdir /root/.aws |
|
| 28 | + sudo mv /tmp/hudson-aws-credentials /root/.aws/credentials |
|
| 29 | + sudo chown root:root /root/secrets /root/.aws/credentials |
|
| 30 | + sudo chmod 600 /root/secrets /root/.aws/credentials |
|
| 31 | + # Make eu-west-1 the default region for any aws CLI interaction: |
|
| 32 | + sudo su - -c "aws configure set default.region eu-west-1" |
|
| 33 | + # Create "hudson" user and clear its directory again which is to become a mount point |
|
| 34 | + sudo adduser hudson |
|
| 35 | + sudo su - hudson -c "rm -rf /home/hudson/* /home/hudson/.* 2>/dev/null" |
|
| 36 | + sudo mkdir /usr/lib/hudson |
|
| 37 | + sudo chown hudson /usr/lib/hudson |
|
| 38 | + sudo mkdir /var/log/hudson |
|
| 39 | + sudo chgrp hudson /var/log/hudson |
|
| 40 | + sudo chmod g+w /var/log/hudson |
|
| 41 | + sudo wget -O /usr/lib/hudson/hudson.war "https://static.sapsailing.com/hudson.war.patched-with-mail-1.6.2" |
|
| 42 | + # Link hudson file to /etc/init.d |
|
| 43 | + sudo ln -s /home/sailing/code/configuration/hudson_instance_setup/hudson /etc/init.d |
|
| 44 | + # Link hudson service to /etc/systemd/system |
|
| 45 | + sudo ln -s /home/sailing/code/configuration/hudson_instance_setup/hudson.service /etc/systemd/system |
|
| 46 | + # Link Hudson system-wide config file: |
|
| 47 | + sudo ln -s /home/sailing/code/configuration/hudson_instance_setup/sysconfig-hudson /etc/sysconfig/hudson |
|
| 48 | + # Link additional script files needed for Hudson build server control: |
|
| 49 | + sudo ln -s /home/sailing/code/configuration/launchhudsonslave /usr/local/bin |
|
| 50 | + sudo ln -s /home/sailing/code/configuration/launchhudsonslave-java11 /usr/local/bin |
|
| 51 | + sudo ln -s /home/sailing/code/configuration/aws-automation/getLatestImageOfType.sh /usr/local/bin |
|
| 52 | + # Enable NFS server |
|
| 53 | + sudo systemctl enable nfs-server.service |
|
| 54 | + sudo systemctl start nfs-server.service |
|
| 55 | + # Enable the service: |
|
| 56 | + sudo systemctl daemon-reload |
|
| 57 | + sudo systemctl enable hudson.service |
|
| 58 | + # NFS-export Android SDK |
|
| 59 | + sudo su - -c "cat <<EOF >>/etc/exports |
|
| 60 | +/home/hudson/android-sdk-linux 172.31.0.0/16(rw,nohide,no_root_squash) |
|
| 61 | +EOF |
|
| 62 | +" |
|
| 63 | + # Allow "hudson" user to launch EC2 instances: |
|
| 64 | + sudo su - -c "cat <<EOF >>/etc/sudoers.d/hudsoncanlaunchec2instances |
|
| 65 | +hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave |
|
| 66 | +hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave-java11 |
|
| 67 | +hudson ALL = (root) NOPASSWD: /usr/local/bin/getLatestImageOfType.sh |
|
| 68 | +EOF |
|
| 69 | +" |
|
| 70 | + # Install DEV server |
|
| 71 | + sudo su - sailing -c "mkdir /home/sailing/servers/DEV |
|
| 72 | +cd /home/sailing/servers/DEV |
|
| 73 | +cat <<EOF | /home/sailing/code/java/target/refreshInstance.sh auto-install-from-stdin |
|
| 74 | +USE_ENVIRONMENT=dev-server |
|
| 75 | +EOF |
|
| 76 | +" |
|
| 77 | + sudo cp /root/secrets /home/sailing/servers/DEV/configuration |
|
| 78 | + sudo chown sailing /home/sailing/servers/DEV/configuration/secrets |
|
| 79 | + sudo chgrp sailing /home/sailing/servers/DEV/configuration/secrets |
|
| 80 | + sudo cp /root/mail.properties /home/sailing/servers/DEV/configuration |
|
| 81 | + sudo chown sailing /home/sailing/servers/DEV/configuration/mail.properties |
|
| 82 | + sudo chgrp sailing /home/sailing/servers/DEV/configuration/mail.properties |
|
| 83 | + # Start the sailing.service with empty/no user data, so the next boot is recognized as a re-boot |
|
| 84 | + sudo systemctl start sailing.service |
|
| 85 | + sudo systemctl stop sailing.service |
|
| 86 | + sudo mount -a |
|
| 87 | + else |
|
| 88 | + echo "Not running on an AWS instance; refusing to run setup!" >&2 |
|
| 89 | + echo "To prepare an instance running in AWS, provide its external IP as argument to this script." >&2 |
|
| 90 | + exit 2 |
|
| 91 | + fi |
|
| 92 | +fi |
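The NFS-export and sudoers steps in setup-hudson-server.sh both rely on the `sudo su - -c "cat <<EOF >>…"` append-via-here-doc idiom. A minimal sketch of that pattern, writing to a temporary file instead of /etc/exports so it can run unprivileged:

```shell
# Sketch of the append-via-here-doc idiom from setup-hudson-server.sh,
# targeting a temporary file instead of /etc/exports.
EXPORTS=$(mktemp)
cat <<EOF >>"$EXPORTS"
/home/hudson/android-sdk-linux 172.31.0.0/16(rw,nohide,no_root_squash)
EOF
grep -c android-sdk-linux "$EXPORTS"   # prints 1: the export line was appended
rm -f "$EXPORTS"
```

In the real script the outer `sudo su - -c "…"` wrapper is what allows the redirection itself to run with root privileges; a plain `sudo cat <<EOF >>/etc/exports` would fail because the shell, not cat, performs the append.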
configuration/hudson_instance_setup/sysconfig-hudson
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/build_server/files/etc/sysconfig/hudson |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_slave_setup/README
| ... | ... | @@ -1,5 +0,0 @@ |
| 1 | -Deploy the .mount and .service units to /etc/systemd/system. |
|
| 2 | -Deploy the imageupgrade script to /usr/local/bin, |
|
| 3 | -furthermore the ../imageupgrade_functions.sh has to go to /usr/local/bin, best as symbolic link to files in git. |
|
| 4 | - |
|
| 5 | -See the script in wiki/howto/development/ci.md that shows how to set up such an instance from scratch. |
configuration/hudson_slave_setup/README
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/hudson_slave/README |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_slave_setup/imageupgrade
| ... | ... | @@ -1,21 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Checks the EC2 user data for an "image-upgrade" marker and, if found, |

| 4 | -# upgrades the instance: apt update/upgrade, latest SAP JVM 8 install, |

| 5 | -# git pull, then the finalize step (shutdown unless "no-shutdown" is set). |
|
| 6 | -. imageupgrade_functions.sh |
|
| 7 | - |
|
| 8 | -get_ec2_user_data() { |
|
| 9 | - /usr/bin/ec2metadata --user-data |
|
| 10 | -} |
|
| 11 | - |
|
| 12 | -METADATA=$( get_ec2_user_data ) |
|
| 13 | -echo "Metadata: ${METADATA}" |
|
| 14 | -if echo "${METADATA}" | grep -q "^image-upgrade$"; then |
|
| 15 | - echo "Image upgrade..." |
|
| 16 | - LOGON_USER_HOME=/home/ubuntu |
|
| 17 | - run_apt_update_upgrade |
|
| 18 | - download_and_install_latest_sap_jvm_8 |
|
| 19 | - run_git_pull |
|
| 20 | - finalize |
|
| 21 | -fi |
configuration/hudson_slave_setup/imageupgrade
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/hudson_slave/files/usr/local/bin/imageupgrade |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_slave_setup/imageupgrade.service
| ... | ... | @@ -1,12 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=Checks for image-upgrade EC2 user data and triggers an image upgrade if found |
|
| 3 | -Requires=-.mount cloud-init.service |
|
| 4 | -After=-.mount cloud-init.service networking.service systemd-networkd.service |
|
| 5 | - |
|
| 6 | -[Install] |
|
| 7 | -WantedBy=multi-user.target |
|
| 8 | - |
|
| 9 | -[Service] |
|
| 10 | -Type=oneshot |
|
| 11 | -RemainAfterExit=true |
|
| 12 | -ExecStart=/usr/local/bin/imageupgrade |
configuration/hudson_slave_setup/imageupgrade.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/hudson_slave/files/etc/systemd/system/imageupgrade.service |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_slave_setup/mounthudsonworkspace
| ... | ... | @@ -1,12 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -# Expects an ephemeral volume to have been mounted at /ephemeral/data and |
|
| 3 | -# mounts that with a "bind" mount to /home/hudson/workspace. |
|
| 4 | -# The directory is then chown/chgrp'ed to hudson |
|
| 5 | -if [ -e /dev/nvme1n1 ]; then |
|
| 6 | - mkfs.ext4 /dev/nvme1n1 |
|
| 7 | - mount /dev/nvme1n1 /home/hudson/workspace |
|
| 8 | -else |
|
| 9 | - mount -o bind /ephemeral/data /home/hudson/workspace |
|
| 10 | -fi |
|
| 11 | -chgrp hudson /home/hudson/workspace |
|
| 12 | -chown hudson /home/hudson/workspace |
configuration/hudson_slave_setup/mounthudsonworkspace
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/hudson_slave/files/usr/local/bin/mounthudsonworkspace |
|
| ... | ... | \ No newline at end of file |
configuration/hudson_slave_setup/mounthudsonworkspace.service
| ... | ... | @@ -1,12 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=Mount Hudson Workspace |
|
| 3 | -After=ephemeral-data.mount |
|
| 4 | - |
|
| 5 | -[Service] |
|
| 6 | -Type=oneshot |
|
| 7 | -ExecStart=/usr/local/bin/mounthudsonworkspace |
|
| 8 | -RemainAfterExit=true |
|
| 9 | -StandardOutput=journal |
|
| 10 | - |
|
| 11 | -[Install] |
|
| 12 | -WantedBy=multi-user.target |
configuration/hudson_slave_setup/mounthudsonworkspace.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/hudson_slave/files/etc/systemd/system/mounthudsonworkspace.service |
|
| ... | ... | \ No newline at end of file |
configuration/imageupgrade.sh
| ... | ... | @@ -1,16 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Upgrades the AWS EC2 instance that this script is assumed to be executed on. |
|
| 4 | -# The steps are as follows: |
|
| 5 | - |
|
| 6 | -. `dirname $0`/imageupgrade_functions.sh |
|
| 7 | - |
|
| 8 | -run_yum_update |
|
| 9 | -run_git_pull |
|
| 10 | -download_and_install_latest_sap_jvm_8 |
|
| 11 | -clean_logrotate_target |
|
| 12 | -clean_httpd_logs |
|
| 13 | -clean_servers_dir |
|
| 14 | -clean_startup_logs |
|
| 15 | -update_root_crontab |
|
| 16 | -finalize |
configuration/imageupgrade.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/sailing_server/files/usr/local/bin/imageupgrade.sh |
|
| ... | ... | \ No newline at end of file |
configuration/imageupgrade_functions.sh
| ... | ... | @@ -1,101 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Upgrades the AWS EC2 instance that this script is assumed to be executed on. |
|
| 4 | -# The steps are as follows: |
|
| 5 | - |
|
| 6 | -REBOOT_INDICATOR=/var/run/is-rebooted |
|
| 7 | -LOGON_USER_HOME=/root |
|
| 8 | - |
|
| 9 | -run_yum_update() { |
|
| 10 | - echo "Updating packages using yum" >>/var/log/sailing.err |
|
| 11 | - yum -y update |
|
| 12 | -} |
|
| 13 | - |
|
| 14 | -run_apt_update_upgrade() { |
|
| 15 | - echo "Updating packages using apt" >>/var/log/sailing.err |
|
| 16 | - apt-get -y update; apt-get -y upgrade |
|
| 17 | - apt-get -y install linux-image-cloud-amd64 |
|
| 18 | - apt-get -y autoremove |
|
| 19 | -} |
|
| 20 | - |
|
| 21 | -run_git_pull() { |
|
| 22 | - echo "Pulling git to /home/sailing/code" >>/var/log/sailing.err |
|
| 23 | - su - sailing -c "cd code; git pull" |
|
| 24 | -} |
|
| 25 | - |
|
| 26 | -download_and_install_latest_sap_jvm_8() { |
|
| 27 | - echo "Downloading and installing latest SAP JVM 8 to /opt/sapjvm_8" >>/var/log/sailing.err |
|
| 28 | - vmpath=$( curl -s --cookie eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt https://tools.hana.ondemand.com | grep additional/sapjvm-8\..*-linux-x64.zip | head -1 | sed -e 's/^.*a href="\(additional\/sapjvm-8\..*-linux-x64\.zip\)".*/\1/' ) |
|
| 29 | - if [ -n "${vmpath}" ]; then |
|
| 30 | - echo "Found VM version ${vmpath}; upgrading installation at /opt/sapjvm_8" >>/var/log/sailing.err |
|
| 31 | - if [ -z "${TMP}" ]; then |
|
| 32 | - TMP=/tmp |
|
| 33 | - fi |
|
| 34 | - echo "Downloading SAP JVM 8 as ZIP file to ${TMP}/sapjvm8-linux-x64.zip" >>/var/log/sailing.err |
|
| 35 | - curl --cookie eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt "https://tools.hana.ondemand.com/${vmpath}" > ${TMP}/sapjvm8-linux-x64.zip 2>>/var/log/sailing.err |
|
| 36 | - cd /opt |
|
| 37 | - rm -rf sapjvm_8 |
|
| 38 | - if [ -f SIGNATURE.SMF ]; then |
|
| 39 | - rm -f SIGNATURE.SMF |
|
| 40 | - fi |
|
| 41 | - unzip ${TMP}/sapjvm8-linux-x64.zip >>/var/log/sailing.err |
|
| 42 | - rm -f ${TMP}/sapjvm8-linux-x64.zip |
|
| 43 | - rm -f SIGNATURE.SMF |
|
| 44 | - else |
|
| 45 | - echo "Did not find SAP JVM 8 at tools.hana.ondemand.com; not trying to upgrade" >>/var/log/sailing.err |
|
| 46 | - fi |
|
| 47 | -} |
|
| 48 | - |
|
| 49 | -clean_logrotate_target() { |
|
| 50 | - echo "Clearing logrotate-targets" >>/var/log/sailing.err |
|
| 51 | - rm -rf /var/log/logrotate-target/* |
|
| 52 | -} |
|
| 53 | - |
|
| 54 | -clean_httpd_logs() { |
|
| 55 | - echo "Clearing httpd logs" >>/var/log/sailing.err |
|
| 56 | - service httpd stop |
|
| 57 | - rm -rf /var/log/httpd/* |
|
| 58 | - rm -f /etc/httpd/conf.d/001-internals.conf |
|
| 59 | -} |
|
| 60 | - |
|
| 61 | -clean_startup_logs() { |
|
| 62 | - echo "Clearing bootstrap logs" >>/var/log/sailing.err |
|
| 63 | - rm -f /var/log/sailing* |
|
| 64 | - # Ensure that upon the next boot the reboot indicator is not present, indicating that it's the first boot |
|
| 65 | - rm "${REBOOT_INDICATOR}" |
|
| 66 | -} |
|
| 67 | - |
|
| 68 | -clean_servers_dir() { |
|
| 69 | - rm -rf /home/sailing/servers/* |
|
| 70 | -} |
|
| 71 | - |
|
| 72 | -update_root_crontab() { |
|
| 73 | - # The following assumes that /root/crontab is a symbolic link to /home/sailing/code/configuration/crontab |
|
| 74 | - # which has previously been updated by a git pull: |
|
| 75 | - cd /root |
|
| 76 | - crontab crontab |
|
| 77 | -} |
|
| 78 | - |
|
| 79 | -clean_root_ssh_dir_and_tmp() { |
|
| 80 | - echo "Cleaning up ${LOGON_USER_HOME}/.ssh" >>/var/log/sailing.err |
|
| 81 | - rm -rf ${LOGON_USER_HOME}/.ssh/* |
|
| 82 | - rm -f /var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 83 | - rm -rf /tmp/image-upgrade-finished |
|
| 84 | -} |
|
| 85 | - |
|
| 86 | -get_ec2_user_data() { |
|
| 87 | - /opt/aws/bin/ec2-metadata -d | sed -e 's/^user-data: //' |
|
| 88 | -} |
|
| 89 | - |
|
| 90 | -finalize() { |
|
| 91 | - # Finally, shut down the node unless "no-shutdown" was provided in the user data, so that a new AMI can be constructed cleanly |
|
| 92 | - if get_ec2_user_data | grep "^no-shutdown$"; then |
|
| 93 | - echo "Shutdown disabled by no-shutdown option in user data. Remember to clean /root/.ssh when done." |
|
| 94 | - touch /tmp/image-upgrade-finished |
|
| 95 | - else |
|
| 96 | - # Only clean ${LOGON_USER_HOME}/.ssh directory and /tmp/image-upgrade-finished if the next step is shutdown / image creation |
|
| 97 | - clean_root_ssh_dir_and_tmp |
|
| 98 | - rm -f /var/log/sailing.err |
|
| 99 | - shutdown -h now & |
|
| 100 | - fi |
|
| 101 | -} |
configuration/imageupgrade_functions.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/sailing_server/files/usr/local/bin/imageupgrade_functions.sh |
|
| ... | ... | \ No newline at end of file |
configuration/launchhudsonslave
| ... | ... | @@ -1,31 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -# Enable sudo for user hudson for this script by adding the following to /etc/sudoers.d/hudsoncanlaunchec2instances: |
|
| 3 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave |
|
| 4 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave-java11 |
|
| 5 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/getLatestImageOfType.sh |
|
| 6 | -AWS=/usr/bin/aws |
|
| 7 | -REGION=eu-west-1 |
|
| 8 | -HUDSON_SLAVE_AMI_ID=$( /usr/local/bin/getLatestImageOfType.sh hudson-slave ) |
|
| 9 | -echo Launching instance from AMI ${HUDSON_SLAVE_AMI_ID} ... |
|
| 10 | -instanceid=`$AWS ec2 run-instances --image-id $HUDSON_SLAVE_AMI_ID --count 1 --instance-type c5d.4xlarge --key-name Axel --security-groups "Sailing Analytics App" --region $REGION --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Hudson Ubuntu Slave}]' --instance-initiated-shutdown-behavior terminate | tee /tmp/slavelaunch.out | jq .Instances[0].InstanceId | sed -e 's/"//g'` |
|
| 11 | -if [ "$instanceid" = "" ]; then |
|
| 12 | - echo Error launching instance |
|
| 13 | - exit 1 |
|
| 14 | -else |
|
| 15 | - echo Instance ID is $instanceid |
|
| 16 | - while [ "`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].State.Name`" != "\"running\"" ]; do |
|
| 17 | - echo Instance $instanceid not running yet\; trying again... |
|
| 18 | - sleep 5 |
|
| 19 | - done |
|
| 20 | - echo Instance $instanceid seems running now |
|
| 21 | - private_ip=`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].PrivateIpAddress | sed -e 's/"//g'` |
|
| 22 | - echo Probing for SSH on private IP $private_ip |
|
| 23 | - # Note: it's important to redirect stdin/stdout from/to /dev/null to ensure the Hudson master can properly connect stdin/stdout to the slave later |
|
| 24 | - while ! su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip mkdir -p /home/hudson/workspace/___test___\; rmdir /home/hudson/workspace/___test___ </dev/null >/dev/null 2>/dev/null"; do |
|
| 25 | - echo SSH daemon not reachable yet. Trying again in a few seconds... |
|
| 26 | - sleep 10 |
|
| 27 | - done |
|
| 28 | - echo SSH daemon reached. State should be ready to connect to now. |
|
| 29 | - su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip \"/opt/sapjvm_8/bin/java -jar slave.jar; sudo /sbin/shutdown -h now\"" |
|
| 30 | - $AWS ec2 terminate-instances --instance-ids $instanceid |
|
| 31 | -fi |
configuration/launchhudsonslave
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/build_server/files/usr/local/bin/launchhudsonslave |
|
| ... | ... | \ No newline at end of file |
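The busy-wait on the instance state in launchhudsonslave reduces to a generic poll-until-ready loop. In the sketch below the `aws ec2 describe-instances … | jq .State.Name` probe is replaced by a counter (illustrative only) so the control flow can be exercised locally:

```shell
# Generic poll-until-ready loop as used in launchhudsonslave; the real
# AWS probe is stubbed with a counter. attempts_needed is illustrative.
attempts_needed=3
tries=0
is_running() {
    tries=$((tries + 1))
    [ "$tries" -ge "$attempts_needed" ]
}
until is_running; do
    :   # the real script sleeps 5 seconds between probes
done
echo "ready after $tries probes"   # prints: ready after 3 probes
```

The AWS CLI also provides `aws ec2 wait instance-running --instance-ids "$instanceid"` as a built-in alternative to hand-rolled polling, though it only covers the instance state, not the subsequent SSH-reachability probe.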
configuration/launchhudsonslave-java11
| ... | ... | @@ -1,31 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -# Enable sudo for user hudson for this script by adding the following to /etc/sudoers.d/hudsoncanlaunchec2instances: |
|
| 3 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave |
|
| 4 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/launchhudsonslave-java11 |
|
| 5 | -# hudson ALL = (root) NOPASSWD: /usr/local/bin/getLatestImageOfType.sh |
|
| 6 | -AWS=/usr/bin/aws |
|
| 7 | -REGION=eu-west-1 |
|
| 8 | -HUDSON_SLAVE_AMI_ID=$( /usr/local/bin/getLatestImageOfType.sh hudson-slave-11 ) |
|
| 9 | -echo Launching instance from AMI ${HUDSON_SLAVE_AMI_ID} ... |
|
| 10 | -instanceid=`$AWS ec2 run-instances --image-id $HUDSON_SLAVE_AMI_ID --count 1 --instance-type c5d.4xlarge --key-name Axel --security-groups "Sailing Analytics App" --region $REGION --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Hudson Ubuntu Slave Java 11}]' --instance-initiated-shutdown-behavior terminate | tee /tmp/slavelaunch.out | jq .Instances[0].InstanceId | sed -e 's/"//g'` |
|
| 11 | -if [ "$instanceid" = "" ]; then |
|
| 12 | - echo Error launching instance |
|
| 13 | - exit 1 |
|
| 14 | -else |
|
| 15 | - echo Instance ID is $instanceid |
|
| 16 | - while [ "`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].State.Name`" != "\"running\"" ]; do |
|
| 17 | - echo Instance $instanceid not running yet\; trying again... |
|
| 18 | - sleep 5 |
|
| 19 | - done |
|
| 20 | - echo Instance $instanceid seems running now |
|
| 21 | - private_ip=`$AWS ec2 describe-instances --region $REGION --instance-ids $instanceid | jq .Reservations[0].Instances[0].PrivateIpAddress | sed -e 's/"//g'` |
|
| 22 | - echo Probing for SSH on private IP $private_ip |
|
| 23 | - # Note: it's important to redirect stdin/stdout from/to /dev/null to ensure the Hudson master can properly connect stdin/stdout to the slave later |
|
| 24 | - while ! su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip mkdir -p /home/hudson/workspace/___test___\; rmdir /home/hudson/workspace/___test___ </dev/null >/dev/null 2>/dev/null"; do |
|
| 25 | - echo SSH daemon not reachable yet. Trying again in a few seconds... |
|
| 26 | - sleep 10 |
|
| 27 | - done |
|
| 28 | - echo SSH daemon reached. State should be ready to connect to now. |
|
| 29 | - su - hudson -c "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no hudson@$private_ip \"/opt/sapjvm_8/bin/java -jar slave.jar; sudo /sbin/shutdown -h now\"" |
|
| 30 | - $AWS ec2 terminate-instances --instance-ids $instanceid |
|
| 31 | -fi |
configuration/launchhudsonslave-java11
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/build_server/files/usr/local/bin/launchhudsonslave-java11 |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/README
| ... | ... | @@ -1,16 +0,0 @@ |
| 1 | -Deploy the .mount and .service units to /etc/systemd/system. |
|
| 2 | -Deploy the ephemeralvolume and the patch-mongo-replicaset-name-from-ec2-metadata script to /usr/local/bin, |
|
| 3 | -furthermore the ../imageupgrade_functions.sh has to go to /usr/local/bin. |
|
| 4 | -Deploy mongod.conf to /etc and make sure that /root has a+r and a+x permissions because |
|
| 5 | -otherwise the mongod user won't be able to read through the symbolic link |
|
| 6 | -Link mongodb to /etc/logrotate.d |
|
| 7 | -Link crontab to /root/crontab and run "crontab crontab" as root. |
|
| 8 | - |
|
| 9 | -Run with optional EC2 user detail, e.g., as follows: |
|
| 10 | - |
|
| 11 | - REPLICA_SET_NAME=archive |
|
| 12 | - REPLICA_SET_PRIMARY=dbserver.internal.sapsailing.com:10201 |
|
| 13 | - |
|
| 14 | -This will automatically patch /etc/mongod.conf such that the replSetName property |
|
| 15 | -is set to the value of REPLICA_SET_NAME. Then, the instance will be added to |
|
| 16 | -the REPLICA_SET_PRIMARY's replica set. |
configuration/mongo_instance_setup/README
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/README |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/add-as-replica
| ... | ... | @@ -1,25 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -user_data=$( ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 3 | -if echo ${user_data} | grep -q "^image-upgrade$"; then |
|
| 4 | - echo "Image upgrade... didn't expect to get this far because ephemeralvolume should have triggered upgrade and shutdown. Not registering MongoDB replica" |
|
| 5 | -else |
|
| 6 | - eval ${user_data} |
|
| 7 | - if [ -z "$REPLICA_SET_NAME" ]; then |
|
| 8 | - REPLICA_SET_NAME=live |
|
| 9 | - fi |
|
| 10 | - if [ -z "$REPLICA_SET_PRIMARY" ]; then |
|
| 11 | - REPLICA_SET_PRIMARY=mongo0.internal.sapsailing.com:27017 |
|
| 12 | - fi |
|
| 13 | - if [ -z "$REPLICA_SET_PRIORITY" ]; then |
|
| 14 | - REPLICA_SET_PRIORITY=1 |
|
| 15 | - fi |
|
| 16 | - if [ -z "$REPLICA_SET_VOTES" ]; then |
|
| 17 | - REPLICA_SET_VOTES=0 |
|
| 18 | - fi |
|
| 19 | - if [ \! -z "$REPLICA_SET_PRIMARY" ]; then |
|
| 20 | - IP=$(ec2-metadata -o | sed -e 's/^local-ipv4: //') |
|
| 21 | - echo "rs.add({host: \"$IP:27017\", priority: $REPLICA_SET_PRIORITY, votes: $REPLICA_SET_VOTES})" | mongo "mongodb://$REPLICA_SET_PRIMARY/?replicaSet=$REPLICA_SET_NAME&retryWrites=true" |
|
| 22 | - else |
|
| 23 | - echo "rs.initiate()" | mongo |
|
| 24 | - fi |
|
| 25 | -fi |
configuration/mongo_instance_setup/add-as-replica
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/usr/local/bin/add-as-replica |
|
| ... | ... | \ No newline at end of file |
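The chain of `if [ -z … ]` default assignments in add-as-replica can be written more compactly with POSIX parameter expansion; `${VAR:-default}` substitutes exactly when the variable is unset or empty, matching the `-z` tests. The values below are the script's own defaults:

```shell
# Defaulting sketch equivalent to the if [ -z ... ] chain in add-as-replica.
# With none of the variables set in the EC2 user data, the defaults apply.
REPLICA_SET_NAME=${REPLICA_SET_NAME:-live}
REPLICA_SET_PRIMARY=${REPLICA_SET_PRIMARY:-mongo0.internal.sapsailing.com:27017}
REPLICA_SET_PRIORITY=${REPLICA_SET_PRIORITY:-1}
REPLICA_SET_VOTES=${REPLICA_SET_VOTES:-0}
echo "$REPLICA_SET_NAME $REPLICA_SET_PRIORITY $REPLICA_SET_VOTES"
```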
configuration/mongo_instance_setup/chownvarlibmongo.service
| ... | ... | @@ -1,14 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=Ensures all files under /var/lib/mongo are owned by mongod user/group |
|
| 3 | -Requires=ephemeralvolume.service |
|
| 4 | -After=ephemeralvolume.service |
|
| 5 | -Before=mongod.service |
|
| 6 | - |
|
| 7 | -[Install] |
|
| 8 | -RequiredBy=mongod.service |
|
| 9 | - |
|
| 10 | -[Service] |
|
| 11 | -Type=oneshot |
|
| 12 | -RemainAfterExit=true |
|
| 13 | -ExecStart=/bin/chown -R mongod /var/lib/mongo/ |
|
| 14 | -ExecStart=/bin/chgrp -R mongod /var/lib/mongo/ |
configuration/mongo_instance_setup/chownvarlibmongo.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/systemd/system/chownvarlibmongo.service |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/crontab
| ... | ... | @@ -1 +1,3 @@ |
| 1 | 1 | * * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com /home/ec2-user 2>&1 >>/var/log/sailing.err |
| 2 | +# NOTICE: Please reference the customised crontabs at $GIT_HOME/configuration/crontabs or use |

| 3 | +# the build-crontab script instead. This file is kept for continuity but is deprecated. |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/ephemeralvolume
| ... | ... | @@ -1,33 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Script to deploy on an instance that has an ephemeral volume as /dev/nvme0n1 (adjust env var PARTITION if different) |
|
| 4 | -# Ensures the partition is xfs-formatted, any existing partition contents will be overwritten if formatted otherwise. |
|
| 5 | -# An existing xfs partition will be left alone. |
|
| 6 | - |
|
| 7 | -METADATA=$( /bin/ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 8 | -echo "Metadata: ${METADATA}" |
|
| 9 | -if echo "${METADATA}" | grep -q "^image-upgrade$"; then |
|
| 10 | - echo "Image upgrade; not trying to mount/format ephemeral volume; calling imageupgrade.sh instead..." |
|
| 11 | - imageupgrade.sh |
|
| 12 | -else |
|
| 13 | - echo "No image upgrade; looking for ephemeral volume and trying to format with xfs..." |
|
| 14 | - PARTITION=/dev/nvme0n1 |
|
| 15 | - if [ \! -e $PARTITION ]; then |
|
| 16 | - PARTITION=/dev/xvdb |
|
| 17 | - fi |
|
| 18 | - if [ \! -e $PARTITION ]; then |
|
| 19 | - echo "Neither /dev/nvme0n1 nor /dev/xvdb partition found; not formatting/mounting ephemeral volume" |
|
| 20 | - elif cat /proc/mounts | awk '{print $1;}' | grep "${PARTITION}"; then |
|
| 21 | - echo "Partition ${PARTITION} already mounted; not formatting/mounting ephemeral volume" |
|
| 22 | - else |
|
| 23 | - FSTYPE=$(blkid -p $PARTITION -s TYPE -o value) |
|
| 24 | - if [ "$FSTYPE" != "xfs" ]; then |
|
| 25 | - echo FSTYPE was "$FSTYPE" but should have been xfs. Formatting $PARTITION... |
|
| 26 | - mkfs.xfs -f $PARTITION |
|
| 27 | - else |
|
| 28 | - echo FSTYPE was "$FSTYPE" which is just right :-\) |
|
| 29 | - fi |
|
| 30 | - # mount the thing to /var/lib/mongo |
|
| 31 | - mount $PARTITION /var/lib/mongo |
|
| 32 | - fi |
|
| 33 | -fi |
configuration/mongo_instance_setup/ephemeralvolume
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/usr/local/bin/ephemeralvolume |
|
| ... | ... | \ No newline at end of file |
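The format-or-keep decision in ephemeralvolume hinges on the blkid probe. The sketch below isolates that decision, passing the probe result in as an argument so the logic can be checked without a block device; on a real instance the value would come from `blkid -p "$PARTITION" -s TYPE -o value`:

```shell
# Decision logic from ephemeralvolume, with the blkid probe stubbed out.
decide() {
    FSTYPE="$1"
    if [ "$FSTYPE" != "xfs" ]; then
        echo "format"   # mkfs.xfs -f "$PARTITION" would run here
    else
        echo "keep"     # an existing xfs filesystem is left alone
    fi
}
decide ""     # prints: format  (no filesystem detected)
decide ext4   # prints: format  (non-xfs contents are overwritten)
decide xfs    # prints: keep
```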
configuration/mongo_instance_setup/ephemeralvolume.service
| ... | ... | @@ -1,11 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=Ensures /dev/nvme0n1 or /dev/xvdb is XFS-formatted |
|
| 3 | -Requires=-.mount cloud-init.service network.service |
|
| 4 | -After=-.mount cloud-init.service network.service |
|
| 5 | - |
|
| 6 | -[Install] |
|
| 7 | - |
|
| 8 | -[Service] |
|
| 9 | -Type=oneshot |
|
| 10 | -RemainAfterExit=true |
|
| 11 | -ExecStart=/usr/local/bin/ephemeralvolume |
configuration/mongo_instance_setup/ephemeralvolume.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/systemd/system/ephemeralvolume.service |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/imageupgrade.sh
| ... | ... | @@ -1,25 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Upgrades the AWS EC2 MongoDB instance that this script is assumed to be executed on. |
|
| 4 | -# The steps are as follows: |
|
| 5 | - |
|
| 6 | -. imageupgrade_functions.sh |
|
| 7 | - |
|
| 8 | -run_git_pull_root() { |
|
| 9 | - echo "Pulling git to /root/code" >>/var/log/sailing.err |
|
| 10 | - cd /root/code |
|
| 11 | - git pull |
|
| 12 | -} |
|
| 13 | - |
|
| 14 | -clean_mongo_pid() { |
|
| 15 | - rm -f /var/run/mongodb/mongod.pid |
|
| 16 | -} |
|
| 17 | - |
|
| 18 | -LOGON_USER_HOME=/home/ec2-user |
|
| 19 | - |
|
| 20 | -run_yum_update |
|
| 21 | -run_git_pull_root |
|
| 22 | -clean_startup_logs |
|
| 23 | -update_root_crontab |
|
| 24 | -clean_mongo_pid |
|
| 25 | -finalize |
configuration/mongo_instance_setup/imageupgrade.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/usr/local/bin/imageupgrade.sh |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/mongo-replica-set.service
| ... | ... | @@ -1,16 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=If REPLICA_SET_NAME EC2 user data is provided, add this node to the replica set of REPLICA_SET_PRIMARY |
|
| 3 | -Requires=mongod.service |
|
| 4 | -After=mongod.service |
|
| 5 | -Requires=cloud-init.service |
|
| 6 | -After=cloud-init.service |
|
| 7 | - |
|
| 8 | -[Install] |
|
| 9 | -WantedBy=multi-user.target |
|
| 10 | - |
|
| 11 | -[Service] |
|
| 12 | -Type=oneshot |
|
| 13 | -RemainAfterExit=true |
|
| 14 | -ExecStart=/usr/local/bin/add-as-replica |
|
| 15 | -ExecStop=/usr/local/bin/remove-as-replica |
|
| 16 | -TimeoutStopSec=120s |
configuration/mongo_instance_setup/mongo-replica-set.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/systemd/system/mongo-replica-set.service |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/mongod.conf
| ... | ... | @@ -1,48 +0,0 @@ |
| 1 | -# mongod.conf |
|
| 2 | - |
|
| 3 | -# for documentation of all options, see: |
|
| 4 | -# http://docs.mongodb.org/manual/reference/configuration-options/ |
|
| 5 | - |
|
| 6 | -# where to write logging data. |
|
| 7 | -systemLog: |
|
| 8 | - destination: file |
|
| 9 | - logAppend: true |
|
| 10 | - path: /var/log/mongodb/mongod.log |
|
| 11 | - |
|
| 12 | -# Where and how to store data. |
|
| 13 | -storage: |
|
| 14 | - dbPath: /var/lib/mongo |
|
| 15 | - journal: |
|
| 16 | - enabled: true |
|
| 17 | - directoryPerDB: true |
|
| 18 | -# engine: |
|
| 19 | -# mmapv1: |
|
| 20 | -# wiredTiger: |
|
| 21 | - |
|
| 22 | -# how the process runs |
|
| 23 | -processManagement: |
|
| 24 | - fork: true # fork and run in background |
|
| 25 | - pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile |
|
| 26 | - timeZoneInfo: /usr/share/zoneinfo |
|
| 27 | - |
|
| 28 | -# network interfaces |
|
| 29 | -net: |
|
| 30 | - port: 27017 |
|
| 31 | -# bindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces. |
|
| 32 | -# bindIp: 172.31.33.146 |
|
| 33 | - bindIp: 0.0.0.0 |
|
| 34 | - |
|
| 35 | -#security: |
|
| 36 | - |
|
| 37 | -#operationProfiling: |
|
| 38 | - |
|
| 39 | -replication: |
|
| 40 | - replSetName: "live" |
|
| 41 | - |
|
| 42 | -#sharding: |
|
| 43 | - |
|
| 44 | -## Enterprise-Only Options |
|
| 45 | - |
|
| 46 | -#auditLog: |
|
| 47 | - |
|
| 48 | -#snmp: |
configuration/mongo_instance_setup/mongod.conf
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/mongod.conf |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/mongodb
| ... | ... | @@ -1,9 +0,0 @@ |
| 1 | -compress |
|
| 2 | -/var/log/mongodb/mongod.log |
|
| 3 | -{ |
|
| 4 | - rotate 5 |
|
| 5 | - weekly |
|
| 6 | - postrotate |
|
| 7 | - /usr/bin/killall -SIGUSR1 mongod |
|
| 8 | - endscript |
|
| 9 | -} |
configuration/mongo_instance_setup/mongodb
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/logrotate.d/mongodb |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/patch-mongo-replicaset-name-from-ec2-metadata
| ... | ... | @@ -1,8 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -REPLICA_SET_NAME=$(ec2-metadata | grep REPLICA_SET_NAME | sed -e 's/^user-data: //' | sed -e 's/^REPLICA_SET_NAME=//') |
|
| 3 | -echo Replica set name: $REPLICA_SET_NAME |
|
| 4 | -if [ \! -z "$REPLICA_SET_NAME" ]; then |
|
| 5 | - echo "Not empty. Patching /etc/mongod.conf..." |
|
| 6 | - sed -i -e "s/replSetName: .*$/replSetName: $REPLICA_SET_NAME/" /etc/mongod.conf |
|
| 7 | - echo "Done" |
|
| 8 | -fi |
configuration/mongo_instance_setup/patch-mongo-replicaset-name-from-ec2-metadata
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/usr/local/bin/patch-mongo-replicaset-name-from-ec2-metadata |
|
| ... | ... | \ No newline at end of file |
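The grep/sed pipeline in the relocated script extracts `REPLICA_SET_NAME` from the instance's EC2 user data. A sketch against a fabricated `ec2-metadata` output line (the value `live` is made up; the real script pipes `ec2-metadata` itself):

```shell
# Fabricated ec2-metadata output; the real input comes from the EC2 metadata service:
meta='user-data: REPLICA_SET_NAME=live'
# Strip the "user-data: " prefix, then the variable name, leaving just the value:
echo "$meta" | grep REPLICA_SET_NAME | sed -e 's/^user-data: //' | sed -e 's/^REPLICA_SET_NAME=//'
```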
configuration/mongo_instance_setup/patch-mongo-replicaset-name-from-ec2-metadata.service
| ... | ... | @@ -1,15 +0,0 @@ |
| 1 | -[Unit] |
|
| 2 | -Description=Check EC2 metadata for MongoDB Replica Set Name and patch /etc/mongod.conf accordingly |
|
| 3 | -Requires=ephemeralvolume.service |
|
| 4 | -After=ephemeralvolume.service |
|
| 5 | -Requires=cloud-init.service |
|
| 6 | -After=cloud-init.service |
|
| 7 | -Before=mongod.service |
|
| 8 | - |
|
| 9 | -[Install] |
|
| 10 | -RequiredBy=mongod.service |
|
| 11 | - |
|
| 12 | -[Service] |
|
| 13 | -Type=oneshot |
|
| 14 | -RemainAfterExit=true |
|
| 15 | -ExecStart=/usr/local/bin/patch-mongo-replicaset-name-from-ec2-metadata |
configuration/mongo_instance_setup/patch-mongo-replicaset-name-from-ec2-metadata.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/etc/systemd/system/patch-mongo-replicaset-name-from-ec2-metadata.service |
|
| ... | ... | \ No newline at end of file |
configuration/mongo_instance_setup/remove-as-replica
| ... | ... | @@ -1,6 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -eval $( ec2-metadata -d | sed -e 's/^user-data: //' ) |
|
| 3 | -if [ \! -z "REPLICA_SET_PRIMARY" ]; then |
|
| 4 | - IP=$(ec2-metadata -o | sed -e 's/^local-ipv4: //') |
|
| 5 | - echo "rs.remove(\"$IP:27017\")" | mongo "mongodb://$REPLICA_SET_PRIMARY/?replicaSet=$REPLICA_SET_NAME&retryWrites=true" |
|
| 6 | -fi |
configuration/mongo_instance_setup/remove-as-replica
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/mongo_instance_setup/files/usr/local/bin/remove-as-replica |
|
| ... | ... | \ No newline at end of file |
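The relocated `remove-as-replica` script turns the whole user-data block into shell variables with `eval`. A sketch with fabricated user-data values (primary address and replica set name are made up); note the guard must expand the variable with `$` for the emptiness test to mean anything:

```shell
# Fabricated user-data in ec2-metadata format (real input: `ec2-metadata -d`):
meta='user-data: REPLICA_SET_PRIMARY=10.0.0.5:27017
REPLICA_SET_NAME=live'
# Strip the prefix and eval the remaining KEY=VALUE lines into shell variables:
eval "$( echo "$meta" | sed -e 's/^user-data: //' )"
# Only act when a primary was actually configured in the user data:
if [ -n "$REPLICA_SET_PRIMARY" ]; then
  echo "primary: $REPLICA_SET_PRIMARY, set: $REPLICA_SET_NAME"
fi
```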
configuration/mongohash
| ... | ... | @@ -48,7 +48,7 @@ if [ -z "$URI" ]; then |
| 48 | 48 | fi |
| 49 | 49 | echo "URI: $URI" >&2 |
| 50 | 50 | if [ -z $ALL ]; then |
| 51 | - echo "db.runCommand({dbHash: 1})" | mongo --quiet "$URI" | grep -v "^$(date +%Y-%m-%d)" | grep md5 | sed -e 's/^.*"md5" : "\(.*\)".*$/\1/' |
|
| 51 | + echo "db.runCommand({dbHash: 1})" | mongo --quiet "$URI" | grep -v "^$(date +%Y-%m-%d)" | grep md5 | sed -e 's/^.*"md5" *: *"\([^"]*\)".*$/\1/' |
|
| 52 | 52 | else |
| 53 | 53 | echo "db.runCommand({dbHash: 1})" | mongo --quiet "$URI" | grep -v "^$(date +%Y-%m-%d)" |
| 54 | 54 | fi |
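The hardened `sed` in line 51 tolerates variable spacing around the colon and stops at the first closing quote, where the old greedy `\(.*\)` would have run to the last quote on the line. Exercised on a fabricated output line (the hash value is made up; real input comes from `mongo --quiet`):

```shell
# Fabricated mongo shell output line for db.runCommand({dbHash: 1}):
line='  "md5" : "3b1c2d4e5f60718293a4b5c6d7e8f901",'
# Extract just the hash; [^"]* cannot cross the hash's closing quote:
echo "$line" | sed -e 's/^.*"md5" *: *"\([^"]*\)".*$/\1/'
```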
configuration/mysql_instance_setup/crontab-ec2-user
| ... | ... | @@ -0,0 +1,3 @@ |
| 1 | +* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /home/ec2-user/ssh-key-reader.token ) https://security-service.sapsailing.com /home/ec2-user |
|
| 2 | +# NOTICE: Please try to reference the customised crontabs at $GIT_HOME/configuration/crontabs or use |
|
| 3 | +# the build-crontab script. This file has been maintained for continuity, but is deprecated. |
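The `sleep $(( $RANDOM * 60 / 32768 ))` prefix spreads the per-minute key refresh across the minute, so a fleet of instances does not hit the security service at the same second. Note `$RANDOM` is a bash feature: under cron's default `/bin/sh` this only works where `sh` is bash (as on Amazon Linux), which is why the Debian crontab for the RabbitMQ host sets `SHELL=/bin/bash` explicitly. A sketch of the arithmetic:

```shell
# $RANDOM is uniform over 0..32767, so scaling by 60/32768 always
# yields a jitter of 0..59 whole seconds:
jitter=$(( RANDOM * 60 / 32768 ))
echo "jitter: ${jitter}s"
```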
configuration/mysql_instance_setup/setup-mysql-server.sh
| ... | ... | @@ -0,0 +1,70 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# Usage: ${0} [ -b {bugs-password} ] [ -r {root-password} ] {instance-ip} |
|
| 3 | +# Deploy with Amazon Linux 2023 |
|
| 4 | + |
|
| 5 | +# Read options and assign to variables: |
|
| 6 | +options='b:r:' |
|
| 7 | +while getopts $options option |
|
| 8 | +do |
|
| 9 | + case $option in |
|
| 10 | + b) BUGS_PW="${OPTARG}" ;; |
|
| 11 | + r) ROOT_PW="${OPTARG}" ;; |
|
| 12 | + \?) echo "Invalid option" |
|
| 13 | + exit 4;; |
|
| 14 | + esac |
|
| 15 | +done |
|
| 16 | +if [ -z "${ROOT_PW}" ]; then |
|
| 17 | + echo -n "MySQL password for user root: " |
|
| 18 | + read -s ROOT_PW |
|
| 19 | + echo |
|
| 20 | +fi |
|
| 21 | +if [ -z "${BUGS_PW}" ]; then |
|
| 22 | + echo -n "MySQL password for user bugs: " |
|
| 23 | + read -s BUGS_PW |
|
| 24 | + echo |
|
| 25 | +fi |
|
| 26 | +shift $((OPTIND-1)) |
|
| 27 | +if [ $# != 0 ]; then |
|
| 28 | + SERVER=$1 |
|
| 29 | + scp -o StrictHostKeyChecking=false "${0}" ec2-user@${SERVER}: |
|
| 30 | + ssh -o StrictHostKeyChecking=false -A ec2-user@${SERVER} "./$( basename "${0}" ) -r \"${ROOT_PW}\" -b \"${BUGS_PW}\"" |
|
| 31 | +else |
|
| 32 | + BACKUP_FILE=/home/ec2-user/backupdb.sql |
|
| 33 | + backupdbNOLOCK=/home/ec2-user/backupdbNOLOCK.sql |
|
| 34 | + # Install cron job for ssh key update for landscape managers |
|
| 35 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers /tmp |
|
| 36 | + sudo mv /tmp/update_authorized_keys_for_landscape_managers /usr/local/bin |
|
| 37 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers_if_changed /tmp |
|
| 38 | + sudo mv /tmp/update_authorized_keys_for_landscape_managers_if_changed /usr/local/bin |
|
| 39 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/mysql_instance_setup/crontab-ec2-user /home/ec2-user/crontab |
|
| 40 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:ssh-key-reader.token /home/ec2-user |
|
| 41 | + sudo chown ec2-user /home/ec2-user/ssh-key-reader.token |
|
| 42 | + sudo chgrp ec2-user /home/ec2-user/ssh-key-reader.token |
|
| 43 | + sudo chmod 600 /home/ec2-user/ssh-key-reader.token |
|
| 44 | + # Install packages for MariaDB and cron/anacron/crontab: |
|
| 45 | + sudo yum update -y |
|
| 46 | + sudo yum -y install mariadb105-server cronie |
|
| 47 | + sudo su -c "printf '\n[mysqld]\nlog_bin = /var/log/mariadb/mysql-bin.log\n' >> /etc/my.cnf.d/mariadb-server.cnf" |
|
| 48 | + sudo systemctl enable mariadb.service |
|
| 49 | + sudo systemctl start mariadb.service |
|
| 50 | + sudo systemctl enable crond.service |
|
| 51 | + sudo systemctl start crond.service |
|
| 52 | + crontab /home/ec2-user/crontab |
|
| 53 | + echo "Creating backup through mysql client on sapsailing.com..." |
|
| 54 | + ssh -o StrictHostKeyChecking=false root@sapsailing.com "mysqldump --all-databases -h mysql.internal.sapsailing.com --user=root --password=${ROOT_PW} --master-data --skip-lock-tables --lock-tables=0" >> ${BACKUP_FILE} |
|
| 55 | + # The two lock options above are meant to avoid table locks in the dump, but one problematic LOCK TABLES block still gets emitted; strip it: |
|
| 56 | + echo "Removing lock on log table which causes failures" |
|
| 57 | + cat ${BACKUP_FILE} | sed "/LOCK TABLES \`transaction_registry\`/,/UNLOCK TABLES;/d" >${backupdbNOLOCK} |
|
| 58 | + echo "Importing backup locally..." |
|
| 59 | + sudo mysql -u root -h localhost <${backupdbNOLOCK} |
|
| 60 | + sudo mysql -u root -p${ROOT_PW} -e "FLUSH PRIVILEGES;" |
|
| 61 | + rm ${BACKUP_FILE} |
|
| 62 | + rm ${backupdbNOLOCK} |
|
| 63 | + sudo systemctl stop mariadb.service |
|
| 64 | + sudo systemctl start mariadb.service |
|
| 65 | + sudo mysql -u root -p${ROOT_PW} -e "select count(bug_id) from bugs.bugs;" |
|
| 66 | + echo 'Test your DB, e.g., by counting bugs: sudo mysql -u root -p -e "use bugs; select count(*) from bugs;"' |
|
| 67 | + echo "If you like what you see, switch to the new DB by updating the mysql.internal.sapsailing.com DNS record to this instance," |
|
| 68 | + echo "make sure the instance has the \"Database and Messaging\" security group set," |
|
| 69 | + echo "and tag the instance's root volume with the WeeklySailingInfrastructureBackup=Yes tag." |
|
| 70 | +fi |
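Line 57 drops the problematic `LOCK TABLES \`transaction_registry\`` … `UNLOCK TABLES;` block from the dump using a sed address range. A sketch on a fabricated mini-dump (table contents are made up; only the `transaction_registry` name comes from the script):

```shell
# Fabricated dump fragment; only the transaction_registry block should vanish:
dump='CREATE TABLE t1 (id INT);
LOCK TABLES `transaction_registry` WRITE;
INSERT INTO `transaction_registry` VALUES (1);
UNLOCK TABLES;
CREATE TABLE t2 (id INT);'
# /re1/,/re2/d deletes everything from the first match of re1 through
# the next match of re2, inclusive:
echo "$dump" | sed "/LOCK TABLES \`transaction_registry\`/,/UNLOCK TABLES;/d"
```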
configuration/notify-unhealthy-mongodb
| ... | ... | @@ -1,6 +1,7 @@ |
| 1 | 1 | #!/bin/bash |
| 2 | 2 | MONGODBS="mongodb://dbserver.internal.sapsailing.com:10201/?replicaSet=archive mongodb://dbserver.internal.sapsailing.com:10202/?replicaSet=slow mongodb://dbserver.internal.sapsailing.com:10203/?replicaSet=live" |
| 3 | -MAILINGLIST="/home/trac/mailinglists/unhealthy-mongo-list" |
|
| 3 | +#MAILINGLIST="/home/trac/mailinglists/unhealthy-mongo-list" |
|
| 4 | +MAILINGLIST=$(cat /var/cache/landscapeManagersMailingList) |
|
| 4 | 5 | for db in $MONGODBS; do |
| 5 | 6 | echo "rs.status()" | mongo --quiet "$db" | grep '\("set"\)\|\("name"\)\|\("health"\)\|\("stateStr"\)' | sed -e 's/^.* : "\?//' -e 's/"\?,$//' | ( |
| 6 | 7 | read replicaset |
configuration/rabbitmq_instance_setup/crontab-admin
| ... | ... | @@ -0,0 +1,2 @@ |
| 1 | +SHELL=/bin/bash |
|
| 2 | +* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /home/admin/ssh-key-reader.token ) https://security-service.sapsailing.com /home/admin |
configuration/rabbitmq_instance_setup/setup-rabbitmq-server.sh
| ... | ... | @@ -0,0 +1,43 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# Apply this to an instance launched from a barebones Debian 12 image with ~16GB of root volume size. |
|
| 3 | +# As a result, you'll get a ready-to-use RabbitMQ server that is running on the default port and accepts |
|
| 4 | +# connections with the guest user log-in also from non-localhost addresses. |
|
| 5 | +if [ $# != 0 ]; then |
|
| 6 | + SERVER=$1 |
|
| 7 | + scp -o StrictHostKeyChecking=false "${0}" admin@${SERVER}: |
|
| 8 | + ssh -o StrictHostKeyChecking=false -A admin@${SERVER} ./$( basename "${0}" ) |
|
| 9 | +else |
|
| 10 | + # Fix the nonsensical use of "dash" as the default shell: |
|
| 11 | + sudo rm /usr/bin/sh |
|
| 12 | + sudo ln -s /usr/bin/bash /usr/bin/sh |
|
| 13 | + # Install cron job for ssh key update for landscape managers |
|
| 14 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers /tmp |
|
| 15 | + sudo mv /tmp/update_authorized_keys_for_landscape_managers /usr/local/bin |
|
| 16 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers_if_changed /tmp |
|
| 17 | + sudo mv /tmp/update_authorized_keys_for_landscape_managers_if_changed /usr/local/bin |
|
| 18 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:/home/wiki/gitwiki/configuration/rabbitmq_instance_setup/crontab-admin /home/admin/crontab |
|
| 19 | + scp -o StrictHostKeyChecking=false root@sapsailing.com:ssh-key-reader.token /home/admin |
|
| 20 | + sudo chown admin /home/admin/ssh-key-reader.token |
|
| 21 | + sudo chgrp admin /home/admin/ssh-key-reader.token |
|
| 22 | + sudo chmod 600 /home/admin/ssh-key-reader.token |
|
| 23 | + # Install packages for RabbitMQ and cron/anacron/crontab: |
|
| 24 | + sudo apt-get -y update |
|
| 25 | + sudo DEBIAN_FRONTEND=noninteractive apt-get -yq -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confnew upgrade |
|
| 26 | + sudo DEBIAN_FRONTEND=noninteractive apt-get -yq -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confnew install rabbitmq-server systemd-cron jq syslog-ng |
|
| 27 | + sudo touch /var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 28 | + sudo chown admin:admin /var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 29 | + crontab /home/admin/crontab |
|
| 30 | + # Wait for RabbitMQ to become available; note that install under apt also means start... |
|
| 31 | + sleep 10 |
|
| 32 | + sudo rabbitmq-plugins enable rabbitmq_management |
|
| 33 | + # Allow guest login from non-localhost IPs: |
|
| 34 | + sudo su - -c "cat <<EOF >>/etc/rabbitmq/rabbitmq.conf |
|
| 35 | +loopback_users = none |
|
| 36 | +EOF |
|
| 37 | +" |
|
| 38 | + sudo systemctl restart rabbitmq-server.service |
|
| 39 | + echo 'Test your broker, e.g., by checking its status: sudo rabbitmqctl status' |

| 40 | + echo "If you like what you see, switch to the new broker by updating its internal DNS record to point to this instance," |
|
| 41 | + echo "make sure the instance has the \"Database and Messaging\" security group set," |
|
| 42 | + echo "and tag the instance's root volume with the WeeklySailingInfrastructureBackup=Yes tag." |
|
| 43 | +fi |
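The script appends `loopback_users = none` to `/etc/rabbitmq/rabbitmq.conf` unconditionally, so re-running it would stack duplicate lines. A sketch of an idempotent variant (not what the script does today; a temp file stands in for the real config):

```shell
# Append a setting only if the exact line is not already present, so a
# re-run of the setup script is a no-op. Temp file stands in for
# /etc/rabbitmq/rabbitmq.conf.
conf=$(mktemp)
append_once() {
  grep -qxF "$1" "$2" || echo "$1" >> "$2"
}
append_once "loopback_users = none" "$conf"
append_once "loopback_users = none" "$conf"   # second call changes nothing
grep -c "loopback_users" "$conf"              # prints 1
```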
configuration/sailing
| ... | ... | @@ -1,163 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -# |
|
| 3 | -# sailing Starts sailing services |
|
| 4 | -# |
|
| 5 | -# chkconfig: 2345 95 10 |
|
| 6 | -# description: Sailing contains all sailing services |
|
| 7 | -# |
|
| 8 | -### BEGIN INIT INFO |
|
| 9 | -# Provides: sailing |
|
| 10 | -# Required-Start: $local_fs $network $named $remote_fs |
|
| 11 | -# Should-Start: |
|
| 12 | -# Required-Stop: |
|
| 13 | -# Should-Stop: |
|
| 14 | -# Default-Start: 2 3 4 5 |
|
| 15 | -# Default-Stop: 0 1 6 |
|
| 16 | -# Short-Description: The sailing service |
|
| 17 | -# Description: Start all sailing services required for this instance |
|
| 18 | -### END INIT INFO |
|
| 19 | - |
|
| 20 | -# Source function library. |
|
| 21 | -. /etc/init.d/functions |
|
| 22 | - |
|
| 23 | -RETVAL=0 |
|
| 24 | - |
|
| 25 | -SERVERS_DIR=/home/sailing/servers |
|
| 26 | -cd "${SERVERS_DIR}" |
|
| 27 | -JAVA_START_INSTANCES="$(find * -type d -prune)" |
|
| 28 | -GIT_REPOSITORY=/home/sailing/code |
|
| 29 | -APACHE_CONFIG_DIR=/etc/httpd/conf.d |
|
| 30 | -APACHE_INTERNALS_CONFIG_FILE="$APACHE_CONFIG_DIR/001-internals.conf" |
|
| 31 | -EC2_METADATA_CMD=/opt/aws/bin/ec2-metadata |
|
| 32 | -REBOOT_INDICATOR=/var/run/is-rebooted |
|
| 33 | -SSH_KEY_READER_BEARER_TOKEN=/root/ssh-key-reader.token |
|
| 34 | - |
|
| 35 | -echo "Executing with $1 at `date`" >>/var/log/sailing.err |
|
| 36 | - |
|
| 37 | -start_tmux() { |
|
| 38 | - su - sailing -c "/home/sailing/bin/tmuxConsole.sh unattended" |
|
| 39 | - success |
|
| 40 | -} |
|
| 41 | - |
|
| 42 | -start_servers() { |
|
| 43 | - /usr/local/bin/update_authorized_keys_for_landscape_managers $( cat ${SSH_KEY_READER_BEARER_TOKEN} ) https://security-service.sapsailing.com /root 2>&1 >>/var/log/sailing.err |
|
| 44 | - cp /home/sailing/code/configuration/cp_root_mail_properties /usr/local/bin |
|
| 45 | - chown root /usr/local/bin/cp_root_mail_properties |
|
| 46 | - chgrp root /usr/local/bin/cp_root_mail_properties |
|
| 47 | - chmod 755 /usr/local/bin/cp_root_mail_properties |
|
| 48 | - cp /home/sailing/code/configuration/cp_root_mail_properties_sudoers /etc/sudoers.d |
|
| 49 | - if which $EC2_METADATA_CMD && $EC2_METADATA_CMD -d | sed "s/user-data\: //g" | grep "^image-upgrade$"; then |
|
| 50 | - echo "Found image-upgrade in EC2 user data; upgrading image, then probably shutting down for AMI creation depending on the no-shutdown user data string..." >>/var/log/sailing.err |
|
| 51 | - $GIT_REPOSITORY/configuration/imageupgrade.sh |
|
| 52 | - else |
|
| 53 | - echo "No image-upgrade request found in EC2 user data $($EC2_METADATA_CMD -d); proceeding with regular server launch..." >>/var/log/sailing.err |
|
| 54 | - echo "Servers to launch: ${JAVA_START_INSTANCES}" >>/var/log/sailing.err |
|
| 55 | - if [ -f "${REBOOT_INDICATOR}" ]; then |
|
| 56 | - echo "This is a re-boot. No EC2 user data is evaluated for server configuration; no server configuration is performed. Only configured applications are launched." >>/var/log/sailing.err |
|
| 57 | - for conf in ${JAVA_START_INSTANCES}; do |
|
| 58 | - su - sailing -c "cd ${SERVERS_DIR}/${conf} && ./start" 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 59 | - done |
|
| 60 | - else |
|
| 61 | - echo "This is a first-time boot. EC2 user data is evaluated for potential application deployment and configuration; reverse proxy entries may be created, and applications are launched." >>/var/log/sailing.err |
|
| 62 | - FIRST_SERVER=$( eval $( ${EC2_METADATA_CMD} -d | sed -e 's/^user-data: //' ); echo $SERVER_NAME ) |
|
| 63 | - if [ "${FIRST_SERVER}" = "" ]; then |
|
| 64 | - echo "No SERVER_NAME provided; not configuring/starting any application processes" >>/var/log/sailing.err |
|
| 65 | - else |
|
| 66 | - echo "Server to configure and start: ${FIRST_SERVER}" >>/var/log/sailing.err |
|
| 67 | - configure_and_start_server "${FIRST_SERVER}" |
|
| 68 | - create_basic_httpd_config "${FIRST_SERVER}" |
|
| 69 | - reload_httpd |
|
| 70 | - fi |
|
| 71 | - echo 1 >"${REBOOT_INDICATOR}" |
|
| 72 | - fi |
|
| 73 | - fi |
|
| 74 | -} |
|
| 75 | - |
|
| 76 | -# Call with the server directory name (not the full path, just a single element from ${JAVA_START_INSTANCE}) as parameter |
|
| 77 | -# Example: configure_and_start_server server |
|
| 78 | -# This is expected to be called only in case there is only one server to configure; otherwise, the same EC2 user data |
|
| 79 | -# would get applied to all application configurations which would not be a good idea. |
|
| 80 | -configure_and_start_server() { |
|
| 81 | - conf="$1" |
|
| 82 | - mkdir -p "${SERVERS_DIR}/${conf}" >/dev/null 2>/dev/null |
|
| 83 | - chown sailing "${SERVERS_DIR}/${conf}" |
|
| 84 | - chgrp sailing "${SERVERS_DIR}/${conf}" |
|
| 85 | - # If there is a secret /root/mail.properties, copy it into the default server's configuration directory: |
|
| 86 | - /usr/local/bin/cp_root_mail_properties "${conf}" |
|
| 87 | - su - sailing -c "cd ${SERVERS_DIR}/${conf} && ${GIT_REPOSITORY}/java/target/refreshInstance.sh auto-install; ./start" 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 88 | - pushd ${SERVERS_DIR}/${conf} |
|
| 89 | - ./defineReverseProxyMappings.sh 2>>/var/log/sailing.err >>/var/log/sailing.err |
|
| 90 | - popd |
|
| 91 | - RETVAL=$? |
|
| 92 | - [ $RETVAL -eq 0 ] && success || failure |
|
| 93 | -} |
|
| 94 | - |
|
| 95 | -stop_servers() { |
|
| 96 | - for conf in $JAVA_START_INSTANCES; do |
|
| 97 | - echo "Stopping Java server $conf" >> /var/log/sailing.err |
|
| 98 | - su - sailing -c "cd $SERVERS_DIR/$conf && ./stop" |
|
| 99 | - RETVAL=$? |
|
| 100 | - [ $RETVAL -eq 0 ] && success || failure |
|
| 101 | - stop_httpd |
|
| 102 | - sync_logs |
|
| 103 | - done |
|
| 104 | -} |
|
| 105 | - |
|
| 106 | -sync_logs() { |
|
| 107 | - echo "Executing logrotate followed by a sync to ensure that logs are synchronized" >> /var/log/sailing.err |
|
| 108 | - logrotate -f /etc/logrotate.conf |
|
| 109 | - sync |
|
| 110 | -} |
|
| 111 | - |
|
| 112 | -reload_httpd() { |
|
| 113 | - echo "Will try to launch httpd so this replica can work with an ELB easily." >>/var/log/sailing.err |
|
| 114 | - if [ -x /etc/init.d/httpd ]; then |
|
| 115 | - echo "Reloading httpd configuration..." >>/var/log/sailing.err |
|
| 116 | - service httpd reload |
|
| 117 | - else |
|
| 118 | - echo "Can't launch httpd; start script doesn't seem to be installed at /etc/init.d/httpd" |
|
| 119 | - fi |
|
| 120 | -} |
|
| 121 | - |
|
| 122 | -# Adds a Plain-SSL mapping to the first server's port and a mapping for /internal-server-status, both to 001-internals.conf |
|
| 123 | -create_basic_httpd_config() { |
|
| 124 | - FIRST_SERVER=$1 |
|
| 125 | - if [ -d $SERVERS_DIR/$FIRST_SERVER ]; then |
|
| 126 | - source $SERVERS_DIR/$FIRST_SERVER/env.sh |
|
| 127 | - fi |
|
| 128 | - echo "Writing macro invocation to ${APACHE_INTERNALS_CONFIG_FILE} to map internal IP $INSTANCE_INTERNAL_IP4 to plain server running $SERVER_PORT..." >>/var/log/sailing.err |
|
| 129 | - echo "Use Plain-SSL ${INSTANCE_INTERNAL_IP4} 127.0.0.1 $SERVER_PORT" >"${APACHE_INTERNALS_CONFIG_FILE}" |
|
| 130 | - # Append Apache macro invocation for /internal-server-status based on mod_status and INSTANCE_DNS to "${APACHE_INTERNALS_CONFIG_FILE}" |
|
| 131 | - echo "Appending macro usage for $INSTANCE_DNS/internal-server-status URL for mod_status based Apache monitoring to ${APACHE_INTERNALS_CONFIG_FILE}" >>/var/log/sailing.err |
|
| 132 | - echo "## SERVER STATUS" >>"${APACHE_INTERNALS_CONFIG_FILE}" |
|
| 133 | - echo "Use Status $INSTANCE_DNS internal-server-status" >>"${APACHE_INTERNALS_CONFIG_FILE}" |
|
| 134 | -} |
|
| 135 | - |
|
| 136 | -stop_httpd() { |
|
| 137 | - if [ -x /etc/init.d/httpd ]; then |
|
| 138 | - service httpd stop |
|
| 139 | - echo "Stopped httpd..." >>/var/log/sailing.err |
|
| 140 | - fi |
|
| 141 | -} |
|
| 142 | - |
|
| 143 | -# See how we were called. |
|
| 144 | -case "$1" in |
|
| 145 | - start) |
|
| 146 | - start_servers |
|
| 147 | - /usr/sbin/update-motd |
|
| 148 | - touch /var/lock/subsys/sailing |
|
| 149 | - ;; |
|
| 150 | - stop) |
|
| 151 | - stop_servers |
|
| 152 | - rm -f /var/lock/subsys/sailing |
|
| 153 | - ;; |
|
| 154 | - status) |
|
| 155 | - status java |
|
| 156 | - RETVAL=$? |
|
| 157 | - ;; |
|
| 158 | - *) |
|
| 159 | - echo $"Usage: $0 {start|status|stop}" |
|
| 160 | - RETVAL=3 |
|
| 161 | -esac |
|
| 162 | - |
|
| 163 | -exit $RETVAL |
configuration/sailing
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/sailing_server/files/usr/local/bin/sailing |
|
| ... | ... | \ No newline at end of file |
configuration/sailing.sh
| ... | ... | @@ -1,22 +0,0 @@ |
| 1 | -# Script to be linked from /etc/profile.d |
|
| 2 | -# Appends to PATH, sets DISPLAY for VNC running on :2, exports JAVA_HOME and Amazon EC2 variables |
|
| 3 | - |
|
| 4 | -ulimit -n 100000 |
|
| 5 | -ulimit -u 40000 |
|
| 6 | - |
|
| 7 | -export EC2_HOME=/opt/amazon/ec2-api-tools-1.6.8.0 |
|
| 8 | -export EC2_URL=https://ec2.eu-west-1.amazonaws.com |
|
| 9 | -# SAP JVM |
|
| 10 | -export JAVA_HOME=/opt/sapjvm_8 |
|
| 11 | -# JDK 11.0.1: |
|
| 12 | -#export JAVA_HOME=/opt/jdk-11.0.1+13 |
|
| 13 | -#export JAVA_HOME=/opt/jdk1.8.0_45 |
|
| 14 | -export JAVA_1_7_HOME=/opt/jdk1.7.0_75 |
|
| 15 | -export ANDROID_HOME=/opt/android-sdk-linux |
|
| 16 | - |
|
| 17 | -export PATH=$PATH:$JAVA_HOME/bin:/opt/amazon/ec2-api-tools-1.6.8.0/bin:/opt/amazon/bin:/opt/apache-maven-3.6.3/bin |
|
| 18 | - |
|
| 19 | -export DISPLAY=:2.0 |
|
| 20 | - |
|
| 21 | -alias sa='eval `ssh-agent`; ssh-add ~/.ssh/id_dsa' |
|
| 22 | -alias ll="ls -lh --color" |
configuration/sailing.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/sailing_server/files/etc/profile.d/sailing.sh |
|
| ... | ... | \ No newline at end of file |
configuration/sailing_server_setup/crontab-root
| ... | ... | @@ -0,0 +1,3 @@ |
| 1 | +* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com /root >>/var/log/sailing.err 2>&1 |
|
| 2 | +# NOTICE: Please try to reference the customised crontabs at $GIT_HOME/configuration/crontabs or use |
|
| 3 | +# the build-crontab script. This file has been maintained for continuity, but is deprecated. |
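Redirection order in the crontab line matters: duplications are processed left to right, so `2>&1 >>file` points stderr at whatever stdout was at that moment (cron's mail pipe) before stdout moves to the file, while `>>file 2>&1` captures both streams in the log. A minimal demonstration with a temp file:

```shell
# Both streams land in the file only when the file redirection comes first:
log=$(mktemp)
{ echo out; echo err >&2; } >>"$log" 2>&1
cat "$log"   # prints both "out" and "err"
```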
configuration/sailing_server_setup/mountnvmeswap
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../archive_instance_setup/mountnvmeswap |
|
| ... | ... | \ No newline at end of file |
configuration/sailing_server_setup/mountnvmeswap.initd
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../archive_instance_setup/mountnvmeswap.initd |
|
| ... | ... | \ No newline at end of file |
configuration/sailing_server_setup/mountnvmeswap.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../archive_instance_setup/mountnvmeswap.service |
|
| ... | ... | \ No newline at end of file |
configuration/sailing_server_setup/sailing.service
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +../environments_scripts/sailing_server/files/etc/systemd/system/sailing.service |
|
| ... | ... | \ No newline at end of file |
configuration/sailing_server_setup/setup-sailing-server.sh
| ... | ... | @@ -0,0 +1,130 @@ |
| 1 | +#!/bin/bash |
|
| 2 | +# Usage: Launch an Amazon EC2 instance from an Amazon Linux 2 AMI with |
|
| 3 | +# 100GB of root partition size and the "Sailing Analytics App" security group |
|
| 4 | +# using an SSH key for which you have a working private key available. |
|
| 5 | +# Then, run this script on your local computer, using the external IP address |
|
| 6 | +# of the instance you just launched in AWS as only argument. This will then |
|
| 7 | +# turn the instance into an application server for the SAP Sailing Analytics |
|
| 8 | +# application. When the script is done you may log in to look around and check |
|
| 9 | +# things. When done, shut down the instance (Stop, not Terminate) and create |
|
| 10 | +# an image off of it, naming it, e.g., "SAP Sailing Analytics 2.0" and |
|
| 11 | +# also tagging its root volume snapshot as, e.g., "SAP Sailing Analytics 2.0 (Root)". |
|
| 12 | +# If you want to use the resulting image in production, also tag it with |
|
| 13 | +# tag key "image-type" and tag value "sailing-analytics-server". |
|
| 14 | +if [ $# != 0 ]; then |
|
| 15 | + SERVER=$1 |
|
| 16 | + scp "${0}" ec2-user@${SERVER}: |
|
| 17 | + ssh -A ec2-user@${SERVER} ./$( basename "${0}" ) |
|
| 18 | +else |
|
| 19 | + if ec2-metadata | grep -q instance-id; then |
|
| 20 | + echo "Running on an AWS EC2 instance as user ${USER} / $(whoami), starting setup..." |
|
| 21 | + # Allow root ssh login with the same key used for the ec2-user for now; |
|
| 22 | + # later, a cron job will be installed that keeps the /root/authorized_keys file |
|
| 23 | + # up to date with all landscape managers' public SSH keys |
|
| 24 | + sudo cp /home/ec2-user/.ssh/authorized_keys /root/.ssh |
|
| 25 | + sudo chown root /root/.ssh/authorized_keys |
|
| 26 | + sudo chgrp root /root/.ssh/authorized_keys |
|
| 27 | + sudo adduser sailing |
|
| 28 | + sudo su - sailing -c "mkdir servers" |
|
| 29 | + # Create an SSH key pair with empty passphrase for ec2-user, deploy it to trac@sapsailing.com |
|
| 30 | + # and then move it to the sailing user's .ssh directory |
|
| 31 | + ssh-keygen -t ed25519 -P '' -f /home/ec2-user/.ssh/id_ed25519 |
|
| 32 | + cat /home/ec2-user/.ssh/id_ed25519.pub | ssh -o StrictHostKeyChecking=false root@sapsailing.com "cat >>/home/trac/.ssh/authorized_keys" |
|
| 33 | + sudo mkdir /home/sailing/.ssh |
|
| 34 | + sudo mv /home/ec2-user/.ssh/id* /home/sailing/.ssh |
|
| 35 | + sudo chown -R sailing /home/sailing/.ssh |
|
| 36 | + sudo chgrp -R sailing /home/sailing/.ssh |
|
| 37 | + sudo chmod 700 /home/sailing/.ssh |
|
| 38 | + # Install standard packages: |
|
| 39 | + sudo yum -y update |
|
| 40 | + sudo yum -y install git tmux nvme-cli chrony cronie cronie-anacron jq telnet mailx |
|
| 41 | + # Force acceptance of sapsailing.com's host key: |
|
| 42 | + sudo su - sailing -c "ssh -o StrictHostKeyChecking=false trac@sapsailing.com ls" >/dev/null |
|
| 43 | + # Clone Git to /home/sailing/code |
|
| 44 | + sudo su - sailing -c "git clone ssh://trac@sapsailing.com/home/trac/git code" |
|
| 45 | + # Install SAP JVM 8: |
|
| 46 | + sudo mkdir -p /opt |
|
| 47 | + sudo su - -c "source /home/sailing/code/configuration/imageupgrade_functions.sh; download_and_install_latest_sap_jvm_8" |
|
| 48 | + # Install sailing.sh script to /etc/profile.d |
|
| 49 | + sudo ln -s /home/sailing/code/configuration/sailing.sh /etc/profile.d |
|
| 50 | + # Keep Amazon Linux from patching root's authorized_keys file: |
|
| 51 | + sudo sed -i -e 's/disable_root: *true/disable_root: false/' /etc/cloud/cloud.cfg |
|
| 52 | + # Configure SSH daemon: |
|
| 53 | + sudo su - -c "cat << EOF >>/etc/ssh/sshd_config |
|
| 54 | +PermitRootLogin without-password |
|
| 55 | +PermitRootLogin Yes |
|
| 56 | +MaxStartups 100 |
|
| 57 | +EOF |
|
| 58 | +" |
|
| 59 | + # Increase limits |
|
| 60 | + sudo su - -c "cat << EOF >>/etc/sysctl.conf |
|
| 61 | +# number of connections the firewall can track |
|
| 62 | +net.netfilter.nf_conntrack_max = 131072 |
|
| 63 | +EOF |
|
| 64 | +" |
|
| 65 | + # Install mountnvmeswap stuff |
|
| 66 | + sudo ln -s /home/sailing/code/configuration/sailing_server_setup/mountnvmeswap /usr/local/bin |
|
| 67 | + sudo ln -s /home/sailing/code/configuration/sailing_server_setup/mountnvmeswap.service /etc/systemd/system |
|
| 68 | + sudo systemctl daemon-reload |
|
| 69 | + sudo systemctl enable mountnvmeswap.service |
|
| 70 | + # Install MongoDB 4.4 and configure as replica set "replica" |
|
| 71 | + sudo su - -c "cat << EOF >/etc/yum.repos.d/mongodb-org.4.4.repo |
|
| 72 | +[mongodb-org-4.4] |
|
| 73 | +name=MongoDB Repository |
|
| 74 | +baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/4.4/x86_64/ |
|
| 75 | +gpgcheck=1 |
|
| 76 | +enabled=1 |
|
| 77 | +gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc |
|
| 78 | +EOF |
|
| 79 | +" |
|
| 80 | + sudo yum -y update |
|
| 81 | + sudo yum -y install mongodb-org-server mongodb-org-shell mongodb-org-tools |
|
| 82 | + sudo su - -c "cat << EOF >>/etc/mongod.conf |
|
| 83 | +replication: |
|
| 84 | + replSetName: replica |
|
| 85 | +EOF |
|
| 86 | +" |
|
| 87 | + sudo sed -i -e 's/bindIp: *[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+/bindIp: 0.0.0.0/' /etc/mongod.conf |
|
| 88 | + # Install cron job for ssh key update for landscape managers |
|
| 89 | + sudo ln -s /home/sailing/code/configuration/update_authorized_keys_for_landscape_managers /usr/local/bin |
|
| 90 | + sudo ln -s /home/sailing/code/configuration/update_authorized_keys_for_landscape_managers_if_changed /usr/local/bin |
|
| 91 | + sudo ln -s /home/sailing/code/configuration/sailing_server_setup/crontab-root /root/crontab |
|
| 92 | + sudo su - -c "crontab /root/crontab" |
|
| 93 | + scp root@sapsailing.com:ssh-key-reader.token /tmp |
|
| 94 | + sudo mv /tmp/ssh-key-reader.token /root |
|
| 95 | + sudo chown root /root/ssh-key-reader.token |
|
| 96 | + sudo chgrp root /root/ssh-key-reader.token |
|
| 97 | + sudo chmod 600 /root/ssh-key-reader.token |
|
| 98 | + # Install /etc/init.d/sailing start-up / shut-down service |
|
| 99 | + sudo ln -s /home/sailing/code/configuration/sailing /etc/init.d/sailing |
|
| 100 | + sudo ln -s /home/sailing/code/configuration/sailing_server_setup/sailing.service /etc/systemd/system |
|
| 101 | + sudo systemctl daemon-reload |
|
| 102 | + sudo systemctl enable sailing.service |
|
| 103 | + # Install secrets |
|
| 104 | + scp root@sapsailing.com:secrets /tmp |
|
| 105 | + scp root@sapsailing.com:mail.properties /tmp |
|
| 106 | + sudo mv /tmp/secrets /root |
|
| 107 | + sudo mv /tmp/mail.properties /root |
|
| 108 | + sudo chown root /root/secrets |
|
| 109 | + sudo chgrp root /root/secrets |
|
| 110 | + sudo chmod 600 /root/secrets |
|
| 111 | + sudo chown root /root/mail.properties |
|
| 112 | + sudo chgrp root /root/mail.properties |
|
| 113 | + sudo chmod 600 /root/mail.properties |
|
| 114 | + # Create some swap space for the case mountnvmeswap hasn't created any |
|
| 115 | + sudo dd if=/dev/zero of=/var/cache/swapfile bs=1M count=6000 |
|
| 116 | + sudo chown root /var/cache/swapfile |
|
| 117 | + sudo chgrp root /var/cache/swapfile |
|
| 118 | + sudo chmod 600 /var/cache/swapfile |
|
| 119 | + sudo mkswap /var/cache/swapfile |
|
| 120 | + # And while adding to /etc/fstab, also add the NFS mount of /home/scores: |
|
| 121 | + sudo mkdir /home/scores |
|
| 122 | + sudo su - -c 'echo "/var/cache/swapfile none swap pri=0 0 0 |
|
| 123 | +logfiles.internal.sapsailing.com:/home/scores /home/scores nfs tcp,intr,timeo=100,retry=0" >>/etc/fstab' |
|
| 124 | + sudo swapon -a |
|
| 125 | + else |
|
| 126 | + echo "Not running on an AWS instance; refusing to run setup!" >&2 |
|
| 127 | + echo "To prepare an instance running in AWS, provide its external IP as argument to this script." >&2 |
|
| 128 | + exit 2 |
|
| 129 | + fi |
|
| 130 | +fi |
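The `sed` in line 87 widens any numeric `bindIp` in `/etc/mongod.conf` to all interfaces. A sketch against a throwaway copy of a config fragment instead of the real file (GNU `sed -i` assumed, as on Amazon Linux 2):

```shell
# Throwaway mongod.conf fragment standing in for /etc/mongod.conf:
conf=$(mktemp)
printf 'net:\n  port: 27017\n  bindIp: 127.0.0.1\n' > "$conf"
# Replace any dotted-quad bindIp value with 0.0.0.0 in place:
sed -i -e 's/bindIp: *[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+/bindIp: 0.0.0.0/' "$conf"
grep bindIp "$conf"   # now shows bindIp: 0.0.0.0
```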
configuration/switchoverArchive.sh
| ... | ... | @@ -1,114 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | - |
|
| 3 | -# Purpose: Script is used to switch to the failover archive if the primary is unhealthy, by altering the macros |
|
| 4 | -# file and then reloading Httpd. |
|
| 5 | -# Crontab for every minute: * * * * * /path/to/switchoverArchive.sh |
|
| 6 | -help() { |
|
| 7 | - echo "$0 PATH_TO_HTTPD_MACROS_FILE TIMEOUT_FIRST_CURL_SECONDS TIMEOUT_SECOND_CURL_SECONDS" |
|
| 8 | - echo "" |
|
| 9 | - echo "Script used to automatically update the archive location (to the failover) in httpd if the primary is down." |
|
| 10 | - echo "Pass in the path to the macros file containing the archive definitions;" |
|
| 11 | - echo "the timeout of the first curl check in seconds; and the timeout of the second curl check, also in seconds." |
|
| 12 | - echo "Make sure the combined time taken is not longer than the crontab." |
|
| 13 | - exit 2 |
|
| 14 | -} |
|
| 15 | -# $# is the number of arguments |
|
| 16 | -if [ $# -eq 0 ]; then |
|
| 17 | - help |
|
| 18 | -fi |
|
| 19 | -#The names of the variables in the macros file. |
|
| 20 | -ARCHIVE_IP_NAME="ARCHIVE_IP" |
|
| 21 | -ARCHIVE_FAILOVER_IP_NAME="ARCHIVE_FAILOVER_IP" |
|
| 22 | -PRODUCTION_ARCHIVE_NAME="PRODUCTION_ARCHIVE" |
|
| 23 | -ARCHIVE_PORT=8888 |
|
| 24 | -MACROS_PATH=$1 |
|
| 25 | -# The amount of time (in seconds) that must have elapsed, since the last httpd macros email, before notifying operators again. |
|
| 26 | -TIME_CHECK_SECONDS=$((15*60)) |
|
| 27 | -# Connection timeouts for curl requests (the time waited for a connection to be established). The second should be longer |
|
| 28 | -# as we want to be confident the main archive is in fact "down" before switching. |
|
| 29 | -TIMEOUT1_IN_SECONDS=$2 |
|
| 30 | -TIMEOUT2_IN_SECONDS=$3 |
|
| 31 | -CACHE_LOCATION="/var/cache/lastIncorrectMacroUnixTime" |
|
| 32 | -# The following line checks if all the strings in "search" are present at the beginning of their own line. Note: grep uses BRE by default, |
|
| 33 | -# so the plus symbol must be escaped to refer to "one or more" of the previous character. |
|
| 34 | -for i in "^Define ${PRODUCTION_ARCHIVE_NAME}\>" \ |
|
| 35 | - "^Define ${ARCHIVE_IP_NAME} [0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+$" \ |
|
| 36 | - "^Define ${ARCHIVE_FAILOVER_IP_NAME} [0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+$" |
|
| 37 | -do |
|
| 38 | - if ! grep -q "${i}" "${MACROS_PATH}"; then |
|
| 39 | - currentUnixTime=$(date +"%s") |
|
| 40 | - if [[ ! -f ${CACHE_LOCATION} || $((currentUnixTime - $(cat "${CACHE_LOCATION}") )) -gt "$TIME_CHECK_SECONDS" ]]; then |
|
| 41 | - date +"%s" > "${CACHE_LOCATION}" |
|
| 42 | - echo "Macros file does not contain proper definitions for the archive and failover IPs. Expression ${i} not matched." | notify-operators "Incorrect httpd macros" |
|
| 43 | - fi |
|
| 44 | - logger -t archive "Necessary variable assignment pattern ${i} not found in macros" |
|
| 45 | - exit 1 |
|
| 46 | - fi |
|
| 47 | -done |
|
| 48 | -# These next lines get the current ip values for the archive and failover, plus they store the value of production, |
|
| 49 | -# which is a variable pointing to either the primary or failover value. |
|
| 50 | -archiveIp="$(sed -n -e "s/^Define ${ARCHIVE_IP_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 51 | -failoverIp="$(sed -n -e "s/^Define ${ARCHIVE_FAILOVER_IP_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 52 | -productionIp="$(sed -n -e "s/^Define ${PRODUCTION_ARCHIVE_NAME} \(.*\)/\1/p" ${MACROS_PATH} | tr -d '[:space:]')" |
|
| 53 | -# Checks if the macro.conf is set as healthy or unhealthy currently. |
|
| 54 | -if [[ "${productionIp}" == "\${${ARCHIVE_IP_NAME}}" ]] |
|
| 55 | -then |
|
| 56 | - alreadyHealthy=1 |
|
| 57 | - logger -t archive "currently healthy" |
|
| 58 | -else |
|
| 59 | - alreadyHealthy=0 |
|
| 60 | - logger -t archive "currently unhealthy" |
|
| 61 | -fi |
|
| 62 | - |
|
| 63 | -setProduction() { |
|
| 64 | - # parameter $1: the name of the variable holding the IP of the archive instance to switch to |
|
| 65 | - sed -i -e "s/^Define ${PRODUCTION_ARCHIVE_NAME}\>.*$/Define ${PRODUCTION_ARCHIVE_NAME} \${${1}}/" ${MACROS_PATH} |
|
| 66 | -} |
|
| 67 | - |
|
| 68 | -# Sets the production value to point to the variable defining the main archive IP, provided it isn't already set. |
|
| 69 | -setProductionMainIfNotSet() { |
|
| 70 | - if [[ $alreadyHealthy -eq 0 ]] |
|
| 71 | - then |
|
| 72 | - # currently unhealthy |
|
| 73 | - # set production to archive |
|
| 74 | - logger -t archive "Healthy: setting production to main archive" |
|
| 75 | - setProduction ${ARCHIVE_IP_NAME} |
|
| 76 | - systemctl reload httpd |
|
| 77 | - echo "The main archive server is healthy again. Switching to it." | notify-operators "Healthy: main archive online" |
|
| 78 | - else |
|
| 79 | - # If already healthy then no reload or notification occurs. |
|
| 80 | - logger -t archive "Healthy: already set, no change needed" |
|
| 81 | - fi |
|
| 82 | -} |
|
| 83 | - |
|
| 84 | -setFailoverIfNotSet() { |
|
| 85 | - if [[ $alreadyHealthy -eq 1 ]] |
|
| 86 | - then |
|
| 87 | - # Set production to failover if not already. Separate if statement in case the curl statement |
|
| 88 | - # fails but the production is already set to point to the backup |
|
| 89 | - setProduction ${ARCHIVE_FAILOVER_IP_NAME} |
|
| 90 | - logger -t archive "Unhealthy: second check failed, switching to failover" |
|
| 91 | - systemctl reload httpd |
|
| 92 | - echo "Main archive is unhealthy. Switching to failover. Please urgently take a look at ${archiveIp}." | notify-operators "Unhealthy: main archive offline, failover in place" |
|
| 93 | - else |
|
| 94 | - logger -t archive "Unhealthy: second check still fails, failover already in use" |
|
| 95 | - fi |
|
| 96 | -} |
|
| 97 | - |
|
| 98 | -logger -t archive "begin check" |
|
| 99 | -# --fail option ensures that, if a server error is returned (ie. 5xx/4xx status code), then the status code (stored in $?) will be non zero. |
|
| 100 | -# -L follows redirects |
|
| 101 | -curl -s -L --fail --connect-timeout ${TIMEOUT1_IN_SECONDS} "http://${archiveIp}:${ARCHIVE_PORT}/gwt/status" >> /dev/null |
|
| 102 | -if [[ $? -ne 0 ]] |
|
| 103 | -then |
|
| 104 | - logger -t archive "first check failed" |
|
| 105 | - curl -s -L --fail --connect-timeout ${TIMEOUT2_IN_SECONDS} "http://${archiveIp}:${ARCHIVE_PORT}/gwt/status" >> /dev/null |
|
| 106 | - if [[ $? -ne 0 ]] |
|
| 107 | - then |
|
| 108 | - setFailoverIfNotSet |
|
| 109 | - else |
|
| 110 | - setProductionMainIfNotSet |
|
| 111 | - fi |
|
| 112 | -else |
|
| 113 | - setProductionMainIfNotSet |
|
| 114 | -fi |
configuration/switchoverArchive.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/repo/usr/local/bin/switchoverArchive.sh |
|
| ... | ... | \ No newline at end of file |
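The heart of the removed switchoverArchive.sh is the `setProduction` sed substitution that repoints the `PRODUCTION_ARCHIVE` macro. A minimal sketch (hypothetical macros file contents; GNU sed assumed for `-i` and the `\>` word boundary):

```shell
# Hypothetical macros file mirroring the three "Define" lines the
# script's grep loop requires to be present.
macros=$(mktemp)
cat >"$macros" <<'EOF'
Define ARCHIVE_IP 10.0.0.1
Define ARCHIVE_FAILOVER_IP 10.0.0.2
Define PRODUCTION_ARCHIVE ${ARCHIVE_IP}
EOF
# The same in-place substitution setProduction performs when switching
# production over to the failover variable:
sed -i -e 's/^Define PRODUCTION_ARCHIVE\>.*$/Define PRODUCTION_ARCHIVE ${ARCHIVE_FAILOVER_IP}/' "$macros"
grep '^Define PRODUCTION_ARCHIVE' "$macros"
rm -f "$macros"
```

After the substitution the file reads `Define PRODUCTION_ARCHIVE ${ARCHIVE_FAILOVER_IP}`; the real script then reloads httpd so the changed macro takes effect.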
configuration/syncgit
| ... | ... | @@ -1,12 +0,0 @@ |
| 1 | -#!/bin/sh
|
|
| 2 | -ADMIN_EMAIL="axel.uhl@sap.com jan.hamann@sapsailing.com"
|
|
| 3 | -
|
|
| 4 | -cd /home/wiki/gitwiki
|
|
| 5 | -git pull >/tmp/wiki-git.out 2>/tmp/wiki-git.err
|
|
| 6 | -if [ "$?" != "0" ]; then
|
|
| 7 | - cat /tmp/wiki-git.out /tmp/wiki-git.err | mail -s "Wiki git problem" $ADMIN_EMAIL
|
|
| 8 | -fi
|
|
| 9 | -git push >>/tmp/wiki-git.out 2>/tmp/wiki-git.err
|
|
| 10 | -if [ "$?" != "0" ]; then
|
|
| 11 | - cat /tmp/wiki-git.out /tmp/wiki-git.err | mail -s "Wiki git problem" $ADMIN_EMAIL
|
|
| 12 | -fi
|
configuration/syncgit
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/repo/usr/local/bin/syncgit |
|
| ... | ... | \ No newline at end of file |
configuration/update-tractrac-urls-to-archive.sh
| ... | ... | @@ -1,6 +1,19 @@ |
| 1 | 1 | #!/bin/bash |
| 2 | -GIT_ROOT=/home/wiki/gitwiki |
|
| 3 | -mongo --quiet "mongodb://dbserver.internal.sapsailing.com:10201/winddb?replicaSet=archive" --eval 'db.TRACTRAC_CONFIGURATIONS.find({}, {TT_CONFIG_JSON_URL : 1}).toArray()' | grep -v ObjectId | jq -r '.[].TT_CONFIG_JSON_URL' | sort -u >"${GIT_ROOT}/configuration/tractrac-json-urls" |
|
| 4 | -cd "${GIT_ROOT}" |
|
| 5 | -git commit -m "Updated tractrac-json-urls" -a |
|
| 6 | -git push |
|
| 2 | + |
|
| 3 | +if [[ $# -eq 0 ]]; then |
|
| 4 | + GIT_ROOT=/home/wiki/gitwiki |
|
| 5 | +else |
|
| 6 | + GIT_ROOT=$1 |
|
| 7 | +fi |
|
| 8 | +PATH_TO_TRAC_TRAC_URLS="configuration/tractrac-json-urls" |
|
| 9 | +urls=$(mongo --quiet "mongodb://dbserver.internal.sapsailing.com:10201/winddb?replicaSet=archive" --eval 'db.TRACTRAC_CONFIGURATIONS.find({}, {TT_CONFIG_JSON_URL : 1}).toArray()' | grep -v ObjectId | jq -r '.[].TT_CONFIG_JSON_URL' ) |
|
| 10 | +if [[ $urls == "null" || $? -ne 0 ]]; then |
|
| 11 | + echo "Mongo db returns null for tractrac url discovery" | notify-operators "MongoDB/tractrac urls issue" |
|
| 12 | + exit 1 |
|
| 13 | +else |
|
| 14 | + echo ${urls} | sort -u >"${GIT_ROOT}/${PATH_TO_TRAC_TRAC_URLS}" |
|
| 15 | + cd "${GIT_ROOT}" |
|
| 16 | + git add "${GIT_ROOT}/${PATH_TO_TRAC_TRAC_URLS}" |
|
| 17 | + git commit -m "Updated tractrac-json-urls" |
|
| 18 | + git push |
|
| 19 | +fi |
|
| ... | ... | \ No newline at end of file |
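One caveat in the rewritten update-tractrac-urls-to-archive.sh above: `echo ${urls} | sort -u` expands the variable unquoted, so the shell's word splitting collapses the newline-separated URL list into a single line before `sort -u` ever sees it, defeating the deduplication. A small sketch (hypothetical URLs) of the difference quoting makes:

```shell
urls='https://example.com/b
https://example.com/a
https://example.com/a'
# Unquoted: word splitting joins everything onto one line, so sort -u
# has nothing to deduplicate.
echo ${urls} | sort -u | wc -l    # one line
# Quoted: the newlines survive, and sort -u collapses the duplicate.
echo "${urls}" | sort -u | wc -l  # two unique lines
```

Quoting the expansion (`echo "${urls}"`, or `printf '%s\n' "$urls"`) preserves the one-URL-per-line shape the `sort -u` relies on.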
configuration/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -1,48 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -BEARER_TOKEN="$1" |
|
| 3 | -BASE_URL="$2" |
|
| 4 | -LOGON_USER_HOME="$3" |
|
| 5 | -SSH_DIR="$3/.ssh" |
|
| 6 | -EXIT_CODE=0 |
|
| 7 | -# |
|
| 8 | -curl_output=$( curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS" 2>/dev/null ) |
|
| 9 | -curl_exit_code=$? |
|
| 10 | -if [ "${curl_exit_code}" = "0" ]; then |
|
| 11 | - users=$( echo "${curl_output}" | jq -r '.[]' ) |
|
| 12 | - jq_exit_code=$? |
|
| 13 | - if [ "${jq_exit_code}" = "0" ]; then |
|
| 14 | - logger -t sailing "Users with LANDSCAPE:MANAGE:AWS permission: ${users}" |
|
| 15 | - public_keys=$( for user in ${users}; do |
|
| 16 | - ssh_key_curl_output=$(curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/landscape/api/landscape/get_ssh_keys_owned_by_user?username[]=${user}" 2>/dev/null ) |
|
| 17 | - ssh_key_curl_exit_code=$? |
|
| 18 | - if [ "${ssh_key_curl_exit_code}" = "0" ]; then |
|
| 19 | - echo "${ssh_key_curl_output}" | jq -r '.[].publicKey' |
|
| 20 | - ssh_key_jq_exit_code=$? |
|
| 21 | - if [ "${ssh_key_jq_exit_code}" != "0" ]; then |
|
| 22 | - EXIT_CODE=${ssh_key_jq_exit_code} |
|
| 23 | - logger -t sailing "Couldn't parse response of get_ssh_keys_owned_by_user; jq exit code ${ssh_key_jq_exit_code}" |
|
| 24 | - fi |
|
| 25 | - else |
|
| 26 | - EXIT_CODE=${ssh_key_curl_exit_code} |
|
| 27 | -            logger -t sailing "Couldn't get response of get_ssh_keys_owned_by_user; curl exit code ${ssh_key_curl_exit_code}" |
|
| 28 | - fi |
|
| 29 | - done | sort -u ) |
|
| 30 | - logger -t sailing "Obtained public keys: ${public_keys}" |
|
| 31 | - if [ ! -f ${SSH_DIR}/authorized_keys.org ]; then |
|
| 32 | - # Create a copy of the original authorized_keys file as generated by AWS from the start-up key: |
|
| 33 | - logger -t sailing "Saving original authorized_keys file from ${SSH_DIR}" |
|
| 34 | - cp ${SSH_DIR}/authorized_keys ${SSH_DIR}/authorized_keys.org |
|
| 35 | - fi |
|
| 36 | - # Start out with the original AWS-generated authorized_keys file |
|
| 37 | - # and append the public SSH keys of all users having LANDSCAPE:MANAGE:AWS permission: |
|
| 38 | - echo "$( cat ${SSH_DIR}/authorized_keys.org ) |
|
| 39 | - ${public_keys}" | sort -u >${SSH_DIR}/authorized_keys |
|
| 40 | - else |
|
| 41 | - EXIT_CODE=${jq_exit_code} |
|
| 42 | - logger -t sailing "Couldn't parse response of users_with_permission; jq exit code ${jq_exit_code}" |
|
| 43 | - fi |
|
| 44 | -else |
|
| 45 | - EXIT_CODE=${curl_exit_code} |
|
| 46 | - logger -t sailing "Couldn't get response of users_with_permission; curl exit code ${curl_exit_code}" |
|
| 47 | -fi |
|
| 48 | -exit ${EXIT_CODE} |
configuration/update_authorized_keys_for_landscape_managers
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers |
|
| ... | ... | \ No newline at end of file |
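The removed update_authorized_keys_for_landscape_managers script rebuilds authorized_keys by concatenating the preserved AWS-generated copy (authorized_keys.org) with the fetched public keys and piping the result through `sort -u`. A minimal sketch of that merge (hypothetical key material):

```shell
orig=$(mktemp); merged=$(mktemp)
# Stand-in for the saved authorized_keys.org file:
printf 'ssh-rsa AAAA1 aws-bootstrap\n' >"$orig"
# Stand-in for the keys fetched for LANDSCAPE:MANAGE:AWS users; one of
# them duplicates the bootstrap key and must not appear twice after the merge.
public_keys='ssh-rsa AAAA2 alice
ssh-rsa AAAA1 aws-bootstrap'
# Concatenate and deduplicate line-wise, as the script does:
{ cat "$orig"; printf '%s\n' "$public_keys"; } | sort -u >"$merged"
wc -l <"$merged"   # two unique keys remain
rm -f "$orig" "$merged"
```

Because the merge always starts from authorized_keys.org, keys revoked on the server side disappear from authorized_keys on the next run rather than accumulating.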
configuration/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -1,42 +0,0 @@ |
| 1 | -#!/bin/bash |
|
| 2 | -BEARER_TOKEN="$1" |
|
| 3 | -BASE_URL="$2" |
|
| 4 | -LOGON_USER_HOME="$3" |
|
| 5 | -LAST_CHANGE_FILE=/var/run/last_change_aws_landscape_managers_ssh_keys |
|
| 6 | -# Uncomment the following for production use, with no error output |
|
| 7 | -curl_output=$( curl -H 'X-SAPSSE-Forward-Request-To: master' -H 'Authorization: Bearer '${BEARER_TOKEN} "${BASE_URL}/landscape/api/landscape/get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers" 2>/dev/null ) |
|
| 8 | -curl_exit_code=$? |
|
| 9 | -if [ "${curl_exit_code}" = "0" ]; then |
|
| 10 | - last_change_millis=$( echo "${curl_output}" | jq -r '."timePointOfLastChangeOfSetOfLandscapeManagers-millis"' ) |
|
| 11 | - jq_exit_code=$? |
|
| 12 | - if [ "${jq_exit_code}" = "0" ]; then |
|
| 13 | - if [ -f "${LAST_CHANGE_FILE}" ]; then |
|
| 14 | - PREVIOUS_CHANGE=$(cat "${LAST_CHANGE_FILE}") |
|
| 15 | - if [ -z ${PREVIOUS_CHANGE} ]; then |
|
| 16 | - PREVIOUS_CHANGE=0 |
|
| 17 | - fi |
|
| 18 | - else |
|
| 19 | - PREVIOUS_CHANGE=0 |
|
| 20 | - fi |
|
| 21 | - if [ -z ${last_change_millis} ]; then |
|
| 22 | - logger -t sailing "Empty response from get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers; exiting" |
|
| 23 | - exit 1 |
|
| 24 | - else |
|
| 25 | - if [ ${PREVIOUS_CHANGE} -lt ${last_change_millis} ]; then |
|
| 26 | - logger -t sailing "New SSH key changes for landscape managers (${last_change_millis} newer than ${PREVIOUS_CHANGE})" |
|
| 27 | - if update_authorized_keys_for_landscape_managers "${BEARER_TOKEN}" "${BASE_URL}" "${LOGON_USER_HOME}" ; then |
|
| 28 | - logger -t sailing "Updating SSH keys for landscape managers successful; updating ${LAST_CHANGE_FILE}" |
|
| 29 | - echo ${last_change_millis} >${LAST_CHANGE_FILE} |
|
| 30 | - else |
|
| 31 | - logger -t sailing "Updating SSH keys for landscape managers failed with exit code $?; not updating ${LAST_CHANGE_FILE}" |
|
| 32 | - fi |
|
| 33 | - fi |
|
| 34 | - fi |
|
| 35 | - else |
|
| 36 | - logger -t sailing "Parsing response of get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers failed with exit code ${jq_exit_code}" |
|
| 37 | - exit ${jq_exit_code} |
|
| 38 | - fi |
|
| 39 | -else |
|
| 40 | - logger -t sailing "Getting response of get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers failed with exit code ${curl_exit_code}" |
|
| 41 | - exit ${curl_exit_code} |
|
| 42 | -fi |
configuration/update_authorized_keys_for_landscape_managers_if_changed
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/repo/usr/local/bin/update_authorized_keys_for_landscape_managers_if_changed |
|
| ... | ... | \ No newline at end of file |
configuration/update_landscape_managers_mailing_list.sh
| ... | ... | @@ -0,0 +1 @@ |
| 1 | +environments_scripts/repo/usr/local/bin/update_landscape_managers_mailing_list.sh |
|
| ... | ... | \ No newline at end of file |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_cs.properties
| ... | ... | @@ -28,7 +28,7 @@ LegOfCompetitorSailingDomainRetrieverChain=Úseky závodníků |
| 28 | 28 | GPSFixSailingDomainRetrieverChain=Záznamy polohy GPS |
| 29 | 29 | BravoFixSailingDomainRetrieverChain=Záznamy polohy Bravo |
| 30 | 30 | BravoFixTrackSailingDomainRetrieverChain=Dráhy Bravo |
| 31 | -FoilingSegmentsSailingDomainRetrieverChain=Foilovaný segment |
|
| 31 | +FoilingSegmentsSailingDomainRetrieverChain=Foilované segmenty |
|
| 32 | 32 | FoilingSegmentName=Název foilovaného segmentu |
| 33 | 33 | WindFixSailingDomainRetrieverChain=Záznamy větru |
| 34 | 34 | MarkPassingSailingDomainRetrieverChain=Obeplutí značky |
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Délka úseku |
| 259 | 259 | LegSailingDomainRetrieverChain=Úsek |
| 260 | 260 | TackType=Dlouhý/krátký obrat |
| 261 | 261 | getTackTypeofRace=Dlouhý/krátký obrat rozjížďky |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Segmenty s dlouhým/krátkým obratem |
|
| 263 | +TackTypeSegments=Segmenty s dl./kr. obratem |
|
| 264 | +TackTypeSegmentName=Název segmentů s dlouhým/krátkým obratem |
|
| 265 | +TackTypeDuration=Doba trvání dlouhého/krátkého obratu |
|
| 266 | +TackTypeDistance=Délka dlouhého/krátkého obratu |
|
| 267 | +InRace=V rozjížďce |
|
| 268 | +InTrackingInterval=Interval trasování |
|
| 269 | +NumberOfCompetitors=Počet závodníků |
|
| 270 | +CompetitorInLeaderboard=Závodník na výsledkové tabuli |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Závodníci na výsledkových tabulích |
|
| 272 | +SmoothedSpeed=Vyhlazená rychlost |
|
| 273 | +RatioDistanceLongVsShortTack=Poměr délky dlouhého/krátkého obratu |
|
| 274 | +RatioDurationLongVsShortTack=Poměr doby trvání dlouhého/krátkého obratu |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_da.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Længde på benet |
| 259 | 259 | LegSailingDomainRetrieverChain=Ben |
| 260 | 260 | TackType=Lang/kort stagvende |
| 261 | 261 | getTackTypeofRace=Lang/kort stagvende for kapsejladsen |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Segmenter for lang/kort stagvende |
|
| 263 | +TackTypeSegments=Segmenter for lang/kort |
|
| 264 | +TackTypeSegmentName=Segmentnavn for lang/kort stagvende |
|
| 265 | +TackTypeDuration=Varighed for lang/kort stagvende |
|
| 266 | +TackTypeDistance=Distance for lang/kort stagvende |
|
| 267 | +InRace=I kapsejlads |
|
| 268 | +InTrackingInterval=I sporingsinterval |
|
| 269 | +NumberOfCompetitors=Antal deltagere |
|
| 270 | +CompetitorInLeaderboard=Deltager på rangliste |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Deltagere på rangliste |
|
| 272 | +SmoothedSpeed=Udjævnet hastighed |
|
| 273 | +RatioDistanceLongVsShortTack=Forholdsafstand lang stagvende/kort stagvende |
|
| 274 | +RatioDurationLongVsShortTack=Forholdsvarighed lang stagvende/kort stagvende |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_es.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Longitud del tramo |
| 259 | 259 | LegSailingDomainRetrieverChain=Tramo |
| 260 | 260 | TackType=Bordo largo/corto |
| 261 | 261 | getTackTypeofRace=Bordo largo/corto de la prueba |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Segmentos de bordo largo/corto |
|
| 263 | +TackTypeSegments=Segmentos largos/cortos |
|
| 264 | +TackTypeSegmentName=Nombre de segmentos de bordo largo/corto |
|
| 265 | +TackTypeDuration=Duración de bordo largo/corto |
|
| 266 | +TackTypeDistance=Distancia de bordo largo/corto |
|
| 267 | +InRace=En la prueba |
|
| 268 | +InTrackingInterval=En intervalo de seguimiento |
|
| 269 | +NumberOfCompetitors=Número de competidores |
|
| 270 | +CompetitorInLeaderboard=Competidor en tabla de clasificación |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Competidores en tablas de clasificación |
|
| 272 | +SmoothedSpeed=Velocidad suavizada |
|
| 273 | +RatioDistanceLongVsShortTack=Ratio de distancia de bordo largo/corto |
|
| 274 | +RatioDurationLongVsShortTack=Ratio de duración de bordo largo/corto |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_fr.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Longueur de la portion de parcours |
| 259 | 259 | LegSailingDomainRetrieverChain=Portion de parcours |
| 260 | 260 | TackType=Virement long/court |
| 261 | 261 | getTackTypeofRace=Virement long/court de la course |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Tronçons de virement long/court |
|
| 263 | +TackTypeSegments=Tronçons longs/courts |
|
| 264 | +TackTypeSegmentName=Noms des tronçons de virement long/court |
|
| 265 | +TackTypeDuration=Durée de virement long/court |
|
| 266 | +TackTypeDistance=Distance de virement long/court |
|
| 267 | +InRace=Dans la course |
|
| 268 | +InTrackingInterval=Dans l''intervalle de suivi |
|
| 269 | +NumberOfCompetitors=Nombre de concurrents |
|
| 270 | +CompetitorInLeaderboard=Concurrent dans le palmarès |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Concurrents dans les palmarès |
|
| 272 | +SmoothedSpeed=Vitesse lissée |
|
| 273 | +RatioDistanceLongVsShortTack=Ratio de distance tronçon long/tronçon court |
|
| 274 | +RatioDurationLongVsShortTack=Ratio de durée tronçon long/tronçon court |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_it.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Lunghezza della tratta |
| 259 | 259 | LegSailingDomainRetrieverChain=Tratta |
| 260 | 260 | TackType=Bordo lungo/corto |
| 261 | 261 | getTackTypeofRace=Bordo lungo/corto della gara |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Segmenti del bordo lungo/corto |
|
| 263 | +TackTypeSegments=Segmenti lunghi/corti |
|
| 264 | +TackTypeSegmentName=Nome segmenti del bordo lungo/corto |
|
| 265 | +TackTypeDuration=Durata del bordo lungo/corto |
|
| 266 | +TackTypeDistance=Distanza del bordo lungo/corto |
|
| 267 | +InRace=In gara |
|
| 268 | +InTrackingInterval=Nell''intervallo del tracciamento |
|
| 269 | +NumberOfCompetitors=Numero di concorrenti |
|
| 270 | +CompetitorInLeaderboard=Concorrente in classifica |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Concorrenti nelle classifiche |
|
| 272 | +SmoothedSpeed=Velocità media di movimento |
|
| 273 | +RatioDistanceLongVsShortTack=Distanza in rapporto tra bordo lungo/corto |
|
| 274 | +RatioDurationLongVsShortTack=Durata in rapporto tra bordo lungo/corto |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_ja.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=レグの長さ |
| 259 | 259 | LegSailingDomainRetrieverChain=レグ |
| 260 | 260 | TackType=ロング/ショートタック |
| 261 | 261 | getTackTypeofRace=レースのロング/ショートタック |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=ロング/ショートタックセグメント |
|
| 263 | +TackTypeSegments=ロング/ショートセグメント |
|
| 264 | +TackTypeSegmentName=ロング/ショートタックセグメント名 |
|
| 265 | +TackTypeDuration=ロング/ショートタック時間 |
|
| 266 | +TackTypeDistance=ロング/ショートタック距離 |
|
| 267 | +InRace=レース内 |
|
| 268 | +InTrackingInterval=追跡間隔内 |
|
| 269 | +NumberOfCompetitors=競技者数 |
|
| 270 | +CompetitorInLeaderboard=リーダーボード上の競技者 |
|
| 271 | +CompetitorSailingDomainRetrieverChain=リーダーボード上の競技者 |
|
| 272 | +SmoothedSpeed=平滑化速度 |
|
| 273 | +RatioDistanceLongVsShortTack=ロングタック/ショートタック距離比率 |
|
| 274 | +RatioDurationLongVsShortTack=ロングタック/ショートタック時間比率 |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_pt.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Comprimento da perna |
| 259 | 259 | LegSailingDomainRetrieverChain=Perna |
| 260 | 260 | TackType=Cambada longa/curta |
| 261 | 261 | getTackTypeofRace=Cambada longa/curta da corrida |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Segmentos de cambada longa/curta |
|
| 263 | +TackTypeSegments=Segmentos longos/curtos |
|
| 264 | +TackTypeSegmentName=Nome de segmentos de cambada longa/curta |
|
| 265 | +TackTypeDuration=Duração de cambada longa/curta |
|
| 266 | +TackTypeDistance=Distância de cambada longa/curta |
|
| 267 | +InRace=Na corrida |
|
| 268 | +InTrackingInterval=No intervalo de rastreamento |
|
| 269 | +NumberOfCompetitors=Número de competidores |
|
| 270 | +CompetitorInLeaderboard=Competidor no painel de classificação |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Competidores nos painéis de classificação |
|
| 272 | +SmoothedSpeed=Velocidade suavizada |
|
| 273 | +RatioDistanceLongVsShortTack=Proporção distância cambada longa/cambada curta |
|
| 274 | +RatioDurationLongVsShortTack=Proporção duração cambada longa/cambada curta |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_ru.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Длина отрезка |
| 259 | 259 | LegSailingDomainRetrieverChain=Отрезок |
| 260 | 260 | TackType=Длинный /короткий галс |
| 261 | 261 | getTackTypeofRace=Длинный /короткий галс гонки |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Сегменты длинного/короткого галса |
|
| 263 | +TackTypeSegments=Длинные/короткие сегменты |
|
| 264 | +TackTypeSegmentName=Имя сегментов длинного/короткого галса |
|
| 265 | +TackTypeDuration=Продолжительность длинного/короткого галса |
|
| 266 | +TackTypeDistance=Расстояние длинного/короткого галса |
|
| 267 | +InRace=В гонке |
|
| 268 | +InTrackingInterval=В интервале отслеживания |
|
| 269 | +NumberOfCompetitors=Число участников |
|
| 270 | +CompetitorInLeaderboard=Участник в таблице лидеров |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Участники в таблицах лидеров |
|
| 272 | +SmoothedSpeed=Сглаженная скорость |
|
| 273 | +RatioDistanceLongVsShortTack=Соотношение расстояния длинного и короткого галса |
|
| 274 | +RatioDurationLongVsShortTack=Соотношение продолжительности длинного и короткого галса |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_sl.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=Dolžina stranice |
| 259 | 259 | LegSailingDomainRetrieverChain=Stranica |
| 260 | 260 | TackType=Dolgo/kratko prečenje |
| 261 | 261 | getTackTypeofRace=Dolgo/kratko prečenje plova |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=Dolgi/kratki segmenti prečenja |
|
| 263 | +TackTypeSegments=Dolgi/kratki segmenti |
|
| 264 | +TackTypeSegmentName=Ime dolgih/kratkih segmentov prečenja |
|
| 265 | +TackTypeDuration=Trajanje dolgega/kratkega prečenja |
|
| 266 | +TackTypeDistance=Razdalja dolgega/kratkega prečenja |
|
| 267 | +InRace=Med plovom |
|
| 268 | +InTrackingInterval=Med intervalom sledenja |
|
| 269 | +NumberOfCompetitors=Število tekmovalcev |
|
| 270 | +CompetitorInLeaderboard=Tekmovalec na lestvici vodilnih |
|
| 271 | +CompetitorSailingDomainRetrieverChain=Tekmovalci na lestvicah vodilnih |
|
| 272 | +SmoothedSpeed=Umirjena hitrost |
|
| 273 | +RatioDistanceLongVsShortTack=Razmerje razdalje dolgo prečenje/kratko prečenje |
|
| 274 | +RatioDurationLongVsShortTack=Razmerje trajanja dolgo prečenje/kratko prečenje |
java/com.sap.sailing.datamining/resources/stringmessages/Sailing_StringMessages_zh.properties
| ... | ... | @@ -259,9 +259,16 @@ LengthOfTheLeg=航段长度 |
| 259 | 259 | LegSailingDomainRetrieverChain=航段 |
| 260 | 260 | TackType=长程/短程迎风转向 |
| 261 | 261 | getTackTypeofRace=比赛轮次的长程/短程迎风转向 |
| 262 | -InRace=In race |
|
| 263 | -InTrackingInterval=In tracking interval |
|
| 264 | -NumberOfCompetitors=Number of Competitors |
|
| 265 | -CompetitorInLeaderboard=Competitor in Leaderboard |
|
| 266 | -CompetitorSailingDomainRetrieverChain=Competitors in Leaderboards |
|
| 267 | -SmoothedSpeed=Smoothed Speed |
|
| 262 | +tackTypeSegmentsRetrieverChainDefinition=长程/短程迎风转向段 |
|
| 263 | +TackTypeSegments=长程/短程段 |
|
| 264 | +TackTypeSegmentName=长程/短程迎风转向段名称 |
|
| 265 | +TackTypeDuration=长程/短程迎风转向持续时间 |
|
| 266 | +TackTypeDistance=长程/短程迎风转向距离 |
|
| 267 | +InRace=比赛中 |
|
| 268 | +InTrackingInterval=跟踪间隔 |
|
| 269 | +NumberOfCompetitors=参赛队数 |
|
| 270 | +CompetitorInLeaderboard=积分榜中的参赛队 |
|
| 271 | +CompetitorSailingDomainRetrieverChain=积分榜中的参赛队 |
|
| 272 | +SmoothedSpeed=平滑速度 |
|
| 273 | +RatioDistanceLongVsShortTack=长程迎风转向/短程迎风转向距离比率 |
|
| 274 | +RatioDurationLongVsShortTack=长程迎风转向/短程迎风转向持续时间比率 |
java/com.sap.sailing.domain.igtimiadapter.test/META-INF/MANIFEST.MF
| ... | ... | @@ -14,4 +14,6 @@ Require-Bundle: org.junit;bundle-version="4.8.2", |
| 14 | 14 | com.sap.sse.security.testsupport, |
| 15 | 15 | org.mockito.mockito-core;bundle-version="1.10.14", |
| 16 | 16 | org.hamcrest;bundle-version="2.2.0", |
| 17 | - org.objenesis;bundle-version="2.1.0" |
|
| 17 | + org.objenesis;bundle-version="2.1.0", |
|
| 18 | + net.bytebuddy.byte-buddy;bundle-version="1.12.18", |
|
| 19 | + net.bytebuddy.byte-buddy-agent;bundle-version="1.12.18" |
java/com.sap.sailing.domain.persistence/src/com/sap/sailing/domain/persistence/impl/MongoRaceLogStoreImpl.java
| ... | ... | @@ -41,7 +41,7 @@ public class MongoRaceLogStoreImpl implements RaceLogStore { |
| 41 | 41 | } |
| 42 | 42 | |
| 43 | 43 | private void addListener(RaceLogIdentifier identifier, final RaceLog raceLog) { |
| 44 | - MongoRaceLogStoreVisitor listener = new MongoRaceLogStoreVisitor(identifier, mongoObjectFactory); |
|
| 44 | + final MongoRaceLogStoreVisitor listener = new MongoRaceLogStoreVisitor(identifier, mongoObjectFactory); |
|
| 45 | 45 | listeners.put(raceLog, listener); |
| 46 | 46 | raceLog.addListener(listener); |
| 47 | 47 | } |
| ... | ... | @@ -55,7 +55,7 @@ public class MongoRaceLogStoreImpl implements RaceLogStore { |
| 55 | 55 | |
| 56 | 56 | @Override |
| 57 | 57 | public void removeListenersAddedByStoreFrom(RaceLog raceLog) { |
| 58 | - RaceLogEventVisitor visitor = listeners.get(raceLog); |
|
| 58 | + final RaceLogEventVisitor visitor = listeners.get(raceLog); |
|
| 59 | 59 | if (visitor != null) { |
| 60 | 60 | raceLog.removeListener(visitor); |
| 61 | 61 | } |
java/com.sap.sailing.domain.racelogtrackingadapter.test/META-INF/MANIFEST.MF
| ... | ... | @@ -13,7 +13,9 @@ Require-Bundle: org.junit;bundle-version="4.8.2", |
| 13 | 13 | org.hamcrest;bundle-version="2.2.0", |
| 14 | 14 | com.sap.sse.common, |
| 15 | 15 | com.sap.sailing.domain, |
| 16 | - com.sap.sse.mongodb |
|
| 16 | + com.sap.sse.mongodb, |
|
| 17 | + net.bytebuddy.byte-buddy;bundle-version="1.12.18", |
|
| 18 | + net.bytebuddy.byte-buddy-agent;bundle-version="1.12.18" |
|
| 17 | 19 | Import-Package: com.sap.sailing.domain.abstractlog.regatta.events, |
| 18 | 20 | com.sap.sailing.domain.persistence.impl, |
| 19 | 21 | com.sap.sailing.domain.persistence.racelog.tracking.impl, |
java/com.sap.sailing.domain.racelogtrackingadapter/src/com/sap/sailing/domain/racelogtracking/impl/RaceLogTrackingAdapterImpl.java
| ... | ... | @@ -302,7 +302,7 @@ public class RaceLogTrackingAdapterImpl implements RaceLogTrackingAdapter { |
| 302 | 302 | private MailService getMailService() { |
| 303 | 303 | ServiceReference<MailService> ref = Activator.getContext().getServiceReference(MailService.class); |
| 304 | 304 | if (ref == null) { |
| 305 | - logger.warning("No file storage management service registered"); |
|
| 305 | + logger.warning("No mail service registered"); |
|
| 306 | 306 | return null; |
| 307 | 307 | } |
| 308 | 308 | return Activator.getContext().getService(ref); |
java/com.sap.sailing.domain/src/com/sap/sailing/domain/base/RaceColumn.java
| ... | ... | @@ -41,6 +41,13 @@ import com.sap.sse.common.Util.Pair; |
| 41 | 41 | */ |
| 42 | 42 | public interface RaceColumn extends Named { |
| 43 | 43 | /** |
| 44 | + * Sets the information object used to access the race column's race logs (see |
|
| 45 | + * {@link #setRaceLogInformation(RaceLogStore, RegattaLikeIdentifier)}) and (re-)loads the contents of all fleets' |
|
| 46 | + * race logs. |
|
| 47 | + */ |
|
| 48 | + void setRaceLogInformationAndLoad(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent); |
|
| 49 | + |
|
| 50 | + /** |
|
| 44 | 51 | * Sets the information object used to access the race column's race logs. |
| 45 | 52 | */ |
| 46 | 53 | void setRaceLogInformation(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent); |
| ... | ... | @@ -49,7 +56,6 @@ public interface RaceColumn extends Named { |
| 49 | 56 | * Gets the race column's race log associated to the passed fleet. Note that the result may be <code>null</code> |
| 50 | 57 | * particularly for columns in a {@link MetaLeaderboard}. |
| 51 | 58 | * |
| 52 | - * @param fleet |
|
| 53 | 59 | * @return the race log or <code>null</code> in case this column belongs to a {@link MetaLeaderboard} |
| 54 | 60 | */ |
| 55 | 61 | RaceLog getRaceLog(Fleet fleet); |
java/com.sap.sailing.domain/src/com/sap/sailing/domain/base/impl/AbstractRaceColumn.java
| ... | ... | @@ -74,15 +74,20 @@ public abstract class AbstractRaceColumn extends SimpleAbstractRaceColumn implem |
| 74 | 74 | } |
| 75 | 75 | |
| 76 | 76 | @Override |
| 77 | - public synchronized void setRaceLogInformation(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent) { |
|
| 78 | - this.raceLogStore = raceLogStore; |
|
| 79 | - this.regattaLikeParent = regattaLikeParent; |
|
| 77 | + public synchronized void setRaceLogInformationAndLoad(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent) { |
|
| 78 | + setRaceLogInformation(raceLogStore, regattaLikeParent); |
|
| 80 | 79 | for (final Fleet fleet : getFleets()) { |
| 81 | 80 | reloadRaceLog(fleet); |
| 82 | 81 | } |
| 83 | 82 | } |
| 84 | 83 | |
| 85 | 84 | @Override |
| 85 | + public synchronized void setRaceLogInformation(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent) { |
|
| 86 | + this.raceLogStore = raceLogStore; |
|
| 87 | + this.regattaLikeParent = regattaLikeParent; |
|
| 88 | + } |
|
| 89 | + |
|
| 90 | + @Override |
|
| 86 | 91 | public RaceLog getRaceLog(Fleet fleet) { |
| 87 | 92 | return raceLogs.get(fleet); |
| 88 | 93 | } |
| ... | ... | @@ -191,6 +196,7 @@ public abstract class AbstractRaceColumn extends SimpleAbstractRaceColumn implem |
| 191 | 196 | |
| 192 | 197 | @Override |
| 193 | 198 | public void reloadRaceLog(Fleet fleet) { |
| 199 | + // FIXME bug3286: newOrLoadedRaceLog will have MongoRaceLogStoreListener attached; raceLogAvailable, result of de-serialization, will not; merging newOrLoadedRaceLog into raceLogAvailable will leave resulting log without persistence |
|
| 194 | 200 | RaceLogIdentifier identifier = getRaceLogIdentifier(fleet); |
| 195 | 201 | RaceLog newOrLoadedRaceLog = raceLogStore.getRaceLog(identifier, /* ignoreCache */true); |
| 196 | 202 | RaceLog raceLogAvailable = raceLogs.get(fleet); |
java/com.sap.sailing.domain/src/com/sap/sailing/domain/base/impl/RegattaImpl.java
| ... | ... | @@ -284,7 +284,7 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 284 | 284 | } |
| 285 | 285 | this.series = seriesList; |
| 286 | 286 | for (Series s : series) { |
| 287 | - linkToRegattaAndConnectRaceLogsAndAddListeners(s); |
|
| 287 | + linkToRegattaAndConnectRaceLogsAndAddListeners(s, /* load race logs */ true); |
|
| 288 | 288 | } |
| 289 | 289 | this.persistent = persistent; |
| 290 | 290 | this.scoringScheme = scoringScheme; |
| ... | ... | @@ -323,14 +323,19 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 323 | 323 | return rankingMetricConstructor == null ? OneDesignRankingMetric::new : rankingMetricConstructor; |
| 324 | 324 | } |
| 325 | 325 | |
| 326 | - private void registerRaceLogsOnRaceColumns(Series series) { |
|
| 326 | + private void registerRaceLogsOnRaceColumns(Series series, boolean loadRaceLogs) { |
|
| 327 | 327 | for (RaceColumn raceColumn : series.getRaceColumns()) { |
| 328 | - setRaceLogInformationOnRaceColumn(raceColumn); |
|
| 328 | + setRaceLogInformationOnRaceColumn(raceColumn, loadRaceLogs); |
|
| 329 | 329 | } |
| 330 | 330 | } |
| 331 | 331 | |
| 332 | - private void setRaceLogInformationOnRaceColumn(RaceColumn raceColumn) { |
|
| 333 | - raceColumn.setRaceLogInformation(raceLogStore, new RegattaAsRegattaLikeIdentifier(this)); |
|
| 332 | + private void setRaceLogInformationOnRaceColumn(RaceColumn raceColumn, boolean loadRaceLogs) { |
|
| 333 | + final RegattaLikeIdentifier regattaLikeIdentifier = new RegattaAsRegattaLikeIdentifier(this); |
|
| 334 | + if (loadRaceLogs) { |
|
| 335 | + raceColumn.setRaceLogInformationAndLoad(raceLogStore, regattaLikeIdentifier); |
|
| 336 | + } else { |
|
| 337 | + raceColumn.setRaceLogInformation(raceLogStore, regattaLikeIdentifier); |
|
| 338 | + } |
|
| 334 | 339 | } |
| 335 | 340 | |
| 336 | 341 | @Override |
| ... | ... | @@ -375,17 +380,14 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 375 | 380 | |
| 376 | 381 | /** |
| 377 | 382 | * {@link RaceColumnListeners} may not be de-serialized (yet) when the regatta is de-serialized. To avoid |
| 378 | - * re-registering empty objects most probably leading to null pointer exception one needs to initialize all |
|
| 383 | + * re-registering empty objects most probably leading to a {@link NullPointerException}, one needs to initialize all |
|
| 379 | 384 | * listeners after all objects have been read. |
| 380 | 385 | */ |
| 381 | 386 | public void initializeSeriesAfterDeserialize() { |
| 382 | - for (Series series : getSeries()) { |
|
| 383 | - linkToRegattaAndConnectRaceLogsAndAddListeners(series); |
|
| 384 | - if (series.getRaceColumns() != null) { |
|
| 385 | - for (RaceColumnInSeries column : series.getRaceColumns()) { |
|
| 386 | - column.setRaceLogInformation(raceLogStore, new RegattaAsRegattaLikeIdentifier(this)); |
|
| 387 | - } |
|
| 388 | - } else { |
|
| 387 | + for (final Series series : getSeries()) { |
|
| 388 | + // the following also transitively invokes setRaceLogInformation(raceLogStore, getRegattaLikeIdentifier()) on all race columns |
|
| 389 | + linkToRegattaAndConnectRaceLogsAndAddListeners(series, /* load race logs */ false); |
|
| 390 | + if (series.getRaceColumns() == null) { |
|
| 389 | 391 | logger.warning("Race Columns were null during deserialization. This should not happen."); |
| 390 | 392 | } |
| 391 | 393 | } |
| ... | ... | @@ -594,7 +596,7 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 594 | 596 | |
| 595 | 597 | @Override |
| 596 | 598 | public void raceColumnAddedToContainer(RaceColumn raceColumn) { |
| 597 | - setRaceLogInformationOnRaceColumn(raceColumn); |
|
| 599 | + setRaceLogInformationOnRaceColumn(raceColumn, /* loadRaceLogs */ true); |
|
| 598 | 600 | raceColumnListeners.notifyListenersAboutRaceColumnAddedToContainer(raceColumn); |
| 599 | 601 | } |
| 600 | 602 | |
| ... | ... | @@ -790,7 +792,7 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 790 | 792 | public void addSeries(Series seriesToAdd) { |
| 791 | 793 | Series existingSeries = getSeriesByName(seriesToAdd.getName()); |
| 792 | 794 | if (existingSeries == null) { |
| 793 | - linkToRegattaAndConnectRaceLogsAndAddListeners(seriesToAdd); |
|
| 795 | + linkToRegattaAndConnectRaceLogsAndAddListeners(seriesToAdd, /* load race logs */ true); |
|
| 794 | 796 | synchronized (this.series) { |
| 795 | 797 | ArrayList<Series> newSeriesList = new ArrayList<Series>(); |
| 796 | 798 | for (Series seriesObject : this.series) { |
| ... | ... | @@ -802,10 +804,10 @@ public class RegattaImpl extends NamedImpl implements Regatta, RaceColumnListene |
| 802 | 804 | } |
| 803 | 805 | } |
| 804 | 806 | |
| 805 | - private void linkToRegattaAndConnectRaceLogsAndAddListeners(Series seriesToAdd) { |
|
| 807 | + private void linkToRegattaAndConnectRaceLogsAndAddListeners(Series seriesToAdd, boolean loadRaceLogs) { |
|
| 806 | 808 | seriesToAdd.setRegatta(this); |
| 807 | 809 | seriesToAdd.addRaceColumnListener(this); |
| 808 | - registerRaceLogsOnRaceColumns(seriesToAdd); |
|
| 810 | + registerRaceLogsOnRaceColumns(seriesToAdd, loadRaceLogs); |
|
| 809 | 811 | } |
| 810 | 812 | |
| 811 | 813 | @Override |
java/com.sap.sailing.domain/src/com/sap/sailing/domain/leaderboard/impl/FlexibleLeaderboardImpl.java
| ... | ... | @@ -178,7 +178,7 @@ public class FlexibleLeaderboardImpl extends AbstractLeaderboardImpl implements |
| 178 | 178 | column = createRaceColumn(name, medalRace); |
| 179 | 179 | column.addRaceColumnListener(this); |
| 180 | 180 | races.add(column); |
| 181 | - column.setRaceLogInformation(raceLogStore, new FlexibleLeaderboardAsRegattaLikeIdentifier(this)); |
|
| 181 | + column.setRaceLogInformationAndLoad(raceLogStore, new FlexibleLeaderboardAsRegattaLikeIdentifier(this)); |
|
| 182 | 182 | column.setRegattaLikeHelper(regattaLikeHelper); |
| 183 | 183 | getRaceColumnListeners().notifyListenersAboutRaceColumnAddedToContainer(column); |
| 184 | 184 | } |
java/com.sap.sailing.domain/src/com/sap/sailing/domain/leaderboard/meta/MetaLeaderboardColumn.java
| ... | ... | @@ -232,6 +232,10 @@ public class MetaLeaderboardColumn extends SimpleAbstractRaceColumn implements R |
| 232 | 232 | } |
| 233 | 233 | |
| 234 | 234 | @Override |
| 235 | + public void setRaceLogInformationAndLoad(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent) { |
|
| 236 | + } |
|
| 237 | + |
|
| 238 | + @Override |
|
| 235 | 239 | public void setRaceLogInformation(RaceLogStore raceLogStore, RegattaLikeIdentifier regattaLikeParent) { |
| 236 | 240 | } |
| 237 | 241 |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/home/communication/eventview/HasRegattaMetadata.java
| ... | ... | @@ -7,7 +7,7 @@ import com.sap.sailing.gwt.home.communication.event.LabelType; |
| 7 | 7 | |
| 8 | 8 | public interface HasRegattaMetadata { |
| 9 | 9 | |
| 10 | - public enum RegattaState { |
|
| 10 | + public static enum RegattaState { |
|
| 11 | 11 | UPCOMING(LabelType.UPCOMING), PROGRESS(LabelType.PROGRESS), RUNNING(LabelType.LIVE), FINISHED(LabelType.FINISHED); |
| 12 | 12 | |
| 13 | 13 | private final LabelType stateMarker; |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/home/desktop/places/whatsnew/resources/SailingAnalyticsNotes.html
| ... | ... | @@ -5,6 +5,11 @@ |
| 5 | 5 | <div id="mainContent"> |
| 6 | 6 | <h4 class="articleHeadline">What's New - SAP Sailing Analytics</h4> |
| 7 | 7 | <div class="innerContent"> |
| 8 | + <h5 class="articleSubheadline">January 2024</h5> |
|
| 9 | + <ul class="bulletList"> |
|
| 10 | + <li>Bug fix for colored tails: maximum value for color key was sometimes not |
|
| 11 | + adjusted properly, leading to a "red shift" on the tails.</li> |
|
| 12 | + </ul> |
|
| 8 | 13 | <h5 class="articleSubheadline">December 2023</h5> |
| 9 | 14 | <ul class="bulletList"> |
| 10 | 15 | <li>The memory manager for data mining queries is now less aggressive for large heap sizes and |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/home/server/LeaderboardContext.java
| ... | ... | @@ -134,36 +134,35 @@ public class LeaderboardContext { |
| 134 | 134 | } |
| 135 | 135 | |
| 136 | 136 | public RegattaState calculateRegattaState() { |
| 137 | - // First using event state -> fast and ensures that all regattas are marked as finished after the event is finished |
|
| 137 | + // First using event state -> fast and ensures that all regattas are marked as finished after the event is |
|
| 138 | + // finished |
|
| 138 | 139 | EventState eventState = HomeServiceUtil.calculateEventState(event); |
| 139 | - if(eventState == EventState.FINISHED) { |
|
| 140 | + if (eventState == EventState.FINISHED) { |
|
| 140 | 141 | return RegattaState.FINISHED; |
| 141 | 142 | } |
| 142 | - if(eventState == EventState.UPCOMING || eventState == EventState.PLANNED) { |
|
| 143 | + if (eventState == EventState.UPCOMING || eventState == EventState.PLANNED) { |
|
| 143 | 144 | return RegattaState.UPCOMING; |
| 144 | 145 | } |
| 145 | - |
|
| 146 | 146 | // Using regatta start and end -> fast calculation of upcoming and finished states but not helpful to |
| 147 | 147 | // distinguish between live and progress |
| 148 | 148 | TimePoint startDate = getStartTimePoint(); |
| 149 | - if(startDate != null && now.before(startDate)) { |
|
| 149 | + if (startDate != null && now.before(startDate)) { |
|
| 150 | 150 | return RegattaState.UPCOMING; |
| 151 | 151 | } |
| 152 | 152 | TimePoint endDate = getEndTimePoint(); |
| 153 | - if(endDate != null && now.after(endDate)) { |
|
| 153 | + if (endDate != null && now.after(endDate)) { |
|
| 154 | 154 | return RegattaState.FINISHED; |
| 155 | 155 | } |
| 156 | - |
|
| 157 | 156 | // Using the race states to calculate the real state for running events/regattas |
| 158 | 157 | OverallRacesStateCalculator racesStateCalculator = new OverallRacesStateCalculator(); |
| 159 | 158 | forRacesWithReadPermissions(racesStateCalculator); |
| 160 | - if(racesStateCalculator.hasLiveRace()) { |
|
| 159 | + if (racesStateCalculator.hasLiveRace()) { |
|
| 161 | 160 | return RegattaState.RUNNING; |
| 162 | 161 | } |
| 163 | - if(!racesStateCalculator.hasUnfinishedRace()) { |
|
| 162 | + if (!racesStateCalculator.hasUnfinishedRace()) { |
|
| 164 | 163 | return RegattaState.FINISHED; |
| 165 | 164 | } |
| 166 | - if(racesStateCalculator.hasAbandonedOrPostponedRace() || racesStateCalculator.hasFinishedRace()) { |
|
| 165 | + if (racesStateCalculator.hasAbandonedOrPostponedRace() || racesStateCalculator.hasFinishedRace()) { |
|
| 167 | 166 | return RegattaState.PROGRESS; |
| 168 | 167 | } |
| 169 | 168 | return RegattaState.UPCOMING; |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_cs.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Nadcházející |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Poslední aktualizace skóre |
| 1680 | 1680 | eventRegattaRaceNotTracked=Netrasováno |
| 1681 | 1681 | footerJobs=Práce |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing in |
| 1684 | 1684 | footerLegal=Právní informace |
| 1685 | 1685 | footerPrivacy=Ochrana osobních údajů |
| ... | ... | @@ -2465,7 +2465,7 @@ helptextLinkingRaces=Chcete-li propojit rozjížďky s trasovanými rozjížďka |
| 2465 | 2465 | scoringSchemeLowPointA82Only=Nízkobodový systém; shoda podle závodních pravidel jachtingu A8.2 (poslední rozjížďka) |
| 2466 | 2466 | scoringSchemeLowPointA82OnlyDescription=Nízkobodový systém; shoda podle závodních pravidel jachtingu A8.2; Pokud shoda přetrvá po kontrole účasti a skóre ve finálové rozjížďce, porovnejte skóre v poslední rozjížďce (včetně vyloučených skóre), potom v předposlední atd., dokud shoda nebude rozhodnuta. |
| 2467 | 2467 | scoringSchemeLowPointSystemFirstThreeWinsA82Only=Nízkobodový systém; první se třemi vítězstvími ve finálovém závodu je vítěz; shoda A8.2 (poslední rozjížďka) |
| 2468 | -scoringSchemeLowPointSystemFirstThreeWinsA82OnlyDescription=Nízkobodový systém. První ve finálovém závodu, který vyhraje tři rozjížďky, vyhrává finálový závod. Sloupec přenosu ve finálovém závodu lze použít k modelování přenášených vítězství. Shoda v úvodních závodech je založena na A8.2 (poslední rozjížďka, pak předposlední atd.). |
|
| 2468 | +scoringSchemeLowPointSystemFirstThreeWinsA82OnlyDescription=Nízkobodový systém. První, kdo ve finálovém závodu vyhraje tři rozjížďky, vyhrává finálový závod. Sloupec přenosu ve finálovém závodu lze použít k modelování přenášených vítězství. Shoda v úvodních závodech je založena na A8.2 (poslední rozjížďka, pak předposlední atd.). |
|
| 2469 | 2469 | errorFetchingUserPreference=Chyba při načítání uživatelských preferencí s klíčem „{0}“: {1} |
| 2470 | 2470 | errorSettingUserPreference=Chyba při načítání uživatelských preferencí s klíčem „{0}“: {1} |
| 2471 | 2471 | scoringSchemeLowPointWithEliminatingMedalSeriesPromotingOneToFinalAndTwoToSemifinal=Nízkobodový systém; finálové rozjížďky jako čtvrtfinále, semifinále a finále |
| ... | ... | @@ -2473,5 +2473,11 @@ scoringSchemeLowPointWithEliminatingMedalSeriesPromotingOneToFinalAndTwoToSemifi |
| 2473 | 2473 | incrementalScoreCorrectionInPoints=Dodatečná oprava skóre (body) |
| 2474 | 2474 | errorObtainingCourseAreasForLeaderboard=Chyba při získávání oblastí dráhy pro výsledkovou tabuli {0}: {1}. |
| 2475 | 2475 | tackType=Dlouhý/krátký obrat |
| 2476 | -tackTypeTooltip=Po větru: Pokud je rozdíl mezi kurzem proti dnu a směrem k dalšímu trasovému bodu menší než rozdíl mezi kurzem proti dnu a směrem větru, jedná se o dlouhý obrat (1,0); pokud je menší, jedná se o krátký obrat (-1.0). Proti větru: Podobné jako po větru, ale místo „směru větru“ se používá opačný směr. Tj. směr vanutí větru. Boční vítr: Podobné jako proti větru, ale místo porovnání „kurzu proti dnu a směru větru“ se používá 10°. |
|
| 2476 | +tackTypeTooltip=Po větru: Pokud je rozdíl mezi kurzem proti dnu a směrem k dalšímu trasovému bodu menší než rozdíl mezi kurzem proti dnu a směrem větru, jedná se o dlouhý obrat (1,0); pokud je menší, jedná se o krátký obrat (-1.0). Proti větru: Podobné jako po větru, ale místo „směru větru“ se používá opačný směr, tj. směr vanutí větru. Boční vítr: Podobné jako proti větru, ale místo porovnání „kurzu proti dnu a směru větru“ se používá 10°. |
|
| 2477 | 2477 | tackTypeUnit=Dlouhý=1.0, Krátký=-1.0, Není známo=0.0 |
| 2478 | +tackTypeSegments=Obratové segmenty |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=Minimální doba mezi dvěma po sobě jdoucími obratovými segmenty nesmí být záporná. |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=Minimální trvání obratového segmentu nesmí být záporné. |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Minimální doba mezi dvěma po sobě jdoucími obratovými segmenty (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Minimální trvání obratových segmentů (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Chyba: Nebyly nalezeny žádné autentizační parametry pro API Map Google: {0}. |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_da.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Kommende |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Seneste opdatering af point |
| 1680 | 1680 | eventRegattaRaceNotTracked=Ikke sporet |
| 1681 | 1681 | footerJobs=Job |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing på |
| 1684 | 1684 | footerLegal=Ansvarsbegrænsning |
| 1685 | 1685 | footerPrivacy=Databeskyttelse |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Fejl ved hentning af baneområder for ra |
| 2475 | 2475 | tackType=Lang/kort stagvende |
| 2476 | 2476 | tackTypeTooltip=Ved bidevind: Hvis forskellen mellem COG og næste waypoint-retning er mindre end den mellem COG og vindretning, er det lang stagvende (1,0), og hvis den er mindre, kort stagvende (-1,0). Ved læns: Som ved bidevind, men i stedet for "vindretning" bruges den modsatte retning, dvs. den retning, vinden blæser mod. Ved kryds: Som ved bidevind, men i stedet for at sammenligne med "COG og vindretning" bruges 10°. |
| 2477 | 2477 | tackTypeUnit=L=1,0, K=-1,0, Ukendt=0,0 |
| 2478 | +tackTypeSegments=Segmenter for stagvendetyper |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=Min. varighed mellem tilstødende segmenter for stagvendetyper må ikke være negativ |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=Min. varighed af et segment for stagvendetyper må ikke være negativ |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Min. varighed mellem tilstødende segmenter for stagvendetyper (sek.) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Min. varighed af segmenter for stagvendetyper (sek.) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Fejl: Ingen autentifikationsparametre for Google Maps-API blev fundet: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_es.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Próximo |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Última actualización de puntuación |
| 1680 | 1680 | eventRegattaRaceNotTracked=No rastreado |
| 1681 | 1681 | footerJobs=Jobs |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing en |
| 1684 | 1684 | footerLegal=Publicidad legal |
| 1685 | 1685 | footerPrivacy=Privacidad |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Error al obtener las áreas de regata pa |
| 2475 | 2475 | tackType=Bordo largo/corto |
| 2476 | 2476 | tackTypeTooltip=Para barlovento: si la diferencia entre COG y la dirección del punto de ruta siguiente es menor que el de entre COG y la dirección del viento, se trata de un bordo largo (1,0); si es menor, de un bordo corto (-1,0). Para sotavento: similar a barlovento pero en lugar de "dirección del viento", se utiliza la dirección opuesta. Es decir, hacia dónde sopla el viento. Para alcance: similar a barlovento, pero en lugar de comparar con "COG y dirección del viento", se utilizan 10°. |
| 2477 | 2477 | tackTypeUnit=L=1.0, C=-1.0, Desconocido=0.0 |
| 2478 | +tackTypeSegments=Segmentos de tipo de bordo |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=La duración mínima entre segmentos adyacentes de tipo de bordo no puede ser negativa |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=La duración mínima de un segmento de tipo de bordo no puede ser negativa |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Duración mínima entre segmentos adyacentes de tipo de bordo (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Duración mínima de segmentos de tipo de bordo (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Error: No se han encontrado parámetros de autenticación para la API de Google Maps: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_fr.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=à venir |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Dernière mise à jour des scores |
| 1680 | 1680 | eventRegattaRaceNotTracked=Non suivi(e) |
| 1681 | 1681 | footerJobs=Carrières |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing in |
| 1684 | 1684 | footerLegal=Mentions légales |
| 1685 | 1685 | footerPrivacy=Confidentialité des données |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Erreur lors de l''accès aux zones de co |
| 2475 | 2475 | tackType=Virement long/court |
| 2476 | 2476 | tackTypeTooltip=Dans le lit du vent : si la différence entre la route fond et la direction du prochain point de cheminement est inférieure à celle entre la route fond et la direction du vent, il s''agit d''un virement long (1,0) ; si la différence est inférieure, il s''agit d''un virement court (-1,0). Avec vent arrière : similaire à la situation dans le lit du vent, mais à la place de la « direction du vent », on utilise la direction inverse, c''est-à-dire la direction vers laquelle souffle le vent. Pour l''atteinte d''un point : similaire à la situation dans le lit du vent, mais au lieu de comparer « route fond et direction du vent », on utilise 10°. |
| 2477 | 2477 | tackTypeUnit=L=1,0, S=-1,0, Inconnu=0.0 |
| 2478 | +tackTypeSegments=Tronçons de type Virement |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=La durée minimale entre deux tronçons de type Virement adjacents ne doit pas être négative. |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=La durée minimale d''un tronçon de type Virement ne doit pas être négative. |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Durée minimale (en s) entre deux tronçons de type Virement adjacents |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Durée minimale (en s) d''un tronçon de type Virement |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Erreur : aucun paramètre d''authentification trouvé pour l''API Google Maps : {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_it.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Prossimamente |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Ultimo aggiornamento punteggio |
| 1680 | 1680 | eventRegattaRaceNotTracked=Tracciamento non eseguito |
| 1681 | 1681 | footerJobs=Jobs |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing in |
| 1684 | 1684 | footerLegal=Informativa legale |
| 1685 | 1685 | footerPrivacy=Privacy |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Errore durante l''ottenimento delle aree |
| 2475 | 2475 | tackType=Bordo lungo/corto |
| 2476 | 2476 | tackTypeTooltip=Sopravento: se la differenza tra la rotta effettiva al fondo e la successiva direzione del waypoint è superiore a quella tra la rotta effettiva al fondo e la direzione del vento si parla di bordo lungo (1,0); se è inferiore si parla di bordo corto (-1,0). Sottovento: simile al sopravento ma anziché utilizzare la direzione del vento si utilizza la direzione opposta, quindi la direzione verso cui soffia il vento. Per l''andatura al lasco: simile al sopravento ma anziché confrontare rotta effettiva al fondo e direzione del vento, si utilizzano 10°. |
| 2477 | 2477 | tackTypeUnit=L=1,0, B=-1,0, Sconosciuto=0,0 |
| 2478 | +tackTypeSegments=Segmenti tipo bordo |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=La durata minima tra segmenti adiacenti di tipo bordo non può essere negativa |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=La durata minima di un segmento di tipo bordo non può essere negativa |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Durata minima tra i segmenti adiacenti di tipo bordo (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Durata minima dei segmenti di tipo bordo (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Errore: nessun parametro di autenticazione trovato per l''API di Google Maps: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_ja.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=予定 |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=前回スコア更新 |
| 1680 | 1680 | eventRegattaRaceNotTracked=未追跡 |
| 1681 | 1681 | footerJobs=採用 |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing - |
| 1684 | 1684 | footerLegal=利用規約 |
| 1685 | 1685 | footerPrivacy=個人情報保護 |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=リーダーボード {0} のコース |
| 2475 | 2475 | tackType=ロング/ショートタック |
| 2476 | 2476 | tackTypeTooltip=アップウィンドの場合: COG (対地針路) と次の変針点方向との間の差異が COG と風向との間の差異より小さい場合はロングタック (1.0) です。より小さい場合はショートタック (-1.0) です。ダウンウィンドの場合: アップウィンドに類似していますが、"風向" ではなくその反対方向を使用します (風が正面から吹き付けているときの向き)。リーチングの場合: アップウィンドに類似していますが、"COG と風向" との比較ではなく、10°を使用します。 |
| 2477 | 2477 | tackTypeUnit=L = 1.0、S = -1.0、不明 = 0.0 |
| 2478 | +tackTypeSegments=タックタイプセグメント |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=隣り合ったタックタイプセグメント間の最小時間はマイナスの値であってはなりません。 |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=タックタイプセグメントの最小時間はマイナスの値であってはなりません。 |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=隣り合ったタックタイプセグメント間の最小時間 (秒) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=タックタイプセグメントの最小時間 (秒) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=エラー: Google Maps API の認証パラメータが見つかりませんでした: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_pt.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Próximo |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Atualização da última pontuação |
| 1680 | 1680 | eventRegattaRaceNotTracked=Não rastreado |
| 1681 | 1681 | footerJobs=Tarefas |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing em |
| 1684 | 1684 | footerLegal=Divulgação de informação legal |
| 1685 | 1685 | footerPrivacy=Privacidade |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Erro ao obter áreas de percurso para pa |
| 2475 | 2475 | tackType=Cambada longa/curta |
| 2476 | 2476 | tackTypeTooltip=Para contravento: se a diferença entre o percurso no fundo e a direção do próximo waypoint for inferior à diferença entre o percurso no fundo e a direção do vento, é cambada longa (1,0), se for inferior, é cambada curta (-1,0). Para popa: semelhante ao contravento, mas em vez da "direção do vento", usa-se a direção oposta. Isto é, para onde o vento está soprando. Para través: semelhante ao contravento, mas em vez da comparação com "percurso no fundo e direção do vento", usa-se 10º. |
| 2477 | 2477 | tackTypeUnit=L= 1,0, C= -1,0, Desconhecido= 0,0 |
| 2478 | +tackTypeSegments=Segmentos de tipo de cambada |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=A duração mínima entre os segmentos adjacentes de tipo de cambada não deve ser negativa |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=A duração mínima de um segmento de tipo de cambada não deve ser negativa |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Duração mínima entre segmentos adjacentes de tipo de cambada (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Duração mínima de segmentos de tipo de cambada (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Erro: não foram encontrados parâmetros de autenticação para a API do Google Maps: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_ru.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Скоро |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Последнее обновление оценок |
| 1680 | 1680 | eventRegattaRaceNotTracked=Не отслеживается |
| 1681 | 1681 | footerJobs=Задания |
| 1682 | -footerCopyright=© SAP Sailing Analytics, 2011-2023. |
|
| 1682 | +footerCopyright=© SAP Sailing Analytics, 2011-2024. |
|
| 1683 | 1683 | footerLanguage=Язык SAP Sailing |
| 1684 | 1684 | footerLegal=Раскрытие юридической информации |
| 1685 | 1685 | footerPrivacy=Конфиденциальность |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Ошибка при получении |
| 2475 | 2475 | tackType=Длинный /короткий галс |
| 2476 | 2476 | tackTypeTooltip=Против ветра: если разница между COG и направлением следующей путевой точки меньше разницы между COG и направлением ветра, то это длинный галс (1.0); если меньше, короткий галс (-1.0). По ветру: определяется так же, как против ветра, но вместо направления ветра используют противоположное направление. То есть, куда дует ветер. Полный ветер: определяется так же, как против ветра, но вместо сравнения с COG и направлением ветра используют 10°. |
| 2477 | 2477 | tackTypeUnit=Д=1.0, К=-1.0, Неизвестно=0.0 |
| 2478 | +tackTypeSegments=Сегменты типов галсов |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=Минимальная продолжительность между смежными сегментами типов галсов не должна быть отрицательной |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=Минимальная продолжительность сегмента типа галса не должна быть отрицательной |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Минимальная продолжительность между сегментами типов галсов (сек) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Минимальная продолжительность сегментов типов галсов (сек) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Ошибка: не найдены параметры полномочий для API Google Maps: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_sl.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=Kmalu |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=Posodobitev zadnjega rezultata |
| 1680 | 1680 | eventRegattaRaceNotTracked=Brez sledenja |
| 1681 | 1681 | footerJobs=Opravila |
| 1682 | -footerCopyright=© 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=© 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing v |
| 1684 | 1684 | footerLegal=Pravno razkritje |
| 1685 | 1685 | footerPrivacy=Zasebnost |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=Napaka pri pridobivanju območij proge z |
| 2475 | 2475 | tackType=Dolgo/kratko prečenje |
| 2476 | 2476 | tackTypeTooltip=Proti vetru: Če je razlika med COG in smerjo vmesnega cilja poti manjša od razlike med COG in smerjo vetra, je dolgo prečenje (1,0); če je manjše kratko prečenje (-1,0). Z vetrom: Podobno kot proti vetru, vendar namesto "smeri vetra" uporabite nasprotno smer. Torej, kam veter piha. Za doseganje: Podobno kot proti vetru, vendar namesto primerjave z "COG in smerjo vetra" uporabite 10°. |
| 2477 | 2477 | tackTypeUnit=D=1,0, K=-1,0, neznano=0,0 |
| 2478 | +tackTypeSegments=Segmenti vrste prečenja |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=Minimalno trajanje med sosednjimi segmenti vrste prečenja ne sme biti negativno |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=Minimalno trajanje segmenta vrste prečenja ne sme biti negativno |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=Minimalno trajanje med sosednjimi segmenti vrste prečenja (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=Minimalno trajanje segmentov vrste prečenja (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=Napaka: Parametri preverjanja pristnosti za API aplikacije Google Maps niso bili najdeni: {0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/StringMessages_zh.properties
| ... | ... | @@ -1679,7 +1679,7 @@ seriesHeaderUpcoming=即将到来 |
| 1679 | 1679 | eventRegattaLeaderboardLastScoreUpdate=上次得分更新 |
| 1680 | 1680 | eventRegattaRaceNotTracked=未跟踪 |
| 1681 | 1681 | footerJobs=招贤纳士 |
| 1682 | -footerCopyright=版权所有 © 2011-2023 SAP Sailing Analytics |
|
| 1682 | +footerCopyright=版权所有 © 2011-2024 SAP Sailing Analytics |
|
| 1683 | 1683 | footerLanguage=SAP Sailing 语言 |
| 1684 | 1684 | footerLegal=法律声明 |
| 1685 | 1685 | footerPrivacy=隐私 |
| ... | ... | @@ -2475,3 +2475,9 @@ errorObtainingCourseAreasForLeaderboard=获取积分榜 {0} 的场地区域出 |
| 2475 | 2475 | tackType=长程/短程迎风转向 |
| 2476 | 2476 | tackTypeTooltip=对于迎风:如果对地航向与下个航路点方向之间的差值小于对地航向与风向之间的差值,则为长程迎风转向 (1.0);如果更小,则为短程迎风转向 (-1.0)。对于顺风:与迎风类似,但不是“风向”,而是使用相反的方向。也就是风吹向哪里。对于横风:与迎风类似,但不与“对地航向和风向”进行比较,而是使用 10°。 |
| 2477 | 2477 | tackTypeUnit=L=1.0,S=-1.0,未知=0.0 |
| 2478 | +tackTypeSegments=迎风转向类型段 |
|
| 2479 | +errorMinimumDurationBetweenAdjacentTackTypeSegmentsMustNotBeNegative=相邻迎风转向类型段之间的最短持续时间不得为负 |
|
| 2480 | +errorMinimumTackTypeSegmentDurationMustNotBeNegative=迎风转向类型段的最短持续时间不得为负 |
|
| 2481 | +minimumDurationBetweenAdjacentTackTypeSegmentsInSeconds=相邻迎风转向类型段之间的最短持续时间 (s) |
|
| 2482 | +minimumTackTypeSegmentsDurationInSeconds=迎风转向类型段的最短持续时间 (s) |
|
| 2483 | +errorNoAuthenticationParamsForGoogleMapsFound=错误:未找到 Google 地图 API 的身份验证参数:{0} |
java/com.sap.sailing.gwt.ui/src/main/java/com/sap/sailing/gwt/ui/client/shared/racemap/FixesAndTails.java
| ... | ... | @@ -761,7 +761,7 @@ public class FixesAndTails { |
| 761 | 761 | } else { |
| 762 | 762 | final GPSFixDTOWithSpeedWindTackAndLegType maxFix = competitorFixes.get(maxIndex); |
| 763 | 763 | // replacing a fix with a non-maximal detailValue |
| 764 | - if (newFix.detailValue != null && maxFix.detailValue != null && newFix.detailValue < maxFix.detailValue) { |
|
| 764 | + if (newFix.detailValue != null && maxFix.detailValue != null && newFix.detailValue > maxFix.detailValue) { |
|
| 765 | 765 | // the replacement fix is a new maximum |
| 766 | 766 | maxDetailValueFixByCompetitorIdsAsStrings.put(competitorDTO.getIdAsString(), replacedFixIndex); |
| 767 | 767 | } |
java/com.sap.sailing.landscape.common/META-INF/MANIFEST.MF
| ... | ... | @@ -11,3 +11,4 @@ Require-Bundle: com.sap.sse.security.common, |
| 11 | 11 | com.sap.sse.landscape.aws.common |
| 12 | 12 | Bundle-ActivationPolicy: lazy |
| 13 | 13 | Export-Package: com.sap.sailing.landscape.common |
| 14 | +Import-Package: software.amazon.awssdk.regions |
java/com.sap.sailing.landscape.common/src/com/sap/sailing/landscape/common/SharedLandscapeConstants.java
| ... | ... | @@ -20,6 +20,15 @@ public interface SharedLandscapeConstants { |
| 20 | 20 | */ |
| 21 | 21 | String DEFAULT_SECURITY_SERVICE_REPLICA_SET_NAME = "security-service"; |
| 22 | 22 | |
| 23 | + String RABBIT_IN_DEFAULT_REGION_HOSTNAME = "rabbit.internal.sapsailing.com"; |
|
| 24 | + |
|
| 25 | + String DEFAULT_REGION = "eu-west-1"; |
|
| 26 | + |
|
| 27 | + /** |
|
| 28 | + * We maintain a DNS entry for "rabbit.internal.sapsailing.com" (see {@link #RABBIT_IN_DEFAULT_REGION_HOSTNAME}) in this region |
|
| 29 | + */ |
|
| 30 | + String REGION_WITH_RABBITMQ_DNS_HOSTNAME = DEFAULT_REGION; |
|
| 31 | + |
|
| 23 | 32 | /** |
| 24 | 33 | * This is the region of the load balancer handling the default traffic for {@code *.sapsailing.com}. It is also |
| 25 | 34 | * called the "dynamic" load balancer because adding, removing or changing any hostname-based rule in its HTTPS |
| ... | ... | @@ -38,8 +47,14 @@ public interface SharedLandscapeConstants { |
| 38 | 47 | * for archived events. If such a state is reached, "dynamic" load balancing may potentially be used regardless |
| 39 | 48 | * the region. |
| 40 | 49 | */ |
| 41 | - String REGION_WITH_DEFAULT_LOAD_BALANCER = "eu-west-1"; |
|
| 42 | - |
|
| 50 | + String REGION_WITH_DEFAULT_LOAD_BALANCER = DEFAULT_REGION; |
|
| 51 | + |
|
| 52 | + /** |
|
| 53 | + * Tag name used to identify instances on which a RabbitMQ installation is running. The tag value is currently interpreted to |
|
| 54 | + * be the port number (usually 5672) on which the RabbitMQ endpoint can be reached. |
|
| 55 | + */ |
|
| 56 | + String RABBITMQ_TAG_NAME = "RabbitMQEndpoint"; |
|
| 57 | + |
|
| 43 | 58 | /** |
| 44 | 59 | * The tag value used to identify host images that can be launched in order to run one or more Sailing Analytics |
| 45 | 60 | * server processes on it. |
java/com.sap.sailing.landscape.ui/src/com/sap/sailing/landscape/ui/client/i18n/StringMessages.properties
| ... | ... | @@ -75,7 +75,7 @@ reallyRemoveApplicationReplicaSet=Really remove application replica set {0} with |
| 75 | 75 | pleaseSelectSshKeyPair=Please select an SSH key pair |
| 76 | 76 | defineLandingPage=Define landing page |
| 77 | 77 | successfullyUpdatedLandingPage=Successfully updated landing page |
| 78 | -defineDefaultRedirect=Default Default Redirect |
|
| 78 | +defineDefaultRedirect=Default Redirect |
|
| 79 | 79 | defineDefaultRedirectMessage=Defines where visitors of the default path "/" will be redirected |
| 80 | 80 | redirectPlain=Plain landing page (index.html) |
| 81 | 81 | redirectHome=Server''s Home.html page with all events |
java/com.sap.sailing.landscape/src/com/sap/sailing/landscape/procedures/DeployProcessOnMultiServer.java
| ... | ... | @@ -284,10 +284,7 @@ implements Procedure<ShardingKey> { |
| 284 | 284 | "sudo /usr/local/bin/cp_root_mail_properties "+applicationConfiguration.getServerName()+"; "+ |
| 285 | 285 | "cd "+serverDirectory.replaceAll("\"", "\\\\\"")+"; "+ |
| 286 | 286 | "echo '"+applicationConfiguration.getAsEnvironmentVariableAssignments().replaceAll("\"", "\\\\\"").replaceAll("\\$", "\\\\\\$")+ |
| 287 | - "' | /home/sailing/code/java/target/refreshInstance.sh auto-install-from-stdin; ./start\";"+ // SAILING_USER ends here |
|
| 288 | - // from here on as root: |
|
| 289 | - "cd "+serverDirectory.replaceAll("\"", "\\\\\"")+"; "+ |
|
| 290 | - "./defineReverseProxyMappings.sh", |
|
| 287 | + "' | /home/sailing/code/java/target/refreshInstance.sh auto-install-from-stdin; ./start\";", // SAILING_USER ends here |
|
| 291 | 288 | "stderr: ", Level.WARNING); |
| 292 | 289 | logger.info("stdout: "+stdout); |
| 293 | 290 | } |
java/com.sap.sailing.server.interface/META-INF/MANIFEST.MF
| ... | ... | @@ -5,6 +5,7 @@ Bundle-SymbolicName: com.sap.sailing.server.interface |
| 5 | 5 | Bundle-Version: 1.0.0.qualifier |
| 6 | 6 | Bundle-Vendor: SAP |
| 7 | 7 | Automatic-Module-Name: com.sap.sailing.server.interface |
| 8 | +Bundle-ActivationPolicy: lazy |
|
| 8 | 9 | Bundle-RequiredExecutionEnvironment: JavaSE-1.8 |
| 9 | 10 | Require-Bundle: com.sap.sailing.domain;bundle-version="1.0.0", |
| 10 | 11 | com.sap.sse.common;bundle-version="1.0.0", |
java/com.sap.sailing.server.interface/src/com/sap/sailing/server/operationaltransformation/ImportMasterDataOperation.java
| ... | ... | @@ -339,7 +339,7 @@ public class ImportMasterDataOperation extends |
| 339 | 339 | } else if (override) { |
| 340 | 340 | for (RaceColumn raceColumn : existingLeaderboards.get(leaderboard.getName()).getRaceColumns()) { |
| 341 | 341 | for (Fleet fleet : raceColumn.getFleets()) { |
| 342 | - TrackedRace trackedRace = raceColumn.getTrackedRace(fleet); |
|
| 342 | + final TrackedRace trackedRace = raceColumn.getTrackedRace(fleet); |
|
| 343 | 343 | if (trackedRace != null) { |
| 344 | 344 | raceColumn.releaseTrackedRace(fleet); |
| 345 | 345 | } |
| ... | ... | @@ -414,7 +414,7 @@ public class ImportMasterDataOperation extends |
| 414 | 414 | |
| 415 | 415 | private void addAllImportedEvents(MongoObjectFactory mongoObjectFactory, RaceLogStore mongoRaceLogStore, |
| 416 | 416 | final RaceLog log, RaceLogIdentifier identifier) { |
| 417 | - RaceLogEventVisitor storeVisitor = MongoRaceLogStoreFactory.INSTANCE |
|
| 417 | + final RaceLogEventVisitor storeVisitor = MongoRaceLogStoreFactory.INSTANCE |
|
| 418 | 418 | .getMongoRaceLogStoreVisitor(identifier, mongoObjectFactory); |
| 419 | 419 | log.lockForRead(); |
| 420 | 420 | try { |
java/com.sap.sailing.server.replication.test/src/com/sap/sailing/server/replication/test/ConnectionResetAndReconnectTest.java
| ... | ... | @@ -42,7 +42,6 @@ public class ConnectionResetAndReconnectTest extends AbstractServerReplicationTe |
| 42 | 42 | public static boolean forceStopDelivery = false; |
| 43 | 43 | |
| 44 | 44 | static class QueuingConsumerTest extends QueueingConsumer { |
| 45 | - |
|
| 46 | 45 | public QueuingConsumerTest(Channel ch) { |
| 47 | 46 | super(ch); |
| 48 | 47 | } |
| ... | ... | @@ -54,11 +53,9 @@ public class ConnectionResetAndReconnectTest extends AbstractServerReplicationTe |
| 54 | 53 | } |
| 55 | 54 | return super.nextDelivery(); |
| 56 | 55 | } |
| 57 | - |
|
| 58 | 56 | } |
| 59 | 57 | |
| 60 | 58 | static class MasterReplicationDescriptorMock extends ReplicationMasterDescriptorImpl { |
| 61 | - |
|
| 62 | 59 | public MasterReplicationDescriptorMock(String messagingHost, String hostname, String exchangeName, int servletPort, int messagingPort, Iterable<Replicable<?, ?>> replicables) { |
| 63 | 60 | super(messagingHost, exchangeName, messagingPort, UUID.randomUUID().toString(), hostname, servletPort, /* bearerToken */ null, replicables); |
| 64 | 61 | } |
| ... | ... | @@ -84,7 +81,6 @@ public class ConnectionResetAndReconnectTest extends AbstractServerReplicationTe |
| 84 | 81 | channel.basicConsume(queueName, /* auto-ack */ true, consumer); |
| 85 | 82 | return consumer; |
| 86 | 83 | } |
| 87 | - |
|
| 88 | 84 | } |
| 89 | 85 | |
| 90 | 86 | private static class ServerReplicationTestSetUp extends |
| ... | ... | @@ -107,7 +103,6 @@ public class ConnectionResetAndReconnectTest extends AbstractServerReplicationTe |
| 107 | 103 | public void testReplicaLoosingConnectionToExchangeQueue() throws Exception { |
| 108 | 104 | assertNotSame(master, replica); |
| 109 | 105 | assertEquals(Util.size(master.getAllRegattas()), Util.size(replica.getAllRegattas())); |
| 110 | - |
|
| 111 | 106 | /* until here both instances should have the same in-memory state. |
| 112 | 107 | * now lets add an event on master and stop the messaging queue. */ |
| 113 | 108 | stopMessagingExchange(); |
java/com.sap.sailing.server/src/com/sap/sailing/server/impl/RacingEventServiceImpl.java
| ... | ... | @@ -4162,12 +4162,12 @@ Replicator { |
| 4162 | 4162 | |
| 4163 | 4163 | @Override |
| 4164 | 4164 | public void reloadRaceLog(String leaderboardName, String raceColumnName, String fleetName) { |
| 4165 | - Leaderboard leaderboard = getLeaderboardByName(leaderboardName); |
|
| 4165 | + final Leaderboard leaderboard = getLeaderboardByName(leaderboardName); |
|
| 4166 | 4166 | if (leaderboard != null) { |
| 4167 | - RaceColumn raceColumn = leaderboard.getRaceColumnByName(raceColumnName); |
|
| 4167 | + final RaceColumn raceColumn = leaderboard.getRaceColumnByName(raceColumnName); |
|
| 4168 | 4168 | if (raceColumn != null) { |
| 4169 | - Fleet fleetImpl = raceColumn.getFleetByName(fleetName); |
|
| 4170 | - RaceLog racelog = raceColumn.getRaceLog(fleetImpl); |
|
| 4169 | + final Fleet fleetImpl = raceColumn.getFleetByName(fleetName); |
|
| 4170 | + final RaceLog racelog = raceColumn.getRaceLog(fleetImpl); |
|
| 4171 | 4171 | if (racelog != null) { |
| 4172 | 4172 | raceColumn.reloadRaceLog(fleetImpl); |
| 4173 | 4173 | logger.info("Reloaded race log for fleet " + fleetImpl + " for race column " + raceColumn.getName() |
java/com.sap.sailing.www/.well-known/security.txt
| ... | ... | @@ -1,5 +1,2 @@ |
| 1 | 1 | Contact: https://www.sap.com/report-a-vulnerability |
| 2 | -Encryption: https://www.sap.com/pgp-keyblock |
|
| 3 | -Policy: https://wiki.scn.sap.com/wiki/x/1s-iGg |
|
| 4 | -Acknowledgments: https://wiki.scn.sap.com/wiki/x/rc-iGg |
|
| 5 | -Expires: Mon, 31 Jan 2022 12:00 +0100 |
|
| ... | ... | \ No newline at end of file |
| 0 | +Expires: 2025-01-30T18:29:00.000Z |
|
| ... | ... | \ No newline at end of file |
java/com.sap.sailing.www/release_notes_admin.html
| ... | ... | @@ -23,6 +23,15 @@ |
| 23 | 23 | <div class="mainContent"> |
| 24 | 24 | <h2 class="releaseHeadline">Release Notes - Administration Console</h2> |
| 25 | 25 | <div class="innerContent"> |
| 26 | + <h2 class="articleSubheadline">January 2024</h2> |
|
| 27 | + <ul class="bulletList"> |
|
| 28 | + <li>When launching a new application replica set in a region, the choice of the default RabbitMQ |
|
| 29 | + server now depends on the region: in our "default" region "eu-west-1", RabbitMQ is identified |
|
| 30 | + by the DNS-mapped host name "rabbit.internal.sapsailing.com". Elsewhere, the RabbitMQ server |
|
| 31 | + in the region is discovered via the RabbitMQEndpoint tag. If no such instance is found, |

| 32 | + "rabbit.internal.sapsailing.com" is used again, assuming there may be VPC peering across |
|
| 33 | + regions.</li> |
|
| 34 | + </ul> |
|
| 26 | 35 | <h2 class="articleSubheadline">October 2023</h2> |
| 27 | 36 | <ul class="bulletList"> |
| 28 | 37 | <li>TracTrac and YellowBrick passwords are no longer sent back to the client; there were ways to discover |
java/com.sap.sse.landscape.aws.test/META-INF/MANIFEST.MF
| ... | ... | @@ -14,6 +14,8 @@ Require-Bundle: org.hamcrest;bundle-version="2.2.0", |
| 14 | 14 | org.mongodb.bson;bundle-version="4.3.1", |
| 15 | 15 | org.mongodb.driver-core;bundle-version="4.3.1", |
| 16 | 16 | org.mongodb.driver-sync;bundle-version="4.3.1", |
| 17 | - com.sap.sailing.landscape.common |
|
| 17 | + com.sap.sailing.landscape.common, |
|
| 18 | + net.bytebuddy.byte-buddy;bundle-version="1.12.18", |
|
| 19 | + net.bytebuddy.byte-buddy-agent;bundle-version="1.12.18" |
|
| 18 | 20 | Import-Package: org.mockito, |
| 19 | 21 | org.mockito.stubbing;version="4.8.1" |
java/com.sap.sse.landscape.aws.test/src/com/sap/sse/landscape/aws/ConnectivityTest.java
| ... | ... | @@ -51,6 +51,7 @@ import com.sap.sse.landscape.aws.orchestration.CreateDNSBasedLoadBalancerMapping |
| 51 | 51 | import com.sap.sse.landscape.impl.ReleaseRepositoryImpl; |
| 52 | 52 | import com.sap.sse.landscape.mongodb.MongoEndpoint; |
| 53 | 53 | import com.sap.sse.landscape.mongodb.impl.DatabaseImpl; |
| 54 | +import com.sap.sse.landscape.rabbitmq.RabbitMQEndpoint; |
|
| 54 | 55 | import com.sap.sse.landscape.ssh.SSHKeyPair; |
| 55 | 56 | import com.sap.sse.landscape.ssh.SshCommandChannel; |
| 56 | 57 | |
| ... | ... | @@ -535,4 +536,25 @@ public class ConnectivityTest<ProcessT extends AwsApplicationProcess<String, Sai |
| 535 | 536 | assertEquals(200, healthCheckConnection.getResponseCode()); |
| 536 | 537 | healthCheckConnection.disconnect(); |
| 537 | 538 | } |
| 539 | + |
|
| 540 | + @Test |
|
| 541 | + public void getDefaultRabbitConfigForEuWest1() { |
|
| 542 | + final RabbitMQEndpoint rabbitConfig = landscape.getDefaultRabbitConfiguration(new AwsRegion(Region.EU_WEST_1, landscape)); |
|
| 543 | + assertEquals("rabbit.internal.sapsailing.com", rabbitConfig.getNodeName()); |
|
| 544 | + assertEquals(5672, rabbitConfig.getPort()); |
|
| 545 | + } |
|
| 546 | + |
|
| 547 | + @Test |
|
| 548 | + public void getDefaultRabbitConfigForEuWest2() { |
|
| 549 | + final RabbitMQEndpoint rabbitConfig = landscape.getDefaultRabbitConfiguration(new AwsRegion(Region.EU_WEST_2, landscape)); |
|
| 550 | + assertTrue(rabbitConfig.getNodeName().startsWith("172.31.")); |
|
| 551 | + assertEquals(5672, rabbitConfig.getPort()); |
|
| 552 | + } |
|
| 553 | + |
|
| 554 | + @Test |
|
| 555 | + public void getDefaultRabbitConfigForRegionWithNoTaggedInstanceInIt() { |
|
| 556 | + final RabbitMQEndpoint rabbitConfig = landscape.getDefaultRabbitConfiguration(new AwsRegion(Region.US_EAST_2, landscape)); |
|
| 557 | + assertEquals("rabbit.internal.sapsailing.com", rabbitConfig.getNodeName()); |
|
| 558 | + assertEquals(5672, rabbitConfig.getPort()); |
|
| 559 | + } |
|
| 538 | 560 | } |
java/com.sap.sse.landscape.aws/META-INF/MANIFEST.MF
| ... | ... | @@ -26,7 +26,8 @@ Require-Bundle: com.amazon.aws.aws-java-api;bundle-version="2.13.50", |
| 26 | 26 | com.sap.sse.replication.interfaces, |
| 27 | 27 | com.sap.sse.operationaltransformation, |
| 28 | 28 | org.mongodb.driver-core;bundle-version="4.3.1", |
| 29 | - org.mongodb.driver-sync;bundle-version="4.3.1" |
|
| 29 | + org.mongodb.driver-sync;bundle-version="4.3.1", |
|
| 30 | + com.sap.sailing.landscape.common |
|
| 30 | 31 | Web-ContextPath: /landscape |
| 31 | 32 | Import-Package: org.apache.shiro;version="1.2.2", |
| 32 | 33 | org.osgi.framework;version="1.8.0", |
java/com.sap.sse.landscape.aws/src/com/sap/sse/landscape/aws/AwsLandscape.java
| ... | ... | @@ -34,7 +34,6 @@ import com.sap.sse.landscape.mongodb.MongoProcess; |
| 34 | 34 | import com.sap.sse.landscape.mongodb.MongoProcessInReplicaSet; |
| 35 | 35 | import com.sap.sse.landscape.mongodb.MongoReplicaSet; |
| 36 | 36 | import com.sap.sse.landscape.mongodb.impl.MongoProcessImpl; |
| 37 | -import com.sap.sse.landscape.rabbitmq.RabbitMQEndpoint; |
|
| 38 | 37 | import com.sap.sse.landscape.ssh.SSHKeyPair; |
| 39 | 38 | |
| 40 | 39 | import software.amazon.awssdk.auth.credentials.AwsBasicCredentials; |
| ... | ... | @@ -129,12 +128,6 @@ public interface AwsLandscape<ShardingKey> extends Landscape<ShardingKey> { |
| 129 | 128 | |
| 130 | 129 | String MONGO_REPLICA_SET_NAME_AND_PORT_SEPARATOR = ":"; |
| 131 | 130 | |
| 132 | - /** |
|
| 133 | - * Tag name used to identify instances on which a RabbitMQ installation is running. The tag value is currently interpreted to |
|
| 134 | - * be the port number (usually 5672) on which the RabbitMQ endpoint can be reached. |
|
| 135 | - */ |
|
| 136 | - String RABBITMQ_TAG_NAME = "RabbitMQEndpoint"; |
|
| 137 | - |
|
| 138 | 131 | String CENTRAL_REVERSE_PROXY_TAG_NAME = "CentralReverseProxy"; |
| 139 | 132 | |
| 140 | 133 | /** |
| ... | ... | @@ -677,13 +670,6 @@ public interface AwsLandscape<ShardingKey> extends Landscape<ShardingKey> { |
| 677 | 670 | |
| 678 | 671 | Iterable<MongoEndpoint> getMongoEndpoints(Region region); |
| 679 | 672 | |
| 680 | - /** |
|
| 681 | - * Gets a default RabbitMQ configuration for the {@code region} specified.<p> |
|
| 682 | - * |
|
| 683 | - * TODO For now, the method searches for accordingly-tagged instances and picks the first one it finds. We need to extend this to RabbitMQ replication. |
|
| 684 | - */ |
|
| 685 | - RabbitMQEndpoint getDefaultRabbitConfiguration(AwsRegion region); |
|
| 686 | - |
|
| 687 | 673 | Database getDatabase(Region region, String databaseName); |
| 688 | 674 | |
| 689 | 675 | /** |
java/com.sap.sse.landscape.aws/src/com/sap/sse/landscape/aws/impl/AwsLandscapeImpl.java
| ... | ... | @@ -35,6 +35,7 @@ import java.util.regex.Pattern; |
| 35 | 35 | import com.jcraft.jsch.JSch; |
| 36 | 36 | import com.jcraft.jsch.JSchException; |
| 37 | 37 | import com.jcraft.jsch.KeyPair; |
| 38 | +import com.sap.sailing.landscape.common.SharedLandscapeConstants; |
|
| 38 | 39 | import com.sap.sse.common.Duration; |
| 39 | 40 | import com.sap.sse.common.TimePoint; |
| 40 | 41 | import com.sap.sse.common.Util; |
| ... | ... | @@ -1463,26 +1464,32 @@ public class AwsLandscapeImpl<ShardingKey> implements AwsLandscape<ShardingKey> |
| 1463 | 1464 | } |
| 1464 | 1465 | |
| 1465 | 1466 | @Override |
| 1466 | - public RabbitMQEndpoint getDefaultRabbitConfiguration(AwsRegion region) { |
|
| 1467 | + public RabbitMQEndpoint getDefaultRabbitConfiguration(com.sap.sse.landscape.Region region) { |
|
| 1468 | + final RabbitMQEndpoint defaultRabbitMQInDefaultRegion = ()->SharedLandscapeConstants.RABBIT_IN_DEFAULT_REGION_HOSTNAME; // using default port RabbitMQEndpoint.DEFAULT_PORT |
|
| 1467 | 1469 | final RabbitMQEndpoint result; |
| 1468 | - final Iterable<AwsInstance<ShardingKey>> rabbitMQHostsInRegion = getRunningHostsWithTag(region, RABBITMQ_TAG_NAME, AwsInstanceImpl::new); |
|
| 1469 | - if (rabbitMQHostsInRegion.iterator().hasNext()) { |
|
| 1470 | - final AwsInstance<ShardingKey> anyRabbitMQHost = rabbitMQHostsInRegion.iterator().next(); |
|
| 1471 | - result = new RabbitMQEndpoint() { |
|
| 1472 | - @Override |
|
| 1473 | - public int getPort() { |
|
| 1474 | - return getTag(anyRabbitMQHost, RABBITMQ_TAG_NAME) |
|
| 1475 | - .map(t -> t.trim().isEmpty() ? RabbitMQEndpoint.DEFAULT_PORT : Integer.valueOf(t.trim())) |
|
| 1476 | - .orElse(RabbitMQEndpoint.DEFAULT_PORT); |
|
| 1477 | - } |
|
| 1478 | - |
|
| 1479 | - @Override |
|
| 1480 | - public String getNodeName() { |
|
| 1481 | - return anyRabbitMQHost.getPrivateAddress().getHostAddress(); |
|
| 1482 | - } |
|
| 1483 | - }; |
|
| 1470 | + if (region.getId().equals(Region.EU_WEST_1.id())) { |
|
| 1471 | + result = defaultRabbitMQInDefaultRegion; |
|
| 1484 | 1472 | } else { |
| 1485 | - result = null; |
|
| 1473 | + final Iterable<AwsInstance<ShardingKey>> rabbitMQHostsInRegion = getRunningHostsWithTag( |
|
| 1474 | + region, SharedLandscapeConstants.RABBITMQ_TAG_NAME, AwsInstanceImpl::new); |
|
| 1475 | + if (rabbitMQHostsInRegion.iterator().hasNext()) { |
|
| 1476 | + final AwsInstance<ShardingKey> anyRabbitMQHost = rabbitMQHostsInRegion.iterator().next(); |
|
| 1477 | + result = new RabbitMQEndpoint() { |
|
| 1478 | + @Override |
|
| 1479 | + public int getPort() { |
|
| 1480 | + return getTag(anyRabbitMQHost, SharedLandscapeConstants.RABBITMQ_TAG_NAME) |
|
| 1481 | + .map(t -> t.trim().isEmpty() ? RabbitMQEndpoint.DEFAULT_PORT : Integer.valueOf(t.trim())) |
|
| 1482 | + .orElse(RabbitMQEndpoint.DEFAULT_PORT); |
|
| 1483 | + } |
|
| 1484 | + |
|
| 1485 | + @Override |
|
| 1486 | + public String getNodeName() { |
|
| 1487 | + return anyRabbitMQHost.getPrivateAddress().getHostAddress(); |
|
| 1488 | + } |
|
| 1489 | + }; |
|
| 1490 | + } else { |
|
| 1491 | + result = defaultRabbitMQInDefaultRegion; // no instance with tag found; hope for VPC peering and use RabbitMQ hostname from default region |
|
| 1492 | + } |
|
| 1486 | 1493 | } |
| 1487 | 1494 | return result; |
| 1488 | 1495 | } |
| ... | ... | @@ -1493,17 +1500,6 @@ public class AwsLandscapeImpl<ShardingKey> implements AwsLandscape<ShardingKey> |
| 1493 | 1500 | } |
| 1494 | 1501 | |
| 1495 | 1502 | @Override |
| 1496 | - public RabbitMQEndpoint getMessagingConfigurationForDefaultCluster(com.sap.sse.landscape.Region region) { |
|
| 1497 | - final RabbitMQEndpoint result; |
|
| 1498 | - if (region.getId().equals(Region.EU_WEST_1.id())) { |
|
| 1499 | - result = ()->"rabbit.internal.sapsailing.com"; |
|
| 1500 | - } else { |
|
| 1501 | - result = null; |
|
| 1502 | - } |
|
| 1503 | - return result; |
|
| 1504 | - } |
|
| 1505 | - |
|
| 1506 | - @Override |
|
| 1507 | 1503 | public <MetricsT extends ApplicationProcessMetrics, ProcessT extends AwsApplicationProcess<ShardingKey, MetricsT, ProcessT>, |
| 1508 | 1504 | HostT extends ApplicationProcessHost<ShardingKey, MetricsT, ProcessT>> |
| 1509 | 1505 | Iterable<HostT> getApplicationProcessHostsByTag(com.sap.sse.landscape.Region region, String tagName, |
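The new `getDefaultRabbitConfiguration` logic above boils down to a three-way fallback. The following is a minimal, stand-alone sketch of that selection rule only — the class name `RabbitEndpointSketch`, the `defaultRabbitHost` helper, and the plain `List<String>` of tagged hosts are invented for illustration and are not part of the actual `AwsLandscapeImpl` API:

```java
import java.util.List;

// Hypothetical, simplified sketch of the region-based default RabbitMQ host
// selection: the default region resolves to the maintained DNS name; other
// regions use the first instance carrying the RabbitMQEndpoint tag, and fall
// back to the DNS name again if none is found (hoping for VPC peering).
public class RabbitEndpointSketch {
    static final String DEFAULT_REGION = "eu-west-1";
    static final String RABBIT_DNS = "rabbit.internal.sapsailing.com";

    /** Picks the endpoint host for a region, given the tagged hosts found there. */
    static String defaultRabbitHost(String regionId, List<String> taggedHostsInRegion) {
        if (DEFAULT_REGION.equals(regionId)) {
            return RABBIT_DNS; // DNS entry is maintained only in the default region
        }
        // elsewhere: first tagged instance wins, else fall back to the DNS name
        return taggedHostsInRegion.stream().findFirst().orElse(RABBIT_DNS);
    }

    public static void main(String[] args) {
        System.out.println(defaultRabbitHost("eu-west-1", List.of("172.31.0.5")));
        System.out.println(defaultRabbitHost("eu-west-2", List.of("172.31.7.9")));
        System.out.println(defaultRabbitHost("us-east-2", List.of()));
    }
}
```

This mirrors what the three new `ConnectivityTest` cases assert: DNS name in `eu-west-1`, a private `172.31.*` address in `eu-west-2`, and the DNS fallback in `us-east-2`.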
java/com.sap.sse.landscape.aws/src/com/sap/sse/landscape/aws/orchestration/AwsApplicationConfiguration.java
| ... | ... | @@ -9,6 +9,7 @@ import com.sap.sse.landscape.DefaultProcessConfigurationVariables; |
| 9 | 9 | import com.sap.sse.landscape.InboundReplicationConfiguration; |
| 10 | 10 | import com.sap.sse.landscape.OutboundReplicationConfiguration; |
| 11 | 11 | import com.sap.sse.landscape.ProcessConfigurationVariable; |
| 12 | +import com.sap.sse.landscape.Region; |
|
| 12 | 13 | import com.sap.sse.landscape.Release; |
| 13 | 14 | import com.sap.sse.landscape.UserDataProvider; |
| 14 | 15 | import com.sap.sse.landscape.application.ApplicationProcess; |
| ... | ... | @@ -43,7 +44,7 @@ implements UserDataProvider { |
| 43 | 44 | * {@link #getServerName() server name}.</li> |
| 44 | 45 | * <li>The {@link #setInboundReplicationConfiguration(InboundReplicationConfiguration) inbound replication} |
| 45 | 46 | * {@link InboundReplicationConfiguration#getInboundRabbitMQEndpoint() RabbitMQ endpoint} defaults to the region's |
| 46 | - * {@link AwsLandscape#getDefaultRabbitConfiguration(com.sap.sse.landscape.aws.impl.AwsRegion) default RabbitMQ |
|
| 47 | + * {@link AwsLandscape#getDefaultRabbitConfiguration(Region) default RabbitMQ |
|
| 47 | 48 | * configuration}. Note that this setting will take effect only if auto-replication is activated for one or more |
| 48 | 49 | * replicables (see {@link InboundReplicationConfiguration#getReplicableIds()}).</li> |
| 49 | 50 | * <li>The {@link #setOutboundReplicationConfiguration() outbound replication} |
java/com.sap.sse.landscape/src/com/sap/sse/landscape/Landscape.java
| ... | ... | @@ -49,10 +49,15 @@ public interface Landscape<ShardingKey> { |
| 49 | 49 | /** |
| 50 | 50 | * Obtains the default RabbitMQ configuration for the {@code region} specified. If nothing else is specified |
| 51 | 51 | * explicitly, application server replica sets launched in the {@code region} shall use this for their replication |
| 52 | - * message channels and exchanges. |
|
| 52 | + * message channels and exchanges.<p> |
|
| 53 | + * |
|
| 54 | + * For our default region, this will return a DNS name always pointing to the current private IP of |
|
| 55 | + * the instance running the default RabbitMQ service in the region. In other regions, the private IP |
|
| 56 | + * of the regional default RabbitMQ instance is discovered by scanning for running instances tagged |
|
| 57 | + * with {@link SharedLandscapeConstants#RABBITMQ_TAG_NAME}. |
|
| 53 | 58 | */ |
| 54 | - RabbitMQEndpoint getMessagingConfigurationForDefaultCluster(Region region); |
|
| 55 | - |
|
| 59 | + RabbitMQEndpoint getDefaultRabbitConfiguration(Region region); |
|
| 60 | + |
|
| 56 | 61 | /** |
| 57 | 62 | * Tells the regions supported. The underlying hyperscaler may have more, but we may not want to run in all. |
| 58 | 63 | */ |
java/com.sap.sse.landscape/src/com/sap/sse/landscape/mongodb/impl/MongoEndpointImpl.java
| ... | ... | @@ -59,7 +59,7 @@ public abstract class MongoEndpointImpl implements MongoEndpoint { |
| 59 | 59 | if (i>=BATCH_SIZE) { |
| 60 | 60 | targetCollection.insertMany(documentsToInsert); |
| 61 | 61 | i = 0; |
| 62 | - documentsToInsert = new ArrayList<>(BATCH_SIZE); |
|
| 62 | + documentsToInsert.clear(); |
|
| 63 | 63 | } |
| 64 | 64 | } |
| 65 | 65 | if (i>0) { |
java/com.sap.sse.security.test/META-INF/MANIFEST.MF
| ... | ... | @@ -20,7 +20,9 @@ Require-Bundle: org.junit;bundle-version="4.8.2", |
| 20 | 20 | com.sap.sailing.domain.common, |
| 21 | 21 | org.mongodb.bson;bundle-version="4.3.1", |
| 22 | 22 | org.mongodb.driver-core;bundle-version="4.3.1", |
| 23 | - org.mongodb.driver-sync;bundle-version="4.3.1" |
|
| 23 | + org.mongodb.driver-sync;bundle-version="4.3.1", |
|
| 24 | + net.bytebuddy.byte-buddy;bundle-version="1.12.18", |
|
| 25 | + net.bytebuddy.byte-buddy-agent;bundle-version="1.12.18" |
|
| 24 | 26 | Import-Package: com.sap.sse.security.shared, |
| 25 | 27 | com.sap.sse.security.ui.shared, |
| 26 | 28 | org.apache.shiro.session.mgt |
java/com.sap.sse.security.test/src/com/sap/sse/security/test/SecurityServiceAndHasPermissionsProviderTest.java
| ... | ... | @@ -36,7 +36,8 @@ public class SecurityServiceAndHasPermissionsProviderTest { |
| 36 | 36 | PersistenceFactory.INSTANCE.getDefaultMajorityMongoObjectFactory(), TEST_DEFAULT_TENANT); |
| 37 | 37 | userStore.ensureDefaultRolesExist(); |
| 38 | 38 | userStore.loadAndMigrateUsers(); |
| 39 | - accessControlStore = new AccessControlStoreImpl(userStore); |
|
| 39 | + accessControlStore = new AccessControlStoreImpl(PersistenceFactory.INSTANCE.getDefaultMajorityDomainObjectFactory(), |
|
| 40 | + PersistenceFactory.INSTANCE.getDefaultMajorityMongoObjectFactory(), userStore); |
|
| 40 | 41 | } |
| 41 | 42 | |
| 42 | 43 | @After |
java/target/env-default-rules.sh
| ... | ... | @@ -58,7 +58,13 @@ if [ -z "${MONGODB_PORT}" ]; then |
| 58 | 58 | MONGODB_PORT=27017 |
| 59 | 59 | fi |
| 60 | 60 | if [ -z "${MONGODB_HOST}" -a -z "${MONGODB_URI}" ]; then |
| 61 | - MONGODB_URI="mongodb://mongo0.internal.sapsailing.com,mongo1.internal.sapsailing.com/${MONGODB_NAME}?replicaSet=live&retryWrites=true&readPreference=nearest" |
|
| 61 | + if [ -n "$AUTO_REPLICATE" ]; then |
|
| 62 | + # An auto-replication replica by default assumes it has a local MongoDB replica set running on localhost, |
|
| 63 | + # called "replica" and running on the default port 27017: |
|
| 64 | + MONGODB_URI="mongodb://localhost/${MONGODB_NAME}?replicaSet=replica&retryWrites=true&readPreference=nearest" |
|
| 65 | + else |
|
| 66 | + MONGODB_URI="mongodb://mongo0.internal.sapsailing.com,mongo1.internal.sapsailing.com/${MONGODB_NAME}?replicaSet=live&retryWrites=true&readPreference=nearest" |
|
| 67 | + fi |
|
| 62 | 68 | fi |
| 63 | 69 | if [ -z "${EXPEDITION_PORT}" ]; then |
| 64 | 70 | EXPEDITION_PORT=2010 |
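The branch selection added above can be exercised in isolation. The following is a self-contained sketch of the same defaulting pattern, with a placeholder ``MONGODB_NAME`` and ``AUTO_REPLICATE`` forced on so the auto-replication branch is taken:

```shell
#!/bin/bash
# Sketch of the MONGODB_URI defaulting logic from env-default-rules.sh.
# MONGODB_NAME and AUTO_REPLICATE are placeholders here; the real script
# derives them from its environment.
MONGODB_NAME=example
MONGODB_HOST=""
MONGODB_URI=""
AUTO_REPLICATE=true   # simulate an auto-replication replica

if [ -z "${MONGODB_HOST}" -a -z "${MONGODB_URI}" ]; then
  if [ -n "$AUTO_REPLICATE" ]; then
    # auto-replication replicas assume a local replica set called "replica"
    MONGODB_URI="mongodb://localhost/${MONGODB_NAME}?replicaSet=replica&retryWrites=true&readPreference=nearest"
  else
    MONGODB_URI="mongodb://mongo0.internal.sapsailing.com,mongo1.internal.sapsailing.com/${MONGODB_NAME}?replicaSet=live&retryWrites=true&readPreference=nearest"
  fi
fi
echo "$MONGODB_URI"
```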
wiki/howto/downloading-and-archiving-tractrac-events.md
| ... | ... | @@ -57,7 +57,7 @@ For example, within the production landscape of sapsailing.com, you could try th |
| 57 | 57 | |
| 58 | 58 | ## Automation |
| 59 | 59 | |
| 60 | -For `wiki@sapsailing.com` the process of updating the `configuration/tractrac-json-urls` file from the ARCHIVE server is automated by means of two cron jobs as specified in `wiki@sapsailing.com:crontab`: |
|
| 60 | +For `wiki@sapsailing.com` the process of updating the `configuration/tractrac-json-urls` file from the ARCHIVE server is automated by means of two cron jobs as specified in `wiki@sapsailing.com:crontab` or at `$OUR_GIT_HOME/configuration/crontabs/users/crontab-wiki`: |
|
| 61 | 61 | |
| 62 | 62 | ``` |
| 63 | 63 | 10 12 * * * /home/wiki/gitwiki/configuration/update-tractrac-urls-to-archive.sh >/home/wiki/update-tractrac-urls-to-archive.out 2>/home/wiki/update-tractrac-urls-to-archive.err |
wiki/info/landscape/amazon-ec2-backup-strategy.md
| ... | ... | @@ -240,7 +240,7 @@ TARGET_DIR=/var/lib/mysql/backup |
| 240 | 240 | |
| 241 | 241 | # Configuration for MySQL |
| 242 | 242 | MYSQL_DATABASES="bugs mysql" |
| 243 | -MYSQLEXPORT_CMD="mysqldump -u root --password=sailaway" |
|
| 243 | +MYSQLEXPORT_CMD="mysqldump -u root --password=..." |
|
| 244 | 244 | |
| 245 | 245 | [...] |
| 246 | 246 |
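The ``MYSQL_DATABASES`` / ``MYSQLEXPORT_CMD`` variables above lend themselves to a simple per-database loop. This is a hedged, dry-run sketch: the target directory is a placeholder and the dump command is only echoed, so no MySQL server is required to run it.

```shell
#!/bin/bash
# Dry-run sketch of a per-database backup loop built from the
# configuration variables shown above; paths and password are placeholders.
TARGET_DIR=/tmp/mysql-backup-demo
MYSQL_DATABASES="bugs mysql"
MYSQLEXPORT_CMD="mysqldump -u root --password=..."

mkdir -p "$TARGET_DIR"
for db in $MYSQL_DATABASES; do
  # echo instead of executing, so the sketch runs without a database server
  echo "$MYSQLEXPORT_CMD $db > $TARGET_DIR/$db.sql"
done
```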
wiki/info/landscape/amazon-ec2.md
| ... | ... | @@ -187,14 +187,21 @@ A failover instance is kept ready to switch to in case the primary production ar |
| 187 | 187 | ### Important Amazon Machine Images (AMIs) |
| 188 | 188 | |
| 189 | 189 | In our default region ``eu-west-1`` there are four Amazon Machine Image (AMI) types that are relevant for the operation of the landscape. They all have a base name to which, separated by a space character, a version number consisting of a major and minor version, separated by a dot, is appended. Each of these AMIs has a tag ``image-type`` whose value reflects the type of the image. |
| 190 | -- SAP Sailing Analytics, ``image-type`` is ``sailing-analytics-server`` |
|
| 191 | -- MongoDB Live Replica Set NVMe, ``image-type`` is ``mongodb-server`` |
|
| 192 | -- Hudson Ubuntu Slave, ``image-type`` is ``hudson-slave`` |
|
| 193 | -- Webserver, ``image-type`` is ``webserver`` |
|
| 190 | +- SAP Sailing Analytics, ``image-type`` is ``sailing-analytics-server``, see [here](/wiki/info/landscape/creating-ec2-image-from-scratch) |
|
| 191 | +- MongoDB Live Replica Set NVMe, ``image-type`` is ``mongodb-server``, see [here](/wiki/info/landscape/creating-ec2-mongodb-image-from-scratch) |
|
| 192 | +- Hudson Debian/Ubuntu Slave, ``image-type`` is ``hudson-slave`` |
|
| 193 | +- Webserver, ``image-type`` is ``webserver``, see [here](/wiki/info/landscape/creating-ec2-image-for-webserver-from-scratch) |
|
| 194 | + |
|
| 195 | +There are furthermore instance types that we can configure automatically, based on a clean Amazon Linux 2 instance launched from the respective default Amazon image: |
|
| 196 | +- Hudson / dev.sapsailing.com server, see [here](/wiki/info/landscape/creating-ec2-image-for-hudson-from-scratch) |
|
| 197 | +- MySQL / MariaDB database server holding the data for our ``bugzilla.sapsailing.com`` bug/issue tracker, see [here](/wiki/info/landscape/creating-ec2-image-for-mysql-from-scratch) |
|
| 198 | +- RabbitMQ default instance used by all default sailing servers for replication, see [here](/wiki/info/landscape/creating-ec2-image-for-rabbitmq-from-scratch) |
|
| 199 | + |
|
| 200 | +We try to maintain setup scripts that help us with setting up those instance types from scratch. See the respective Wiki pages referenced from the lists above for more details. |
|
| 194 | 201 | |
| 195 | 202 | The SAP Sailing Analytics image is used to launch new instances, shared or dedicated, that host one or more Sailing Analytics application processes. The image contains an installation of the SAP JVM 8 under /opt/sapjvm_8, an Apache httpd service that is not currently used by default for reverse proxying / rewriting / logging activities, an initially empty directory ``/home/sailing/servers`` used to host default application process configurations, and an initialization script under ``/etc/init.d/sailing`` that handles the instance's initialization with a default application process from the EC2 instance's user data. Instructions for setting up such an image from scratch can be found [here](/wiki/info/landscape/creating-ec2-image-from-scratch). |
| 196 | 203 | |
| 197 | -The user data line ``image-upgrade`` will cause the image to ignore all application configuration data and only bring the new instance to an updated state. For this, the Git content under ``/home/sailing/code`` is brought to the latest master branch commit, a ``yum update`` is carried out to install all operating system package updates available, log directories and the ``/home/sailing/servers`` directory are cleared, and the ``root`` user's crontab is brought up to date from the Git ``configuration/crontab`` file. If the ``no-shutdown`` line is provided in the instance's user data, the instance will be left running. Otherwise, it will shut down which would be a good default for creating a new image. See also procedures that automate much of this upgrade process. |
|
| 204 | +The user data line ``image-upgrade`` will cause the image to ignore all application configuration data and only bring the new instance to an updated state. For this, the Git content under ``/home/sailing/code`` is brought to the latest master branch commit, a ``yum update`` is carried out to install all available operating system package updates, log directories and the ``/home/sailing/servers`` directory are cleared, and the ``root`` user's crontab is brought up to date by running `crontab /root/crontab`, under the assumption that ``/root/crontab`` is a symbolic link to the appropriately named crontab in `$OUR_GIT_HOME/configuration/crontabs` (as we have different crontabs for different instances/users). If the ``no-shutdown`` line is provided in the instance's user data, the instance will be left running. Otherwise, it will shut down, which is a good default for creating a new image. See also the procedures that automate much of this upgrade process. |
|
| 198 | 205 | |
| 199 | 206 | The MongoDB Live Replica Set NVMe image is used to scale out or upgrade existing MongoDB replica sets. It also reads the EC2 instance's user data during start-up and can be parameterized by the following variables: ``REPLICA_SET_NAME``, ``REPLICA_SET_PRIMARY``, ``REPLICA_SET_PRIORITY``, and ``REPLICA_SET_VOTES``. An example configuration could look like this: |
| 200 | 207 | ``` |
| ... | ... | @@ -352,11 +359,11 @@ With this, the three REST API end points `/landscape/api/landscape/get_time_poin |
| 352 | 359 | Two new scripts and a crontab file are provided under the configuration/ folder: |
| 353 | 360 | - `update_authorized_keys_for_landscape_managers_if_changed` |
| 354 | 361 | - `update_authorized_keys_for_landscape_managers` |
| 355 | -- `crontab` |
|
| 362 | +- `crontab` (found within configuration for historical reasons, but we should be using those in configuration/crontabs) |
|
| 356 | 363 | |
| 357 | 364 | The first makes a call to `/landscape/api/landscape/get_time_point_of_last_change_in_ssh_keys_of_aws_landscape_managers` (currently coded to `https://security-service.sapsailing.com` in the crontab file). If no previous time stamp for the last change exists under `/var/run/last_change_aws_landscape_managers_ssh_keys` or the time stamp received in the response is newer, the `update_authorized_keys_for_landscape_managers` script is invoked using the bearer token provided in `/root/ssh-key-reader.token` as argument, granting the script READ access to the user list and their SSH key pairs. That script first asks for `/security/api/restsecurity/users_with_permission?permission=LANDSCAPE:MANAGE:AWS` and then uses `/landscape/api/landscape/get_ssh_keys_owned_by_user?username[]=..`. to obtain the actual SSH public key information for the landscape managers. The original `/root/.ssh/authorized_keys` file is copied to `/root/.ssh/authorized_keys.org` once and then used to insert the single public SSH key inserted by AWS, then appending all public keys received for the landscape-managing users. |
| 358 | 365 | |
| 359 | -The `crontab` file which is used during image-upgrade (see `configuration/imageupdate.sh`) has a randomized sleeping period within a one minute duration after which it calls the `update_authorized_keys_for_landscape_managers_if_changed` script which transitively invokes `update_authorized_keys_for_landscape_managers` in case of changes possible. |
|
| 366 | +The `crontab` file which is used during image-upgrade (see `configuration/imageupgrade.sh`) sleeps for a randomized period within a one-minute window, after which it calls the `update_authorized_keys_for_landscape_managers_if_changed` script, which in turn invokes `update_authorized_keys_for_landscape_managers` in case changes are detected. |
|
| 360 | 367 | |
| 361 | 368 | ## Legacy Documentation for Manual Operations |
| 362 | 369 |
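The change detection described above boils down to comparing a reported last-change timestamp with a locally cached one. A self-contained sketch of that pattern follows, with a literal timestamp standing in for the REST endpoint response and an ``echo`` standing in for the actual update script:

```shell
#!/bin/bash
# Sketch of the "update only if changed" pattern: compare a reported
# last-change timestamp against a cached one and act only on changes.
# Local files and echoes stand in for the REST endpoint and update script.
STATE=/tmp/last_change_demo
REPORTED="2024-01-02T03:04:05Z"   # would come from the get_time_point... endpoint

CACHED=$(cat "$STATE" 2>/dev/null)
if [ "$REPORTED" != "$CACHED" ]; then
  echo "change detected, would invoke update_authorized_keys_for_landscape_managers"
  echo "$REPORTED" > "$STATE"
else
  echo "no change"
fi
```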
wiki/info/landscape/creating-ec2-image-for-hudson-from-scratch.md
| ... | ... | @@ -1,112 +1,55 @@ |
| 1 | 1 | # Setting up an image for the hudson.sapsailing.com server |
| 2 | 2 | |
| 3 | -This is an add-on to the regular EC2 image set-up described [here](https://wiki.sapsailing.com/wiki/info/landscape/creating-ec2-image-from-scratch). An Android SDK needs to be installed. |
|
| 4 | - |
|
| 5 | - |
|
| 6 | -* Create a ``hudson`` user/group |
|
| 7 | -* Make sure ``/home/hudson`` is a separate mount; probably just mount the existing volume of a previous installation |
|
| 8 | -* Install an Android SDK under ``/home/hudson/android-sdk-linux``; if you simply re-used an old ``/home/hudson`` mount this should already be in place. |
|
| 9 | -* Install Eclipse to ``/home/hudson/eclipse`` to allow sharing it in case a large AWS instance is needed, e.g., for heap dump analysis. |
|
| 10 | -* export ``/home/hudson/android-sdk-linux`` and ``/home/hudson/eclipse`` as follows in ``/etc/exports``: |
|
| 3 | +Like when setting up a regular sailing application server instance, start with a fresh Amazon Linux 2 image and create an instance with a 16GB root volume. Use the "Sailing Analytics Server" security group and something like a ``t3.small`` instance type for creating the image. Then, invoke the script |
|
| 11 | 4 | ``` |
| 12 | -/home/hudson/android-sdk-linux 172.31.0.0/16(rw,nohide,no_root_squash) |
|
| 13 | -/home/hudson/eclipse 172.31.0.0/16(rw,nohide,no_root_squash) |
|
| 5 | + configuration/hudson_instance_setup/setup-hudson-server.sh {external-IP-of-new-instance} |
|
| 14 | 6 | ``` |
| 7 | +This will first run the regular sailing server set-up which allows the instance to run the ``dev.sapsailing.com`` Sailing Analytics instance later. Then, the script will continue to obtain the Hudson WAR file from ``https://static.sapsailing.com/hudson.war.patched-with-mail-1.6.2`` and deploy it to ``/usr/lib/hudson``, obtain and install the default system-wide Hudson configuration, get adjusted dev server secrets from ``ssh://root@sapsailing.com/root/dev-secrets`` as well as ``mail.properties``, and install a ``hudson.service`` unit under ``/etc/systemd/system``. A ``hudson`` user is created, and its ``/home/hudson`` home directory is emptied so it can act as a mount point. The latest version of the SAP Sailing Analytics is installed to ``/home/sailing/servers/DEV``. |
|
| 15 | 8 | |
| 16 | -* Ensure you have EC2 / EBS snapshot backups for the volumes by tagging them as follows: ``WeeklySailingInfrastructureBackup=Yes`` for ``/`` and ``/home/hudson``. |
|
| 9 | +The ``/home/hudson/android-sdk-linux`` folder that is later expected to be mounted into the ``/home/hudson`` mount point is exported through NFS by appending a corresponding entry to ``/etc/exports``. The script will also allow the ``hudson`` user to run the ``/usr/local/bin/launchhudsonslave`` script with ``sudo``. In order to elastically scale our build / CI infrastructure, we use AWS to provide Hudson build slaves on demand. The Hudson Master (https://hudson.sapsailing.com) has a script obtained from our git at ``./configuration/launchhudsonslave`` which takes an Amazon Machine Image (AMI), launches it in our default region (eu-west-1) and connects to it. The AWS credentials are stored in the ``root`` account on ``hudson.sapsailing.com``, and the ``hudson`` user is granted access to the script by means of an ``/etc/sudoers.d`` entry. |
|
| 17 | 10 | |
| 18 | -``/home/hudson/repo`` has the Hudson build repository. The Hudson WAR file is under ``/usr/lib/hudson/hudson.war``. ``/etc/init.d/hudson``, linked to from ``/etc/rc0.d/K29hudson``, ``/etc/rc1.d/K29hudson``, ``/etc/rc2.d/K29hudson``, ``/etc/rc3.d/S81hudson``, ``/etc/rc4.d/K29hudson``, ``/etc/rc5.d/S81hudson``, and ``/etc/rc6.d/K29hudson``, takes care of spinning up Hudson during instance re-boot. Hudson systemwide configuration is under ``/etc/sysconfig/hudson``: |
|
| 11 | +When the script has finished, proceed as follows: |
|
| 12 | + |
|
| 13 | +* make a volume available that holds the ``/home/hudson`` content, including the ``android-sdk-linux`` folder. This can happen, e.g., by creating a snapshot of the existing ``/home/hudson`` volume of the running "Build/Dev/Test" server, or you may search the weekly backup snapshots for the latest version to start with. Make sure to create the volume in the same availability zone (AZ) as your instance is running in. |
|
| 14 | +Depending on how full the volume already is, consider re-sizing it by creating the new volume larger than the snapshot. |
|
| 15 | +* Attach it. |
|
| 16 | +* In the instance, as ``root`` user, call ``dmesg``. This will show the new volume that just got attached, as well as its partition names. |
|
| 17 | +* If you made the volume bigger than the snapshot, use ``resize2fs`` to grow the filesystem accordingly. |
|
| 18 | +* Enter an ``/etc/fstab`` entry for the volume, e.g., like this: |
|
| 19 | +``` |
|
| 20 | +UUID={the-UUID-of-the-partition-to-mount} /home/hudson ext4 defaults,noatime,commit=30 0 0 |
|
| 21 | +``` |
|
| 22 | +To find out ``{the-UUID-of-the-partition-to-mount}``, use ``blkid``. |
|
| 23 | +* Mount with ``mount -a`` |
|
| 24 | +* Adjust ownerships in case the new instance's ``hudson`` user/group IDs have changed compared to the old instance: |
|
| 19 | 25 | ``` |
| 20 | -## Path: Development/Hudson |
|
| 21 | -## Description: Configuration for the Hudson continuous build server |
|
| 22 | -## Type: string |
|
| 23 | -## Default: "/var/lib/hudson" |
|
| 24 | -## ServiceRestart: hudson |
|
| 25 | -# |
|
| 26 | -# Directory where Hudson store its configuration and working |
|
| 27 | -# files (checkouts, build reports, artifacts, ...). |
|
| 28 | -# |
|
| 29 | -HUDSON_HOME="/home/hudson/repo" |
|
| 26 | + sudo chown -R hudson:hudson /home/hudson |
|
| 27 | +``` |
|
| 28 | +* If you'd like to keep in sync with the latest version of a still running live Hudson environment, keep copying its ``/home/hudson`` contents with ``rsync -av root@dev.internal.sapsailing.com:/home/hudson/ /home/hudson/`` until you switch |
|
| 29 | +* Ensure you have EC2 / EBS snapshot backups for the volumes by tagging them as follows: ``WeeklySailingInfrastructureBackup=Yes`` for ``/`` and ``/home/hudson``. |
|
| 30 | 30 | |
| 31 | -## Type: string |
|
| 32 | -## Default: "" |
|
| 33 | -## ServiceRestart: hudson |
|
| 34 | -# |
|
| 35 | -# Java executable to run Hudson |
|
| 36 | -# When left empty, we'll try to find the suitable Java. |
|
| 37 | -# |
|
| 31 | +## In-Place Start-Up |
|
| 38 | 32 | |
| 39 | -HUDSON_JAVA_CMD="/opt/sapjvm_8/bin/java" |
|
| 40 | -# The following line choses JavaSE-1.7 |
|
| 41 | -#HUDSON_JAVA_CMD="/opt/jdk1.7.0_02/bin/java" |
|
| 42 | -# The following line choses JavaSE-1.8 |
|
| 43 | -#HUDSON_JAVA_CMD="/opt/jdk1.8.0_20/bin/java" |
|
| 33 | +You can then either use the instance right away by starting the two essential services, as follows: |
|
| 34 | +``` |
|
| 35 | + sudo systemctl start hudson.service |
|
| 36 | + sudo systemctl start sailing.service |
|
| 37 | +``` |
|
| 38 | +## Creating an AMI to Launch |
|
| 44 | 39 | |
| 45 | -## Type: string |
|
| 46 | -## Default: "hudson" |
|
| 47 | -## ServiceRestart: hudson |
|
| 48 | -# |
|
| 49 | -# Unix user account that runs the Hudson daemon |
|
| 50 | -# Be careful when you change this, as you need to update |
|
| 51 | -# permissions of $HUDSON_HOME and /var/log/hudson. |
|
| 52 | -# |
|
| 53 | -HUDSON_USER="hudson" |
|
| 40 | +Alternatively, stop the instance, either from within using ``shutdown -h now`` or from the AWS Console, then create an image (AMI) that you can use to create a new instance at any time. Keep in mind, though, that keeping such an image around incurs cost for the relatively large ``/home/hudson`` volume's snapshot. As the volume is part of the weekly backup strategy anyhow, and due to the fair degree of automation in producing a new version of this instance type, this may not be necessary. |
|
| 54 | 41 | |
| 55 | -## Type: string |
|
| 56 | -## Default: "-Djava.awt.headless=true" |
|
| 57 | -## ServiceRestart: hudson |
|
| 58 | -# |
|
| 59 | -# Options to pass to java when running Hudson. |
|
| 60 | -# |
|
| 61 | -HUDSON_JAVA_OPTIONS="-Djava.awt.headless=true -Xmx2G -Dhudson.slaves.ChannelPinger.pingInterval=60 -Dhudson.slaves.ChannelPinger.pingIntervalSeconds=60 -Dhudson.slaves.ChannelPinger.pingTimeoutSeconds=60" |
|
| 42 | +The AMI should be labeled accordingly, and so should the snapshots. To allow for automated image upgrades, ensure the AMI and snapshots follow a common naming and versioning pattern. Name your AMI something like "Build/Dev/Test x.y" with x and y being major and minor version numbers, such as 2.0. Your snapshots should then be named "Build/Dev/Test x.y ({volume-name})" where {volume-name} can be any human-readable identifier that lets you recognize the volume. Examples for {volume-name} may be "Root" or "Home" or "HudsonHome" or similar. |
|
| 62 | 43 | |
| 63 | -## Type: integer(0:65535) |
|
| 64 | -## Default: 8080 |
|
| 65 | -## ServiceRestart: hudson |
|
| 66 | -# |
|
| 67 | -# Port Hudson is listening on. |
|
| 68 | -# |
|
| 69 | -HUDSON_PORT="8080" |
|
| 44 | +Then terminate the instance that was used only for image creation and launch a new one from the AMI. Start the services as described under "In-Place Start-Up", place the instance into the "Hudson" target group, and adjust the elastic IP to point to the new instance. |
|
| 70 | 45 | |
| 71 | -## Type: integer(1:9) |
|
| 72 | -## Default: 5 |
|
| 73 | -## ServiceRestart: hudson |
|
| 74 | -# |
|
| 75 | -# Debug level for logs -- the higher the value, the more verbose. |
|
| 76 | -# 5 is INFO. |
|
| 77 | -# |
|
| 78 | -HUDSON_DEBUG_LEVEL="5" |
|
| 46 | +## Steps to Perform to Activate the New Server |
|
| 79 | 47 | |
| 80 | -## Type: yesno |
|
| 81 | -## Default: no |
|
| 82 | -## ServiceRestart: hudson |
|
| 83 | -# |
|
| 84 | -# Whether to enable access logging or not. |
|
| 85 | -# |
|
| 86 | -HUDSON_ENABLE_ACCESS_LOG="no" |
|
| 48 | +* Replace the old "Build/Test/Dev" instance in the "Hudson" and "S-DEV" and "S-DEV-m" target groups with the new one |
|
| 49 | +* Switch the elastic IP ``52.17.217.83`` to the new instance, too |
|
| 50 | +* Change the ``dev.internal.sapsailing.com`` Route53 DNS entry from the old instance's internal IP address to the new instance's internal IP address |
|
| 87 | 51 | |
| 88 | -## Type: integer |
|
| 89 | -## Default: 100 |
|
| 90 | -## ServiceRestart: hudson |
|
| 91 | -# |
|
| 92 | -# Maximum number of HTTP worker threads. |
|
| 93 | -# |
|
| 94 | -HUDSON_HANDLER_MAX="100" |
|
| 95 | 52 | |
| 96 | -## Type: integer |
|
| 97 | -## Default: 20 |
|
| 98 | -## ServiceRestart: hudson |
|
| 99 | -# |
|
| 100 | -# Maximum number of idle HTTP worker threads. |
|
| 101 | -# |
|
| 102 | -HUDSON_HANDLER_IDLE="20" |
|
| 53 | +## Testing the New and Terminating the Old Instance |
|
| 103 | 54 | |
| 104 | -## Type: string |
|
| 105 | -## Default: "" |
|
| 106 | -## ServiceRestart: hudson |
|
| 107 | -# |
|
| 108 | -# Pass arbitrary arguments to Hudson. |
|
| 109 | -# Full option list: java -jar hudson.war --help |
|
| 110 | -# |
|
| 111 | -HUDSON_ARGS="" |
|
| 112 | -``` |
|
| 55 | +After testing the new environment successfully for a while you may choose to terminate the old instance. Make sure the large ``/home/hudson`` volume is deleted with it, and if not, delete it manually. |
|
| ... | ... | \ No newline at end of file |
wiki/info/landscape/creating-ec2-image-for-mysql-from-scratch.md
| ... | ... | @@ -0,0 +1,17 @@ |
| 1 | +# Setting up an Instance for the MySQL / MariaDB Bugzilla Database |
|
| 2 | + |
|
| 3 | +Our Bugzilla system at [bugzilla.sapsailing.com](https://bugzilla.sapsailing.com) uses a relational database to store all the bugs and issues. This used to be a MySQL database and has been migrated to MariaDB at the beginning of 2024. |
|
| 4 | + |
|
| 5 | +We don't provide a dedicated AMI for this because we don't need to scale this out or replicate this by any means. Instead, we provide a script to set this up, starting from a clean Amazon Linux 2023 instance. |
|
| 6 | + |
|
| 7 | +Launch a new instance, based on the latest Amazon Linux 2023 AMI maintained by AWS, and configure the root volume size to be, e.g., 16GB. As of this writing, the total size consumed by the database contents on disk is less than 1GB. Tag the volume with a tag key ``WeeklySailingInfrastructureBackup`` and value ``Yes`` to include it in the weekly backup schedule. |
|
| 8 | + |
|
| 9 | +When the instance has finished booting up, run the following script, passing the external IP address of the instance as mandatory argument: |
|
| 10 | +``` |
|
| 11 | + configuration/mysql_instance_setup/setup-mysql-server.sh a.b.c.d |
|
| 12 | +``` |
|
| 13 | +where ``a.b.c.d`` stands for the external IP address you have to specify. Before the IP address you may optionally specify the passwords for the ``root`` and the ``bugs`` user of the existing database to be cloned to the new instance. Provide the ``root`` password with the ``-r`` option, the ``bugs`` password with the ``-b`` option. Passwords not provided this way will be prompted for. |
|
| 14 | + |
|
| 15 | +The script will then transfer itself to the instance and execute itself there, forwarding the passwords required. On the instance, it will then establish the periodic management of the login user's ``authorized_keys`` file for all landscape managers' keys, install the packages required (in particular mariadb105-server and cronie), then run a backup on the existing ``mysql.internal.sapsailing.com`` database using the ``root`` user and its password. The ``mysqldump`` client for this is run on ``sapsailing.com``, and the result is stored in the ``/tmp`` folder on the new instance where it is then imported. The import is a bit tricky in case this is a migration from MySQL to MariaDB where the users table has become a view. Therefore, a few additional ``DROP TABLE`` and ``DROP VIEW`` commands are issued before importing the data. When the import is complete, user privileges are flushed so they match with what has been imported. The DB is then re-started in "safe" mode so that the user passwords can be adjusted, in case this was a migration from MySQL to MariaDB. Finally, the DB is restarted properly with the new user passwords. |
|
| 16 | + |
|
| 17 | +The instance then is generally available for testing. Run a few ``mysql`` commands, check out the ``bugs`` database and its contents, especially those of the ``bugs.bugs`` table. If this all looks good, switch the DNS record for ``mysql.internal.sapsailing.com`` to the private IP of the new instance. This will be used by the Bugzilla installation running on our central reverse proxy. When this is done you can consider stopping and ultimately terminating the old DB server. |
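As a rough illustration of the pre-import drops mentioned above: the exact statements issued by the setup script may differ, and the object names here are only illustrative, based on the ``user`` table having become a view in MariaDB. The statements are collected into a file instead of being sent to a database.

```shell
#!/bin/bash
# Illustrative only: the real setup script issues its own set of drops
# before importing the mysqldump output. No database is contacted here.
SQL=/tmp/pre-import-demo.sql
cat > "$SQL" <<'EOF'
-- drop objects that would collide with the imported dump
DROP VIEW IF EXISTS mysql.user;
DROP TABLE IF EXISTS mysql.user;
EOF
echo "would run: mysql < $SQL && mysql -e 'FLUSH PRIVILEGES'"
```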
wiki/info/landscape/creating-ec2-image-for-rabbitmq-from-scratch.md
| ... | ... | @@ -0,0 +1,19 @@ |
| 1 | +# Setting up a RabbitMQ Server Instance |
|
| 2 | + |
|
| 3 | +RabbitMQ is hard to install on recent versions of Amazon Linux (e.g., 2 or 2023). Therefore, we start with the latest default Debian 12 image. |
|
| 4 | + |
|
| 5 | +Configure the root volume to be at least 8GB. The empty installation takes about 1.6GB, so you will have enough room for messages queued persistently. |
|
| 6 | + |
|
| 7 | +When the instance has finished booting and SSH access is possible, invoke the following script, providing the instance's external IP address as only parameter: |
|
| 8 | +``` |
|
| 9 | + configuration/rabbitmq_instance_setup/setup-rabbitmq-server.sh a.b.c.d |
|
| 10 | +``` |
|
| 11 | +where ``a.b.c.d`` is the external IP address of your fresh instance. |
|
| 12 | + |
|
| 13 | +The script will ensure the login user's ``authorized_keys`` are updated periodically to contain those of the landscape managers, then will install the necessary packages, particularly ``rabbitmq-server`` and, to get real log files under ``/var/log``, the ``syslog-ng`` package. It then enables the ``rabbitmq_management`` plugin, so access to the management UI becomes possible through port ``15672``. The configuration file under ``/etc/rabbitmq/rabbitmq.conf`` is patched such that guest logins are possible also from non-localhost addresses, by adding the ``loopback_users = none`` directive to the config file. It finally (re-)starts the RabbitMQ server to let these config changes take effect. |
|
| 14 | + |
|
| 15 | +Your RabbitMQ server should then be ready to handle requests. Test this by invoking the management UI, e.g., through an SSH port forward to port ``15672``. When this looks good, pick a suitable time to change the DNS record for ``rabbit.internal.sapsailing.com``, because stopping the old RabbitMQ will cause a short interruption for all application processes currently connected to it. Those client applications will temporarily lose their connections, but our replication component will re-establish them, resolving the DNS name again based on the DNS entry's TTL. |
|
| 16 | + |
|
| 17 | +Then associate the elastic IP ``54.76.64.42`` as the external IP of the new instance. This will let ``rabbit.sapsailing.com`` point to the public IP of the instance. |
|
| 18 | + |
|
| 19 | +Add a tag with key ``RabbitMQEndpoint`` and value ``5672``, specifying the port on which the RabbitMQ server listens. This tag can be used by our landscape automation procedures to discover the RabbitMQ default instance in the region. |
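The ``loopback_users`` patch described above can be applied idempotently. This sketch operates on a temporary file standing in for the real ``/etc/rabbitmq/rabbitmq.conf``:

```shell
#!/bin/bash
# Idempotently add "loopback_users = none" so guest logins are possible
# from non-localhost addresses; a temp file stands in for the real
# /etc/rabbitmq/rabbitmq.conf.
CONF=/tmp/rabbitmq-demo.conf
touch "$CONF"
if ! grep -q '^loopback_users' "$CONF"; then
  echo 'loopback_users = none' >> "$CONF"
fi
grep loopback_users "$CONF"
```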
wiki/info/landscape/creating-ec2-image-for-webserver-from-scratch.md
| ... | ... | @@ -123,13 +123,13 @@ lrwxrwxrwx 1 root root 75 Oct 20 09:00 notify-operators -> /home/wiki/git |
| 123 | 123 | lrwxrwxrwx 1 root root 78 Feb 8 2021 update_authorized_keys_for_landscape_managers -> /home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers |
| 124 | 124 | lrwxrwxrwx 1 root root 89 Feb 8 2021 update_authorized_keys_for_landscape_managers_if_changed -> /home/wiki/gitwiki/configuration/update_authorized_keys_for_landscape_managers_if_changed |
| 125 | 125 | ``` |
| 126 | -* set up ``crontab`` for ``root`` user (remove the symbolic link to ``/home/sailing/code/configuration/crontab`` if that had been created earlier) |
|
| 126 | +* set up ``crontab`` for ``root`` user (remove the symbolic link to ``/home/sailing/code/configuration/crontab`` if that had been created earlier). Note that ``configuration/crontabs`` contains a selection of crontab files for different use cases, including the ``environments/crontab-reverse-proxy-instance``, which should be pointed to by a symbolic link in /root. |
|
| 127 | 127 | ``` |
| 128 | 128 | 0 10 1 * * export PATH=/bin:/usr/bin:/usr/local/bin; mail-events-on-my >/dev/null 2>/dev/null |
| 129 | 129 | * * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com /root 2>&1 >>/var/log/sailing.err |
| 130 | 130 | 0 7 2 * * export PATH=/bin:/usr/bin:/usr/local/bin; docker exec -it registry-registry-1 registry garbage-collect /etc/docker/registry/config.yml |
| 131 | 131 | ``` |
| 132 | -* set up crontab for user `wiki` as `*/10 * * * * /home/wiki/syncgit` and make sure the script is in place |
|
| 132 | +* set up crontab for user `wiki` as a symbolic link to `$OUR_GIT_HOME/configuration/crontabs/users/crontab-wiki`. |
|
| 133 | 133 | * ensure that ``/var/log/old/cache/docker`` makes it across from any previous installation to the new one; it contains the docker registry contents. See in particular ``/var/log/old/cache/docker/registry/docker/registry/v2/repositories``. |
| 134 | 134 | * [install docker registry](https://wiki.sapsailing.com/wiki/info/landscape/docker-registry) so that the following containers are up and running: |
| 135 | 135 | ``` |
| ... | ... | @@ -190,9 +190,9 @@ write and quit, to install the cronjob. |
| 190 | 190 | * * * * * /home/wiki/gitwiki/configuration/switchoverArchive.sh "/etc/httpd/conf.d/000-macros.conf" 2 9 |
| 191 | 191 | ``` |
| 192 | 192 | |
| 193 | -If you want to quickly run this script, consider installing it in /usr/local/bin, via `ln -s TARGET_PATH LINK_NAME`, in that directory. |
|
| 193 | +If you want to quickly run this script, consider installing it in /usr/local/bin, via `ln -s TARGET_PATH LINK_NAME`. |
|
| 194 | 194 | |
| 195 | -## Basic setup for reverse proxy instance |
|
| 195 | +## Basic setup for disposable reverse proxy instance |
|
| 196 | 196 | |
| 197 | 197 | From a fresh Amazon Linux 2023 instance (HVM) install perl, httpd, mod_proxy_html, tmux, nfs-utils, git, whois and jq. Then type `amazon-linux-extras install epel`, which adds the epel repo so you can then install apachetop. |
| 198 | 198 | Then you need to remove the automatic EC2 code which disables root access, reconfigure ``sshd_config``, set up the keys update script, and initialise the crontab. Store a bearer token in the home dir. |
| ... | ... | @@ -211,7 +211,7 @@ Postmail is useful. The script for this procedure is in configuration and is tit |
| 211 | 211 | |
| 212 | 212 | Setup the logrotate target. |
| 213 | 213 | |
| 214 | -Setup the fstab (not automated). |
|
| 215 | 214 | Update amazon cli (because pricing list requires it) |
| 216 | 215 | |
| 217 | 216 | |
| 217 | + |
wiki/info/landscape/creating-ec2-image-from-scratch.md
| ... | ... | @@ -1,178 +1,7 @@ |
| 1 | 1 | # Creating an Amazon AWS EC2 Image from Scratch |
| 2 | 2 | |
| 3 | -I started out with a clean "Amazon Linux AMI 2015.03 (HVM), SSD Volume Type - ami-a10897d6" image from Amazon and added the existing Swap and Home snapshots as new volumes. The root/system volume I left as is, to start with. This requires having access to a user key that can be selected when launching the image. |
|
| 3 | +I started out with a clean "Amazon Linux 2" image from Amazon with a single 100GB root volume and the "Sailing Analytics App" security group. The root/system volume I left as is, to start with. This requires having access to a user key that can be selected when launching the image. Then I ran the script ``configuration/sailing_server_setup/setup-sailing-server.sh`` with the instance's external IP address as an argument. This installs everything needed, so in order to understand what happens during this process, review the script. In short, it installs a few packages using the `yum` package manager, downloads and installs the SAP JVM 8 in its latest version into ``/opt/sapjvm_8``, installs a few systemd service units that check for and then activate NVMe swap space where available and interpret the EC2 user data after boot. The MongoDB environment that is being installed is configured to be a replica set named ``replica``, but initialization is left to the ``sailing.service``. See the ``configuration/sailing`` script for the post-boot configuration, installed as a service (see ``configuration/sailing_server_setup/sailing.service``). |
|
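The NVMe-swap handling mentioned above follows the pattern of the repository's ``mountnvmeswap`` script: probe the device's filesystem type and only turn it into swap if it carries no filesystem signature. The sketch below mirrors that decision with a hypothetical device name and the ``blkid`` probe stubbed out (the real `blkid -p $PARTITION -s TYPE -o value` needs root and the actual device):

```shell
#!/bin/bash
# Sketch of the NVMe swap decision; device name is hypothetical and the
# blkid probe is stubbed instead of executed.
PARTITION=/dev/nvme1n1
FSTYPE=""   # empty: blkid found no filesystem signature on the device
if [ -z "$FSTYPE" ]; then
    action="mkswap+swapon"   # an unformatted device would become swap
else
    action="none"            # an existing filesystem is left alone
fi
echo "$PARTITION: $action"
```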
| 4 | 4 | |
| 5 | -Add a ``sailing`` user / group. Under that user account, clone ``ssh://trac@sapsailing.com/home/trac/git`` to ``/home/sailing/code``. |
|
| 5 | +When the script finishes, you can stop the instance, create an AMI, and tag both the AMI and the root volume's snapshot, e.g., as "SAP Sailing Analytics 2.0" and "SAP Sailing Analytics 2.0 (Root)", respectively. |
|
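The AMI creation and tagging can also be scripted with the AWS CLI. The sketch below only *prints* the commands rather than executing them, so they can be reviewed first; the instance ID and the `ami-EXAMPLE`/`snap-EXAMPLE` resource IDs are hypothetical placeholders, not values from this setup:

```shell
#!/bin/bash
# Dry-run sketch: echo the AWS CLI calls instead of executing them.
# INSTANCE_ID and the resource IDs below are placeholders.
INSTANCE_ID=i-0123456789abcdef0
NAME="SAP Sailing Analytics 2.0"
echo aws ec2 create-image --instance-id "$INSTANCE_ID" --name "$NAME"
# once create-image has returned an ImageId and its snapshot exists:
echo aws ec2 create-tags --resources ami-EXAMPLE --tags "Key=Name,Value=$NAME"
echo aws ec2 create-tags --resources snap-EXAMPLE --tags "Key=Name,Value=$NAME (Root)"
```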
| 6 | 6 | |
| 7 | -Under ``/usr/local/bin`` install the following: |
|
| 8 | -``` |
|
| 9 | -lrwxrwxrwx 1 root root 56 Oct 20 09:20 cp_root_mail_properties -> /home/sailing/code/configuration/cp_root_mail_properties |
|
| 10 | --rwxr-xr-x 1 root root 24707072 Jan 30 2022 docker-compose |
|
| 11 | -lrwxrwxrwx 1 root root 71 May 10 2021 getLatestImageOfType.sh -> /home/sailing/code/configuration/aws-automation/getLatestImageOfType.sh |
|
| 12 | -lrwxrwxrwx 1 root root 50 Mar 23 2021 launchhudsonslave -> /home/sailing/code/configuration/launchhudsonslave |
|
| 13 | -lrwxrwxrwx 1 root root 57 Mar 23 2021 launchhudsonslave-java11 -> /home/sailing/code/configuration/launchhudsonslave-java11 |
|
| 14 | -lrwxrwxrwx 1 root root 69 Jun 1 2019 mountnvmeswap -> /home/sailing/code/configuration/archive_instance_setup/mountnvmeswap |
|
| 15 | -lrwxrwxrwx 1 root root 78 Jan 27 2021 update_authorized_keys_for_landscape_managers -> /home/sailing/code/configuration/update_authorized_keys_for_landscape_managers |
|
| 16 | -lrwxrwxrwx 1 root root 89 Feb 4 2021 update_authorized_keys_for_landscape_managers_if_changed -> /home/sailing/code/configuration/update_authorized_keys_for_landscape_managers_if_changed |
|
| 17 | -``` |
|
| 18 | - |
|
| 19 | -Enable the EPEL repository by issuing `yum-config-manager --enable epel/x86_64` or `sudo amazon-linux-extras install epel -y`. |
|
| 20 | - |
|
| 21 | -I then did a `yum update` and added the following packages: |
|
| 22 | - |
|
| 23 | - - httpd |
|
| 24 | - - mod_proxy_html |
|
| 25 | - - tmux |
|
| 26 | - - nfs-utils |
|
| 27 | - - chrony |
|
| 28 | - - libstdc++48.i686 (for Android builds) |
|
| 29 | - - glibc.i686 (for Android builds) |
|
| 30 | - - libzip.i686 (for Android builds) |
|
| 31 | - - telnet |
|
| 32 | - - apachetop |
|
| 33 | - - goaccess |
|
| 34 | - - postfix (for sending e-mail, e.g., to invite competitors and buoy pingers) |
|
| 35 | - - tigervnc-server |
|
| 36 | - - WindowMaker |
|
| 37 | - - xterm |
|
| 38 | - - sendmail-cf |
|
| 39 | - |
|
| 40 | -I copied the JDK7/JDK8 installations, particularly the current sapjvm_8 VM, from an existing SL instance to /opt (using scp). |
|
| 41 | - |
|
| 42 | -In order to be able to connect to AWS DocumentDB instances, the corresponding certificate must be installed into the JVM's certificate store (2 separate commands): |
|
| 43 | - |
|
| 44 | -``` |
|
| 45 | - wget -O /tmp/rds.pem https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem |
|
| 46 | - /opt/sapjvm_8/bin/keytool -importcert -alias AWSRDS -file /tmp/rds.pem -keystore /opt/sapjvm_8/jre/lib/security/cacerts -noprompt -storepass changeit |
|
| 47 | -``` |
|
| 48 | - |
|
| 49 | -The latest MongoDB shell is installed as follows: |
|
| 50 | - |
|
| 51 | -``` |
|
| 52 | -cat << EOF >/etc/yum.repos.d/mongodb-org.4.4.repo |
|
| 53 | -[mongodb-org-4.4] |
|
| 54 | -name=MongoDB Repository |
|
| 55 | -baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.4/x86_64/ |
|
| 56 | -gpgcheck=1 |
|
| 57 | -enabled=1 |
|
| 58 | -gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc |
|
| 59 | -EOF |
|
| 60 | - |
|
| 61 | -yum update |
|
| 62 | -yum install mongodb-org-shell |
|
| 63 | -``` |
|
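As a quick sanity check that the heredoc produced a well-formed yum repo definition, the resulting file can be grepped for the keys yum relies on. The sketch below writes the same content to a temporary file (not to `/etc/yum.repos.d`) so the check can run anywhere:

```shell
#!/bin/bash
# Write the repo definition to a temp file and verify the essential keys.
repo=$(mktemp)
cat << 'EOF' > "$repo"
[mongodb-org-4.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc
EOF
sane=no
if grep -q '^\[mongodb-org-4.4\]$' "$repo" && grep -q '^gpgcheck=1$' "$repo"; then
    sane=yes
fi
echo "repo definition sane: $sane"
rm -f "$repo"
```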
| 64 | - |
|
| 65 | -Then I created a mount point /home/sailing and copied the following lines from the /etc/fstab file of an existing SL instance: |
|
| 66 | - |
|
| 67 | -``` |
|
| 68 | -UUID=a1d96e53-233f-4e44-b865-c78b862df3b8 /home/sailing ext4 defaults,noatime,commit=30 0 0 |
|
| 69 | -UUID=7d7e68a3-27a1-49ef-908f-a6ebadcc55bb none swap sw 0 0 |
|
| 70 | - |
|
| 71 | -# Mount the Android SDK from the Build/Dev box; use a timeout of 10s (100ds) |
|
| 72 | -172.31.28.17:/home/hudson/android-sdk-linux /opt/android-sdk-linux nfs tcp,intr,timeo=100,retry=0 |
|
| 73 | -172.31.18.15:/var/log/old /var/log/old nfs tcp,intr,timeo=100,retry=0 |
|
| 74 | -``` |
|
| 75 | - |
|
| 76 | -This will mount the swap space partition as well as the /home/sailing partition, /var/log/old and the Android SDK stuff required for local builds. |
|
| 77 | -Do the following steps (until it says otherwise) without logging out in between them: |
|
| 78 | -In `/etc/ssh/sshd_config` I commented the line |
|
| 79 | - |
|
| 80 | -``` |
|
| 81 | -# Only allow root to run commands over ssh, no shell |
|
| 82 | -#PermitRootLogin forced-commands-only |
|
| 83 | -``` |
|
| 84 | - |
|
| 85 | -and added the lines |
|
| 86 | - |
|
| 87 | -``` |
|
| 88 | -PermitRootLogin without-password |
|
| 89 | -PermitRootLogin Yes |
|
| 90 | -MaxStartups 100 |
|
| 91 | -``` |
|
| 92 | - |
|
| 93 | - |
|
| 94 | -to allow root shell login, and allow for several concurrent SSH connections (up to 100) starting up around the |
|
| 95 | -same time. |
|
| 96 | - |
|
| 97 | -Furthermore, on recent AMIs, you may have to edit `/root/.ssh/authorized_keys` and remove the command restrictions that precede the actual keys; otherwise you might lock yourself out (root login does not work yet, while the new permissions block ec2-user access). If you do get locked out, you can use EC2 Instance Connect: select the instance in the EC2 console and click "Connect". |
|
| 98 | - |
|
| 99 | -You may now _logout_. |
|
| 100 | - |
|
| 101 | -I linked /etc/init.d/sailing to /home/sailing/code/configuration/sailing and added the following links to it: |
|
| 102 | - |
|
| 103 | -``` |
|
| 104 | -rc0.d/K10sailing |
|
| 105 | -rc1.d/K10sailing |
|
| 106 | -rc2.d/S95sailing |
|
| 107 | -rc3.d/S95sailing |
|
| 108 | -rc4.d/S95sailing |
|
| 109 | -rc5.d/S95sailing |
|
| 110 | -rc6.d/K10sailing |
|
| 111 | -``` |
|
| 112 | - |
|
| 113 | -Linked /etc/profile.d/sailing.sh to /home/sailing/code/configuration/sailing.sh. As this contains a PATH entry for /opt/amazon and the new image has the Amazon scripts at /opt/aws, I also created a symbolic link from /opt/amazon to /opt/aws so that this same path configuration finds those scripts under both the old and the new images. |
|
| 114 | - |
|
| 115 | -Added the lines |
|
| 116 | - |
|
| 117 | -``` |
|
| 118 | -# number of connections the firewall can track |
|
| 119 | -net.ipv4.ip_conntrack_max = 131072 |
|
| 120 | -``` |
|
| 121 | - |
|
| 122 | -to `/etc/sysctl.conf` in order to increase the number of connections that are possible concurrently. |
|
| 123 | - |
|
| 124 | -Added the following two lines to `/etc/security/limits.conf`: |
|
| 125 | - |
|
| 126 | -``` |
|
| 127 | -* hard nproc unlimited |
|
| 128 | -* hard nofile 128000 |
|
| 129 | -``` |
|
| 130 | - |
|
| 131 | -This increases the maximum number of open files allowed from the default 1024 to a more appropriate 128k. |
|
| 132 | - |
|
| 133 | -Copied the httpd configuration files `/etc/httpd/conf/httpd.conf`, `/etc/httpd/conf.d/000-macros.conf` and the skeletal `/etc/httpd/conf.d/001-events.conf` from an existing server. Make sure the following lines are in httpd.conf: |
|
| 134 | - |
|
| 135 | -<pre> |
|
| 136 | - SetEnvIf X-Forwarded-For "^([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*).*$" original_client_ip=$1 |
|
| 137 | - LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined |
|
| 138 | - LogFormat "%v %{original_client_ip}e %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" first_forwarded_for_ip |
|
| 139 | - CustomLog logs/access_log combined env=!original_client_ip |
|
| 140 | - CustomLog logs/access_log first_forwarded_for_ip env=original_client_ip |
|
| 141 | -</pre> |
|
| 142 | - |
|
| 143 | -These lines ensure that the original client IPs are logged even when the Apache server runs behind a reverse proxy or an ELB. See also [the section on log file analysis](/wiki/howto/development/log-file-analysis#log-file-analysis_log-file-types_apache-log-files). |
|
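The `SetEnvIf` regex captures only the left-most (i.e., original client) address from a possibly comma-separated `X-Forwarded-For` value. The shell sketch below imitates that capture with `sed` on a made-up sample header value:

```shell
#!/bin/bash
# Imitate SetEnvIf's capture ^([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*).*$
# on a sample X-Forwarded-For value; the left-most entry is the client.
xff="203.0.113.7, 10.0.0.5, 172.31.1.9"
client=$(printf '%s' "$xff" | sed -E 's/^([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*).*$/\1/')
echo "$client"
```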
| 144 | - |
|
| 145 | -Copied /etc/logrotate.conf from an existing SL instance so that `/var/log/logrotate-target` is used to rotate logs to. |
|
| 146 | - |
|
| 147 | -Instead of having the `ANDROID_HOME` environment variable be set in `/etc/profile` as in the old instances, I moved this statement to the `sailing.sh` script in git at `configuration/sailing.sh` and linked to by `/etc/profile.d/sailing.sh`. For old instances this will set the variable redundantly, as they also have it set by a manually adjusted `/etc/profile`, but this shouldn't hurt. |
|
| 148 | - |
|
| 149 | -Had to fiddle a little with the JDK being used. The default installation has an OpenJDK installed, and the AWS tools depend on it. Therefore, it cannot just be removed. As a result, it's important that `env.sh` has the correct `JAVA_HOME` set (/opt/jdk1.8.0_45, in this case). Otherwise, the OSGi environment won't properly start up. |
|
| 150 | - |
|
| 151 | -For the ``root`` user, create the symbolic link from ``/root/crontab`` to ``/home/sailing/code/configuration/crontab`` and run ``crontab crontab``. This adds the following crontab entry, which is responsible for updating the SSH keys of the users with permission for landscape management in the ``/root/.ssh/authorized_keys`` file. |
|
| 152 | -``` |
|
| 153 | -* * * * * export PATH=/bin:/usr/bin:/usr/local/bin; sleep $(( $RANDOM * 60 / 32768 )); update_authorized_keys_for_landscape_managers_if_changed $( cat /root/ssh-key-reader.token ) https://security-service.sapsailing.com /root 2>&1 >>/var/log/sailing.err |
|
| 154 | -``` |
|
| 155 | -Make sure a valid bearer token is installed in ``/root/ssh-key-reader.token``. |
|
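The `sleep $(( $RANDOM * 60 / 32768 ))` part of the cron line deserves a note: bash's `$RANDOM` is uniform over 0–32767, so the integer expression yields a jitter of 0–59 seconds, spreading the key-update requests of many instances across the minute. A quick check with fixed values in place of `$RANDOM`:

```shell
#!/bin/bash
# Reproduce the jitter arithmetic with fixed values instead of $RANDOM
# (uniform over 0..32767) to confirm the 0..59 second range.
jitter=$(( 12345 * 60 / 32768 ))   # a mid-range sample value
max=$(( 32767 * 60 / 32768 ))      # the largest possible jitter
echo "sample jitter: ${jitter}s, maximum: ${max}s"
```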
| 156 | - |
|
| 157 | -To ensure that chronyd is started during the boot sequence, I issued the command |
|
| 158 | - |
|
| 159 | -``` |
|
| 160 | -chkconfig chrony on |
|
| 161 | -``` |
|
| 162 | - |
|
| 163 | -which creates the necessary entries in the rc*.d directories. |
|
| 164 | - |
|
| 165 | -Update the file `/etc/postfix/main.cf` in order to set the server's sending hostname to `sapsailing.com` as follows: |
|
| 166 | -``` |
|
| 167 | - myhostname = sapsailing.com |
|
| 168 | -``` |
|
| 169 | - |
|
| 170 | -Adjust the /etc/sysconfig/vncservers settings to something like: |
|
| 171 | - |
|
| 172 | -``` |
|
| 173 | -VNCSERVERS="2:sailing" |
|
| 174 | -VNCSERVERARGS[2]="-geometry 1600x900" |
|
| 175 | -``` |
|
| 176 | - |
|
| 177 | -## Mail Relaying |
|
| 178 | -For setting up mail relaying towards the central postfix server, have a look [here](https://wiki.sapsailing.com/wiki/info/landscape/mail-relaying) |
|
| ... | ... | \ No newline at end of file |
| 0 | +Compared to earlier versions of this image type, no mail infrastructure and no httpd reverse proxy are configured. No NFS mounts are performed, and the resulting instances will not have everything required to *build* the solution; in particular, there is no NFS mount of the Android SDK. |
|
| ... | ... | \ No newline at end of file |
wiki/info/landscape/docker-registry.md
| ... | ... | @@ -104,7 +104,7 @@ This process is automated by adding the line |
| 104 | 104 | 0 7 2 * * export PATH=/bin:/usr/bin:/usr/local/bin; docker exec -it registry-registry-1 registry garbage-collect /etc/docker/registry/config.yml |
| 105 | 105 | ``` |
| 106 | 106 | |
| 107 | -to /root/crontab and running ``crontab crontab`` as the ``root`` user. See also ``crontab -l`` for whether this has already been set up. |
|
| 107 | +to /root/crontab and running ``crontab crontab`` as the ``root`` user. Use ``crontab -l`` to check whether this has already been set up. This line can also be found in the `/configuration/crontabs/environments/crontab-application-server` file. |
|
| 108 | 108 | |
| 109 | 109 | If you want to delete an entire repository, e.g., because you pushed images under an incorrect repository tag, try this: |
| 110 | 110 | ``` |