@@ -1,23 +1,23 @@
# Packer Salt Master FIPS Notes

-check for FIPS
-cat /proc/sys/crypto/fips_enabled
+Check for FIPS
+`cat /proc/sys/crypto/fips_enabled`
1
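
A value of `1` means the kernel is running in FIPS mode. If this check needs to live inside a provisioning or smoke-test script, a minimal guard might look like the sketch below (not part of the current build scripts):

```
#!/usr/bin/env bash
# Abort if the kernel is not running in FIPS mode.
if [[ "$(cat /proc/sys/crypto/fips_enabled 2>/dev/null)" != "1" ]]; then
    echo "FIPS mode is NOT enabled on $(hostname)" >&2
    exit 1
fi
echo "FIPS mode is enabled"
```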

-Latest in test: MSOC_RedHat_Master_201909301534
-Latest in prod: MSOC_RedHat_Master_201907012051
+ * Latest in test: `MSOC_RedHat_Master_201909301534`
+ * Latest in prod: `MSOC_RedHat_Master_201907012051`
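
To confirm which `MSOC_RedHat_Master_*` AMI is newest in a given account, something like the following should work (a sketch; assumes the AMIs are owned by the account the profile points at):

```
AWS_PROFILE=mdr-test aws ec2 describe-images \
  --owners self \
  --filters "Name=name,Values=MSOC_RedHat_Master_*" \
  --query "sort_by(Images, &CreationDate)[-1].[Name,ImageId]" \
  --output text
```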

move this

-terraform/02-msoc_vpc/conf/provision_salt_master.sh
+`terraform/02-msoc_vpc/conf/provision_salt_master.sh`

to here

-packer/rhel7_hardened_saltmaster_ami.json
+`packer/rhel7_hardened_saltmaster_ami.json`
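
Once the provisioning script is referenced from the Packer template instead of the Terraform config, the template can be checked and built from the repo root. A minimal sketch, assuming the stock `packer` CLI and the `mdr-test` profile used elsewhere in these notes:

```
# Catch template errors before launching a builder instance
packer validate packer/rhel7_hardened_saltmaster_ami.json

# Build the hardened salt-master AMI in the test account
AWS_PROFILE=mdr-test packer build packer/rhel7_hardened_saltmaster_ami.json
```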

+`AWS_PROFILE=mdr-test aws secretsmanager get-secret-value --secret-id saltmaster/ssh_key --query SecretString --output text`
-AWS_PROFILE=mdr-test aws secretsmanager get-secret-value --secret-id saltmaster/ssh_key --query SecretString --output text
-
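
If the secret string is the raw private key, it can be written straight to a key file and used for SSH. A sketch only; the key file path and target host are placeholders:

```
# Pull the key into a file with permissions ssh will accept
AWS_PROFILE=mdr-test aws secretsmanager get-secret-value \
  --secret-id saltmaster/ssh_key \
  --query SecretString --output text > ~/.ssh/saltmaster_build_key
chmod 600 ~/.ssh/saltmaster_build_key

# Placeholder host: substitute the builder or salt-master address
ssh -i ~/.ssh/saltmaster_build_key centos@<instance-address>
```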

+```
Build error
==> master: + sudo firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
master: success

@@ -28,39 +28,39 @@ Build error
==> master: /home/centos/script_7740.sh: line 56: unexpected EOF while looking for matching `"'
==> master: Provisioning step had errors: Running the cleanup provisioner, if present...
==> master: Terminating the source AWS instance...
-
+```
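
The "unexpected EOF while looking for matching quote" failure usually means an unbalanced quote in the uploaded provisioning script. A syntax-only pass catches this without running a full Packer build (a sketch, pointed at the provisioning script referenced above):

```
# Parse the script without executing it; an unbalanced quote
# reproduces the same "unexpected EOF" error seen in the build log
bash -n terraform/02-msoc_vpc/conf/provision_salt_master.sh
```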

test instance
packer_5e700a93-aa62-0731-0405-1488fc6aa885
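
To locate that builder instance by its Name tag in the test account (a sketch using the tag value noted above):

```
AWS_PROFILE=mdr-test aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=packer_5e700a93-aa62-0731-0405-1488fc6aa885" \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" \
  --output text
```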

+## PROD Steps
+
+1. Document the salt keys currently accepted to ensure they all come back (see the `salt-key` sketch after this list).
+2. Power off salt-master
+3. Create snapshot of salt-master EBS
+4. Check on TF plan
+5. Terminate salt-master
+6. Use TF to re-create salt-master
+7. Log into salt-master via bastion + msoc_build key
+8. Wait for cloud-init scripts to finish running
+9. Wait for state.highstate to finish running (a solid 15 minutes)
+10. Verify cloud-init scripts completed successfully (check `/var/lib/cloud/instance/scripts/part-002`)
+11. Ensure `vault.conf` is not misconfigured and breaking pillar rendering
+12. If needed, run the salt_master state like this:
-PROD Steps
-Document the salt keys currently accepted to ensure they all come back.
-poweroff salt-master
-create snapshot of salt-master EBS
-check on TF plan
-terminate salt-master
-use TF to re-create salt-master
-log into salt-master via bastion + msoc_build key
-wait for cloud-init scripts to finish running
-wait for state.highstate to finish running (like solid 15 minutes)
-verify cloud-init scripts completed successfully (check on stuff) /var/lib/cloud/instance/scripts/part-002
-Ensure vault.conf is not foobar and messing up pillars
-if needed run salt_master state like this salt-call state.sls salt_master
+```
+salt-call state.sls salt_master
salt salt* pillar.item my-pillar
salt-call state.sls os_modifications.ssh_motd
salt-call state.sls os_modifications.ssh_banner
salt-call state.sls sensu_agent
-
-clean up SFT and remove old salt-master
-
-restart local minions via SSM/SSH
-pop nodes should reconnect to elastic IP of salt master ( no DNS issue)
-
-Run with SSM
-systemctl restart salt-minion
+```
+13. Clean up SFT and remove old salt-master
+14. Restart local minions via SSM/SSH
+15. Pop nodes should reconnect to the salt master's elastic IP (no DNS issue)
+16. Run `systemctl restart salt-minion` via SSM (see the `send-command` sketch below)
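
For step 1, the accepted keys can be captured before the rebuild and compared afterwards. A sketch; the snapshot file is arbitrary and needs to be copied somewhere that survives the rebuild:

```
# On the old salt-master, before poweroff: record accepted minion keys
salt-key -L > /tmp/salt-keys-before.txt

# On the rebuilt salt-master (with the saved file copied over):
salt-key -L | diff /tmp/salt-keys-before.txt -
```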
"missing" minions
|