
Merge branch 'master' of github.xdr.accenturefederalcyber.com:mdr-engineering/infrastructure-notes

Brad Poulton 4 years ago
parent
commit
6fce2809ef

+ 6 - 4
DNSSEC Notes.md

@@ -4,11 +4,13 @@
 ## unbound server
 2020-08-05
 
-Unbound is installed on the 2 resolver servers.  
-gc-prod-resolver-govcloud-2
+Unbound is installed on the 2 resolver servers.
+```  
+gc-prod-resolver-govcloud-2               
 gc-prod-resolver-govcloud
+```                 
 
-If DNS resolution stops working, restart the unbound service.
+If DNS resolution stops working, restart the unbound service.                 
 `systemctl restart unbound`
 
 ### Troubleshooting
@@ -37,7 +39,7 @@ AWS resolvers can't play any part whatsoever in DNSSEC. They just break it.
 
 So unbound servers need external DNS.
 
-/etc/unbound/conf.d/xdr.conf
+`/etc/unbound/conf.d/xdr.conf`
 ```
 server:
       private-domain: "pvt.xdr.accenturefederalcyber.com."

+ 26 - 27
GitHub Server Notes.md

@@ -3,14 +3,14 @@
 `GitHub Enterprise Server` is an APPLIANCE. No salt minion, No sft. 
 To SSH in you must have your public key manually added. 
 
-Host github
-  Port 122
-  User admin
-  HostName 10.80.101.78
+Host github     
+  Port 122      
+  User admin      
+  HostName 10.80.101.78     
   
 # Adding New Users to GitHub Teams
 
-OKTA does NOT manage the permissions on the GitHub server. To give a user access to a new team, like mdr-engineering, log into the github server and access this URL: [Login](https://github.xdr.accenturefederalcyber.com/orgs/mdr-engineering/teams/onboarding/members) . Find the new user by clicking on the "Add a member" button. 
+OKTA does NOT manage the permissions on the GitHub server. To give a user access to a new team, like `mdr-engineering`, log into the GitHub server and access this URL: [Login](https://github.xdr.accenturefederalcyber.com/orgs/mdr-engineering/teams/onboarding/members). Find the new user by clicking on the "Add a member" button. 
 
 # Updating 
 ```
@@ -21,10 +21,9 @@ ghe-upgrade /var/lib/ghe-updates/github-enterprise-2.17.22.hpkg
 Upgrading major version
 ```
 ghe-upgrade
-
 fdisk -l
 ```
-two partitions are installed. when you run an upgrade the VM will install the upgrade to the other partiion. After the upgrade it will switch the primary boot partitions. This leaves the previous version available for roll back. 
+Two partitions are installed. When you run an `upgrade` the VM will install the upgrade to the other partition. After the upgrade it will switch the primary boot partitions. This leaves the previous version available for roll back. 
 
 
 Hit ghe- (TAB) to view all ghe commands. GitHub [Command-line utilities](https://docs.github.com/en/enterprise/2.17/admin/installation/command-line-utilities)
@@ -32,7 +31,7 @@ Hit ghe- (TAB) to view all ghe commands. GitHub [Command-line utilities](https:/
 
 # Installing new license
 
-Should be able to do just via the UI.  https://github.mdr.defpoint.com:8443/setup/upgrade.
+Should be able to do this via the [Web UI](https://github.xdr.accenturefederalcyber.com:8443/setup/upgrade).
 But there's a gotcha with disabling the DSA key (for a FEDRAMP POAM).  Your services
 may not restart after updating the license.
 
@@ -56,7 +55,7 @@ sudo mv /data/user/common/ssh_host_dsa_key* /data/user/user-tmp/
 sudo systemctl restart babeld
 ```
 
-I'll open a case with github too. 
+I'll open a case with GitHub too. 
 
 # GitHub-Backup
 
@@ -65,7 +64,7 @@ The `ghe-backup` servers are instances running `Docker`.
 Docker is installed via the `docker` salt state.
 
 Most backup configuration is managed by the salt `github.backup` state:
-* `/usr/local/github-backup-utils` contains a copy of the github repository https://github.com/github/backup-utils
+* `/usr/local/github-backup-utils` contains a copy of the [github repository](https://github.com/github/backup-utils)
 * Build of the docker image. Manual command is: `docker build --build-arg=http_proxy=$HTTP_PROXY --build-arg=https_proxy=$HTTPS_PROXY -t github/backup-utils:v3.0.0 .`. You can run this if you get an error when applying the state.
 * A script is run via a cronjob in /etc/cron.d/ghe-backup, which calls the script /root/github-backup.sh. This script calls docker to run the backup.
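The cron entry itself is not reproduced in these notes; a sketch of its likely shape (the paths are from the state described above, the schedule and log redirection are assumptions):
```
# /etc/cron.d/ghe-backup -- illustrative only; the actual schedule is set by the salt state
0 2 * * * root /root/github-backup.sh >> /var/log/ghe-backup.log 2>&1
```
Note that system crontabs in `/etc/cron.d/` take a user field (`root` here) between the schedule and the command.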
 
@@ -83,27 +82,27 @@ and accept the key.
 
 Restoring should be similar to the command called by /root/github-backup.sh, except with a 'ghe-restore' command.
 
-# Migration Steps to govcloud:
+# Migration Steps to GovCloud:
 
-0) Create Okta App Manually
+1) Create Okta App Manually
 1) Stand everything up.
 2) Run highstate 2x (This can t
-  * May have to pkg.upgrade and/or reboot
-3) Copy /root/ghe-backup.sh to /root/ghe-backup-old.sh, and update hostname to legacy hostname
-4) Run ssh command (above) to get key into known hosts file
-5) Run the ghe-backup-old.sh script
-6) Copy ghe-backup.sh to ghe-restore.sh
-7) Edit ghe-restore.sh, change log file name and ghe-backup to ghe-restore
-8) Run ghe-restore.
-9) Log onto instance on port 8443
+   * May have to `pkg.upgrade` and/or reboot
+3) Copy `/root/ghe-backup.sh` to `/root/ghe-backup-old.sh`, and update `hostname` to legacy hostname
+4) Run `ssh` command (above) to get key into known hosts file
+5) Run the `ghe-backup-old.sh` script
+6) Copy `ghe-backup.sh` to `ghe-restore.sh`
+7) Edit `ghe-restore.sh`, change log file name and `ghe-backup` to `ghe-restore`
+8) Run `ghe-restore`.
+9) Log onto instance on port `8443`
 10) Let it do its thing, then go to settings:
-  * Update hostname to github.xdr.accenturefederalcyber.com
-  * Fix authentication with info from okta and step 0
-    * both the url and the http:// address need to be updated from the metadata
-  * Enable "Allow X-Forwarded-For"
-  * Keep "Enable Support for Proxy" enabled
-  * Fix proxy configuration
-  * Fix mailserver
+   * Update hostname to `github.xdr.accenturefederalcyber.com`
+   * Fix authentication with info from Okta and step 1
+     * Both the URL and the `http://` address need to be updated from the metadata
+   * Enable `Allow X-Forwarded-For`
+   * Keep `Enable Support for Proxy` enabled
+   * Fix proxy configuration
+   * Fix mailserver
 11) Restore crontab to original
 12) Disable old app in okta
 13) Highstate salt

+ 42 - 40
OpenVPN Notes.md

@@ -1,55 +1,54 @@
 #  OpenVPN Notes
-To admin openvpn, SSH into the openvpn server and use the admin user that is located in Vault. 
+To admin OpenVPN, SSH into the OpenVPN server and use the admin user that is located in Vault. 
 
-the admin username is openvpn
+The admin username is `openvpn`
 
 `systemctl restart openvpnas`
 
-Helpful...
-https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/
+Helpful... [OpenVPN - Managing settings for the web services from the command line](https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/)
 
-There is a strict dependency that openvpn be started after firewalld.
+There is a strict dependency that OpenVPN be started after `firewalld`.
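That ordering could be pinned with a systemd drop-in; a sketch (the unit name `openvpnas` matches the restart command above, but the drop-in path and directives are assumptions, not necessarily how the dependency is configured here):
```
# /etc/systemd/system/openvpnas.service.d/after-firewalld.conf (hypothetical drop-in)
[Unit]
After=firewalld.service
Wants=firewalld.service
```
`After=` only orders startup; `Wants=` additionally pulls `firewalld` in without making a failure of it fatal to OpenVPN.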
 
 
 ## How to Reset ldap.read 
 
-ldap.read@defpoint.com is the okta user that openvpn uses to auth to okta. the ldap.read account's password expires after 60 days. To see when the password will expire, go to Reports -> Okta Password Health. Don't open with EXCEL! Add 60 days to the date in the last column.  
+`ldap.read@defpoint.com` is the Okta user that OpenVPN uses to auth to Okta. The `ldap.read` account's password expires after 60 days. To see when the password will expire, go to [Reports -> Okta Password Health](https://mdr-multipass-admin.okta.com/reports). Don't open with EXCEL! Add 60 days to the date in the last column.  
 
-0. Be on prod VPN.
-1. Log into OKTA in an incognito window using the ldap.read username and the current password from Vault (engineering/root). Brad's phone is currently setup with the Push notification for the account. The MFA is required for the account. To change the password without Brad, remove MFA with your account in OKTA and set it up on your own phone. 
-2. Once the password has been updated, update vault in this location, engineering/root with a key of ldap.read@defpoint.com. You will have to create a new version of engineering/root to save the password. 
-3. Store the new password and the creds for openvpn and drop off the VPN. Log into the openVPN web GUI (https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for ldap.read. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the test environment ( https://openvpn.xdrtest.accenturefederalcyber.com/admin/ ) 
+1. Be on prod VPN.
+1. Log into Okta in an Incognito window using the `ldap.read` username and the current password from Vault (`engineering/root`). Brad's phone is currently set up with the Push notification for the account. MFA is required for the account. To change the password without Brad, remove MFA with your account in Okta and set it up on your own phone. 
+2. Once the password has been updated, update Vault at `engineering/root` with the key `ldap.read@defpoint.com`. You will have to create a new version of `engineering/root` to save the password. 
+3. Store the new password and the creds for openvpn and drop off the VPN. Log into the [OpenVPN web GUI](https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for `ldap.read`. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the [Test Environment](https://openvpn.xdrtest.accenturefederalcyber.com/admin/). 
 4. Verify that you are able to login to the VPN. 
 5. Set reminder in your calendar to reset the password in less than 60 days. 
 
 
 ------------
 when okta push is slow, get the 6 digits from your okta app
-and put into viscosity your password as  password,123456
+and put into Viscosity your password as  `password,123456`
 clearly your password should have no commas in it
 
 
 ### LDAP config
 
-Primary server: mdr-multipass.ldap.okta.com
-Bind Anon? NO
-Use creds? YES
+Primary server: [MDR Multipass Okta](https://mdr-multipass.ldap.okta.com)      
+Bind Anon? NO       
+Use creds? YES      
 
 
-BIND DN:
-uid=ldap.read@defpoint.com, dc=mdr-multipass, dc=okta, dc=com
+BIND DN:                    
+`uid=ldap.read@defpoint.com, dc=mdr-multipass, dc=okta, dc=com`       
 
-BASE DN for Users
-ou=users, dc=mdr-multipass, dc=okta, dc=com
+BASE DN for Users       
+`ou=users, dc=mdr-multipass, dc=okta, dc=com`    
 
-Username Attribute
-uid
+Username Attribute      
+`uid`    
 
 
 ## OpenVPN License
 
-PROD -> See Salt state.
-TEST -> YOLO via web interface. This means i did not take the time to reconfigure the Salt states to handle a prod and test license. 
+PROD -> See Salt state.     
+TEST -> YOLO via web interface. This means I did not take the time to reconfigure the Salt states to handle a prod and test license. 
 
 
 ## CLI
@@ -58,11 +57,13 @@ OpenVPN can also be configured via CLI.
 
 The `confdba` tool is used to view the configurations DB.
 
-Show all configurations
-`/usr/local/openvpn_as/scripts/confdba -s`
+```
+#Show all configurations
+/usr/local/openvpn_as/scripts/confdba -s
 
-Show all configurations in the User database
-`/usr/local/openvpn_as/scripts/confdba -us`
+#Show all configurations in the User database
+/usr/local/openvpn_as/scripts/confdba -us
+```
 
 The `sacli` tool is used to interact with the OpenVPN API.
 
@@ -70,33 +71,34 @@ The `sacli` tool is used to interact with the OpenVPN API.
 
 View Configurations
 If configuration doesn't show up it is set to the default.
+```
+/usr/local/openvpn_as/scripts/sacli ConfigQuery
+/usr/local/openvpn_as/scripts/sacli UserPropGet
 
-`/usr/local/openvpn_as/scripts/sacli ConfigQuery`
-`/usr/local/openvpn_as/scripts/sacli UserPropGet`
-
-`/usr/local/openvpn_as/scripts/sacli ConfigQuery --pfilt=vpn.server.tls_version_min`
-
+/usr/local/openvpn_as/scripts/sacli ConfigQuery --pfilt=vpn.server.tls_version_min
+```
 ## Timeouts
 
-https://openvpn.net/vpn-server-resources/openvpn-tunnel-session-management-options/
+[OpenVPN Tunnel Session Management Options](https://openvpn.net/vpn-server-resources/openvpn-tunnel-session-management-options/)
 
-Fedramp SC-10
+Fedramp SC-10 [FedRAMP Security Controls Baseline](https://www.fedramp.gov/documents-templates/)
 
-#RIGHT:
+#RIGHT:             
 The Access Server can push the OpenVPN "inactive" directive to clients. The inactive directive can be used to compel clients to disconnect if their bandwidth usage is below a given threshold for a given length of time.
 
 Control with the following user/group properties:
-
+```
 prop_isec:	(int, number of seconds over which to sample bytes in/out)
 prop_ibytes:	(int, minimum number of in/out bytes over prop_isec seconds to allow connection to continue)
 #For example, to disconnect a user who fails to transmit/receive at least 75,000 bytes during a 30 minute period:
 
-#default user applies to all users. 
-`/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_isec --value 1800 UserPropPut`
-`/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_ibytes --value 75000 UserPropPut`
+#default user applies to all users.
+/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_isec --value 1800 UserPropPut
+/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_ibytes --value 75000 UserPropPut
 
 #verify the setting is in place
-`/usr/local/openvpn_as/scripts/confdba -us -p __DEFAULT__`
+/usr/local/openvpn_as/scripts/confdba -us -p __DEFAULT__
+```
 
 ## Configure TLS on OpenVPN
 
@@ -109,4 +111,4 @@ Make a certificate like you would any other, using openssl commands and our CA.
 ../scripts/sacli --key "cs.priv_key" --value_file=openvpn.key ConfigPut
 ```
 
-See openvpn docs https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/#selecting-ssl-and-tls-levels-on-the-web-server
+See [OpenVPN Docs](https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/#selecting-ssl-and-tls-levels-on-the-web-server)

+ 10 - 8
OpenVPN Upgrade Notes.md

@@ -1,17 +1,19 @@
 # OpenVPN Upgrade Notes
 
-https://openvpn.net/vpn-server-resources/keeping-openvpn-access-server-updated/
+OpenVPN Access Server Knowledge Base Docs - [Keeping OpenVPN Access Server updated](https://openvpn.net/vpn-server-resources/keeping-openvpn-access-server-updated/)
 
-https://openvpn.net/vpn-software-packages/
+OpenVPN Access Server on Linux - [VPN Software Repository & Packages](https://openvpn.net/vpn-software-packages/)
 
 Current version 2.7.3
 > :warning: OpenVPN Version 2.8.x is NOT FIPS Compliant and will NOT run in FIPS mode.
 
-1. Download next version to Repo server. 
-`wget -O openvpn-as-2.8.6-CentOS7.x86_64.rpm https://openvpn.net/downloads/openvpn-as-latest-CentOS7.x86_64.rpm`
-`wget -O openvpn-as-bundled-clients-13.rpm https://openvpn.net/downloads/openvpn-as-bundled-clients-latest.rpm`
-2. Follow notes in (Reposerver Notes.md) for prepping Repo server and target server. 
-3. Backup the current configuration ( https://openvpn.net/vpn-server-resources/configuration-database-management-and-backups/#Backing_up_the_OpenVPN_Access_Server_configuration )
+1. Download next version to Repo server.             
+```
+wget -O openvpn-as-2.8.6-CentOS7.x86_64.rpm https://openvpn.net/downloads/openvpn-as-latest-CentOS7.x86_64.rpm
+wget -O openvpn-as-bundled-clients-13.rpm https://openvpn.net/downloads/openvpn-as-bundled-clients-latest.rpm
+```
+2. Follow [Reposerver Notes](Reposerver%20Notes.md) for prepping Repo server and target server. 
+3. Backup the current configuration - reference [Backing up the OpenVPN Access Server Configuration](https://openvpn.net/vpn-server-resources/configuration-database-management-and-backups/#Backing_up_the_OpenVPN_Access_Server_configuration)
 
 ```
 which apt > /dev/null 2>&1 && apt -y install sqlite3
@@ -29,7 +31,7 @@ cp ../as.conf ../../as.conf.bak
 ```
 4. Ensure you have a good EBS Volume Snapshot ( take a new one so it will not get auto deleted )
 
-5. After a yum update, the OpenVPN service might die and not come back up. Use the bastion host to ssh in a remedy this. 
+5. After a `yum update`, the OpenVPN service might die and not come back up. Use the bastion host to ssh in and remedy this. 
 `systemctl status openvpn`
 
 6. `shutdown -r now`

+ 10 - 10
Packer Notes.md

@@ -1,20 +1,20 @@
 # Packer Notes
 
-https://packer.io/
+[HashiCorp Packer](https://packer.io/)
 Used to create the AWS AMI. Packer is run on your local laptop. 
 
-The Makefile is used to document the different images that you are able to build. Make doesn't provide any parts of the build process, just convience. 
+The `Makefile` is used to document the different images that you are able to build. Make doesn't provide any parts of the build process, just convenience. 
 
-Packer is required to partition the hard drives out before ec2 launch. 
+`Packer` is required to partition the hard drives out before EC2 launch. 
 
 ## Usage
 
-username: centos
-key in vault called msoc-build 
+username: `centos`          
+key in Vault called `msoc-build` 
 
-to run through a new build use the make command. 
+To run through a new build, use the `make` command. 
 
-See the README.md in the packer/README.md
+See [packer/README.md](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/packer/README.md)
 
 ```
 make aws-test
@@ -23,8 +23,8 @@ AWS_PROFILE=mdr-test packer build -on-error=ask -only=master -var-file=rhel7_har
 ```
 
 ## Troubleshooting Help
-Add --debug to pause execution at each stage
-add --on-error=ask
+Add `-debug` to pause execution at each stage            
+Add `-on-error=ask` to prompt on build errors
 
-Having issues with the RHEL subscription manager in TEST? switch it to the prod one. 
+Having issues with the RHEL subscription manager in TEST? Switch it to the prod one. 
 

+ 6 - 3
Salt Notes.md

@@ -1,5 +1,8 @@
 # Salt Notes
-Salt is the configuration management tool
+Salt Project (or "Salt") is the configuration management tool:
+* [SaltStack Project](https://saltproject.io/)
+* [SaltStack Project Package Repo](https://repo.saltproject.io/)
+* [SaltStack Project FAQ](https://docs.saltproject.io/en/latest/faq.html#frequently-asked-questions)
 
 ---
 My first section
@@ -23,7 +26,7 @@ but it is broken on salt master, the minion has a static file `/etc/salt/grains`
 `saltutil.sync_grains`
 
 ERROR: Could not get AWS connection: global name 'boto3' is not defined
-SOLUTION: see Salt Upgrade Notes.md
+SOLUTION: see [Salt Upgrade 2019 -> 3001 Notes](Salt%20Upgrade%202019%20->%203001%20Notes.md)
 
 
 --------
@@ -97,7 +100,7 @@ New Github Server
 
 gitfs uses `/root/.ssh/github_read_only` for authentication, which is overridden via `/root/.ssh/config` for the github server.
 
-So when the git server changes:
+So when the Git server changes:
 ```
 sudo su -
 cd .ssh

+ 128 - 94
Salt Upgrade 2019 -> 3001 Notes.md

@@ -1,12 +1,12 @@
 # Salt Upgrade 2019 -> 3001 Notes.md
 
 ### Places where code might need to be updated for a new version ( salt.repo )
-- packer/scripts/add-saltstack-repo.sh
-- base/salt_master/cloud-init/provision_salt_master.sh
-- salt/pillar/dev/yumrepos.sls
+- [packer/scripts/add-saltstack-repo.sh](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/packer/scripts/add-saltstack-repo.sh)
+- [base/salt_master/cloud-init/provision_salt_master.sh](https://github.xdr.accenturefederalcyber.com/mdr-engineering/xdr-terraform-modules/blob/master/base/salt_master/cloud-init/provision_salt_master.sh)
+- [salt/pillar/dev/yumrepos.sls](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/salt/pillar/dev/yumrepos.sls)
 
 ### Prep
-- update Pillars yumrepos:salt:version and yumrepos:salt:baseurl
+- update Pillars `yumrepos:salt:version` and `yumrepos:salt:baseurl`
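A pillar fragment of the shape being edited (the keys are from this page; the values are illustrative, not the real ones):
```
yumrepos:
  salt:
    version: '3001*'
    baseurl: https://repo.saltproject.io/yum/redhat/7/$basearch/archive/3001
```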
 
 ### On the master
 - update repo `salt salt* state.sls os_modifications.repo_update --output-diff`
@@ -60,25 +60,48 @@ salt *com saltutil.sync_all
 salt *local grains.get ec2:placement:availability_zone
 salt *com grains.get ec2:placement:availability_zone
 
-ISSUE:  [ERROR   ] Returner splunk.returner could not be loaded: 'splunk.returner' is not available.
+ISSUE:  
+```
+[ERROR   ] Returner splunk.returner could not be loaded: 'splunk.returner' is not available.
+```
 SOLUTION: manually restart minion
 
-ISSUE: 2020-11-23 18:13:09,719 [salt.beacons     :144 ][WARNING ][15141] Unable to process beacon inotify
+ISSUE: 
+```
+2020-11-23 18:13:09,719 [salt.beacons     :144 ][WARNING ][15141] Unable to process beacon inotify
 cmd.run 'ls -larth /etc/salt/minion.d/beacons.conf'
+```
 
-ISSUE: requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='iratemoses.mdr.defpoint.com', port=8088): Max retries exceeded with url: /services/collector/event (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f19e76c64a8>: Failed to establish a new connection: [Errno -2] Name or service not known',))
+ISSUE: 
+```
+requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='iratemoses.mdr.defpoint.com', port=8088): Max retries exceeded with url: /services/collector/event (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f19e76c64a8>: Failed to establish a new connection: [Errno -2] Name or service not known',))
+```
 
-SOLUTION: IGNORE: this was happening with previous version of salt and python2. 
+SOLUTION: IGNORE. This was happening with the previous version of salt and python2. 
 
-ISSUE on reposerver: 2020-11-23 19:42:20,061 [salt.state       :328 ][ERROR   ][18267] Cron /usr/local/bin/repomirror-cron.sh for user root failed to commit with error
+ISSUE on reposerver: 
+```
+2020-11-23 19:42:20,061 [salt.state       :328 ][ERROR   ][18267] Cron /usr/local/bin/repomirror-cron.sh for user root failed to commit with error
     "/tmp/__salt.tmp.9b64eos8":1: bad minute
     errors in crontab file, can't install.
+```
 
-SOLUTION: bad cron file?
+SOLUTION: bad cron file?
 
-ISSUE: [CRITICAL][1745] Pillar render error: Rendering SLS 'mailrelay' failed
-2020-11-23 19:26:11,255 [salt.pillar      :889 ][CRITICAL][1745] Rendering SLS 'mailrelay' failed, render error:
+ISSUE: 
+```
+[CRITICAL][1745] Pillar render error: Rendering SLS 'mailrelay' failed
+2020-11-23 19:26:11,255 [salt.pillar      :889 ][CRITICAL][1745] Rendering SLS 'mailrelay' failed, render error:
 Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'ec2'
+
 Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/salt/utils/templates.py", line 400, in render_jinja_tmpl
     output = template.render(**decoded_context)
@@ -92,35 +115,39 @@ Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 389, in getitem
     return obj[argument]
 jinja2.exceptions.UndefinedError: 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'ec2'
+```
 
-SOLUTION: ?
-
+SOLUTION: ?
 
 ## 2019 Upgrade
-https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1164
+[Jira MSOCI-1164 ticket - Standardize salt version across infrastructure](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1164)
 
 Done when:
-
-All salt minions are running same version (2018)
-All server minions are pegged to specific version (that can be changed at upgrade time)
-Remove yum locks for minion
+ * All salt minions are running same version (2018)
+ * All server minions are pegged to specific version (that can be changed at upgrade time)
+ * Remove yum locks for minion
 
 Notes:
-
-Packer installs 2019 repo (packer/scripts/add-saltstack-repo.sh & packer/scripts/provision-salt-minion.sh) , then os_modifications ( os_modifications.repo_update) overwrites the repo with 2018. This leaves the salt minion stuck at the 2019 version without being able to upgrade. 
+  * Packer installs 2019 repo (`packer/scripts/add-saltstack-repo.sh` & `packer/scripts/provision-salt-minion.sh`) , then os_modifications ( `os_modifications.repo_update` ) overwrites the repo with 2018. This leaves the salt minion stuck at the 2019 version without being able to upgrade. 
 
 #salt master (two salt repo files)
 
-/etc/yum.repos.d/salt.repo (salt/fileroots/os_modifications/minion_upgrade.sls)
+`/etc/yum.repos.d/salt.repo` ([salt/fileroots/os_modifications/minion_upgrade.sls](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/salt/fileroots/os_modifications/minion_upgrade.sls))
 
+```
 [salt-2018.3]
 name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
 baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
 failovermethod=priority
 enabled=1
-/etc/yum.repos.d/salt-2018.3.repo
+```
 
+`/etc/yum.repos.d/salt-2018.3.repo`
+
+```
 [salt-2018.3]
 name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
 baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
@@ -128,40 +155,43 @@ failovermethod=priority
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/saltstack-signing-key, file:///etc/pki/rpm-gpg/centos7-signing-key
- 
+```
 
 #reposerver.msoc.defpoint.local
-/etc/yum.repos.d/salt.repo
 
+`/etc/yum.repos.d/salt.repo`
+
+```
 [salt-2018.3]
 name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
 baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
 failovermethod=priority
 enabled=1
 gpgcheck=0
-Two repo files in salt, both are 2018.3; one has proxy=none other doesn't.  the salt_rhel.repo is just for RHEL and the other is for CENTOS. 
-
-salt/fileroots/os_modifications/files/salt.repo (salt/fileroots/os_modifications/repo_update.sls uses this file and it is actively pushed to CENTOS minions)
-
-salt/fileroots/os_modifications/files/salt_rhel.repo  (salt/fileroots/os_modifications/repo_update.sls uses this file and it is actively pushed to RHEL minions)
-
-
-
-/etc/yum.repos.d/salt-2018.3.repo ( not sure how this file is being pushed. possibly pushed from Chris fixing stuff )
+```
+Two repo files in salt, both are 2018.3; one has `proxy=none`, the other doesn't. The `salt_rhel.repo` is just for RHEL and the other is for CENTOS. 
+ * `salt/fileroots/os_modifications/files/salt.repo`   
+    (`salt/fileroots/os_modifications/repo_update.sls` uses this file and it is actively pushed to `CENTOS` minions)
+ 
+ * `salt/fileroots/os_modifications/files/salt_rhel.repo`       
+    (`salt/fileroots/os_modifications/repo_update.sls` uses this file and it is actively pushed to `RHEL` minions)
+    
+ * `/etc/yum.repos.d/salt-2018.3.repo`      
+    ( not sure how this file is being pushed. possibly pushed from Chris fixing stuff )
 
 
 STEPS
-1. remove /etc/yum.repos.d/salt-2018.3.repo from test
-1.2 remove yum versionlock in test (if there are any; None found)
-1.3 yum clean all ; yum makecache fast
-2. use git to update os_modifications/files/salt_rhel.repo file to 2019.2.2 ( match salt master)
-2.1 use salt + repo to update minion to 2019.2.2
-2.5 salt minion cmd.run 'rm -rf /etc/yum.repos.d/salt-2018.3.repo'
-2.5.1 salt minion cmd.run 'ls /etc/yum.repos.d/salt*'
-2.6 salt salt-master* state.sls os_modifications.repo_update
-2.7 salt salt-master* cmd.run 'yum clean all ; yum makecache fast'
-2.8 salt minion cmd.run 'yum update salt-minion -y' 
-2.9 salt minion cmd.run 'yum remove salt-repo -y'
+1. remove `/etc/yum.repos.d/salt-2018.3.repo` from test
+ - 1.2 remove yum versionlock in test (if there are any; None found)
+ - 1.3 `yum clean all` ; `yum makecache fast`
+2. use git to update `os_modifications/files/salt_rhel.repo` file to 2019.2.2 ( match salt master)
+ - 2.1 use salt + repo to update minion to 2019.2.2
+ - 2.5 salt minion cmd.run `rm -rf /etc/yum.repos.d/salt-2018.3.repo`
+   - 2.5.1 salt minion cmd.run `ls /etc/yum.repos.d/salt*`
+ - 2.6 `salt salt-master* state.sls os_modifications.repo_update`
+ - 2.7 salt salt-master* cmd.run `yum clean all ; yum makecache fast`
+ - 2.8 salt minion cmd.run `yum update salt-minion -y` 
+ - 2.9 salt minion cmd.run `yum remove salt-repo -y`
 3. upgrade salt master to 2019.2.3 using repo files as a test
 4. upgrade salt mininos to 2019.2.3 using repo files as a test
 5. push to prod. 
@@ -169,48 +199,51 @@ STEPS
 
 
 
-
-PROBLEMS
+PROBLEMS:
+```
 bastion.msoc.defpoint.local
 error: unpacking of archive failed on file /var/log/salt: cpio: lsetfilecon
 mailrelay.msoc.defpoint.local
 pillar broken
-
+```
 
 PROD
 
 1. remove dup repos
-1.1 remove /etc/yum.repos.d/salt-2018.3.repo from environment (looks like it was installed with a RPM) 
-1.1.1 salt minion cmd.run 'yum remove salt-repo -y' (does not remove the proper salt.repo file)
-1.1.2 salt minion cmd.run 'rm -rf /etc/yum.repos.d/salt-2018.3.repo'   (just to make sure)
-1.2 remove yum versionlock
+- 1.1 remove `/etc/yum.repos.d/salt-2018.3.repo` from environment (looks like it was installed with a RPM) 
+- 1.1.1 `salt minion cmd.run 'yum remove salt-repo -y'` (does not remove the proper salt.repo file)
+- 1.1.2 `salt minion cmd.run 'rm -rf /etc/yum.repos.d/salt-2018.3.repo'`   (just to make sure)
+- 1.2 remove yum versionlock
  yum versionlock list
-1.2.1 salt minion cmd.run 'yum versionlock delete salt-minion'
-1.2.2 salt minion cmd.run 'yum versionlock delete salt'
-1.2.3 salt minion cmd.run 'yum versionlock delete salt-master'
+  - 1.2.1 `salt minion cmd.run 'yum versionlock delete salt-minion'`
+  - 1.2.2 `salt minion cmd.run 'yum versionlock delete salt'`
+  - 1.2.3 `salt minion cmd.run 'yum versionlock delete salt-master'`
 2. use salt + repo to update master/minion to 2019.2.2
-2.1 use git to update os_modifications/files/salt_rhel.repo file to 2019.2.2 pin to minor release (match TEST)(https://repo.saltstack.com/yum/redhat/$releasever/$basearch/archive/2019.2.2)
-2.2 Check for environment grain ( needed for repo_update state file. )
-2.2.1 salt minion grains.item environment
-2.6 salt salt-master* state.sls os_modifications.repo_update
-2.7 salt salt-master* cmd.run 'yum clean all ; yum makecache fast'
-2.7.5 salt minion cmd.run 'yum check-update | grep salt'
-2.8 salt minion cmd.run 'yum update salt-minion -y' 
-OR salt minion pkg.upgrade name=salt-minion
-  salt minion pkg.upgrade name=salt-minion fromrepo=salt-2019.2.4
-2.9 salt master cmd.run 'yum update salt-master -y'
+ - 2.1 use git to update `os_modifications/files/salt_rhel.repo` file to 2019.2.2 pin to minor release (match TEST)(https://repo.saltstack.com/yum/redhat/$releasever/$basearch/archive/2019.2.2)
+ - 2.2 Check for environment grain ( needed for repo_update state file. )
+  - 2.2.1 `salt minion grains.item environment`
+ - 2.3 `salt salt-master* state.sls os_modifications.repo_update`
+ - 2.4 `salt salt-master* cmd.run 'yum clean all ; yum makecache fast'`
+  - 2.4.5 `salt minion cmd.run 'yum check-update | grep salt'`
+ - 2.5 `salt minion cmd.run 'yum update salt-minion -y'` 
+OR `salt minion pkg.upgrade name=salt-minion`
+  `salt minion pkg.upgrade name=salt-minion fromrepo=salt-2019.2.4`
+ - 2.6 `salt master cmd.run 'yum update salt-master -y'`
 3. ensure salt master and minions are at that minor version. 
-3.1 salt * test.version
-6. upgrade test and prod to 2019.2.3 via repo files to ensure upgrade process works properly. 
-6.5 fix permissions on master to allow non-root users to be able to run ( or run highstate )
-6.5.1 chmod 700 /etc/salt/master.d/
-6.5.2 then restart master
-7. never upgrade salt again. 
+ - 3.1 `salt '*' test.version`
+4. upgrade test and prod to 2019.2.3 via repo files to ensure upgrade process works properly.
 
-PROBLEMS
-the pillar depends on a custom grain, the custom grain depends on specific python modules. the moose servers seem to have python module issues. 
-these commands helped fix them. python yum VS. pip 
+5. fix permissions on master to allow non-root users to run salt commands ( or run highstate )
+  - 5.1 `chmod 700 /etc/salt/master.d/`
+  - 5.2 then restart the master
+6. never upgrade salt again. 
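Step 3.1's `test.version` output can be scanned for version drift with a small filter; a minimal sketch with sample output inlined (minion names and versions here are illustrative, not real hosts):

```shell
# Flag minions whose reported salt version differs from the target.
target="2019.2.4"
printf 'minion1:\n    2019.2.4\nminion2:\n    2019.2.2\n' |
awk -v t="$target" '/^[^ ]/{m=$1} /^ /{if ($1 != t) print m, $1}'
# → minion2: 2019.2.2
```

With live output, pipe `salt '*' test.version` through the same awk filter instead of the printf sample.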
 
+PROBLEMS:
+* The pillar depends on a custom grain, and the custom grain depends on specific python modules.
+* The moose servers seem to have python module issues. 
+* These commands helped fix them ( python modules via yum vs. pip ):
+
+```
 ERROR: Could not get AWS connection: global name 'boto3' is not defined
 ERROR: ImportError: cannot import name certs
 pip list | grep requests
@@ -221,38 +254,39 @@ sudo yum install python-urllib3
 sudo yum install python-requests
 pip install boto3 (this installs urllib3 via pip as a dependency!)
 pip install boto
+```
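A quick way to see which of these modules a given python can actually import (module names taken from the errors above; on these hosts you would run it with the minion's python2 — `python3` is used here only so the sketch is self-contained):

```shell
# Print OK/MISSING for each module the custom grain needs.
for mod in boto3 requests urllib3; do
  python3 -c "import $mod" 2>/dev/null && echo "$mod OK" || echo "$mod MISSING"
done
```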
 
-
-slsutil.renderer salt://os_modifications/repo_update.sls
-if the grain is wrong on the salt master, but correct with salt-call restart the minion. 
-
+`slsutil.renderer salt://os_modifications/repo_update.sls`  
+If the grain is wrong on the salt master but correct with `salt-call`, restart the minion. 
+```
 salt moose* grains.item environment
 cmd.run 'salt-call grains.get environment'
 cmd.run 'salt-call -ldebug --local grains.get environment'
 cmd.run 'salt-call -lerror --local grains.get environment'
+```
 
-Boto3 issue is actually a urllib3 issue?
-`pip -V`
-`pip list | grep boto`
-`pip list | grep urllib3`
-
-salt-call is different connecting to python2
-/bin/bash: pip: command not found
-salt 'moose*indexer*' cmd.run "salt-call cmd.run 'pip install boto3'"
+Boto3 issue is actually a urllib3 issue?    
+`pip -V`    
+`pip list | grep boto`    
+`pip list | grep urllib3`   
+      
+`salt-call` is different; it connects to python2.    
+`/bin/bash: pip: command not found`   
+`salt 'moose*indexer*' cmd.run "salt-call cmd.run 'pip install boto3'"`   
 
 
-resolution steps
-Duane will remove /usr/local/bin/pip which is pointing to python3
-pip should be at /usr/bin/pip
-yum --enablerepo=epel -y reinstall python2-pip
+Resolution steps:  
+Duane will remove `/usr/local/bin/pip` which is pointing to `python3`    
+`pip` should be at `/usr/bin/pip`     
+`yum --enablerepo=epel -y reinstall python2-pip`      
 
-To Fix, upgrade the urllib3 module:
+To fix, upgrade the `urllib3` module:
 1. `salt '*.local' cmd.run 'pip install --upgrade urllib3'`
 2. restart salt-minion 
 
 
-Permissions issue? Run this command as root:
-salt salt* state.sls salt_master.salt_posix_acl
+Permissions issue? Run this command as `root`:  
+`salt salt* state.sls salt_master.salt_posix_acl`
 
 
 

+ 6 - 6
Salt Upgrade 3001.2 -> 3001.6 Notes.md

@@ -2,13 +2,13 @@
 
 ### Places where code might need to be updated for a new version ( salt.repo )
 
-- packer/scripts/add-saltstack-repo.sh
-- salt/pillar/dev/yumrepos.sls
-- salt/pillar/prod/yumrepos.sls ( you can wait until after testing is done in test before deploying to prod )
+- [packer/scripts/add-saltstack-repo.sh](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/packer/scripts/add-saltstack-repo.sh)
+- [salt/pillar/dev/yumrepos.sls](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/salt/pillar/dev/yumrepos.sls)
+- [salt/pillar/prod/yumrepos.sls](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/salt/pillar/prod/yumrepos.sls) ( you can wait until after testing is done in test before deploying to prod )
 
 For your reference....
-- packer/scripts/provision-salt-master.sh   <- salt master is installed here
-- base/salt_master/cloud-init/provision_salt_master.sh   <- salt master is configured here
+- [packer/scripts/provision-salt-master.sh](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/develop/packer/scripts/provision-salt-master.sh)   <- salt master is installed here
+- [base/salt_master/cloud-init/provision_salt_master.sh](https://github.xdr.accenturefederalcyber.com/mdr-engineering/xdr-terraform-modules/blob/master/base/salt_master/cloud-init/provision_salt_master.sh)   <- salt master is configured here
 
 
 ## 3001.2 -> 3001.6
@@ -58,7 +58,7 @@ Did you miss any?
 
 
 BAD DNS for Splunk returner
-requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='moose-hec.xdr.accenturefederalcyber.com', port=8088): Max retries exceeded with url: /services/collector/event (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb058f0deb8>: Failed to establish a new connection: [Errno 110] Connection timed out',))
+`requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='moose-hec.xdr.accenturefederalcyber.com', port=8088): Max retries exceeded with url: /services/collector/event` (Caused by `NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb058f0deb8>: Failed to establish a new connection: [Errno 110] Connection timed out'`,))
 
 
 

+ 1 - 1
Salt Upgrade 3001.6 -> 3002.6 Notes.md

@@ -1,7 +1,7 @@
 ### Salt Upgrade 3001.6 -> 3002.6 Notes.md
 
 
-next time try this: salt/fileroots/os_modifications/minion_upgrade.sls ( move it to the salt folder or something )
+next time try this: `salt/fileroots/os_modifications/minion_upgrade.sls` ( move it to the salt folder or something )
 
 upgrade salt master then minions
 

+ 13 - 11
ScaleFT Notes.md

@@ -1,6 +1,6 @@
 # ScaleFT Notes.md
 
-OKTA owns ScaleFT (now Okta "Advanced Server Access") and we use it for managed SSH. https://help.okta.com/en/prod/Content/Topics/Adv_Server_Access/docs/asa-overview.htm
+OKTA owns ScaleFT (now Okta "Advanced Server Access") and we use it for managed SSH. [See Advanced Server Access on Okta Website](https://help.okta.com/asa/en-us/Content/Topics/Adv_Server_Access/docs/asa-overview.htm)
 
 
 ## Adding users to groups
@@ -19,13 +19,13 @@ Install ScaleFT on your local machine.
 
 > :warning: Do NOT run these commands as root user. 
 
-Choose your OS platform - https://help.okta.com/en/prod/Content/Topics/Adv_Server_Access/docs/sft.htm
+Choose your OS platform - [Install the Advanced Server Access client](https://help.okta.com/asa/en-us/Content/Topics/Adv_Server_Access/docs/sft.htm)
 
-Enroll the system from the cmd line as a new client using the `--team` switch and value "mdr" `sft enroll --team mdr` . A web page opens joining client to the Advanced Server Access platform. Ensure you are authenticated in MDR Portal via Okta. 
+Enroll the system from the cmd line as a new client using the `--team` switch and the value "mdr": `sft enroll --team mdr`. A web page opens, joining the client to the Advanced Server Access platform. Ensure you are authenticated in [MDR Portal](https://mdr-multipass.okta.com) via Okta. 
 
 SSH Setup - To configure the SSH client, run `sft ssh-config`. This command outputs an SSH configuration block. Append this block to your SSH configuration file (usually `~/.ssh/config`). 
 
-> :note: You can append the configuration to your file in one step by using this cmd `sft ssh-config >> $HOME/.ssh/config`
+> :warning: You can append the configuration to your file in one step by using this cmd `sft ssh-config >> $HOME/.ssh/config`
 
 Client customization - Any paths provided are from a MacOS perspective and use `/Users/Admin/` as an example folder path. Paths on your machine may read differently.
 
@@ -44,7 +44,7 @@ HOSTNAME                      OS_TYPE    PROJECT_NAME            ID
 dev-afs-splunk-cm             linux      AFS                     6b637c27-d885-44ea-9074-18cde8bfaa51    10.x.x.x
 ```
 
-> :note: VPN required - Ensure you are connected to the correct VPN (in this case, XDR) when attempting to SSH into a server. SSH into server from output using the `Id:` field in the cmd `ssh 6b637c27-d885-44ea-9074-18cde8bfaa51` or by hostname `ssh dev-afs-splunk-cm`
+> :warning: VPN required - Ensure you are connected to the correct VPN (in this case, `XDR`) when attempting to SSH into a server. SSH into server from output using the `Id:` field in the cmd `ssh 6b637c27-d885-44ea-9074-18cde8bfaa51` or by hostname `ssh dev-afs-splunk-cm`
 
 If using a proxy, resolve proxy server (retrieve ID) `sft resolve proxy`
 
@@ -57,22 +57,24 @@ Name: 	gc-dev-proxy
 		LastSeen: 	13h38m0s ago
 ```
 
-> :NOTE: VPN required - Ensure you are connected to the correct VPN (in this case, XDRTest) when attempting to SSH into a server. SSH into proxy server from output using the `Id:` field in the cmd `ssh e1c10ac7-f152-45f4-9c42-ba6f30ffd2db` or by hostname `ssh gc-dev-proxy`
+> :warning: VPN required - Ensure you are connected to the correct VPN (in this case, `XDRTest`) when attempting to SSH into a server. SSH into proxy server from output using the `Id:` field in the cmd `ssh e1c10ac7-f152-45f4-9c42-ba6f30ffd2db` or by hostname `ssh gc-dev-proxy`
+
+With the bastion            
+`sft ssh gc-dev-salt-master --via gc-dev-bastion`           
+
 
-With the bastion
-`sft ssh gc-dev-salt-master --via gc-dev-bastion`
 
 
 ###  SSH without sft Using the msoc_build Key
-The ssh key used when packer builds the instance is called msoc_build. Because the servers are setup for FIPS mode, the msoc_build SSH key needs to be in "FIPS mode" before you use it. 
+The ssh key used when Packer builds the instance is called `msoc_build`. Because the servers are set up for FIPS mode, the `msoc_build` SSH key needs to be in "FIPS mode" before you use it. 
 
-To bypass sft and use the msoc_build key use this command.
+To bypass sft and use the `msoc_build` key use this command.
 
 `ssh -i msoc_build_fips centos@10.80.101.126`
 
 To use the key to ssh into hosts without the VPN use these commands. ( Agent Authentication forwarding )
 
-First, add msoc_build key to your ssh agent `ssh-add msoc_build_fips`
+First, add `msoc_build` key to your ssh agent `ssh-add msoc_build_fips`
 Then, SSH into bastion with `ssh -A centos@18.253.126.199`
 Finally, SSH into target server with `ssh centos@10.96.101.249`
 The key authentication will get passed through the proxy server and sent to the target host.  
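The three-step agent-forwarding flow above can also be captured in an `~/.ssh/config` block (a sketch using the example IPs from these notes; the `xdr-bastion` alias is made up here):

```
Host xdr-bastion
    HostName 18.253.126.199
    User centos
    IdentityFile ~/.ssh/msoc_build_fips
    ForwardAgent yes

Host 10.96.101.249
    User centos
    ProxyJump xdr-bastion
```

`ProxyJump` requires OpenSSH 7.3 or newer; on older clients, `ProxyCommand ssh -W %h:%p xdr-bastion` achieves the same hop.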

+ 1 - 1
Sensu Notes.md

@@ -135,7 +135,7 @@ SensuA123
 
 If `/var` starts filling up, a likely candidate is the etcd database. This can be compacted and defragged to free up space, but the tool to do so isn't installed by default.
 
-To defrag: (based off [this document](https://docs.sensu.io/sensu-go/latest/operations/maintain-sensu/troubleshoot/))
+To defrag: (based off [Troubleshoot Sensu document](https://docs.sensu.io/sensu-go/latest/operations/maintain-sensu/troubleshoot/))
 ```
 sudo yum install -y etcd3
 sudo bash

+ 29 - 3
Sensu Upgrade 5.21 -> 6.3 Notes.md

@@ -1,14 +1,40 @@
 # Sensu Upgrade 5.21 -> 6.3 Notes
 
 ### Places where code might need to be upgraded for a new version 
-- Official Sensu Go Repo [Github](https://github.com/sensu/sensu-go)
-- Official Sensu Go Website [Sensu Go]()
+- Official [Sensu Go Repo Github](https://github.com/sensu/sensu-go)
+- Official [Sensu Go Website](https://sensu.io/)
 - Official Sensu Hosted Package Repo Service [Packagecloud](https://packagecloud.io/sensu/stable/)
 
-- ** We will use our XDR Internal `Reposerver` for all upgrade methods - See [How to add a new package to the Reposerver](Reposerver%20Notes.md)
+> :warning: We will use our XDR Internal `Reposerver` for all upgrade methods - See [How to add a new package to the Reposerver](Reposerver%20Notes.md)
 
 
 ### Sensu Go Upgrade to 6.3
+[Jira MSOCI-1565 ticket - Upgrade Sensu to 6.2.X](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1565)
+
+Initial Ticket:
+```
+Sensu 6.1.3 breaks the ability for Sensu to use the proxy. This issue ( hopefully) is fixed in 6.2.0. 
+
+https://github.com/sensu/sensu-go/issues/4101
+
+https://github.com/sensu/sensu-go/pull/4113/files
+
+Done When: Sensu is upgraded to 6.2.0 
+```
+
+Ticket Update: [Decision to upgrade to 6.3](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1565?focusedCommentId=44245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-44245)
+
+
+Ticket conclusion: [Sensu Go GC Test and Prod env had been upgraded 100% to 6.3](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1565?focusedCommentId=44564&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-44564)
+
+```
+[Conclusion]:
+
+    GC Test and Prod envs are both running Sensu Go 6.3 at both the 'backend', 'cli' and 'agents' from Sensu to 'entities'. 
+    We have not observed any issues within the XDR env pertaining to this upgrade.
+    Sensu Go 6.3 has introduced some newer features within the GUI that we are currently exploring usage of.
+```
+
 [Sensu Upgrade Documentation](https://docs.sensu.io/sensu-go/latest/operations/maintain-sensu/upgrade/) - as of June 17, 2021, `Sensu Go 6.3` is not being displayed in the Sensu documentation, but procedure still applies.
 
 1. Download latest packages for `Sensu backend`, `Sensu agents`, `Sensuctl` (Sensu CLI) to `Repo server` and run `yum clean all` on `Sensu Backend` server - See [Reposerver](Reposerver%20Notes.md) notes.

+ 15 - 11
Splunk MSCAS Notes.md

@@ -2,20 +2,22 @@
 
 
 References:
-https://github.mdr.defpoint.com/MDR-Content/mdr-content/wiki/CS0009:Search:MSOC---MS-CAS---Alert
-https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-890
-https://docs.microsoft.com/en-us/cloud-app-security/siem
-https://splunkbase.splunk.com/app/3110/
 
+ * https://github.mdr.defpoint.com/MDR-Content/mdr-content/wiki/CS0009:Search:MSOC---MS-CAS---Alert
+ * [ONBOARDING: MS CAS - Jira ticket - MSOCI-890](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-890)
+ * [Integrate Microsoft Cloud App Security with your generic SIEM server](https://docs.microsoft.com/en-us/cloud-app-security/siem)
+ * [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/)
 
-https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/syslog/files/customers/afs/conf.d/010-mcas.conf
 
+[MCAS Conf file located in Github](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/syslog/files/customers/afs/conf.d/010-mcas.conf)
+
+```
 sourcetype=microsoft:cas
 index=app_mscas sourcetype="microsoft:cas"
 
 /opt/syslog-ng/mcas/afssplhf103.us.accenturefederal.com/log
 /opt/syslog-ng/mcas/afssplhf103.us.accenturefederal.com/log/2019-09-11/afsspaf101.us.accenturefederal.com/afsspaf101.us.accenturefederal.com/security.log
-
+```
 
 
 start EC2 instance
@@ -33,18 +35,20 @@ add java docker container
 add java code to container
 
 ------------------------------
-Going to try openjdk because oracle java requires login to pull the images
-https://hub.docker.com/_/openjdk
-docker pull openjdk
+Going to try `OpenJDK` because Oracle Java requires a login to pull the images - [OpenJDK Official Image](https://hub.docker.com/_/openjdk)
+
+`docker pull openjdk`
 
 JAVA Command
-java -jar mcas-siemagent-0.87.20-signed.jar [--logsDirectory DIRNAME] [--proxy ADDRESS[:PORT]] --token TOKEN &
+
+`java -jar mcas-siemagent-0.87.20-signed.jar [--logsDirectory DIRNAME] [--proxy ADDRESS[:PORT]] --token TOKEN &`
 
 Docker commands
+```
 cd 
 docker image build -t customjava .
 docker run -d --name customjava --volume /root/java:/logs -t customjava
-
+```
 
 FROM openjdk:12
 COPY . /usr/src/myapp

+ 4 - 4
Splunk Migration from Commercial to GovCloud - 1. Prep and Indexer Cluster.md

@@ -14,11 +14,11 @@ sudo -u splunk /opt/splunk/bin/splunk show cluster-status
 
 # Create "snapshots"
 
-*IMPORTANT NOTE:* Remember to check the 'No Reboot' box!
+> :warning: Remember to check the 'No Reboot' box!
 
 Create "final" snapshots of the SH, CM, and HF on aws.
 
-Name: moose-splunk-hf-FinalSnapshot-20200115
+Name: moose-splunk-hf-FinalSnapshot-20200115      
 Description: Final snapshot before migration to GC
 
 # Create a branch
@@ -57,7 +57,7 @@ tfswitch
 #   migration_cidr = [ "10.40.16.0/22" ] # Determine actual CIDR block for vpc-splunk in the new account
 ```
 
-Commit to git and do a PR
+Commit to Git and open a PR
 
 ```
 terraform init
@@ -113,7 +113,7 @@ vim moose_variables.sls
 # and update with the information obtained in the following steps.
 ```
 
-1. Log onto https://mdr-multipass-admin.okta.com/admin/apps/active
+1. Log onto the [Okta Admin Active Applications](https://mdr-multipass-admin.okta.com/admin/apps/active)
 1. For each of the new apps ("CUST Splunk CM [Prod] [GC]", HF, and SF), go to groups, and click 'Assign->Assign to Groups', and assign the following groups:
   * CM: mdr-admins, mdr-engineers
   * HF: mdr-admins, mdr-engineers

+ 4 - 4
Splunk Migration from Commercial to GovCloud - 2. Search Head.md

@@ -65,7 +65,7 @@ Excluding directories seems to be a recipe for trouble. But if you really want t
   --exclude 'splunk/bin/'
 ```
 
-Post to slack:
+Post to slack: [xdr-soc](https://afscyber.slack.com/archives/CFUP7STE2) and [xdr-general](https://afscyber.slack.com/archives/G01CY2Q2F8U)
 ```
 The Search Head for CUST is going down for the transition to GovCloud. I will notify again when the new server is operational.
 ```
@@ -109,9 +109,9 @@ sudo systemctl start splunk
 sudo systemctl enable splunk
 ```
 
-Validate that you can log into https://dc-19-splunk.pvt.xdr.accenturefederalcyber.com
+Validate that you can log into the [dc-c19 SH](https://dc-c19-splunk.pvt.xdr.accenturefederalcyber.com/en-US/app/launcher/home)
 
-Post to slack:
+Post to slack: [xdr-general](https://afscyber.slack.com/archives/G01CY2Q2F8U)
 ```
 The CUST Search Head is up. We are commencing testing of functionality and resolving any issues we find. Please let us know if you find anything here and we will resolve them as we are able. Note: The URL has changes. The new url is `https://<CUST>-splunk.pvt.xdr.accenturefederalcyber.com`. In the Okta launch page, it is listed as `<CUST> Splunk SH [Prod] [GC]`.
 ```
@@ -123,7 +123,7 @@ The CUST Search Head is up. We are commencing testing of functionality and resol
 ???
 
 
-Post to slack:
+Post to slack: [xdr-soc](https://afscyber.slack.com/archives/CFUP7STE2), [xdr-general](https://afscyber.slack.com/archives/G01CY2Q2F8U), and [xdr-engineering](https://afscyber.slack.com/archives/CFTJSTGDB)
 ```
 We believe all issues related to the migration of the moose search head have been resolved. If you find further issues, please @mention me here, send me an email, or call me at 616-634-4933 if it's critical. Please remember to include as much detail as possible, including steps to reproduce the issue, expected behavior, and actual behavior. Thanks!
 ```

+ 8 - 4
Splunk Migration from Commercial to GovCloud - 3. Remaining Servers.md

@@ -2,7 +2,7 @@
 
 # Migrate the HF (this will be a no-brainer in test, but has implications in prod)
 
-Terraform the HF
+Terraform the HF:
 ```
 cd ~/xdr-terraform-live/test/aws-us-gov/mdr-test-c2/180-splunk-heavy-forwarder
 terragrunt apply
@@ -16,7 +16,7 @@ salt 'moose-splunk-hf.pvt.xdrtest.accenturefederalcyber.com' state.highstate --o
 salt 'moose-splunk-hf.pvt.xdrtest.accenturefederalcyber.com' state.highstate --output-diff
 ```
 
-Prep the keys
+Prep the keys:
 ```
 tshp CUST-splunk-hf
 sudo systemctl stop splunk
@@ -33,10 +33,11 @@ mkdir .ssh
 cat >> .ssh/authorized_keys
 # paste from above
 exit
+```
 
 Initial rsyncs:
+
 ```
-# Log into new HF and stop splunkd
 tshp CUST-splunk-hf
 sudo systemctl stop splunk
 sudo su - splunk
@@ -45,9 +46,12 @@ time rsync --rsync-path="sudo rsync" -avz --delete --progress \
   --exclude="*.log"   --exclude '*.log.*'   --exclude '*.bundle' --exclude ".ssh"
 ```
 
+# Log into new HF and stop splunkd
+
 Final cutover:
-```
+
 # Stop splunk on the old HF
+```
 tshp CUST-splunk-hf.msoc.defpoint.local
 sudo systemctl stop splunk
 sudo systemctl disable splunk

+ 10 - 10
Splunk NGA Data Pull Request Notes.md

@@ -1,11 +1,10 @@
 # Splunk NGA Data Pull Request Notes
 
-Stand up a new "search head" that just has splunk installed on it, no need to configure the splunk instance. the splunk instance will query the actual search head and pull the data out. See hurricane labs python script.  
+Stand up a new "search head" that just has Splunk installed on it; no need to configure the Splunk instance. The Splunk instance will query the actual search head and pull the data out. See the Hurricane Labs python script:  [The Best Guide for Exporting Massive Amounts of Data From Splunk](https://hurricanelabs.com/splunk-tutorials/the-best-guide-for-exporting-massive-amounts-of-data-from-splunk/)
 
-https://hurricanelabs.com/splunk-tutorials/the-best-guide-for-exporting-massive-amounts-of-data-from-splunk/
-
-https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1013
+[Jira MSOCI-1013 ticket - SPIKE: NGA CheckPoint Log Export Request](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1013)
 
+```
 vpc-05e0cf38982e048db
 
 subnet-0a2384bce743cf303
@@ -25,13 +24,13 @@ delete key pair when done from AWS and the bastion host! bradp
 delete svc-searches from nga splunk SH when done
 
 delete 1TB EBS volume when done
+```
 
 
-
-search "index=network sourcetype=qos_syslog CA98C333-F830-0B45-A543-4450CDFDA84A 1571414560 Accept 47048" -output rawdata -maxout 0 -max_time 0 -uri https://10.2.2.122:8089
-
+`search "index=network sourcetype=qos_syslog CA98C333-F830-0B45-A543-4450CDFDA84A 1571414560 Accept 47048" -output rawdata -maxout 0 -max_time 0 -uri https://10.2.2.122:8089`
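The timing notes below work through fixed time windows one export at a time; the window arithmetic can be sketched like this (epochs and window size are illustrative, and the command is only echoed, not run):

```shell
# Generate hourly epoch windows and print the export command for each.
start=1571414400   # example epoch start
end=$((start + 3 * 3600))
t=$start
while [ "$t" -lt "$end" ]; do
  next=$((t + 3600))
  echo "splunk search 'index=network earliest=$t latest=$next' -output rawdata -maxout 0"
  t=$next
done
```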
 
 
+```
 start fail
 1019_1020export.raw
 1018_1019 times:
@@ -82,10 +81,10 @@ i=7000
 start time 2019-09-15T17:30:00
 stop time 2019-09-16T12:45:00
 
+```
 
 
-
-
+```
 #from my mac
 aws s3 ls s3://nga-mdr-data-pull
 aws s3 cp nga-splunk-pull.zip s3://nga-mdr-data-pull
@@ -96,4 +95,5 @@ aws --profile=mdr-prod s3 presign s3://nga-mdr-data-pull/nga-splunk-pull.zip --e
 https://nga-mdr-data-pull.s3.amazonaws.com/nga-splunk-pull.zip?AWSAccessKeyId=ASIAW6MA4LDMBGUOE7Q6&Signature=6WZ9KdHfH4rj28Ey5hrTib8HcHM%3D&x-amz-security-token=FQoGZXIvYXdzEFIaDCbQsc24x7kkQnhLQSL%2FAV4UBSVowGvhyMyS41rQtbtnmznvrbIu5Y9CCrxJ65RP%2BMeHz7Jkwu8BFEzNeeIT5M6Dfcd1NdFkqXBjE54y6G6HujSSLPk8gp2UqGDKkqMDE3qzrXfHRKaIlMInkACQi6VPpRDjFYGnnILS8vO5gjzqr9HUAsIgfVwpEuVf%2FPBbEcuUH87kZS6FqyQHTBc%2BcPk8KetsX2IuLmpOVAysip3IGgx2duVETNqKH0uXOM%2FUBygyJ7gD3DLoQWqCHQvxG0AfO0vEkRAZxgLKSDm6E2c8d9mJ5I6yXl2xBK7ii5bKWmhWtnPGYrErVFTxhfqeI6SHwzJOsLlNdkAC6nSKRyi1wMztBQ%3D%3D&Expires=1572625186
 
 
-tail -1 1018_1019export.raw
+tail -1 1018_1019export.raw
+```

+ 21 - 13
Splunk Notes.md

@@ -2,47 +2,52 @@
 
 
 ---
-Change user to Splunk
+Change user to `splunk`
 
-sudo -iu splunk
+`sudo -iu splunk`
 
 
 ---
-How to apply the git changes to the CM or customer DS. Be patient, it is splunk. Review logs in salt
+How to apply the git changes to the CM or customer DS. Be patient, it is Splunk. Review logs in Salt.
 
-Chris broke Jenkins.but he moved the splunk git repo to gitfs
+Chris broke Jenkins, but he moved the splunk git repo to gitfs
 
 1. add your changes to the appropriate git repo (msoc-moose-cm) 
 2. then use the salt state to push the changes and apply the new bundle
+```
     salt 'moose-splunk-cm*' state.sls splunk.master.apply_bundle_master
     salt 'afs-splunk-cm*' state.sls splunk.master.apply_bundle_master
+```
 
 Apply the git changes to the splunk UFs (Salt Deployment Server)
 
 Moose DS has a salt file for pushing apps out directly to UFs. 
 
 Customer DS
-salt 'afs-splunk-ds*' state.sls splunk.deployment_server.reload_ds
+`salt 'afs-splunk-ds*' state.sls splunk.deployment_server.reload_ds`
 
-to view the splunk command output look at the logs in splunk under the return.cmd_...changes.stdout or stderr
+To view the splunk command output, look at the logs in Splunk under: 
+```
+return.cmd_...changes.stdout or stderr
 index=salt sourcetype=salt_json fun="state.sls"
+```
 
 # Splunk License 
 Splunk CM is the license master and the salt master is used to push out a new license. Each customer has its own license. 
 
 ## Updating Splunk License
 
-Update the license file at salt/fileroots/splunk/files/licenses/<customer>/
+Update the license file at `salt/fileroots/splunk/files/licenses/<customer>/`
 
-`salt-run 
+`salt-run` 
 `salt *cm* state.sls splunk.license_master --output-diff`
 
    
     
 
 # SEARCHES
-
-
+```
+#Splunk
 | tstats values(sourcetype) where index=* group by index
 
 #collectd
@@ -70,13 +75,16 @@ index=network sourcetype=qos_syslog (service=443 OR service=80) NOT (action=Drop
 #Vault
 index=app_vault
 
+#Splunk
 | rest /services/data/indexes/
 | search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te
-
+```
 
 ## coldToFrozenScript
 
-Yes, this is a mess. Moose is running a version of splunk that breaks with the coldToFrozen script being pushed from the CM in an app. To get around this, i moved it to /usr/local/bin. The other customers have the script in the app. 
+Yes, this is a mess. Moose is running a version of splunk that breaks with the `coldToFrozen` script being pushed from the CM in an app. To get around this, I moved it to `/usr/local/bin`. The other customers have the script in the app. 
 
+```
 ERROR: runcoldToFrozen and get SyntaxError. 
-SOLUTION: upgrade the awscli with pip3 ( run the splunk.indexer state. )
+SOLUTION: upgrade the awscli with pip3 ( run the splunk.indexer state. )
+```

+ 4 - 3
Splunk Process List Whitelisting FedRAMP Notes.md

@@ -1,12 +1,11 @@
 # Splunk Process List Whitelisting FedRAMP Notes
 
-***Only Used to Fufill CM-7(5)***
+***Only Used to Fulfill CM-7(5) in [FedRAMP Security Controls Baseline](https://www.fedramp.gov/documents-templates/)***
 
 Notes from talking with Fred
 Salt State -> Push cron job + bash script to Minions -> Bash script writes to file -> Splunk UF reads file and indexes it -> Splunk creates a lookup file which is compared to a baseline lookup file. Differences between the two are displayed on a dashboard and can be "approved". The approve button runs a search that merges the two lookups and updates the baseline. 
 
-Prelinking needs to be turned off
-https://access.redhat.com/solutions/61691
+Prelinking needs to be turned off according to [Questions about Prelinking in Red Hat Enterprise Linux](https://access.redhat.com/solutions/61691)
 
 proc f
 
@@ -16,9 +15,11 @@ Dashboard is broken needed to fix it. Remove the blacklist variable and it will
 app uses SHA256 hashes
 
 Splunk search containing whitelist
+```
 |inputlookup ProcessLookup
 |inputlookup ProcessLookup | search process=*splunk*
 |inputlookup ProcessLookup | search process=*splunk* | dedup file_hash
+```
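The `file_hash` values in the lookup are SHA256 digests; a minimal illustration of producing one (the temp file and its content are purely for demonstration — the real script hashes process binaries):

```shell
# Hash a known input the same way the collector hashes a binary.
printf 'hello' > /tmp/proc_hash_demo
sha256sum /tmp/proc_hash_demo | awk '{print $1}'
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```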
 
 Don't look for salt as a process. It is started with the python process. 
 

+ 62 - 52
Splunk SAF Offboarding Notes.md

@@ -2,15 +2,14 @@
 
 Currently a 3-node multi-site cluster. Possible solution: set search and rep factor to 3 and 3, then pull the index files off one of the indexers to a new instance. On the new instance, set up a multi-site cluster with one site and see if you can read the indexed files.
 
+[Splunk Enterprise 8.0.2 - "Managing Indexers and Clusters of Indexers" - Decommission a site in a multisite indexer cluster](https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite)          
+[Splunk Enterprise 7.0.3 - "Managing Indexers and Clusters of Indexers" - Multisite indexer cluster deployment](https://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Multisitedeploymentoverview)
 
-https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite
-https://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Multisitedeploymentoverview
-
-1 - cluster master
+1 - cluster master      
 1 - indexer with search 
 
 
-
+```
 /opt/splunkdata/hot/normal_primary/
 
 indexes:
@@ -52,17 +51,17 @@ site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
 available_sites = site1,site2,site3
 cluster_label = afs_index_cluster
 
-
+```
 
 Steps
-1. change /opt/splunk/etc/system/local/server.conf  site_search_factor to origin:1,site1:1,site2:1,site3:1,total:3 This will ensure we have a searchable copy of all the buckets on all the sites. Should I change site_replication_factor to origin:1, total:1? this would reduce the size of the index. 
-2. restart CM ( this will apply the site_search_factor )
-3. send data to junk index (oneshot)
-3.1 /opt/splunk/bin/splunk add oneshot /opt/splunk/var/log/splunk/splunkd.log -sourcetype splunkd -index junk
-4. stop one indexer and copy index to new cluster. 
-5. on new cluster, setup CM and 1 indexer in multisite cluster. the clustermaster will be a search head in the same site
-6. setup new cluster to have site_mappings = default:site1
-7. attempt to search on new cluster
+1. Change `/opt/splunk/etc/system/local/server.conf`  `site_search_factor` to `origin:1,site1:1,site2:1,site3:1,total:3`. This will ensure we have a searchable copy of all the buckets on all the sites. Should I change `site_replication_factor` to `origin:1, total:1`? This would reduce the size of the index. 
+2. Restart CM ( this will apply the `site_search_factor` )
+3. Send data to junk index (oneshot)
+ - 3.1 `/opt/splunk/bin/splunk add oneshot /opt/splunk/var/log/splunk/splunkd.log -sourcetype splunkd -index junk`
+4. Stop one indexer and copy index to new cluster. 
+5. On new cluster, setup CM and 1 indexer in multisite cluster. The cluster master will be a search head in the same site.
+6. Setup new cluster to have `site_mappings = default:site1`
+7. Attempt to search on new cluster
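Step 1's edit expressed as a `server.conf` fragment (assuming the setting sits under the `[clustering]` stanza on the CM, which is where Splunk keeps the site factors):

```
[clustering]
site_search_factor = origin:1,site1:1,site2:1,site3:1,total:3
```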
 
 
 made the new junk index on test saf
@@ -71,7 +70,7 @@ latest = 02/21/20 9:32:01 PM UTC
 earlest = 02/19/20 2:32:57 PM UTC
 
 Before copying the buckets, ensure they are ALL WARM buckets; HOT buckets may be deleted on startup. 
-
+```
 #check on the buckets 
 | dbinspect index=junk
 
@@ -95,18 +94,20 @@ saf-offboarding-ssh Security group <- delete this not needed just SSH from Basti
 
 splunk version 7.0.3
 
-setup proxy for yum and wget
+#setup proxy for yum and wget
 vi /etc/yum.conf
 proxy=http://proxy.msoc.defpoint.local:80
 yum install vim wget
 vim /etc/wgetrc
 http_proxy = http://proxy.msoc.defpoint.local:80
 https_proxy = http://proxy.msoc.defpoint.local:80
+```
 
-Download Splunk
+#Download Splunk
+```
 wget -O splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm 'https://www.splunk.com/page/download_track?file=7.0.3/linux/splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm&ac=&wget=true&name=wget&platform=Linux&architecture=x86_64&version=7.0.3&product=splunk&typed=release'
 
-install it
+#install it
 yum localinstall splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm
 
 #setup https
@@ -124,11 +125,14 @@ https://10.1.2.170:8000/en-US/app/launcher/home
 #Indexer
 https://10.1.2.236:8000/en-US/app/launcher/home
 
-Change password for admin user
+#Change password for admin user
 /opt/splunk/bin/splunk edit user admin -password Jtg0BS0nrAyD -auth admin:changeme
 
+```
+
 Turn on distributed search in the GUI
 
+```
 #on CM
 /opt/splunk/etc/system/local/server.conf
 [general]
@@ -160,11 +164,12 @@ master_uri = https://10.1.2.170:8089
 mode = slave
 pass4SymmKey = password
 [replication_port://9887]
+```
 
 ***ensure networking is allowed between the hosts***
 
 The indexer will show up in the Cluster master 
-
+```
 #create this file on the indexer
 /opt/splunk/etc/apps/saf_all_indexes/local/indexes.conf
 
@@ -176,14 +181,14 @@ thawedPath     = $SPLUNK_DB/junk/thaweddb
 #copy the index over to the indexer
 cp junk_index.targz /opt/splunk/var/lib/splunk/
 tar -xzvf junk_index.targz
+```
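A generic version of the tarball copy-and-unpack step, demonstrated on a scratch directory (all paths here are made up for the sketch):

```shell
# Pack a scratch "index" directory and unpack it elsewhere,
# mirroring the junk_index copy above.
mkdir -p /tmp/idx_demo/junk/db
echo bucket > /tmp/idx_demo/junk/db/b1
tar -C /tmp/idx_demo -czf /tmp/junk_index.tar.gz junk
mkdir -p /tmp/idx_restore
tar -C /tmp/idx_restore -xzf /tmp/junk_index.tar.gz
ls /tmp/idx_restore/junk/db
# → b1
```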
 
-
-###################################################################################
+###################################################################################         
 PROD testing Notes
 
-SAF PROD Cluster testing with the te index.
-The indexers do not have the space to move to search/rep factor 3/3. Duane suggests keeping the current 2/3 and letting the temp splunk cluster  make the buckets searchable. according to the monitoring console:
-
+SAF PROD Cluster testing with the te index.         
+The indexers do not have the space to move to search/rep factor 3/3. Duane suggests keeping the current 2/3 and letting the temp splunk cluster make the buckets searchable. According to the monitoring console:
+```
 te index gathered on Feb 26
 total index size: 3.1 GB
 total raw data size uncompressed: 10.37 GB
@@ -208,10 +213,10 @@ size on disk
 
 size of tarball
 490 MB
-
+```
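As a sanity check, the compression implied by the figures above can be computed directly (a quick sketch; GB values copied from the notes):

```python
# te index figures from the notes, in GB
raw_gb = 10.37    # total raw data size, uncompressed
index_gb = 3.1    # total index size on disk

# the indexed size here is roughly 30% of the raw data
ratio = index_gb / raw_gb
print(f"{ratio:.0%}")  # 30%
```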
 
 Allow instance to write to S3 bucket
-
+```
 {
     "Id": "Policy1582738262834",
     "Version": "2012-10-17",
@@ -231,14 +236,16 @@ Allow instance to write to S3 bucket
         }
     ]
 }
-
+```
+```
 ./aws s3 cp rst2odt.py s3://mdr-saf-off-boarding
 ./aws s3 cp /opt/splunkdata/hot/normal_primary/saf_te_index.tar.gz s3://mdr-saf-off-boarding
 
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 604800
+```
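`--expires-in` is given in seconds; the values used in these notes are whole days (604800 is seven days, which is also the maximum lifetime for a SigV4 presigned URL; the 86400 used elsewhere is one day):

```python
# presign lifetimes used in the notes, expressed in seconds
one_day = 24 * 3600
one_week = 7 * one_day
print(one_day, one_week)  # 86400 604800
```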
 
-uploaded brad_LAN key pair to AWS for new instances. 
-
+Uploaded `brad_LAN` key pair to AWS for new instances. 
+```
 vpc-0202aedf3d0417cd3
 subnet-01bc9f77742ff132d
 sg-03dcc0ecde42fc8c2, sg-077ca2baaca3d8d97
@@ -269,31 +276,32 @@ ip-10-1-3-24
 
 #indexer-3
 ip-10-1-3-40
+```
 
-
-use virtualenv to grab awscli
-
+Use `virtualenv` to install `awscli`
+```
 export https_proxy=http://proxy.msoc.defpoint.local:80
 sudo -E ./pip install awscli
 
 ./aws s3 cp s3://mdr-saf-off-boarding/saf_te_index.tar.gz /opt/splunk/var/lib/splunk/saf_te_index.tar.gz
+```
 
 Move the index definition to the CM; replicated buckets are not expanding into searchable buckets.
 
 1. `rm -rf saf_all_indexes`
 2. Create it on the CM
-2.1 mkdir -p /opt/splunk/etc/master-apps/saf_all_indexes/local/
-2.2 vim /opt/splunk/etc/master-apps/saf_all_indexes/local/indexes.conf
+ - 2.1 `mkdir -p /opt/splunk/etc/master-apps/saf_all_indexes/local/`
+ - 2.2 `vim /opt/splunk/etc/master-apps/saf_all_indexes/local/indexes.conf`         
 [te]
 homePath      = $SPLUNK_DB/te/db
 coldPath      = $SPLUNK_DB/te/colddb
 thawedPath    = $SPLUNK_DB/te/thaweddb
 repFactor=auto
 
-2.3 cluster bundle push
-2.3.1 /opt/splunk/bin/splunk list cluster-peers
-2.3.1 splunk validate cluster-bundle
-2.3.2 splunk apply cluster-bundle
+ - 2.3 cluster bundle push
+   - 2.3.1 `/opt/splunk/bin/splunk list cluster-peers`
+   - 2.3.2 `splunk validate cluster-bundle`
+   - 2.3.3 `splunk apply cluster-bundle`
 
 
 
@@ -303,17 +311,17 @@ repFactor=auto
 #
 ##################
 
-#estimate size and age
+#estimate size and age          
 
-| rest /services/data/indexes/
-| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te
-| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >=1, currentDBSizeMB/1024, null())
-| eval elapsedTime = now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")
-| eval dataAge = ceiling(elapsedTime / 86400)
-| stats sum(indexSizeGB) AS totalSize max(dataAge) as oldestDataAge by title
-| eval totalSize = if(isnotnull(totalSize), round(totalSize, 2), 0)
-| eval oldestDataAge = if(isNum(oldestDataAge), oldestDataAge, "N/A")
-| rename title as "Index" totalSize as "Total Size (GB)" oldestDataAge as "Oldest Data Age (days)"
+| rest /services/data/indexes/  
+| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te     
+| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >=1, currentDBSizeMB/1024, null())     
+| eval elapsedTime = now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")    
+| eval dataAge = ceiling(elapsedTime / 86400)   
+| stats sum(indexSizeGB) AS totalSize max(dataAge) as oldestDataAge by title    
+| eval totalSize = if(isnotnull(totalSize), round(totalSize, 2), 0)     
+| eval oldestDataAge = if(isNum(oldestDataAge), oldestDataAge, "N/A")       
+| rename title as "Index" totalSize as "Total Size (GB)" oldestDataAge as "Oldest Data Age (days)"  
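The age math in the query above is just elapsed seconds bucketed into whole days; the same calculation as a small Python sketch:

```python
import math

def data_age_days(min_time_epoch: float, now: float) -> int:
    # Mirrors the SPL: ceiling((now - minTime) / 86400)
    return math.ceil((now - min_time_epoch) / 86400)

# e.g. oldest event indexed 10.5 days ago -> reported as 11 days
print(data_age_days(0, 10.5 * 86400))  # 11
```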
 
 
 1. adjust CM and push out new data retention limits per customer email
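Retention is normally set per index in the CM's `indexes.conf` bundle; a hedged sketch (the index name matches the notes, but the 90-day value is a placeholder, not the customer's actual limit):

```
[te]
# buckets roll to frozen (deleted/archived) once their newest event exceeds this age
frozenTimePeriodInSecs = 7776000
```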
@@ -343,18 +351,19 @@ tar cvzf saf_myindex_index.tar.gz myindex/
 without encryption
 tar cvf /hubble.tar hubble/
 
-trying this: https://github.com/jeremyn/s3-multipart-uploader
+trying this: [Github repo for s3-multipart-uploader](https://github.com/jeremyn/s3-multipart-uploader)
 
 use virtualenv 
-
+```
 bin/python s3-multipart-uploader-master/s3_multipart_uploader.py -h
 
 bucket name mdr-saf-off-boarding
 
 bin/aws s3 cp /opt/splunkdata/hot/saf_te_index.tar.gz s3://mdr-saf-off-boarding/saf_te_index.tar.gz
+```
 
 DID NOT NEED TO USE THE MULTIPART uploader! 
-
+```
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_mscas_index.tar.gz --expires-in 86400
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_o365_index.tar.gz --expires-in 86400
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_dns_index.tar.gz --expires-in 86400
@@ -362,3 +371,4 @@ aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_forescout_index.
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_network_index.tar --expires-in 86400
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_security_index.tar.gz --expires-in 86400
 aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 86400
+```

+ 29 - 28
Splunk Upgrade Notes.md

@@ -18,52 +18,53 @@ Software is located in Duane's One drive.
             1. Upgrade using the Python 2 runtime and make minimal changes to Python code
     3. AFS/NGA upgrade
         1. Update salt pillar data to 8.0.5 repo to reflect new splunk repo.
-        0.2 Dump all passwords from the password store PRIOR to upgrade. 
-            0.2.1 Run on the HF: `| rest /services/storage/passwords`
+           - 0.2 Dump all passwords from the password store PRIOR to upgrade. 
+              - 0.2.1 Run on the HF: `| rest /services/storage/passwords`
         2. Ensure recent backup of SH EBS
         3. upgrade indexers: stop all at the same time
-        3.1. apply the updated pillar data`salt afs* saltutil.refresh_pillar`
-        3.2. verify the pillar is updated`salt afs* pillar.item yumrepos:splunk`
-        3.3. verify there is enough disk space
+           - 3.1. apply the updated pillar data `salt afs* saltutil.refresh_pillar`
+           - 3.2. verify the pillar is updated `salt afs* pillar.item yumrepos:splunk`
+           - 3.3. verify there is enough disk space
         4. Upgrade CM
-            0.1 Setup silence on Sensu for ALL servers
-            1. Run: `state.sls splunk.new_install` to update repo ; yes it will restart splunk. (ROOM FOR IMPROVEMENT: Make new saltstate for splunk repo)
-            2. Stop splunk `cmd.run 'systemctl stop splunk'`
-            3. Upgrade splunk `pkg.upgrade name=splunk`
-            3.1 Splunk is now waiting for accept license. Do Not Start Splunk Until after indexers are upgraded.
+           - 0.1 Setup silence on Sensu for ALL servers
+              1. Run: `state.sls splunk.new_install` to update repo; yes, it will restart splunk. (ROOM FOR IMPROVEMENT: Make new saltstate for splunk repo)
+              2. Stop splunk `cmd.run 'systemctl stop splunk'`
+              3. Upgrade splunk `pkg.upgrade name=splunk`
+                 - 3.1 Splunk is now waiting for accept license. Do Not Start Splunk Until after indexers are upgraded.
         5. Upgrade SH
-            0.1 Setup silence on Sensu
-            1. Run: `state.sls splunk.new_install` to update repo
-            2. Stop splunk `cmd.run 'systemctl stop splunk'`
-            2.1 Backup /opt/splunk `tar -cvzf /opt/splunk/opt-splunk-backup.tar.gz /opt/splunk`
-            3. Upgrade splunk `pkg.upgrade name=splunk`
-            3.1 Splunk is now waiting for accept license.
+            - 0.1 Setup silence on Sensu
+              1. Run: `state.sls splunk.new_install` to update repo
+              2. Stop splunk `cmd.run 'systemctl stop splunk'`
+                 - 2.1 Backup /opt/splunk `tar -cvzf /opt/splunk/opt-splunk-backup.tar.gz /opt/splunk`
+              3. Upgrade splunk `pkg.upgrade name=splunk`
+                 - 3.1 Splunk is now waiting for accept license.
         6. Upgrade Indexers
-            0.1 Setup silence on Sensu
-            1. Run: `state.sls splunk.new_install` to update repo
-            2. Stop splunk `cmd.run 'systemctl stop splunk'`
-            3. Upgrade splunk `pkg.upgrade name=splunk`
-            3. Start indexers and accept license `cmd.run 'systemctl start splunk'`
-            3.1 `cmd.run '/opt/splunk/bin/splunk version'`
-            3.2 `cmd.run '/opt/splunk/bin/splunk status'`
+            - 0.1 Setup silence on Sensu
+              1. Run: `state.sls splunk.new_install` to update repo
+              2. Stop splunk `cmd.run 'systemctl stop splunk'`
+              3. Upgrade splunk `pkg.upgrade name=splunk`
+              4. Start indexers and accept license `cmd.run 'systemctl start splunk'`
+                  - 4.1 `cmd.run '/opt/splunk/bin/splunk version'`
+                  - 4.2 `cmd.run '/opt/splunk/bin/splunk status'`
         7. Start CM and SH
             1. Start CM/SH and accept license `cmd.run 'systemctl start splunk'`
         8. Upgrade HF (slice only, not POPs)
             1. Run: `state.sls splunk.new_install` to update repo
             2. Stop splunk `cmd.run 'systemctl stop splunk'`
-            2.1 Backup /opt/splunk `tar -cvzf /opt/splunk/opt-splunk-backup.tar.gz /opt/splunk`
+               - 2.1 Backup /opt/splunk `tar -cvzf /opt/splunk/opt-splunk-backup.tar.gz /opt/splunk`
             3. Upgrade splunk `pkg.upgrade name=splunk`
             4. Start indexers and accept license `cmd.run 'systemctl start splunk'`
         9. After Splunk App Upgrades  
             1. Upgrade ES 5.0.1 -> 6.2.0
-                1. The app failed to upload to the SH. ( takes a long time ). Modify the etc/system/local/web.conf to allow large uploads. 
+                1. The app failed to upload to the SH (it takes a long time). Modify the `etc/system/local/web.conf` to allow large uploads. 
                ```
                [settings]
-                    max_upload_size = 1024```
+                    max_upload_size = 1024
+                ```
             2. See Matrix for other apps ( upgrade apps slowly so Brandon can troubleshoot errors!!!!)
             3. run geo ip DB update
                 1. `/usr/local/bin/maxmind-downloader.sh`
-            4. (Prevents 3 green checkmarks on CM) Update the CM bundle to include _cluster see here: https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-afs-cm/pull/9  (index _metrics and _introspection not in _cluster)
-            5. NGA has an additional check on the splunk HF IAM role for externalID. Besure to add the "patch" back in. See here:  https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-623. This is for the splunk_TA_aws app.
+            4. (Prevents 3 green checkmarks on CM) Update the CM bundle to include `_cluster`; see here: [Fixes for not replicating indexes?](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-afs-cm/pull/9) (index `_metrics` and `_introspection` are not in `_cluster`)
+            5. NGA has an additional check on the splunk HF IAM role for `externalID`. Be sure to add the "patch" back in. See here: [Jira Ticket - MSOCI-623 - Splunk AWS TA doesn't support --external-id when assuming an IAM role](https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-623). This is for the `splunk_TA_aws` app.
         10. Delete Sensu Silences
        11. Check the lastchance index for unusual data. If the ES upgrade introduces new indexes, and the new indexes are not on the Splunk indexers, then the data will be put into the lastchance index. 
 

+ 58 - 45
Terraform Notes.md

@@ -1,76 +1,80 @@
 # Terraform Notes.md
 
-Hashicorp Terraform is used to deploy AWS resources by writing code. 
+[Hashicorp Terraform](https://www.terraform.io/) is used to deploy AWS resources by writing code. 
 
 ## Folder Structure
 
-`00-cis-hardening` - CIS Hardening for MDR root - (Ryan D'Amour, how does this go to other accounts)
-`00-organizations-and-iam` - IAM Roles and Policies across accounts (NOTE: No workspaces, applies everywhere)
-`00-state-mgmt` - S3 buckets for state management (may be prerequisite for others)
-`01-eips` - Elastic IPs and Associated DNS Record (protection from accidentally deletion)
-`02-msoc_vpc` - Managed SOC VPC (msoc is old name) - Meat and potatoes of command and control
-`03-mgmt` - ? Maybe Unused ? - Most appears to be junk, tread carefully.
-`04-ghe` - GitHub Enterprise - May be junk, GHE may be created elsewehere. Tread carefully.
-`05-customer_portal` - Web App for Customers in Docker using ECR, in its own vpc, running on ec2 running docker, not in fargate)
-`10-custpod1` - Splunk Monitoring Console + junk (Could probably burn and update)
-`11-codebuild` - Code Build to make RPMs
-`12-fargate` - Fargate for syslog-ng that gets ghe logs into moose
-`100-moose` - Our splunk environment (watch for modules of modules of modules)
-`101-afs` - AFS Customer Environment
-`102-saf` - SAF ("Smart and Final") - Powered Down through console - DO NOT TOUCH THE TF
-`103-nga` - *FEDRAMP SPONSOR* NGA ("National Gallery of Art"), sometimes referred to as Gallery.
-`104-coalfire` - Our FedRAMP Auditors (Standard customer with kali box)
-`105-cf2` - Our FedRAMP Auditors 2nd Environment
-`106-ma-c19` - Massachusetts Covid-19 (Internal AFS customer)
-`107-la-c19` - Louisiana Covid-19 (Internal AFS customer)
-`common` - Common files that are symbolicly linked into other folders
-`modules` - Reusable code - Do not run terraform here! A mix of homebrewed and third party modules.
+`00-cis-hardening` - CIS Hardening for MDR root - (Ryan D'Amour, how does this go to other accounts)      
+`00-organizations-and-iam` - IAM Roles and Policies across accounts (NOTE: No workspaces, applies everywhere)     
+`00-state-mgmt` - S3 buckets for state management (may be prerequisite for others)      
+`01-eips` - Elastic IPs and Associated DNS Record (protection from accidental deletion)     
+`02-msoc_vpc` - Managed SOC VPC (msoc is old name) - Meat and potatoes of command and control     
+`03-mgmt` - ? Maybe Unused ? - Most appears to be junk, tread carefully.      
+`04-ghe` - GitHub Enterprise - May be junk, GHE may be created elsewhere. Tread carefully.     
+`05-customer_portal` - Web App for Customers in Docker using ECR, in its own VPC (running on EC2 with Docker, not in Fargate)      
+`10-custpod1` - Splunk Monitoring Console + junk (Could probably burn and update)     
+`11-codebuild` - Code Build to make RPMs      
+`12-fargate` - Fargate for syslog-ng that gets ghe logs into moose      
+`100-moose` - Our splunk environment (watch for modules of modules of modules)      
+`101-afs` - AFS Customer Environment      
+`102-saf` - SAF ("Smart and Final") - Powered Down through console - DO NOT TOUCH THE TF      
+`103-nga` - *FEDRAMP SPONSOR* NGA ("National Gallery of Art"), sometimes referred to as Gallery.      
+`104-coalfire` - Our FedRAMP Auditors (Standard customer with kali box)     
+`105-cf2` - Our FedRAMP Auditors 2nd Environment      
+`106-ma-c19` - Massachusetts Covid-19 (Internal AFS customer)     
+`107-la-c19` - Louisiana Covid-19 (Internal AFS customer)     
+`common` - Common files that are symbolically linked into other folders     
+`modules` - Reusable code - Do not run terraform here! A mix of homebrewed and third party modules.     
 
 ## TFswitcher
 
-https://warrensbox.github.io/terraform-switcher/
+* [Introduction to tfswitch - Main site](https://tfswitch.warrensbox.com/)
+* [tfswitch Github](https://github.com/warrensbox/terraform-switcher/)
+* [Installation](https://tfswitch.warrensbox.com/Install/)
 
-`brew install warrensbox/tap/tfswitch`
-`brew install warrensbox/tap/tgswitch`
+```
+brew install warrensbox/tap/tfswitch
+brew install warrensbox/tap/tgswitch
+```
 
 If there is a file that has a terraform version specified, running `tfswitch` will automatically switch to that version.
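For example, `tfswitch` picks up the `required_version` constraint from the Terraform code in the working directory (a sketch; the pinned version is illustrative):

```
terraform {
  required_version = "0.13.5"
}
```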
 
 ## Debug
 06/2020
-
+```
 # Enable debug
 export TF_LOG=DEBUG
 export TF_LOG_PATH=./terraform.log
 
 # Disable debug
 export TF_LOG=
-
+```
 ## Workspaces
 05/2020
 
 
 ------------------
-workspaces are being used to break up environments. 
-
+```
+#workspaces are being used to break up environments. 
 terraform workspace list
 terraform workspace select test
 
 
-Strange errors? Unexpected results? try this
+#Strange errors? Unexpected results? try this
 rm -rf .terraform
 terraform init
 
-State issues
+#State issues
 terraform state show aws_ami.msoc_base
 terraform refresh -target=data.aws_ami.msoc_base
+```
 
 Terraform also has a DynamoDB State lock (msoc-terraform-lock). This will prevent terraform state breakage. 
 
-To manually remove the lock: https://www.terraform.io/docs/cli/commands/force-unlock.html
+To manually remove the lock: [Terraform CLI - Command: force-unlock](https://www.terraform.io/docs/cli/commands/force-unlock.html)
 
 ------------------
-View TF code
-https://github.com/terraform-aws-modules
+View TF code [Terraform AWS modules Github](https://github.com/terraform-aws-modules)
 
 
 -------------------
@@ -78,40 +82,48 @@ Modules
 
 We are using the aws ec2-instance module
 
-https://registry.terraform.io/modules/terraform-aws-modules/ec2-instance/aws/2.13.0
-https://github.com/terraform-aws-modules/terraform-aws-ec2-instance
+* [Terrform Registry Modules](https://registry.terraform.io/browse/modules)
+
+* [Terraform module which creates EC2 instance(s) on AWS - v2.13.0](https://registry.terraform.io/modules/terraform-aws-modules/ec2-instance/aws/2.13.0)
 
+* [Terraform AWS Module Github](https://github.com/terraform-aws-modules/terraform-aws-ec2-instance)
 
-var.something means this is a module that needs the variable to run. Your code will fill the variable. 
+
+`var.something` means this is a module that needs the variable to run. Your code will fill the variable. 
 `data` is a read-only terraform object that queries the provider or generates something on the localhost
 `locals` are variables that can refer to variables or other locals
 `variables` - expecting data from somewhere else.
 `provider` - an instance of the API
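A minimal sketch tying these concepts together (all names and values are illustrative, not from our repos):

```
variable "name" {}                 # expects data from the caller (tfvars, module block)

locals {
  full_name = "msoc-${var.name}"   # locals can refer to variables or other locals
}

data "aws_ami" "base" {            # read-only query against the provider
  most_recent = true
  owners      = ["self"]
}

provider "aws" {                   # an instance of the API
  region = "us-gov-west-1"
}
```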
 
 Some files are symlinks.
-`ln -s ../common/variables.tf variables.tf`
-`ln -s ../amis.tf amis.tf`
-`ln -s ../../../../prod/aws-us-gov/mdr-prod-c2/090-instance-vault/README.md README.md`
+```
+ln -s ../common/variables.tf variables.tf
+ln -s ../amis.tf amis.tf
+ln -s ../../../../prod/aws-us-gov/mdr-prod-c2/090-instance-vault/README.md README.md
+```
 
 
 --------------------
 IAM Role 
 
 get this error?
+```
 aws_iam_policy.nga_instance_policy: Error creating IAM policy nga_instance_tag_read: AccessDenied:
+```
 
 add this
+```
   provider = "aws.iam_admin"
-  
+```
 -------------------
 
-in terraform .tf files when the self = true. that is for putting the security group into itself. e.g. add the security group to the security groups rules. 
-
-the terraform is setup in folders. each folder is a project and apply should be run in the folder. Common is the execption as some of the projects are dependent on that folder. 
+In Terraform `.tf` files, `self = true` puts the security group into itself, i.e. the rule is added to the security group's own rule set. 
 
-role and policy have to be done in the IAM terraform
+The Terraform is set up in folders. Each folder is a project, and `apply` should be run in that folder. `common` is the exception, as some of the projects depend on it. 
 
+Role and Policy have to be done in the IAM terraform
 
+```
 iam_data.tf
 
 02-msoc_vpc/lambda.tf with security groups
@@ -127,6 +139,7 @@ terraform apply -target=module.vpc_default_security_groups.aws_security_group_ru
 
 
 terraform apply -target=module.afs_cluster.module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_5672 -target=module.afs_cluster.module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_8081
+```
 
 ## Updating to TF13 and AWS 3
 

+ 56 - 55
Vault Notes.md

@@ -2,9 +2,9 @@
 
 Vault is set up with DynamoDB as the backend. Vault has 3 nodes in a cluster and an AWS ALB as the frontend. The vault is unsealed with AWS KMS instead of the usual master key.
 
-the vault binary is located at /usr/local/bin/vault
+the vault binary is located at `/usr/local/bin/vault`
 
-Additional Notes are located here: https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/vault/README.md 
+Additional Notes are located here: [msoc-infrastructure - Vault README.md](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/vault/README.md) 
 
 ## How to log into CLI on the Vault server. 
 
@@ -13,8 +13,8 @@ Additional Notes are located here: https://github.xdr.accenturefederalcyber.com/
 3. run this on vault-1 `vault login`
 4. paste token and login
 
-Auth Error? Try populating the Bash variables. 
-`export VAULT_ADDR=https://vault.mdr-test.defpoint.com`
+Auth Error? Try populating the Bash variables.      
+`export VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com`
 
 1. change made to the service file
 Unknown lvalue 'StartLimitIntervalSec' in section 'Service'
@@ -27,57 +27,58 @@ Oct 30 13:31:32 vault-1 systemd: [/etc/systemd/system/vault.service:16] Failed t
 
 ## TEST VAULT Notes
 
-https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/tree/master/salt/fileroots/vault
+[msoc-infrastructure - Vault README.md](https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/vault/README.md)
 
 1. stop vault service from salt on all vault instances
-1.1 salt vault* cmd.run 'systemctl stop vault'
+ - 1.1 `salt vault* cmd.run 'systemctl stop vault'`
 2. Wipe DynamoDB (select items -> Actions -> Delete) until there are no more items (BE SURE to BACK UP FIRST!)
 3. start vault
-3.1 run salt state to ensure it is in the correct state with all policies on disk. 
-3.2 salt vault* state.sls vault
-4. on vault-1, init vault RUN on the server not salt (avoid the recovery keys from getting into logs)
-4.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault operator init -tls-skip-verify=true -recovery-shares=5 -recovery-threshold=2
+ - 3.1 run salt state to ensure it is in the correct state with all policies on disk. 
+ - 3.2 `salt vault* state.sls vault`
+4. On `vault-1`, init Vault. RUN it on the server, not through salt (avoids the recovery keys ending up in logs)
+ - 4.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault operator init -tls-skip-verify=true -recovery-shares=5 -recovery-threshold=2`
 5. login 
-5.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault login -tls-skip-verify=true -method=token
-5.2 Do yourself a favor and setup some Bash Variables or run commands from salt 
-    export VAULT_ADDR=https://vault.mdr-test.defpoint.com
+ - 5.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault login -tls-skip-verify=true -method=token`
+ - 5.2 Do yourself a favor and setup some Bash Variables or run commands from salt
+ ``` 
+    export VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com
     export VAULT_ADDR=https://127.0.0.1
-    export VAULT_ADDR=https://vault.mdr.defpoint.com
+    export VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com
     export VAULT_SKIP_VERIFY=1
-    
+ ```
 
 6. setup okta auth
-6.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth enable okta
-6.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="api_token_here"
-6.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="$( cat ~/.okta-token )"
-6.3 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth list
-6.4 set the TTL for the okta auth method
-6.4.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth tune -default-lease-ttl=3h -max-lease-ttl=3h okta/
+ - 6.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault auth enable okta`
+ - 6.2 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="api_token_here"`
+ - 6.2 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="$( cat ~/.okta-token )"`
+ - 6.3 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault auth list`
+ - 6.4 Set the TTL for the okta auth method
+   - 6.4.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault auth tune -default-lease-ttl=3h -max-lease-ttl=3h okta/`
 
 
 7. Enable/add Policies
-7.1 vault policy write -tls-skip-verify=true admins /etc/vault/admins.hcl
-7.2 vault policy write -tls-skip-verify=true engineers /etc/vault/engineers.hcl
-7.2 vault policy write -tls-skip-verify=true clu /etc/vault/clu.hcl
-7.2 vault policy write -tls-skip-verify=true onboarding /etc/vault/onboarding.hcl
-7.2 vault policy write -tls-skip-verify=true portal /etc/vault/portal.hcl
-7.2 vault policy write -tls-skip-verify=true soc /etc/vault/soc.hcl
-7.2 vault policy write salt-master /etc/vault/salt-master.hcl
-7.2 vault policy write saltstack/minions /etc/vault/salt-minions.hcl
-
-8 Add external groups
-8.1 vault write identity/group name="admins" policies="admins" type="external"
-8.2 vault write identity/group name="mdr-engineers" policies="engineers" type="external"
-8.3 vault write identity/group name="vault-admins" policies="admins" type="external"
-8.4 vault write identity/group name="soc-lead" policies="soc" type="external"
-8.5 vault write identity/group name="soc-tier-3" policies="soc" type="external"
+ - 7.1 `vault policy write -tls-skip-verify=true admins /etc/vault/admins.hcl`
+ - 7.2 `vault policy write -tls-skip-verify=true engineers /etc/vault/engineers.hcl`
+ - 7.3 `vault policy write -tls-skip-verify=true clu /etc/vault/clu.hcl`
+ - 7.4 `vault policy write -tls-skip-verify=true onboarding /etc/vault/onboarding.hcl`
+ - 7.5 `vault policy write -tls-skip-verify=true portal /etc/vault/portal.hcl`
+ - 7.6 `vault policy write -tls-skip-verify=true soc /etc/vault/soc.hcl`
+ - 7.7 `vault policy write salt-master /etc/vault/salt-master.hcl`
+ - 7.8 `vault policy write saltstack/minions /etc/vault/salt-minions.hcl`
+
+8. Add external groups
+ - 8.1 `vault write identity/group name="admins" policies="admins" type="external"`
+ - 8.2 `vault write identity/group name="mdr-engineers" policies="engineers" type="external"`
+ - 8.3 `vault write identity/group name="vault-admins" policies="admins" type="external"`
+ - 8.4 `vault write identity/group name="soc-lead" policies="soc" type="external"`
+ - 8.5 `vault write identity/group name="soc-tier-3" policies="soc" type="external"`
 
 9. Add aliases through the GUI. (Use the root token to log in, or better, a temp root token.)
-9.1 Access -> Groups -> admins -> Aliases -> Create alias -> mdr-admins
-9.2 Access -> Groups -> mdr-engineers -> Aliases -> Create alias -> mdr-engineers
-9.3 Access -> Groups -> vault-admins -> Aliases -> Create alias -> vault-admin
-9.4 Access -> Groups -> soc-lead -> Aliases -> Create alias -> Analyst-Shift-Lead
-9.5 Access -> Groups -> soc-tier-3 -> Aliases -> Create alias -> Analyst-Tier-3 
+ - 9.1 Access -> Groups -> admins -> Aliases -> Create alias -> mdr-admins
+ - 9.2 Access -> Groups -> mdr-engineers -> Aliases -> Create alias -> mdr-engineers
+ - 9.3 Access -> Groups -> vault-admins -> Aliases -> Create alias -> vault-admin
+ - 9.4 Access -> Groups -> soc-lead -> Aliases -> Create alias -> Analyst-Shift-Lead
+ - 9.5 Access -> Groups -> soc-tier-3 -> Aliases -> Create alias -> Analyst-Tier-3 
 
 groups              alias               policy
 admins              mdr-admins          admins
@@ -86,22 +87,22 @@ vault-admins        vault-admin         admins
 soc-lead            Analyst-Shift-Lead  soc
 soc-tier-3          Analyst-Tier-3      soc
 
-10 enable the file audit 
-10.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault audit enable -tls-skip-verify=true file file_path=/var/log/vault.log
+10. enable the file audit 
+- 10.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault audit enable -tls-skip-verify=true file file_path=/var/log/vault.log`
 
-11 enable the aws & approle auth
-11.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth enable -tls-skip-verify=true aws
-11.2 setup approle auth using the salt-master policy
-11.2.1 vault auth enable approle
-11.2.2 vault write auth/approle/role/salt-master token_max_ttl=3h token_policies=salt-master
+11. enable the aws & approle auth
+- 11.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault auth enable -tls-skip-verify=true aws`
+- 11.2 Setup approle auth using the salt-master policy
+   - 11.2.1 `vault auth enable approle`
+   - 11.2.2 `vault write auth/approle/role/salt-master token_max_ttl=3h token_policies=salt-master`
 
-12 configure the aws policies on the role (clu and portal) UPDATE THE AWS ACCOUNT!!!
-12.1  VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write auth/aws/role/portal auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/portal-instance-role policies=portal max_ttl=24h
-12.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write auth/aws/role/clu auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/clu-instance-role policies=clu max_ttl=24h
+12. configure the aws policies on the role (clu and portal) UPDATE THE AWS ACCOUNT!!!
+- 12.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault write auth/aws/role/portal auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/portal-instance-role policies=portal max_ttl=24h`
+- 12.2 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault write auth/aws/role/clu auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/clu-instance-role policies=clu max_ttl=24h`
 
 
-13 Create the kv V2 secret engines
-VAULT_ADDR=https://vault.mdr-test.defpoint.com ~/Documents/MDR/Vault/vault secrets enable -path=engineering kv-v2
+13. Create the kv V2 secret engines
+`VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com ~/Documents/MDR/Vault/vault secrets enable -path=engineering kv-v2`
 vault secrets enable -path=engineering kv-v2
 vault secrets enable -path=ghe-deploy-keys kv-v2
 vault secrets enable -path=jenkins kv-v2
@@ -118,7 +119,7 @@ vault write salt/pillar_data auth="abc123"
 
 
 
-14 export the secrets (be sure to export your bash variable for VAULT_TOKEN DON'T Use ROOT TOKEN!)
+14. Export the secrets (be sure to export your bash variable for `VAULT_TOKEN`; DON'T use the ROOT TOKEN!)
 
 #export
 ```
@@ -188,7 +189,7 @@ curl -v --header "X-Vault-Token:$VAULT_TOKEN" --request LIST \
 
 
 8. map okta to policies ( not needed )
-8.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault policy write -tls-skip-verify=true auth/okta/groups/mdr-admins policies=admins
+8.1 `VAULT_ADDR=https://vault.pvt.xdrtest.accenturefederalcyber.com vault policy write -tls-skip-verify=true auth/okta/groups/mdr-admins policies=admins`
 
 
 ## Vault Logs

+ 5 - 5
duane-random.md

@@ -5,10 +5,10 @@
 Software vendors who do "cloud management" things, roughly described as
 double-checking either your IAC or your actual built cloud stuff.
 
-* https://www.accurics.com/products/terrascan/
-* https://bridgecrew.io/
-* https://www.cloudtamer.io/
-* https://turbot.com/features/
-* https://cloudcheckr.com/
+* [Terrascan](https://www.accurics.com/products/terrascan/)
+* [bridgecrew](https://bridgecrew.io/)
+* [cloudtamer](https://www.cloudtamer.io/)
+* [turbot](https://turbot.com/features/)
+* [CloudCheckr](https://cloudcheckr.com/)