Brad Poulton 4 years ago
parent
commit
d9d526d047

+ 34 - 11
AWS New Account Setup Notes.md

@@ -1,5 +1,9 @@
 # XDR AWS New Account Setup Notes
 
+## Timecode
+
+You should be using the customer T&E charge code. If you don't have one, you can put the time into a suspense code and switch it to the correct timecode when you get it. The suspense code is SSPNS.500.001.001, Contract Civilian Sus Lbr.
+
 ## Request a new account from AWS via AFS 
 AFS Help -> Submit a request -> non standard software and pre-approved project management tools -> cloud managed services
 
@@ -27,9 +31,9 @@ git clone https://github.com/duckfez/aws-mfa.git # This is a patched version to
 1. Record all account information in [msoc-infrastructure-wiki `cloud-accounts.md`](https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/wiki/cloud-accounts) doc
 1. Go to https://vault.mdr.defpoint.com
 1. Navigate to `engineering/cloud/aws/root-creds/`:
-  * Create new entry for the account alias.
+  * Create a new entry for the account alias. Use the naming scheme `mdr-prod-${CUSTOMERPREFIX}`
   * Copy json from existing entry - should contain both commercial and govcloud records
-  * Create a new version of the new secret and add the json.
+  * Create a new version of the new secret and add the json
   * if needed, add a field for the MFA secret called commerical_mfa_secret and gov_mfa_secret
 1. Login to the AWS account via web browser.
 1. It's possible that CAMRS will make "our user" named `IAMAdmin`, but also possible it will be `MDRAdmin`.  We have
@@ -40,11 +44,12 @@ things that expect it to be `MDRAdmin`.  If the account we get is `IAMAdmin` the
    4. Attach the policy `IAMUserChangePassword` directly to the user
    5. put the user in the `camrs-group-iam` group
    6. Log out of `IAMAdmin`, log in to `MDRAdmin` 
-   7. Delete `IAMAdmin` from AWS and your personal virtual authenticator and proceed
 1. Change password to something that does not include json characters and record in the vault.
 2. Follow instructions for ["Using Vault for TOTP things", section "Adding a new TOTP Code" in cloud-accounts.md](https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/wiki/cloud-accounts#adding-a-new-totp-code---especially-for-an-aws-account) to configure and store the MFA token for the root account.
+3. Put the MFA secret key into the `*_mfa_secret` fields in Vault for both commercial and gov (see the sketch after this list). 
 3. Sign out and back in. (Not optional! Required because of the MFA requirement in IAM policies)
 4. Go back to IAM and create access keys.
+5. Delete `IAMAdmin` from AWS and your personal virtual authenticator.
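+
+A minimal CLI sketch for storing the MFA secrets, assuming the `engineering/` mount is KV v2 and using the naming scheme from above (the web UI steps work just as well; field names match the existing entries):
+```
+vault kv patch engineering/cloud/aws/root-creds/mdr-prod-${CUSTOMERPREFIX} \
+  commerical_mfa_secret='<commercial base32 secret>' \
+  gov_mfa_secret='<gov base32 secret>'
+```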
 
 Repeat for additional accounts
 
@@ -57,8 +62,11 @@ aws_access_key_id = <blah>
 aws_secret_access_key = <blah>
 aws_mfa_device = arn:{partition}:iam::{account}:mfa/MDRAdmin
 ```
-1. Run `aws-mfa --profile tmp --region={region}` ( Note: No `-long-term`, because script assumes it )
-1. Verify account number: `AWS_PROFILE=tmp aws sts get-caller-identity  --region={region}`
+Partition should be `aws` or `aws-us-gov`. 
+Region should be `us-gov-east-1` or `us-east-1`.
+
+1. Run `aws-mfa --profile tmp --region={region}` (Note: no `-long-term`, because the script assumes it). To switch from gov to commercial, use the `--force` flag (a worked example follows this list). 
+1. Verify account number: `AWS_PROFILE=tmp aws sts get-caller-identity --region={region}`
 1. Update and branch xdr-terraform-live Git repo
 1. Name the branch feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
 1. This branch will be used in future steps
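+
+A worked example of the aws-mfa steps above, GovCloud first and then switching the same profile to commercial (partition/region values as listed above; account output will differ):
+```
+# GovCloud session for the tmp profile
+aws-mfa --profile tmp --region=us-gov-east-1
+AWS_PROFILE=tmp aws sts get-caller-identity --region=us-gov-east-1
+
+# Switch the same profile to the commercial partition (forces a new session)
+aws-mfa --profile tmp --region=us-east-1 --force
+AWS_PROFILE=tmp aws sts get-caller-identity --region=us-east-1
+```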
@@ -68,16 +76,26 @@ cp -r ~/xdr-terraform-live/000-skeleton/ ~/xdr-terraform-live/prod/aws-us-gov/md
 cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 vim README.md # Add a description of the account
 vim account.hcl # Fill in all "TODO" items, but leave "LATER" items (such as qualys) to be completed later.
-cd 005-iam
 ```
+
+If the account is NOT GOING TO BE USED, run these commands. NOTE: This would probably apply only to the commercial account. 
+
+```
+echo "This account is unused" > UNUSED.ACCOUNT
+rm -rf 010-vpc-splunk/ 021-qualys-connector-role/ 025-test-instance/ 072-salt-master-inventory-role/ 140-splunk-frozen-bucket/ 150-splunk-cluster-master/ 160-splunk-indexer-cluster/ 170-splunk-searchhead/ 180-splunk-heavy-forwarder/
+```
+
+1. cd into the IAM directory
+`cd 005-iam`
+
 1. Double-check / fix the profile
 
 ```
 vim terragrunt.hcl
-# Check TODO items, make sure the profile listed is right / matches what you have in above step
+# Check TODO items; make sure the profile (tmp) listed is correct and matches what you set up in the step above
 ```
 
-1. Apply the configuration: # *There may be a step missing here, where we set the profile somewhere?*
+1. Apply the configuration: 
 ```
 saml2aws -a commercial login
 saml2aws -a govcloud login
@@ -85,6 +103,9 @@ terragrunt init
 terragrunt validate
 terragrunt apply
 ```
+
+If `terragrunt apply` hangs and doesn't do anything, you need to authenticate with aws-mfa again. 
+
 1. Comment-out the provisioning provider block and validate that terragrunt can be applied with the normal xdr-terraformer roles from root account
 ```
 vim terragrunt.hcl
@@ -93,14 +114,16 @@ terragrunt apply
 # Should be no changes
 ```
 
-If necessary, repeat for the 'commercial' account, but we generally do not need to do this and let the commercial account sit idle.  (TRUE?  Don't we want account standards at least?)
+If necessary, repeat for the 'commercial' account. The commercial account needs to be configured.
 
 If everything is working correctly, delete the AWS access keys from the MDRAdmin user. Update `AWS Notes.md` and add the new account to the shared AWS configuration. The new configuration should match this format.
 
 `vim ~/.aws/config`
+
+GovCloud Format
 ```
-[profile mdr-prod-bas]
-role_arn = arn:aws-us-gov:iam::081915784976:role/user/mdr_terraformer
+[profile mdr-prod-${CUSTOMERPREFIX}-gov]
+role_arn = arn:aws-us-gov:iam::{account}:role/user/mdr_terraformer
 region = us-gov-east-1
 color = 369e1a
 source_profile = govcloud

+ 6 - 0
AWS Notes.md

@@ -355,6 +355,12 @@ region = us-gov-east-1
 color = 369e1a
 source_profile = govcloud
 
+[profile mdr-prod-doed-gov]
+role_arn = arn:aws-us-gov:iam::137793331041:role/user/mdr_terraformer
+region = us-gov-east-1
+color = 369e1a
+source_profile = govcloud
+
 ;
 ;CYBERRANGE
 ;

+ 8 - 0
DNSSEC Notes.md

@@ -1,8 +1,16 @@
 # DNSSEC Notes
 
+
 ## unbound server
 2020-08-05
 
+Unbound is installed on the 2 resolver servers:
+gc-prod-resolver-govcloud-2
+gc-prod-resolver-govcloud
+
+If DNS resolution stops working, check the unbound service and restart it if needed.
+`systemctl status unbound`
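+
+If a restart is needed, a minimal sketch (assumes unbound answers on the local interface; adjust the test query):
+```
+sudo systemctl restart unbound
+dig @127.0.0.1 www.example.com
+```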
+
 AWS resolvers can't play any part whatsoever in DNSSEC. They just break it.
 
 So unbound servers need external DNS.

+ 86 - 56
New Customer Setup Notes - GovCloud.md

@@ -33,6 +33,8 @@ You will need the following. Setting environment variables will help with some o
 
 IMPORTANT NOTE: Each time you run this, it will generate new passwords. So make sure you use the same window to perform all steps!
 
+Do you have a Splunk license yet? No? Can you use a temp/dev license until the real one shows up? I hate doing that, but not much of a choice. 
+
 Commands tested on OSX and may not (probably won't) work on windows/linux.
 
 ```
@@ -58,10 +60,11 @@ Connect to production VPN
 
 Log into vault at https://vault.pvt.xdr.accenturefederalcyber.com (legacy: https://vault.mdr.defpoint.com)
 
-Record the following into `secrets/engineering/customer_slices/${CUSTOMERPREFIX}`
+Record the following into `engineering/customer_slices/${CUSTOMERPREFIX}`
 
 ```
 echo $ADMINPASS  # record as `${CUSTOMERPREFIX}-splunk-cm admin`
+echo "${CUSTOMERPREFIX}-splunk-cm admin"
 ```
 
 At this time, we don't set the others on a per-account basis through salt, though it looks like admin password has been changed for some clients.
@@ -69,6 +72,8 @@ At this time, we don't set the others on a per-account basis through salt, thoug
 
 ## Step x: Update and Branch Git
 
+You may have already created a new branch in xdr-terraform-live in a previous step.
+
 ```
 cd ~/msoc-infrastructure
 git checkout develop
@@ -116,10 +121,12 @@ Add permissions for the okta apps:
     * Analyst
     * mdr-admins
     * mdr-engineers
-  * For CM:
+  * For CM & HF:
     * mdr-admins
     * mdr-engineers
 
+1) While logged into Okta, add the Splunk logo to the apps. It is located in msoc-infrastructure/tools/okta_app_maker/okta-logo-splunk.png
+
 
 ## Step x: Add the license file to salt
 
@@ -174,47 +181,23 @@ Review the file to make sure everything looks good.
 
 Add to gitfs pillars and allow salt access:
 ```
-vim salt_master.conf
-# Copy one of the customer_repos with the new customer
+vim salt_master.sls
+# Copy one of the customer_repos entries for the new customer. Update both the CM repo and the DS repo, unless you know there will not be LCP/POP nodes.
 vim ~/msoc-infrastructure/salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
 #Add customer prefix to ACL
-git add salt_master.conf ~/msoc-infrastructure/salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
+
 ```
 
-Migrate pillars through to master branch:
+Migrate changes through to master branch:
 ```
-git add top.sls ${CUSTOMERPREFIX}_variables.sls
+git add ../fileroots/splunk/files/licenses/${CUSTOMERPREFIX}/<your-license-file>
+git add ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
+git add salt_master.sls top.sls ${CUSTOMERPREFIX}_variables.sls os_settings.sls
 git commit -m "Adds ${CUSTOMERPREFIX} variables. Will promote to master immediately."
 git push origin feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
 ```
 
-Follow the link to create the PR, and then submit another PR to master.
-
-
-## Step x: Update the salt master
-
-Once approved, update the salt master
-
-```
-ssh gc-prod-salt-master
-sudo vim /etc/salt/master.d/default_acl.conf
-# Grant users access to the new prefix
-# save and exit
-salt 'salt*' cmd.run 'salt-run fileserver.update'
-sudo service salt-master restart
-exit
-```
-
-Brad's way default_acl.conf should have been updated in git
-
-```
-ssh gc-prod-salt-master
-salt 'salt*' cmd.run 'salt-run fileserver.update'
-salt 'salt*' state.sls salt_master.salt_master_configs test=true
-sudo salt 'salt*' state.sls salt_master.salt_posix_acl test=true
-exit
-```
-
+Follow the link to create the PR, and then submit another PR to master and get the changes merged into the master branch. 
 
 ## Step x: Create customer repositories
 
@@ -233,7 +216,21 @@ Create a new repository using the cm template:
 	* automation - Read
 	* onboarding - Write
 
-Clone and modify the password (TODO: Just take care of this in salt):
+Repeat for the pop repo, unless the customer will not have pop nodes. 
+
+1. Browse to https://github.mdr.defpoint.com/mdr-engineering/msoc_skeleton_pop
+2. Click "use this template"
+  a. Name the new repository `msoc-${CUSTOMERPREFIX}-pop`
+  b. Give it the description: `Splunk POP Configuration for [CUSTOMER DESCRIPTION]`
+  c. Set permissions to 'Private'
+  d. Click 'create repository from template'
+3. Click on 'Settings', then 'Collaborators and Teams', and add the following:
+	* infrastructure - Admin
+	* automation - Read
+	* onboarding - Write
+
+
+Clone and modify the password in the CM repo (TODO: Just take care of this in salt):
 
 ```
 mkdir ~/tmp
@@ -246,7 +243,19 @@ git add passwd
 git commit -m "Stored hashed passwords"
 git push origin master
 ```
-Repeat for msoc-${CUSTOMERPREFIX}-pop repo 
+
+
+## Step x: Update the salt master with new configs
+
+Now that we have the git repos created, let's update the salt master. 
+
+```
+ssh gc-prod-salt-master
+salt 'salt*' cmd.run 'salt-run fileserver.update'
+salt 'salt*' state.sls salt_master.salt_master_configs --state-output=changes test=true
+sudo salt 'salt*' state.sls salt_master.salt_posix_acl --state-output=changes test=true
+exit
+```
 
 ## Step x: Set up xdr-terraform-live account
 
@@ -257,17 +266,19 @@ cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 vim account.hcl # Fill in all "TODO" items. Leave the "LATER" variables for later steps.
 ```
 1. Update the ref version in all the terragrunt.hcl files to match latest tag on modules git repo. Replace v1.XX.XX with the current tag. 
-2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.0.0/?ref=v1.XX.XX/ {} \;`
+2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.10.17/?ref=v1.xx.xx/ {} \;`
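+
+A quick way to confirm nothing still points at an old ref (sketch; adjust the pattern if your tags differ):
+```
+grep -rn '?ref=' --include=terragrunt.hcl . | grep -v '.terragrunt-cache'
+```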
 
 
 ## Step x: Add account to global variables, and apply necessary prerequisites
 
 1. Add the account number to `account_map["prod"]` in :
-  * `~/xdr-terraform-live/prod/aws-us-gov/partition.hcl`
-  * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl`
-2. `cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2`
-2. Create PR and get changes approved
+  * `~/xdr-terraform-live/prod/aws-us-gov/partition.hcl` OR `vim ../partition.hcl`
+  * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl` OR `vim ../../../common/aws-us-gov/partition.hcl`
+2. `cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2` OR `cd ../mdr-prod-c2/`
+2. Create a PR and get the changes approved. The commit message should be "Adds ${CUSTOMERPREFIX} customer"
 3. Apply the modules: (* draft: are there more external requirements? *)
+
+Copy and paste these commands into the command line and run them. 
 ```
 for module in 005-account-standards-c2 008-transit-gateway-hub
 do
@@ -277,7 +288,7 @@ do
 done
 ```
 4. `cd ~/xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/`
-4. `cd ../../../../common/aws-us-gov/afs-mdr-common-services-gov/`
+4. `cd ../../../common/aws-us-gov/afs-mdr-common-services-gov/`
 5. Apply the modules:
 ```
 for module in 008-xdr-binaries 010-shared-ami-key 
@@ -294,7 +305,7 @@ done
 The new AWS account needs permissions to access the AMIs before trying to create EC2 instances. Replace the aws-account-id in the below command. 
 
 ```
-cd ~/xdr-terraform-live/bin/
+cd ~/xdr-terraform-live/bin/ # OR cd ../../../bin/
 AWS_PROFILE=mdr-common-services-gov update-ami-accounts <aws-account-id>
 ```
 
@@ -305,7 +316,7 @@ The `xdr-terraform-live/bin` directory should be in your path. You will need it
 (n.b., if you are _certain_ everything is good to go, you can do a `yes yes |` before the `terragrunt-apply-all` to bypass prompts. This does not leave you an out if you make a mistake, however, because it is difficult to break out of terragrunt/terraform without causing issues.)
 
 ```
-cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
+cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} # OR cd ../prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 terragrunt-apply-all --skipqualys --notlocal
 ```
 
@@ -327,6 +338,7 @@ Workaround is:
 cd 010-vpc-splunk
 terragrunt apply -target module.vpc
 terragrunt apply
+cd ..
 ```
 You might run into an error when applying the test instance module `025-test-instance`.
 Error reads as:
@@ -362,9 +374,10 @@ Short version:
 1. Pick the Regions that should be in scope (all of them), hit "Continue"
 1. Check the "Automatically Activate" buttons for VM and PC Scanning application
 1. Pick these tag(s):  AWS_Prod,
+1. Hit "Continue", then "Finish".
 1. Should be done with the wizard now. Back in the main list view click the drop-down and pick "Run" to pull current Assets
 
-It should come back with a number of assets, no errors, and a hourglass for a bit.
+It should come back with a number of assets (probably about 6), no errors, and an hourglass for a bit.
 
 
 ## Step x: Finalize the Salt
@@ -399,8 +412,9 @@ System hangs appear to be because of a race condition with startup of firewalld
 
 
 
-## Update the salt pillars with the encrypted forms
+## TODO: Update the salt pillars with the encrypted forms
 
+Because we are not managing the splunk.secret, the pass4SymmKey gets encrypted into different values on each of the indexers. This causes the file containing the pass4SymmKey to be updated by Splunk on every Salt highstate. To resolve this, we would need to manage the splunk.secret file.   
 TODO: Document a step of updating the `pillars/${CUSTOMERPREFIX}_variables.sls` with encrypted forms of the passwords.
 
 
@@ -408,7 +422,7 @@ TODO: Document a step of updating the `pillars/${CUSTOMERPREFIX}_variables.sls`
 
 Log into https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com
 `echo "https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com"`
-`echo "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com"`
+`echo "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8000"`
 
 It should "just work".
 
@@ -442,7 +456,7 @@ Note from Duane:  Should work anywhere.  Main goal was to see that the cluster b
 
 ## Splunk configuration
 
-* Install ES on the search head Version 6.2 ( NOT 6.4.0 ! Until George tells us otherwise )
+* Install ES version 6.2 on the search head (NOT 6.4.0! Until George tells us otherwise!)
 
 
 1. Download ES app from Splunk using your splunk creds
@@ -459,24 +473,40 @@ Note from Duane:  Should work anywhere.  Main goal was to see that the cluster b
 1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "[settings]" > /opt/splunk/etc/system/local/web.conf'`
 1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "max_upload_size = 1024" >> /opt/splunk/etc/system/local/web.conf'`
 1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
-1. Restart SH via the GUI and upload via the GUI ( takes a long time to upload )
+1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
+1. Upload via the GUI (takes a long time to upload)
 1. Choose "Set up now" and "Start Configuration Process"
 1. ES should complete app actions on its own, then prompt for a restart
 
 ### remove the web.conf file
 1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
 1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'rm -rf /opt/splunk/etc/system/local/web.conf'`
-1. restart SH
+1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
+
 
+## Monitoring Console ( skip if demo cluster )
 
-## Monitoring Console ( skip if demo cluster? )
+Note: Once the Legacy Monitoring Console has moved to GC, the SGs will need to be fixed.
 
-* Add cluster to monitoring console 
-* Peer with CM, SH, and HF
-* Update MC topology
+1. Add master_uri and pass4symmkey to salt/pillar/mc_variables.sls
+1. Commit to git with the message "Adds variables for Monitoring Console" and, once approved, highstate the Moose MC.
+1. `sudo salt-run fileserver.update`
+1. `salt splunk-mc* state.sls splunk.monitoring_console --output-diff test=true`
+1. Splunk should restart and the new Splunk CMs will show up in the MC ( Settings -> Indexer Clustering )
+1. After applying the code, pull the encrypted values of the pass4symmkey out of the Splunk config file and replace them in the salt state. Then create a PR for git. Set the git commit message to "Swaps pass4symmkey with encrypted value".
+1. `salt splunk-mc* cmd.run 'cat /opt/splunk/etc/apps/connected_clusters/local/server.conf'`
+1. Ensure the new cluster is showing up in Settings -> Indexer Clustering (should see 4 check marks and at least 3 peers). If not, verify firewall rules (a connectivity check is sketched after this list). 
+1. Add CM as a search peer by going to Settings -> Distributed Search -> Search Peers
+1. Input the Peer URI (https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8089) and remote admin credentials. For the CM, the remote admin credentials are in Vault at engineering -> customer_slices -> ${CUSTOMERPREFIX}
+1. Repeat for SH and HF and use the correct Splunk creds. `salt ${CUSTOMERPREFIX}-splunk-sh* pillar.get secrets:splunk_admin_password`
+1. Verify all customer instances are connected to the search peer by searching for customer prefix in the search peers webpage. 
+1. Update MC topology ( settings -> Monitoring Console -> Settings -> General Setup -> Apply Changes )
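+
+A quick connectivity check from the MC before adding search peers (sketch; uses the management port 8089 from the step above and prompts for the remote admin password from Vault):
+```
+curl -k -u admin https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8089/services/server/info
+```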
 
 
 ## Create New Vault KV Engine for Customer for Feed Management
+1. Log into Vault
+1. Enable a new engine of type KV
+1. Change the path and enable the engine (or use the CLI sketch below). 
 Naming Scheme: onboarding-<customer-name>
 Example: onboarding-la-covid
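+
+The same thing from the CLI (sketch; assumes a KV v2 engine is wanted — drop `-version=2` for v1):
+```
+vault secrets enable -path=onboarding-${CUSTOMERPREFIX} -version=2 kv
+```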
 
@@ -543,8 +573,6 @@ salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications.rhel
 salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications
 
 
-
-
 salt ${CUSTOMERPREFIX}-splunk
 Start with ds
 salt ${CUSTOMERPREFIX}-splunk-ds\* state.highstate --output-diff
@@ -554,6 +582,8 @@ salt ${CUSTOMERPREFIX}-splunk-syslog-\* state.sls os_modifications
 
 ## LCP Troubleshooting
 
+REMEMBER: Our customers are responsible for setting up the salt minion with grains and allowing traffic through the outbound firewall. If they have not done that yet, you will get more errors. 
+
 ISSUE: Help, the environment grain is not showing up!
 SOLUTION: This command will add a static grain in /etc/salt/minion.d/cloud_init_grains.conf. 
 `salt 'target' state.sls salt_minion.salt_grains pillar='{"environment": "prod"}' test=true --output-diff` 

+ 5 - 0
OSContext Notes.md

@@ -16,3 +16,8 @@ our external IPs to whitelist.  Here's the current list they have as of 2020-10-
 
 ```
 
+## OS Context Wiki Page
+
+https://github.mdr.defpoint.com/MDR-Content/mdr-content/wiki/Reference%3A-Threat-Intel-Vendor-Services#open-source-context---passive-dns
+
+`dig -t txt tor.domain.v.ble.oscontext.com.`

+ 2 - 0
OpenVPN Notes.md

@@ -8,6 +8,8 @@ the admin username is openvpn
 Helpful...
 https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/
 
+There is a strict dependency that openvpn be started after firewalld.
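+
+One way to make that ordering explicit is a systemd drop-in (a sketch, assuming the Access Server unit is named `openvpnas.service` — verify the unit name first with `systemctl list-units | grep -i openvpn`):
+```
+sudo mkdir -p /etc/systemd/system/openvpnas.service.d
+printf '[Unit]\nAfter=firewalld.service\nWants=firewalld.service\n' | sudo tee /etc/systemd/system/openvpnas.service.d/after-firewalld.conf
+sudo systemctl daemon-reload
+```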
+
 
 ## How to Reset ldap.read 
 

+ 4 - 6
Salt Upgrade Notes.md → Salt Upgrade 2019 -> 3001 Notes.md

@@ -1,28 +1,26 @@
-# Salt Upgrade Notes.md
+# Salt Upgrade 2019 -> 3001 Notes.md
 
 ### Places where code might need to be updated for a new version ( salt.repo )
 - packer/scripts/add-saltstack-repo.sh
 - base/salt_master/cloud-init/provision_salt_master.sh
 - salt/pillar/dev/yumrepos.sls
 
-## 3001 Upgrade
-
 ### Prep
 - update Pillars yumrepos:salt:version and yumrepos:salt:baseurl
 
 ### On the master
-- update repo `salt salt* state.sls os_modifications.repo_update --state-output=changes`
+- update repo `salt salt* state.sls os_modifications.repo_update --output-diff`
 - install gitpython on salt master for py3 `pip3 install gitpython`
 - `salt salt-master* cmd.run 'yum clean all ; yum makecache fast'`
 - `salt salt* cmd.run 'yum check-update'`
 - update `salt salt* pkg.upgrade name=salt-master`
-- `salt salt* state.sls salt_master.salt_posix_acl`
+- `salt salt* state.sls salt_master.salt_posix_acl --output-diff`
 - `salt salt* cmd.run 'systemctl restart salt-master'`
 - `salt salt*com state.sls salt_master.salt_master_configs test=true`
 
 
 ### On the minions
-- update repo `salt salt* state.sls os_modifications.repo_update --state-output=changes`
+- update repo `salt salt* state.sls os_modifications.repo_update --output-diff`
 - `salt salt* cmd.run 'yum clean all ; yum makecache fast'`
 - `salt salt* cmd.run 'yum check-update'`
 - update `salt salt* pkg.upgrade name=salt-minion`

+ 60 - 0
Salt Upgrade 3001.2 -> 3001.6 Notes.md

@@ -0,0 +1,60 @@
+# Salt Upgrade 3001.2 -> 3001.6 Notes.md
+
+### Places where code might need to be updated for a new version ( salt.repo )
+
+- packer/scripts/add-saltstack-repo.sh
+- salt/pillar/dev/yumrepos.sls
+- salt/pillar/prod/yumrepos.sls ( you can wait until after testing is done in test before deploying to prod )
+
+For your reference....
+- packer/scripts/provision-salt-master.sh   <- salt master is installed here
+- base/salt_master/cloud-init/provision_salt_master.sh   <- salt master is configured here
+
+
+## 3001.2 -> 3001.6
+
+- dev
+- legacy test
+- gc test
+- legacy prod
+- gc prod
+- LCP nodes
+
+Prep
+In the dev environment, the salt minion failed to start up after the upgrade. Might need a cronjob on the LCP nodes. 
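+
+A hypothetical watchdog for that (nothing like this is in place yet; path and interval are placeholders):
+```
+echo '*/15 * * * * root systemctl is-active --quiet salt-minion || systemctl restart salt-minion' | sudo tee /etc/cron.d/salt-minion-watchdog
+```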
+
+Ensure the pillar has been updated to the correct version. 
+```
+salt salt* cmd.run 'salt-run fileserver.update'
+salt salt* pillar.get yumrepos:salt:version
+```
+
+Update repo 
+```
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* state.sls os_modifications.repo_update test=true --output-diff
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* cmd.run 'yum clean all ; yum makecache fast'
+salt salt* cmd.run 'yum check-update | grep salt'
+salt salt* pkg.upgrade name=salt-master
+sudo salt salt* state.sls salt_master.salt_posix_acl --output-diff
+```
+
+Ack, the minions didn't come back! Stupid salt! Let's try something different:
+
+```
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* state.sls os_modifications.repo_update test=true --output-diff
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* cmd.run 'yum clean all ; yum makecache fast'
+salt salt* cmd.run 'yum check-update | grep salt'
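+# systemd-run --scope detaches the yum transaction from the salt-minion service, so the upgrade survives the minion restart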
+salt salt* cmd.run_bg 'systemd-run --scope yum update salt-minion -y && sleep 240 && systemctl daemon-reload && sleep 20 && systemctl start salt-minion'
+```
+
+Did you miss any?
+`salt -G saltversion:3001.3 test.ping`
+

+ 3 - 1
Splunk NGA Data Pull Request Notes.md

@@ -1,6 +1,8 @@
 # Splunk NGA Data Pull Request Notes
 
-Stand up a new "search head" that just has splunk installed on it, no need to configure the splunk instance. the splunk instance will query the actual search head and pull the data out. See hurricane labs python script. 
+Stand up a new "search head" that just has Splunk installed on it; no need to configure the Splunk instance. The Splunk instance will query the actual search head and pull the data out. See the Hurricane Labs python script.  
+
+https://hurricanelabs.com/splunk-tutorials/the-best-guide-for-exporting-massive-amounts-of-data-from-splunk/
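+
+If it helps, what such export scripts typically wrap is the Splunk REST export endpoint; a sketch with placeholder host, credentials, and search:
+```
+curl -k -u admin "https://<actual-search-head>:8089/services/search/jobs/export" \
+  -d search='search index=<nga_index> earliest=-24h' -d output_mode=csv -o export.csv
+```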
 
 https://jira.mdr.defpoint.com/browse/MSOCI-1013
 

+ 2 - 0
Terraform Notes.md

@@ -66,6 +66,8 @@ terraform refresh -target=data.aws_ami.msoc_base
 
 Terraform also has a DynamoDB State lock (msoc-terraform-lock). This will prevent terraform state breakage. 
 
+To manually remove the lock: https://www.terraform.io/docs/cli/commands/force-unlock.html
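+
+Usage sketch (the lock ID is printed in the error message when a run hits a stale lock):
+```
+terraform force-unlock <LOCK_ID>
+```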
+
 ------------------
 View TF code
 https://github.com/terraform-aws-modules

+ 1 - 1
Terragrunt Notes.md

@@ -63,7 +63,7 @@ These notes will walk you through the Terragrunt git flow for making changes.
 - use terragrunt-local to try the changes 
 - ( did you run the saml command to login?)
 - use tgswitch to change versions
-- rm -rf .terragrunt-cache to resolve "strange" errors
+- `rm -rf .terragrunt-cache` to resolve "strange" errors
 - push new branch to github 
 - get pr approved and merged in
 - tag master to latest tag that is set in terragrunt.hcl