Brad Poulton 5 years ago
commit
b9c6d3b6da
45 files changed, 2891 additions, 0 deletions
  1. AFS ServiceNow AWS Account Request.pdf (BIN)
  2. MDR AWS New Account Setup Notes.txt (+12 -0)
  3. MDR AWS New User Setup Notes.txt (+5 -0)
  4. MDR AWS Notes.txt (+39 -0)
  5. MDR Atlantis Notes.txt (+33 -0)
  6. MDR Customer Decommision Notes.txt (+18 -0)
  7. MDR FedRAMP.txt (+43 -0)
  8. MDR Fluentd Notes.txt (+13 -0)
  9. MDR Full Drive Notes.txt (+77 -0)
  10. MDR GitHub Server Notes.txt (+28 -0)
  11. MDR GovCloud Notes.txt (+3 -0)
  12. MDR Jenkins Notes.txt (+1 -0)
  13. MDR Migration to Sensu Go.txt (+77 -0)
  14. MDR Okta Notes.txt (+10 -0)
  15. MDR OpenVPN Notes.txt (+53 -0)
  16. MDR POP Node Notes.txt (+14 -0)
  17. MDR Packer Notes.txt (+29 -0)
  18. MDR Packer Salt Master FIPS Notes.txt (+71 -0)
  19. MDR Patching Notes.txt (+362 -0)
  20. MDR Phantom Notes.txt (+5 -0)
  21. MDR Phantom Upgrade Notes.txt (+38 -0)
  22. MDR Portal Lamba Notes.txt (+87 -0)
  23. MDR Portal Notes.txt (+92 -0)
  24. MDR Portal WAF Notes.txt (+104 -0)
  25. MDR Qualys Notes.txt (+15 -0)
  26. MDR RedHat Notes.txt (+21 -0)
  27. MDR Reposerver Notes.txt (+19 -0)
  28. MDR Salt Notes.txt (+38 -0)
  29. MDR Salt Splunk Whitelisting FedRAMP Notes.txt (+21 -0)
  30. MDR Salt Upgrade.txt (+161 -0)
  31. MDR ScaleFT Notes.txt (+104 -0)
  32. MDR Sensu Notes.txt (+88 -0)
  33. MDR Splunk MSCAS Notes.txt (+75 -0)
  34. MDR Splunk NGA Data Pull Request.txt (+95 -0)
  35. MDR Splunk Notes.txt (+72 -0)
  36. MDR Splunk SAF Offboarding Notes.txt (+364 -0)
  37. MDR Terraform Notes.txt (+74 -0)
  38. MDR Terraform Splunk ASG Notes.txt (+170 -0)
  39. MDR Vault Notes.txt (+170 -0)
  40. MDR Vault Prod Refresh Notes.txt (+11 -0)
  41. MDR VictorOps Notes.txt (+1 -0)
  42. MDR salt_splunk_HEC Notes.txt (+164 -0)
  43. MDR sftp Notes.txt (+8 -0)
  44. MSR Sudo Replay Notes.txt (+4 -0)
  45. README.md (+2 -0)

BIN
AFS ServiceNow AWS Account Request.pdf


+ 12 - 0
MDR AWS New Account Setup Notes.txt

@@ -0,0 +1,12 @@
+MDR AWS New Account Setup Notes
+
+Request a new AWS account through AFS:
+AFS Help -> Submit a request -> non standard software and pre-approved project management tools -> cloud managed services
+
+
+CFM approver: jordana.lang
+P104 approver: jennifer.l.combs
+
+VERY Helpful Guy to fill out the AWS request: Osman Soofi. osman.soofi@accenturefederal.com
+
+New AWS accounts MDRAdmin MFA is stored in Vault. 

+ 5 - 0
MDR AWS New User Setup Notes.txt

@@ -0,0 +1,5 @@
+https://gpgtools.org/ download and install
+use gpg keychain to generate pub/private keys
+something something terraform
+echo "wcFMA2sXDKYLpzaU<redacted>bf6clQ043oDkHIrcWK509UIy5GUpEqBV/WLmuCMHkXUgnxy12HY8qBErF58vB7/VXs5pCKp4SDYWEtK73fKmYZ5wJDW6j6OHkpYI4USZXjVYb+Utt56Qprk4KiT6VlFNNPo00r2YDABDdtxPJS3N9REzHqp+7oR2SQkiyEhcF3ZwILk2fH4mc1VQUiFu68RCqbt+QfmDt3OHIRZVPvrS4AHkCbj2fdgkbAaRMJ/21TBn8OE8WuDR4NHh5w/gWeK5m6754DzkjVLxDpsvPG2UR9ErwANEo+BI4upil2vgT+S63PIVsAmTew/7QpPavttP4rUBM47h5cMA"|base64 -D  | gpg -d
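+For reference, a blob like the one above can be produced for a new user with something along these lines (the recipient address is a placeholder; their public key must already be imported into GPG Keychain):
+# encrypt a secret to the new user's public key, then base64 it for pasting
+echo "the-secret-value" | gpg --encrypt --recipient new.user@example.com | base64
+# the new user reverses it exactly as shown above: base64 -D | gpg -d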
+

+ 39 - 0
MDR AWS Notes.txt

@@ -0,0 +1,39 @@
+Root Account Alias: defpoint-mdr-root
+Root AWS Account ID: 350838957895
+
+Test Account ID: 527700175026
+Prod Account ID: 477548533976
+
+Use AssumeRole to get into the test and prod accounts.
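+A minimal ~/.aws/config sketch for that (the role name and source profile are assumptions -- check IAM for the real cross-account role):
+
+[profile mdr-test]
+role_arn = arn:aws:iam::527700175026:role/OrganizationAccountAccessRole
+source_profile = mdr-root
+region = us-east-1
+
+[profile mdr-prod]
+role_arn = arn:aws:iam::477548533976:role/OrganizationAccountAccessRole
+source_profile = mdr-root
+region = us-east-1
+
+# sanity check which account a profile lands in
+AWS_PROFILE=mdr-test aws sts get-caller-identity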
+
+Terraform has been set up to handle the CIS checks for AWS; they are found in terraform/00-cis-hardening.
+
+Got an encoded error message from AWS?
+
+AWS_PROFILE=mdr-test aws sts decode-authorization-message --encoded-message Q7h4sTOW_n_znBB7ojNotL
+
+
+-------------------------------------------
+Cloudtrail metric Alarms
+
+
+so .. cloudtrail writes a trail
+
+that trail is written into a cloudwatch logs log group
+
+in the log group, there are a number of metric filters
+
+the metric filters create metrics, upon which a metric alarm is set
+
+when new messages matching the metric filter arrive, the metric goes up, triggering the alarm
+
+the alarm has an SNS topic it writes to that emails me that the "metric was exceeded"
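+A rough CLI sketch of one such filter + alarm (the names, pattern, and SNS topic are illustrative only -- the real ones live in terraform/00-cis-hardening):
+
+aws logs put-metric-filter \
+  --log-group-name CloudTrail/DefaultLogGroup \
+  --filter-name ConsoleSigninFailures \
+  --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }' \
+  --metric-transformations metricName=ConsoleSigninFailureCount,metricNamespace=CloudTrailMetrics,metricValue=1
+
+aws cloudwatch put-metric-alarm \
+  --alarm-name ConsoleSigninFailures \
+  --metric-name ConsoleSigninFailureCount \
+  --namespace CloudTrailMetrics \
+  --statistic Sum --period 300 --threshold 1 \
+  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
+  --alarm-actions arn:aws:sns:us-east-1:350838957895:example-cloudtrail-alarms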
+
+----------------------------------------------
+AWS Systems Manager agent
+
+systemctl start amazon-ssm-agent
+
+----------------------------------------------
+

+ 33 - 0
MDR Atlantis Notes.txt

@@ -0,0 +1,33 @@
+Atlantis allows applying your TF code from a GitHub comment.
+An Atlantis lock is NOT a Terraform lock; an Atlantis lock only keeps two Git PRs from planning/applying the same project at once.
+
+
+@atlantis plan
+
+@atlantis apply CAUTION! will apply changes to ALL environments
+
+Only apply to one environment. 
+@atlantis apply -p msoc_vpc_prod
+
+
+It is 100% aware of modules and workspaces so it will do the needful
+There is no such thing as an authorization list, though, so ANYONE who leaves the comment "atlantis apply" will trigger it.
+
+
+
+
+
+
+
+--------------------
+How to delete locks
+
+Option 1:
+If Atlantis runs a plan and doesn't unlock Terraform, delete the Fargate container and rebuild it (should be a quick action).
+
+Otherwise I have to go into the AWS console, open the SG on the LB to everything, hit it, and delete the lock,
+then remove my any/any rule.
+
+Root AWS account -> Atlantis LB -> http://atlantis-1897367069.us-east-1.elb.amazonaws.com:4141
+http://3.231.59.107:4141/
+--------------------

+ 18 - 0
MDR Customer Decommision Notes.txt

@@ -0,0 +1,18 @@
+MDR Customer decommision Notes.txt
+
+salt saf-splunk-syslog-* cmd.run 'systemctl stop syslog-ng'
+salt saf-splunk-syslog-* cmd.run 'systemctl disable syslog-ng'
+salt saf-splunk-dcn-* cmd.run 'docker stop mdr-syslog-ng'
+
+
+salt saf-splunk-syslog-* cmd.run 'systemctl stop splunk'
+salt saf-splunk-syslog-* cmd.run 'systemctl disable splunk'
+
+salt -C 'saf-splunk-* not *.local' cmd.run 'systemctl stop splunk'
+salt -C 'saf-splunk-* not *.local' cmd.run 'rm -rf /opt/*'
+
+salt -C 'saf-splunk-* not *.local' cmd.run 'rm -rf /var/log/*'
+salt -C 'saf-splunk-* not *.local' cmd.run 'rm -rf /etc/salt/minion && shutdown now'
+
+remove SG rules to block access. salt master and splunk indexers
+12.42.184.208

+ 43 - 0
MDR FedRAMP.txt

@@ -0,0 +1,43 @@
+AWS Artifact provides insight into the AWS FedRAMP package.
+Okta FedRAMP notes:
+https://www.okta.com/resources/whitepaper/configuring-okta-for-fedramp-compliance/
+
+
+ 
+
+CM-6(a)-2 CIS SCAP Checklist
+Update the parameter field to indicate that the CIS QA checklist is stored in Qualys and is SCAP compatible.
+MDR employs configuration scanning software (Qualys) that, using SCAP compliant checklists, produces SCAP compliant output. 
+MDR employs c
+
+Do we have an alternative to GitHub? Is GitLab FIPS certified? It doesn't look like it.
+https://gitlab.com/gitlab-org/gitlab-foss/issues/41463
+
+
+Use Qualys as the CIS QA Checklist- input deviations to CIS
+
+CM-6(c)-1 CIS deviations
+Add a link to the CIS deviations wiki page to the SSP? Provide access to customers upon request. Copy and paste the wiki page into the SSP? Need to ask a clarifying question. All info system components could have deviations; should we list every server? These are handled in the Qualys scanning tool.
+
+
+FIPS Certificate numbers
+Red Hat SSH: 3538
+Red Hat GnuTLS: 3571
+Red Hat kernel: 3565
+Okta mobile: 3344
+Okta: 3353
+Splunk: 3126
+
+
+Whitelisting Applications Process
+
+New applications are introduced into the production environment after they have been approved by at least two MDR engineers through the change management process. Only MDR engineers are able to approve new software. Before the change to add the new software is approved, a hash of the software should be generated and documented. The hash should be compared to the vendor's documented hash. To ensure the hash of the software doesn't change, the Splunk app Process List Whitelisting is used to gather and verify that the hashes have not changed. In the event that an unapproved hash is detected, an email alert is sent.
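+A minimal example of the hash step (the file name and vendor hash are placeholders):
+sha256sum new-software-1.2.3.x86_64.rpm
+# or verify directly against the vendor's published value
+echo "<vendor_published_sha256>  new-software-1.2.3.x86_64.rpm" | sha256sum --check -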
+
+
+DNSSEC
+https://nvd.nist.gov/800-53/Rev4/control/SC-20
+https://nvd.nist.gov/800-53/Rev4/control/SC-21
+ 
+SC-20 says (roughly) that your authoritative name servers should be publishing DNS records that are cryptographically signed using DNSSEC all the way back to the root.  DNSSEC attempts to protect DNS data from being tampered in transit.  Having a set of robust digital signatures on “my” DNS records, and on those of “my parent” and on those of “my grandparent” all the way back to the root of the DNS tree makes it possible to cryptographically “prove” that when someone looks up my domain – say www.accenturefederal.com – that there was no tampering with the responses to that query. 
+ 
+SC-21 says (roughly) that when your clients do a DNS lookup that those lookups are done in a way that the DNSSEC signatures are checked and validated, and if the results cannot be cryptographically validated they are not used.
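+Quick way to eyeball both from a shell (assumes the resolver you query through is DNSSEC-validating):
+dig +dnssec www.accenturefederal.com A    # RRSIG records present = signed (SC-20); "ad" flag set = validated by the resolver (SC-21)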

+ 13 - 0
MDR Fluentd Notes.txt

@@ -0,0 +1,13 @@
+MDR Fluentd Notes.txt
+
+Fluentd is part of Treasure Data. So the service name is td-agent. 
+
+systemctl status td-agent
+
+Fluentd is installed on afs-splunk-syslog-1. Fluentd will not start unless the directories specified in the config file are created.
+
+salt -L 'afs-splunk-syslog-1' cmd.run 'ls -larth /opt/syslog-ng/'
+salt -L 'afs-splunk-syslog-1' cmd.run 'mkdir /opt/syslog-ng/zscaler_firewall/'
+salt -L 'afs-splunk-syslog-1' cmd.run 'mkdir /opt/syslog-ng/zscaler_dns/'
+salt -L 'afs-splunk-syslog-1' cmd.run 'chown td-agent:td-agent /opt/syslog-ng/zscaler_firewall/'
+salt -L 'afs-splunk-syslog-1' cmd.run 'chown td-agent:td-agent /opt/syslog-ng/zscaler_dns/'

+ 77 - 0
MDR Full Drive Notes.txt

@@ -0,0 +1,77 @@
+sudo: unable to mkdir /var/log/sudo-io/00/00/08: No space left on device
+sudo: error initializing I/O plugin sudoers_io
+
+Use AWS Systems Manager to run a bash shell command
+
+rm -rf /var/log/hubble
+
+du -sh /var/log/* |sort -h
+du -sh /opt/syslog-ng/* |sort -h
+du -sh /opt/
+rm -rf /var/log/hubble
+
+
+
+afs-splunk-hf.msoc.defpoint.local  /  prod-afs-splunk-hf   (websense runs once per day.)
+salt afs-splunk-hf.msoc.defpoint.local cmd.run 'ls -larth /opt/websense/json_removal.sh'
+salt afs-splunk-hf.msoc.defpoint.local cmd.run 'df -h'
+
+DO THIS: HAMMER TIME
+for i in hosted*.gz; do rm -rf "$i"; done
+ALSO
+salt afs-splunk-hf.msoc.defpoint.local cmd.run 'sh /opt/websense/json_removal.sh'
+salt afs-splunk-hf.msoc.defpoint.local cmd.run 'sh /opt/websense/gz_removal.sh'
+/opt/websense/json_removal.sh
+rm -rf /opt/websense/hosted_agg04k_41208_116.50.57.190_100_156*
+rm -rf *165*.gz
+rm -rf hosted_agg04k*.gz
+rm -rf hosted_agg03k*.gz
+rm -rf hosted_agg02k*.gz
+rm -rf hosted_agg01k*.gz
+
+
+aws-syslog1-tts.nga.gov / nga-splunk-syslog-1
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/old_logs/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/old_logs/fortigate/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'rm -rf /opt/syslog-ng/old_logs/fortigate/2020-01-10/*' --out=txt
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/old_logs/netscaler/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'rm -rf /opt/syslog-ng/old_logs/netscaler/2020-01-10/*' --out=txt
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/fortigate/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'du -sh /opt/syslog-ng/fortigate/aws-syslog1-tts.nga.gov/log/* |sort -h'
+salt nga-splunk-syslog-1 cmd.run 'rm -rf /opt/syslog-ng/fortigate/aws-syslog1-tts.nga.gov/log/2020-01-26/*' --out=txt
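+A hedged alternative to the per-date deletes above -- purge any dated old_logs directory older than 30 days (the 30-day cutoff is an assumption; confirm retention requirements first):
+salt nga-splunk-syslog-1 cmd.run 'find /opt/syslog-ng/old_logs -mindepth 2 -maxdepth 2 -type d -mtime +30 -exec rm -rf {} +'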
+
+
+***2***
+aws-syslog2-tts.nga.gov / nga-splunk-syslog-2
+salt nga-splunk-syslog-2 cmd.run 'df -h'
+salt nga-splunk-syslog-2 cmd.run 'du -sh /opt/syslog-ng/* |sort -h'
+salt nga-splunk-syslog-2 cmd.run 'du -sh /opt/syslog-ng/old_logs/* |sort -h'
+salt nga-splunk-syslog-2 cmd.run 'du -sh /opt/syslog-ng/old_logs/fortigate/* |sort -h'
+salt nga-splunk-syslog-2 cmd.run 'rm -rf /opt/syslog-ng/old_logs/fortigate/2020-01-10/*' --out=txt
+salt nga-splunk-syslog-2 cmd.run 'du -sh /opt/syslog-ng/fortigate/aws-syslog2-tts.nga.gov/log/* |sort -h'
+salt nga-splunk-syslog-2 cmd.run 'rm -rf /opt/syslog-ng/fortigate/aws-syslog2-tts.nga.gov/log/2020-03-01/*' --out=txt
+
+
+
+nga-splunk-indexer-1.msoc.defpoint.local 
+salt nga-splunk-indexer-1.msoc.defpoint.local cmd.run 'rm -rf /var/log/hubble'
+salt nga-splunk-indexer-1.msoc.defpoint.local cmd.run 'du -sh /var/log/* |sort -h'
+ 
+ 
+afssplhf103.us.accenturefederal.com  afs-splunk-syslog-1
+salt afs-splunk-syslog-1* cmd.run 'df -h'
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/old_logs/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/old_logs/junos/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/old_logs/cisco_asa/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/old_logs/mcas/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'rm -rf /opt/syslog-ng/old_logs/junos/2020-11-02/*' --out=txt
+salt afs-splunk-syslog-1* cmd.run 'rm -rf /opt/syslog-ng/old_logs/cisco_asa/2020-01-08/*' --out=txt
+salt afs-splunk-syslog-1* cmd.run 'rm -rf /opt/syslog-ng/old_logs/mcas/2020-01-27/*' --out=txt
+salt afs-splunk-syslog-1* cmd.run 'du -sh /opt/syslog-ng/junos/afssplhf103.us.accenturefederal.com/log/* |sort -h'
+salt afs-splunk-syslog-1* cmd.run 'rm -rf /opt/syslog-ng/junos/afssplhf103.us.accenturefederal.com/log/2020-01-29/*' --out=txt
+
+ 
+ 
+ 

+ 28 - 0
MDR GitHub Server Notes.txt

@@ -0,0 +1,28 @@
+MDR GitHub Server Notes.txt
+
+GitHub Enterprise Server is an APPLIANCE. No salt minion, No sft. 
+To SSH in you must have your public key manually added. 
+
+Host github
+  Port 122
+  User admin
+  HostName 10.80.101.78
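+With that ~/.ssh/config entry in place, the admin shell is just the following (ghe-update-check shown as an example remote command):
+ssh github
+ssh github 'ghe-update-check'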
+  
+  
+# Updating 
+
+ghe-update-check
+ghe-upgrade /var/lib/ghe-updates/github-enterprise-2.17.22.hpkg
+
+
+Upgrading major version
+ghe-upgrade
+
+fdisk -l
+
+Two partitions are installed. When you run an upgrade, the VM installs the upgrade to the other partition. After the upgrade it switches the primary boot partition, leaving the previous version available for rollback.
+
+
+Type ghe- and hit TAB to view all ghe- commands.
+https://help.github.com/en/enterprise/2.17/admin/installation/command-line-utilities
+

+ 3 - 0
MDR GovCloud Notes.txt

@@ -0,0 +1,3 @@
+What services are needed in GovCloud?
+ECR
+codebuild

+ 1 - 0
MDR Jenkins Notes.txt

@@ -0,0 +1 @@
+MDR Jenkins Notes

+ 77 - 0
MDR Migration to Sensu Go.txt

@@ -0,0 +1,77 @@
+MDR Migration to Sensu Go
+
+Sensu is currently installed; we are going to migrate to Sensu Go.
+
+1. Move packages to the repo server via wget; be sure to check the sha512.
+cd /var/www/html/redhat/msoc/Packages
+wget https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-cli-5.15.0-7782.x86_64.rpm/download.rpm
+mv download.rpm  sensu-go-cli-5.15.0-el7.x86_64.rpm
+
+wget https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-agent-5.15.0-7782.x86_64.rpm/download.rpm
+mv download.rpm sensu-go-agent-5.15.0-7782.x86_64.rpm
+
+wget https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-backend-5.15.0-7782.x86_64.rpm/download.rpm
+mv download.rpm sensu-go-backend-5.15.0-7782.x86_64.rpm
+
+https://sensu.io/downloads
+
+wget -O sensu-go-backend-5.16.1-8521.x86_64.rpm  https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-backend-5.16.1-8521.x86_64.rpm/download.rpm
+wget -O sensu-go-agent-5.16.1-8521.x86_64.rpm https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-agent-5.16.1-8521.x86_64.rpm/download.rpm
+wget -O sensu-go-cli-5.16.1-8521.x86_64.rpm https://packagecloud.io/sensu/stable/packages/el/7/sensu-go-cli-5.16.1-8521.x86_64.rpm/download.rpm
+
+chown apache: sensu-go-*
+chmod 640 sensu-go-*
+
+[prod]root@reposerver:/var/www/html/redhat/msoc/Packages:# sha512sum sensu-go-*
+da69e33d8b9bb493cf261bd7fae261aabc19346a2c9942ada8a6005774ed9042fe129321f45425c300680036a2c9b14217db701c9b4e58843e486df24cc1e7d1  sensu-go-agent-5.15.0-7782.x86_64.rpm
+510839b01ca37a1733d1656b9c6672b4a3be08fdd4b12f910beb232ac2d2a60a3a75d0fc011920f2c489be6f8a2290aac133d8f9627cc8fdeb9bc285fd449036  sensu-go-backend-5.15.0-7782.x86_64.rpm
+196641d17d774e1c82c8b3842736821736a739d25a8f0b214de26a1c2ec80a06cb0caa7713fb8026209a5d2454d458c502f3c887e48fa221646520c8f75423d6  sensu-go-cli-5.15.0-el7.x86_64.rpm
+
+[dev]root@reposerver:/var/www/html/redhat/msoc/Packages:# sha512sum sensu-go-*
+da69e33d8b9bb493cf261bd7fae261aabc19346a2c9942ada8a6005774ed9042fe129321f45425c300680036a2c9b14217db701c9b4e58843e486df24cc1e7d1  sensu-go-agent-5.15.0-7782.x86_64.rpm
+36ee9bf1afd2c837e0d1d4b9151cf9f9e1a1ac09546832d2ad840ffa48694cdb509373e8e7ca9152475d8e2fba9f3e62e0e206a543018b8667e883acedbe2e18  sensu-go-agent-5.16.1-8521.x86_64.rpm
+510839b01ca37a1733d1656b9c6672b4a3be08fdd4b12f910beb232ac2d2a60a3a75d0fc011920f2c489be6f8a2290aac133d8f9627cc8fdeb9bc285fd449036  sensu-go-backend-5.15.0-7782.x86_64.rpm
+b449d093c219bc6262ad82cf281ed12f83d0e42f1a83c6eeca53527278cfed61f97054b51a971ed4e9a1c0cfd3bdd5f17955f166093c44f3435515c8307cf953  sensu-go-backend-5.16.1-8521.x86_64.rpm
+196641d17d774e1c82c8b3842736821736a739d25a8f0b214de26a1c2ec80a06cb0caa7713fb8026209a5d2454d458c502f3c887e48fa221646520c8f75423d6  sensu-go-cli-5.15.0-el7.x86_64.rpm
+f8b107e90bbd9a3b2348592d39ca69ed0e7e0cb02e0fc65caaedc31296f926077387c059d274554b099159169259355f4c5288855d6c6cadc62c70fdcbf6408c  sensu-go-cli-5.16.1-8521.x86_64.rpm
+
+FOLLOW INSTRUCTIONS IN reposerver notes to finish setting up packages
+
+
+remove old software
+yum remove uchiwa sensu jemalloc redis erlang rabbitmq-server
+
+prep vault (see the vault CLI sketch below)
+create policy
+add secret
+adjust salt_master configuration in vault config
+
+  policies:
+    - saltstack/minions
+    - saltstack/minion/{minion}
+
+ext_pillar:
+  - vault: path=salt/pillar_data
+  - vault: path=salt/minions/{minion}/pass
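+Rough vault CLI sketch of the prep steps above (the policy body and secret value are illustrative only; adjust to the real pillar paths):
+vault policy write saltstack/minions - <<'EOF'
+path "salt/*" {
+  capabilities = ["read"]
+}
+EOF
+vault kv put salt/minions/sensu.msoc.defpoint.local/pass password='<generated-secret>'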
+
+adjust security groups through terraform
+
+run salt state
+salt sensu.msoc.defpoint.local saltutil.refresh_pillar
+salt sensu.msoc.defpoint.local state.sls sensu_master
+
+
+Client to Agent migration
+uninstall client
+pkg.remove sensu
+cmd.run 'rm -rf /etc/sensu/*'
+saltutil.refresh_pillar
+state.sls sensu_agent
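+Spelled out as full commands from the salt master (the target is a placeholder -- substitute the real minion or glob):
+salt 'example-minion*' pkg.remove sensu
+salt 'example-minion*' cmd.run 'rm -rf /etc/sensu/*'
+salt 'example-minion*' saltutil.refresh_pillar
+salt 'example-minion*' state.sls sensu_agent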
+
+
+Sensu Prod
+41 clients
+7 silenced
+clu-keepalive/jenkins/nginx, atlantis - none, dps-idm- keepalive, phantom-splukn_indexer_ports
+
+

+ 10 - 0
MDR Okta Notes.txt

@@ -0,0 +1,10 @@
+Okta -> Admin -> input username -> assign applications
+
+OKTA API Tokens
+Don't use the GUI for okta tokens. Chris can generate a new okta token with the correct user and access. Also, better to look in the bash history for okta tokens
+
+
+Password expiration report
+OKTA -> Reports -> Okta Password Health
+Open with Brackets, not Excel.
+

+ 53 - 0
MDR OpenVPN Notes.txt

@@ -0,0 +1,53 @@
+To admin openvpn, SSH into the openvpn server and use the admin user that is located in Vault. 
+
+the admin username is openvpn
+
+
+
+----------
+Reset ldap.read
+
+ldap.read@defpoint.com is the okta user that openvpn uses to auth to okta. The ldap.read account's password expires after 60 days. To see when the password will expire, go to Reports -> Okta Password Health. Don't open with EXCEL!
+
+1. Log into OKTA in an incognito window using the ldap.read username and the current password from Vault. Brad's phone is currently setup with the Push notification for the account. The MFA is required for the account. To change the password without Brad, remove MFA with your account in OKTA and set it up on your own phone. 
+2. Once the password has been updated, update vault in this location, engineering/root with a key of ldap.read@defpoint.com. You will have to create a new version of engineering/root to save the password. 
+3. Store the new password and the creds for openvpn and drop off the VPN. Log into the openVPN web GUI (https://openvpn.mdr.defpoint.com/admin/) as the openvpn user (password in Vault) and update the credentials for ldap.read. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the test environment (https://openvpn.mdr-test.defpoint.com/admin/) 
+4. Verify that you are able to login to the VPN. 
+5. Update the Sensu ldap.read password in salt/pillar/sensu_master.sls. It will need to be encrypted prior to being used. 
+6. put the password in a deleteme.txt file and run this command (see google doc for additional info)
+7. cat deleteme.txt | gpg -easr salt | gpg -d
+7.5 paste in file and use tab to indent correctly. No indent = salt errors. 
+8. commit to git
+9. push to sensu & restart
+9.1 salt sensu* state.sls sensu_master
+9.2 salt sensu* cmd.run 'systemctl restart sensu-backend'
+
+------------
+when okta push is slow, get the 6 digits from your okta app
+and put into viscosity your password as  password,123456
+clearly your password should have no commas in it
+
+
+
+-------------
+LDAP config
+
+Primary server: mdr-multipass.ldap.okta.com
+Bind Anon? NO
+Use creds? YES
+
+
+BIND DN:
+uid=ldap.read@defpoint.com, dc=mdr-multipass, dc=okta, dc=com
+
+BASE DN for Users
+ou=users, dc=mdr-multipass, dc=okta, dc=com
+
+Username Attribute
+uid
+
+
+------------
+OpenVPN License
+
+TEST -> YOLO via web interface. This means I did not take the time to reconfigure the Salt states to handle a prod and a test license. 

+ 14 - 0
MDR POP Node Notes.txt

@@ -0,0 +1,14 @@
+MDR POP Node Notes
+
+SDC drive
+ 3014  03/20/20 19:41:43 +0000 salt 'afs*syslog-[5678]*' cmd.run 'pvcreate /dev/sdc'
+ 3015  03/20/20 19:42:08 +0000 salt 'afs*syslog-[5678]*' cmd.run 'vgcreate vg_syslog /dev/sdc'
+ 3016  03/20/20 19:42:42 +0000 salt 'afs*syslog-[5678]*' cmd.run 'lvcreate -n lv_syslog'
+ 3017  03/20/20 19:42:51 +0000 sudo lvcreate --help
+ 3018  03/20/20 19:43:23 +0000 salt 'afs*syslog-[5678]*' cmd.run 'lvcreate -L 500G -n lv_syslog vg_syslog'
+ 3019  03/20/20 19:43:32 +0000 salt 'afs*syslog-[5678]*' cmd.run 'lvcreate -L 499G -n lv_syslog vg_syslog'
+ 3020  03/20/20 19:44:18 +0000 salt 'afs*syslog-[5678]*' cmd.run 'mkfs -t ext4 /dev/vg_syslog/lv_syslog'
+1:46
+sorry needs to be xfs
+1:46
+salt 'afs*syslog-[5678]*' cmd.run 'mkfs -t xfs -f  /dev/vg_syslog/lv_syslog'
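+Cleaned-up sequence for the next time a node needs this (the final mount is an assumption -- check the salt states / fstab for how /opt/syslog-ng is actually mounted):
+salt 'afs*syslog-[5678]*' cmd.run 'pvcreate /dev/sdc'
+salt 'afs*syslog-[5678]*' cmd.run 'vgcreate vg_syslog /dev/sdc'
+salt 'afs*syslog-[5678]*' cmd.run 'lvcreate -L 499G -n lv_syslog vg_syslog'
+salt 'afs*syslog-[5678]*' cmd.run 'mkfs -t xfs -f /dev/vg_syslog/lv_syslog'
+salt 'afs*syslog-[5678]*' cmd.run 'mount /dev/vg_syslog/lv_syslog /opt/syslog-ng'   # assumed mount point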

+ 29 - 0
MDR Packer Notes.txt

@@ -0,0 +1,29 @@
+Packer is used to create the AWS AMIs. Run this on your local laptop; part of the process is on the local laptop and part is in AWS. 
+https://packer.io/
+create a symlink to the DVD iso so Git doesn't try to commit it. 
+
+the Makefile is used to document the different images that you are trying to build.  
+make
+
+Packer is required to partition the hard drives out before ec2 launch. 
+
+
+
+
+username: centos
+key in vault called msoc-build 
+
+to run through a new build use the make command. 
+
+make aws-test
+AWS_PROFILE=mdr-test packer build -on-error=ask -only=master -var-file=rhel7_hardened_variables_test.json rhel7_hardened_multi_ami.json
+
+AWS_PROFILE=mdr-test packer build -on-error=ask -debug -only=master -var-file=rhel7_hardened_variables_test.json rhel7_hardened_multi_ami.json
+
+----------------------
+Troubleshooting Help
+Add --debug to pause execution at each stage
+add --on-error=ask
+
+Having issues with the RHEL subscription manager in TEST? switch it to the prod one. 
+

+ 71 - 0
MDR Packer Salt Master FIPS Notes.txt

@@ -0,0 +1,71 @@
+MDR Packer Salt Master FIPS Notes
+
+check for FIPS
+cat /proc/sys/crypto/fips_enabled
+1
+
+Latest in test: MSOC_RedHat_Master_201909301534
+Latest in prod: MSOC_RedHat_Master_201907012051
+
+move this
+terraform/02-msoc_vpc/conf/provision_salt_master.sh
+
+to here
+packer/rhel7_hardened_saltmaster_ami.json
+
+
+
+
+AWS_PROFILE=mdr-test aws secretsmanager get-secret-value --secret-id saltmaster/ssh_key --query SecretString --output text
+
+Build error
+==> master: + sudo firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
+    master: success
+==> master: + sudo firewall-cmd --reload
+==> master: + sudo systemctl enable salt-master
+    master: success
+==> master: Created symlink from /etc/systemd/system/multi-user.target.wants/salt-master.service to /usr/lib/systemd/system/salt-master.service.
+==> master: /home/centos/script_7740.sh: line 56: unexpected EOF while looking for matching `"'
+==> master: Provisioning step had errors: Running the cleanup provisioner, if present...
+==> master: Terminating the source AWS instance...
+
+
+
+test instance
+packer_5e700a93-aa62-0731-0405-1488fc6aa885
+
+
+
+PROD Steps
+Document the salt keys currently accepted to ensure they all come back.
+poweroff salt-master
+create snapshot of salt-master EBS
+check on TF plan 
+terminate salt-master
+use TF to re-create salt-master
+log into salt-master via bastion + msoc_build key
+wait for cloud-init scripts to finish running
+wait for state.highstate to finish running (like solid 15 minutes)
+verify cloud-init scripts completed successfully (check on stuff) /var/lib/cloud/instance/scripts/part-002
+Ensure vault.conf is not foobar and messing up pillars
+if needed run salt_master state like this salt-call state.sls salt_master
+salt salt* pillar.item my-pillar
+salt-call state.sls os_modifications.ssh_motd
+salt-call state.sls os_modifications.ssh_banner
+salt-call state.sls sensu_agent
+
+clean up SFT and remove old salt-master
+
+restart local minions via SSM/SSH
+pop nodes should reconnect to elastic IP of salt master ( no DNS issue)
+
+Run with SSM
+systemctl restart salt-minion
+
+
+"missing" minions
+github-enterprise-0
+qualys_scanner
+qualys_scanner_2
+
+

+ 362 - 0
MDR Patching Notes.txt

@@ -0,0 +1,362 @@
+
+
+
+-----------
+Date: Wednesday, November 13, 2019 at 3:53 PM
+To: "Leonard, Wesley A." <wesley.a.leonard@accenturefederal.com>, "Waddle, Duane E." <duane.e.waddle@accenturefederal.com>
+Subject: November Patching for MDR
+ 
+Again, we are going to attempt to do much of the patching during business hours during the week.  Everything - including Customer POPs - needs patches this time.  We will be doing the servers in 2 waves.
+ 
+Wave 1 is hot patching of all systems; this will fix the sudo and stage the kernel patches.
+ 
+Wave 2 will be the needed reboots; as this is where we see the customer impact.
+ 
+In the slack channel, #mdr-patching.  You can join that to get real-time announcements on what is going down and when.
+ 
+Rough preliminary plans are:
+ 
+ 
+Wed Nov 13:
+Moose and Internal infrastructure
+Wave 1
+ 
+Thursday Nov 14:
+Moose and Internal
+Wave 2
+All Customer PoP
+Wave 1 (AM)
+Wave 2 (PM)
+Monday Nov 18:
+All Customer MDR Cloud
+Wave 1
+All Search heads
+Wave 2 (PM)
+Tuesday Nov 19:
+All Remaining MDR Cloud
+Wave 2 (AM)
+ 
+The customer/user impact will be during the reboots; that is why I am doing them in batches, so our total downtime is less.
+
+----------------------
+
+
+
+
+#restarting the indexers one at a time (one from each group). Use the CM to see if the indexer comes back up properly. 
+salt -C ' ( *moose* or *saf* ) and *indexer-1*' cmd.run 'shutdown -r now'
+#check to ensure the hot volume is mounted /opt/splunkdata/hot
+salt -C '( *moose* or *saf* ) and *indexer-1*' cmd.run 'df -h'
+
+#WAIT FOR 3 checks in the CM before restarting the next indexer. 
+
+#repeat for indexer 2
+salt -C ' ( *moose* or *saf* ) and *indexer-2*' cmd.run 'shutdown -r now'
+#check to ensure the hot volume is mounted /opt/splunkdata/hot
+salt -C ' ( *moose* or *saf* ) and *indexer-2*' cmd.run 'df -h'
+
+#WAIT FOR 3 checks in the CM before restarting the next indexer.
+
+#repeat for indexer 3
+salt -C ' ( *moose* or *saf* ) and *indexer-3*' cmd.run 'shutdown -r now'
+#check to ensure the hot volume is mounted /opt/splunkdata/hot
+salt -C ' ( *moose* or *saf* ) and *indexer-3*' cmd.run 'df -h'
+
+
+IF/WHEN an indexer doesn't come back up, follow these steps:
+in AWS grab the instance id. 
+
+run the MDR/get-console.sh (see the get-console-output sketch after these steps)
+look for "Please enter passphrase for disk splunkhot"
+
+in AWS console stop instance (which will remove ephemeral splunk data) then start it. 
+Then ensure the /opt/splunkdata/hot exists.
+if it doesn't then manually run the cloudinit boot hook. 
+sh /var/lib/cloud/instance/boothooks/part-002
+
+ensure the hot folder is owned by splunk:splunk
+it will be waiting for the luks.key
+systemctl daemon-reload
+systemctl restart systemd-cryptsetup@splunkhot
+It is waiting at a passphrase prompt; when you restart the service it picks up the key from a file. Systemd sees the cryptsetup service as a dependency for the splunk service. 
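+MDR/get-console.sh is an internal wrapper; presumably it boils down to something like this (the instance id is a placeholder):
+aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text | grep -i passphrase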
+
+
+
+
+
+
+#restart indexers (one at a time; wait for 3 green checkmarks in Cluster Master)
+salt -C 'nga*indexer-1*' test.ping
+salt -C 'nga*indexer-1*' cmd.run 'shutdown -r now'
+
+#Repeat for indexer-2 and indexer-3
+
+#Ensure all have been restarted. Then done with NGA
+salt -C '*nga*' cmd.run 'uptime'
+
+
+
+-------------------------------------------------
+
+
+
+
+
+#############
+#
+###############################################################################################################################################
+#
+#############
+
+
+Brad's Actual Patching
+Starting with moose and internal infra Wave 1. Check disk space for potential issues. 
+
+salt -C '* not ( afs* or saf* or nga* )' test.ping --out=txt
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'df -h /boot'  
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'df -h /var/log'   # some at 63%
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'df -h /var'        # one at 74%
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'df -h'
+#review packages that will be updated. some packages are versionlocked (Collectd, Splunk,etc.).
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'yum check-update' 
+salt -C '* not ( afs* or saf* or nga* )' pkg.upgrade
+
+This error: error: unpacking of archive failed on file /usr/lib/python2.7/site-packages/urllib3/packages/ssl_match_hostname: cpio: rename failed
+pip uninstall urllib3
+
+This error is caused by the versionlock on the package. Use this to view the list
+yum versionlock list
+Error: Package: salt-minion-2018.3.4-1.el7.noarch (@salt-2018.3)
+                       Requires: salt = 2018.3.4-1.el7
+                       Removing: salt-2018.3.4-1.el7.noarch (@salt-2018.3)
+                           salt = 2018.3.4-1.el7
+                       Updated By: salt-2018.3.5-1.el7.noarch (salt-2018.3)
+                           salt = 2018.3.5-1.el7
+
+
+Error: installing package kernel-3.10.0-1062.12.1.el7.x86_64 needs 7MB on the /boot filesystem
+## Install yum utils ##
+yum install yum-utils
+
+## Package-cleanup set count as how many old kernels you want left ##
+package-cleanup --oldkernels --count=1
+
+If VPN server stops working, try a stop and start of the vpn server. The private IP will probably change. 
+
+###
+Wave 2 Internals 
+###
+
+Be sure to select ALL events in sensu for silencing not just the first 25. 
+Sensu -> Entities -> Sort (name) -> Select Entity and Silence. This will silence both keepalive and other checks. 
+Some silenced events will not unsilence and will need to be manually unsilenced.
+Some silenced events will still trigger. Not sure why. The keepalive still triggers victorops. 
+***IDEA! restart the sensu server and the vault-3 server first. this helps with the clearing of the silenced entities.
+salt -L 'vault-3.msoc.defpoint.local,sensu.msoc.defpoint.local' test.ping
+salt -L 'vault-3.msoc.defpoint.local,sensu.msoc.defpoint.local' cmd.run 'shutdown -r now'
+salt -C '* not ( moose-splunk-indexer* or afs* or saf* or nga* or vault-3* or sensu* )' test.ping --out=txt
+salt -C '* not ( moose-splunk-indexer* or afs* or saf* or nga* or vault-3* or sensu* )' cmd.run 'shutdown -r now'
+#you will lose connectivity to openvpn and salt master
+#log back in and verify they are back up
+salt -C '* not ( moose-splunk-indexer* or afs* or saf* or nga* )' cmd.run 'uptime' --out=txt
+
+###
+Wave 2 Moose
+###
+
+salt -C 'moose-splunk-indexer*' test.ping --out=txt
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'shutdown -r now'
+#indexers take a while to restart
+ping moose-splunk-indexer-1.msoc.defpoint.local
+#WAIT FOR SPLUNK CLUSTER TO HAVE 3 CHECKMARKS
+indexer2 is not coming back up...look at screenshot in aws... see this: Probing EDD (edd=off to disable)... ok
+look at system log in AWS see this:  Please enter passphrase for disk splunkhot!:
+
+In AWS console stop instance (which will remove ephemeral splunk data) then start it. 
+Then ensure the /opt/splunkdata/hot exists.
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'df -h'
+IF the MOUNT for /opt/splunkdata/hot DOESN'T EXIST, STOP SPLUNK! Splunk will write to the wrong volume. 
+before mounting the new volume clear out the wrong /opt/splunkdata/
+
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'systemctl stop splunk'
+Ensure the /opt/splunkdata doesn't already exist, before the boothook. (theory that this causes the issue) 
+ssh prod-moose-splunk-indexer-1
+if it doesn't then manually run the cloudinit boot hook. 
+sh /var/lib/cloud/instance/boothooks/part-002
+salt -C 'nga-splunk-indexer-2.msoc.defpoint.local' cmd.run 'sh /var/lib/cloud/instance/boothooks/part-002'
+
+ensure the hot folder is owned by splunk:splunk
+ll /opt/splunkdata/
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'ls -larth /opt/splunkdata'
+chown -R splunk: /opt/splunkdata/
+salt -C 'nga-splunk-indexer-2.msoc.defpoint.local' cmd.run 'chown -R splunk: /opt/splunkdata/'
+it will be waiting for the luks.key
+systemctl daemon-reload
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'systemctl daemon-reload'
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'systemctl restart systemd-cryptsetup@splunkhot'
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'systemctl | egrep cryptset'
+It is waiting at a passphrase prompt; when you restart the service it picks up the key from a file. Systemd sees the cryptsetup service as a dependency for the splunk service. 
+
+
+#look for this. this is good, it is ready for restart of splunk
+Cryptography Setup for splunkhot
+
+systemctl restart splunk
+salt -C 'moose-splunk-indexer-1.msoc.defpoint.local' cmd.run 'systemctl restart splunk'
+ 
+once the /opt/splunkdata/hot is visible in df -h and the splunk service is started, then wait for the cluster to have 3 green checkmarks. 
+
+check the servers again to ensure all of them have rebooted. 
+salt -C 'moose-splunk-indexer*' cmd.run 'uptime' --out=txt | sort
+
+Ensure all Moose and Internal have been rebooted
+salt -C '* not ( afs* or saf* or nga* )' cmd.run 'uptime' --out=txt | sort
+
+###
+Wave 1 POPs
+###
+
+salt -C '* not *.local' test.ping --out=txt
+salt -C '* not *.local' cmd.run 'yum check-update'
+salt -C '* not *.local' cmd.run 'uptime'
+#check for sufficient HD space
+salt -C '* not *.local' cmd.run 'df -h /boot'
+salt -C '* not *.local' cmd.run 'df -h /var/log'
+salt -C '* not *.local' cmd.run 'df -h /var'
+salt -C '* not *.local' cmd.run 'df -h'
+salt -C '* not *.local' pkg.upgrade disablerepo=msoc-repo
+salt -C '* not *.local' pkg.upgrade
+
+###
+Wave 2 POPs
+###
+
+DO NOT restart all POP at the same time
+salt -C '*syslog-1* not *.local' cmd.run 'uptime'
+salt -C '*syslog-1* not *.local' cmd.run 'shutdown -r now'
+
+salt -C '*syslog-1* not *.local' cmd.run 'ps -ef | grep syslog-ng | grep -v grep' 
+#look for /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
+
+SAF will need the setenforce run
+salt saf-splunk-syslog-1 cmd.run 'setenforce 0'
+salt saf-splunk-syslog-1 cmd.run 'systemctl stop rsyslog'
+salt saf-splunk-syslog-1 cmd.run 'systemctl start syslog-ng'
+
+salt -C '*syslog-2* not *.local' cmd.run 'uptime'
+salt -C '*syslog-2* not *.local' cmd.run 'shutdown -r now'
+salt -C '*syslog-2* not *.local' cmd.run 'ps -ef | grep syslog-ng | grep -v grep'
+salt -L 'nga-splunk-syslog-2,saf-splunk-syslog-2' cmd.run 'ps -ef | grep syslog-ng | grep -v grep'
+
+salt -C '*syslog-3* not *.local' cmd.run 'uptime'
+salt -C '*syslog-3* not *.local' cmd.run 'shutdown -r now'
+
+salt -C '*syslog-4* not *.local' cmd.run 'uptime'
+salt -C '*syslog-4* not *.local' cmd.run 'shutdown -r now'
+
+repeat for syslog-5, syslog-6, syslog-7, and syslog-8  
+(might be able to reboot some of these at the same time, if they are in different locations. check the location grain on them.)
+grains.item location
+
+afs-splunk-syslog-8: {u'location': u'az-east-us-2'}
+afs-splunk-syslog-6: {u'location': u'az-central-us'}
+
+afs-splunk-syslog-7: {u'location': u'az-east-us-2'}
+afs-splunk-syslog-5: {u'location': u'az-central-us'}
+afs-splunk-syslog-4: {u'location': u'San Antonio'}
+
+salt -C 'afs-splunk-syslog*' grains.item location
+
+salt -L 'afs-splunk-syslog-6, afs-splunk-syslog-8' cmd.run 'uptime'
+salt -L 'afs-splunk-syslog-6, afs-splunk-syslog-8' cmd.run 'shutdown -r now'
+
+salt -L 'afs-splunk-syslog-5, afs-splunk-syslog-7' cmd.run 'uptime'
+salt -L 'afs-splunk-syslog-5, afs-splunk-syslog-7' cmd.run 'shutdown -r now'
+
+# verify logs are flowing
+https://saf-splunk-sh.msoc.defpoint.local:8000/en-US/app/search/search
+ddps03.corp.smartandfinal.com
+index=* source=/opt/syslog-ng/* host=ddps* earliest=-15m | stats  count by host
+
+https://afs-splunk-sh.msoc.defpoint.local:8000/en-US/app/search/search
+afssplhf103.us.accenturefederal.com
+index=* source=/opt/syslog-ng/* host=afs* earliest=-15m | stats  count by host
+
+https://nga-splunk-sh.msoc.defpoint.local:8000/en-US/app/search/search
+aws-syslog1-tts.nga.gov
+index=network sourcetype="citrix:netscaler:syslog" earliest=-15m
+index=* source=/opt/syslog-ng/* host=aws* earliest=-60m | stats count by host
+
+POP ds (could these be restarted at the same time? Or in 2 batches?)
+salt -C '*splunk-ds-1* not *.local' cmd.run 'uptime'
+salt -C '*splunk-ds-1* not *.local' cmd.run 'shutdown -r now'
+
+salt -C '*splunk-ds-2* not *.local' cmd.run 'uptime'
+salt -C '*splunk-ds-2* not *.local' cmd.run 'shutdown -r now'
+
+salt afs-splunk-ds-[2,3,4] cmd.run 'uptime'
+salt afs-splunk-ds-[2,3,4] cmd.run 'shutdown -r now'
+
+Don't forget ds-3 and ds-4
+
+salt '*splunk-ds*' cmd.run 'systemctl status splunk'
+
+
+POP dcn
+salt -C '*splunk-dcn-1* not *.local' cmd.run 'uptime'
+salt -C '*splunk-dcn-1* not *.local' cmd.run 'shutdown -r now'
+
+Did you get all of them?
+salt -C ' * not *local ' cmd.run 'uptime'
+
+###
+Customer Slices Wave 1
+###
+
+salt -C 'afs*local or saf*local or nga*local' test.ping --out=txt
+salt -C 'afs*local or saf*local or nga*local' cmd.run 'uptime'
+salt -C 'afs*local or saf*local or nga*local' cmd.run 'df -h'
+salt -C 'afs*local or saf*local or nga*local' pkg.upgrade
+
+epel repo is enabled on afs-splunk-hf ( I don't know why)
+had to run this to avoid issue with collectd package on msoc-repo
+
+yum update --disablerepo epel
+
+Silence Sensu first!
+Customer Slices Search Heads Only Wave 2
+salt -C 'afs-splunk-sh*local or saf-splunk-sh*local or nga-splunk-sh*local' test.ping --out=txt
+salt -C 'afs-splunk-sh*local or saf-splunk-sh*local or nga-splunk-sh*local' cmd.run 'df -h'
+salt -C 'afs-splunk-sh*local or saf-splunk-sh*local or nga-splunk-sh*local' cmd.run 'shutdown -r now'
+salt -C 'afs-splunk-sh*local or saf-splunk-sh*local or nga-splunk-sh*local' cmd.run 'uptime'
+
+###
+Customer Slices CMs Wave 2
+###
+
+Silence Sensu first!
+salt -C '( *splunk-cm* or *splunk-hf* ) not moose*' test.ping --out=txt
+salt -C '( *splunk-cm* or *splunk-hf* ) not moose*' cmd.run 'df -h'
+salt -C '( *splunk-cm* or *splunk-hf* ) not moose*' cmd.run 'shutdown -r now'
+salt -C '( *splunk-cm* or *splunk-hf* ) not moose*' cmd.run 'systemctl status splunk'
+salt -C '( *splunk-cm* or *splunk-hf* ) not moose*' cmd.run 'uptime'
+
+afs-splunk-hf has a hard time restarting. Might need to stop then start the instance. 
+
+reboot indexers 1 at a time (AFS cluster gets backed up when an indexer is rebooted)
+salt -C '*splunk-indexer-1* not moose*' test.ping --out=txt
+salt -C '*splunk-indexer-1* not moose*' cmd.run 'df -h'
+salt -C '*splunk-indexer-1* not moose*' cmd.run 'shutdown -r now'
+
+Wait for 3 green check marks
+#repeat for indexers 2 & 3
+salt -C '*splunk-indexer-2* not moose*' test.ping
+
+3 green checkmarks 
+salt -C '*splunk-indexer-3* not moose*' test.ping
+salt -L 'afs-splunk-indexer-3.msoc.defpoint.local,saf-splunk-indexer-3.msoc.defpoint.local' cmd.run 'df -h'
+
+NGA had a hard time getting 3 checkmarks. The CM was waiting on stuck buckets. Force rolled the buckets to get green checkmarks.
+
+salt -C '* not *.local' cmd.run 'uptime | grep days'
+***MAKE SURE the Sensu checks are not silenced. ***

+ 5 - 0
MDR Phantom Notes.txt

@@ -0,0 +1,5 @@
+MDR Phantom Notes
+
+Stop and Start the services
+/opt/phantom/bin/stop_phantom.sh
+/opt/phantom/bin/start_phantom.sh

+ 38 - 0
MDR Phantom Upgrade Notes.txt

@@ -0,0 +1,38 @@
+MDR Phantom Upgrade Notes
+
+package installation order matters
+postgresql94-libs-9.4.15-1PGDG.rhel7.x86_64.rpm
+before 
+postgresql94-9.4.15-1PGDG.rhel7.x86_64.rpm
+
+Ugh, just use the repo 
+--CENTOS 7 / RedHat 7--
+rpm -Uvh https://repo.phantom.us/phantom/4.5/base/7Server/x86_64/phantom_repo-4.5.15922-1.x86_64.rpm
+rpm -Uvh https://repo.phantom.us/phantom/4.6/base/7Server/x86_64/phantom_repo-4.6.19142-1.x86_64.rpm
+rpm -Uvh https://repo.phantom.us/phantom/4.8/base/7Server/x86_64/phantom_repo-4.8.24304-1.x86_64.rpm
+
+RUN this after upgrade (look for on-screen prompt )
+phenv python /opt/phantom/bin/ibackup.pyc --setup
+
+nginx failed to start with old version of phantom
+
+upgrade time!
+
+vagrant phantom creds
+admin
+Password1
+
+
+TEST
+phantom state.apply installed splunkuf
+
+
+PROD
+stop phantom
+take snapshot of drive
+clean yum cache
+install RPM for repo
+upgrade phantom
+
+
+

+ 87 - 0
MDR Portal Lamba Notes.txt

@@ -0,0 +1,87 @@
+MDR Portal Lambda Notes MSOCI-1067
+
+read moose port 8089
+send to portal port 443 HTTPS
+
+
+
+need execution role (IAM role needs perms to upload logs to )
+policy 
+policy_portal_data_sync_lambda
+description
+IAM policy for portal_data_sync_lambda
+
+role
+portal-data-sync-lambda-role
+description
+Allows Lambda functions to call AWS services on your behalf.
+
+
+
+
+create new lambda
+test_portal_data_sync
+
+
+VPC Moose
+vpc-0b455a7f22a13412b
+subnet-0b1e9d82bcd8c0a2c
+subnet-0d65c22aa4f76b634
+
+sg-03b225559f97d7a5e
+
+CREATE new SG that can only access 
+
+
+Access to Moose + portal
+8089 -> 10.96.101.59
+443 -> ANY
+
+portal-data-sync-lambda-sg
+allow lambda access to Moose
+	
+sg-0a0974a250be2cf07
+
+
+
+
+vpc -same as portal (test)
+vpc-075e58bd7619dc5b0
+
+subnet
+subnet-02575f16e22431ad6
+subnet-0662ad00a4fbf3034
+
+
+Create test for lambda function
+{
+  "test_read_issues": "True",
+  "test_splunk_search": "True",
+  "test_token": "redacted"
+}
+
+I think the token is for portal?
+
+Splunk username & password will be needed to access SH on port 8089
+See vault for creds
+test
+api-portal-data-sync-lambda
+M7*P6U9!0uHL3s1blTW*
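+A quick manual check that those creds work against the search head's management port (IP from the SG notes above; the password placeholder comes from vault):
+curl -sk -u api-portal-data-sync-lambda:'<password-from-vault>' https://10.96.101.59:8089/services/server/info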
+
+increase timeout to 20 seconds
+figure out sg for access proxy
+
+Terraform
+terraform apply -target=aws_lambda_function.portal_data_sync -target=aws_iam_policy.policy_portal_data_sync_lambda -target=aws_iam_role_policy_attachment.lambda-role -target=aws_iam_role_policy_attachment.lambda-role -target=aws_cloudwatch_log_group.function -target=aws_security_group.portal_lambda_sg -target=aws_security_group.portal_lambda_splunk_sg -target=aws_security_group_rule.portal_lambda_https -target=aws_security_group_rule.portal_lambda_splunk_in -target=aws_security_group_rule.portal_lambda_splunk_out -target=aws_cloudwatch_event_rule.portal_event_rule -target=aws_cloudwatch_event_target.portal_lambda_cloudwatch_target -target=aws_lambda_permission.allow_cloudwatch_to_call_portal_lambda
+
+
+
+
+
+Vault auth
+vault write auth/aws/role/portal auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/portal-data-sync-lambda-role policies=portal max_ttl=24h
+vault write auth/aws/role/portal-data-sync-lambda-role auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/portal-data-sync-lambda-role policies=portal max_ttl=24h
+
+
+vault write auth/aws/role/portal-data-sync-lambda-role auth_type=iam bound_iam_principal_arn=arn:aws:iam::477548533976:role/portal-data-sync-lambda-role policies=portal max_ttl=24h

+ 92 - 0
MDR Portal Notes.txt

@@ -0,0 +1,92 @@
+MDR Portal Notes
+https://github.mdr.defpoint.com/MDR-Content/customer_portal/wiki
+
+Portal is a custom Django application running in Docker.
+
+
+------------
+Deploy Process
+
+salt 'ip-10*' test.ping
+salt 'ip-10*' cmd.run 'docker images'
+salt 'ip-10*' cmd.run 'docker container ls'
+salt 'ip-10*' cmd.run 'docker stop portal'
+salt 'ip-10*' cmd.run 'docker stop nginx'
+salt 'ip-10*' cmd.run 'docker rm portal'
+salt 'ip-10*' cmd.run 'docker rm nginx'
+salt 'ip-10*' cmd.run 'docker images'
+salt 'ip-10*' cmd.run 'docker images --digests'
+salt 'ip-10*' cmd.run 'docker rmi <image-id>'
+salt 'ip-10*' state.sls docker
+salt 'ip-10*' state.sls docker.portal
+
+
+(from the wiki page https://github.mdr.defpoint.com/MDR-Content/customer_portal/wiki)
+Last time I tried, the ec2_tags grain targeting did not work.
+
+salt -G 'ec2_tags:Name:customer-portal' cmd.run 'docker images'   # grab the docker image ID for the container that needs to be updated
+salt -G 'ec2_tags:Name:customer-portal' cmd.run 'docker stop portal'
+salt -G 'ec2_tags:Name:customer-portal' cmd.run 'docker rm portal'
+salt -G 'ec2_tags:Name:customer-portal' cmd.run 'docker rmi <image id from above>'
+salt -G 'ec2_tags:Name:customer-portal' state.sls docker
+salt -G 'ec2_tags:Name:customer-portal' state.sls docker.portal
+
+--------
+Troubleshooting the docker image
+salt 'ip-10*' cmd.run 'docker container ls'
+salt 'ip-10*' cmd.run 'docker exec portal ls'
+salt 'ip-10*' cmd.run 'docker exec portal cat /opt/portal/saml/idps.json'
+salt 'ip-10*' cmd.run 'docker exec portal cat /opt/portal/saml/sp.json'
+This will init the portal variables by pulling them from vault. SHOULD NOT NEED TO RUN IT
+salt 'ip-10*' cmd.run 'docker exec portal sh /opt/portal/init.sh'
+salt 'ip-10*' cmd.run 'docker exec portal cat /opt/portal/init.sh'
+Portal auths to Vault then pulls the creds
+salt 'ip-10*' cmd.run 'docker exec portal cat /usr/local/src/vault_auth.sh'
+
+docker exec -ti portal /usr/local/src/vault_auth.sh test
+
+---
+Command line access
+docker exec -ti nginx bash
+
+salt 'ip-10*' cmd.run 'docker restart portal'
+salt 'ip-10*' cmd.run 'docker rm -f portal'
+salt 'ip-10*' cmd.run 'docker rm -f nginx'
+salt 'ip-10*' cmd.run 'docker pull 350838957895.dkr.ecr.us-east-1.amazonaws.com/portal_server'
+salt 'ip-10*' cmd.run 'docker pull 350838957895.dkr.ecr.us-east-1.amazonaws.com/django_nginx'
+salt 'ip-10*' cmd.run 'docker image ls'
+
+salt 'ip-10*' state.sls docker.portal
+
+ALL THE ERRORS:
+nginx: [emerg] host not found in upstream "portal:8000" in /etc/nginx/nginx.conf:27
+
+{"errors":["error making upstream request: error making request: Post https://sts.amazonaws.com//: dial tcp 52.94.241.129:443: i/o timeout"]}
+
+[WARNING ] The following arguments were ignored because they are not recognized by docker-py: [u'dns-search', u'network-alias']
+[WARNING ] The following arguments were ignored because they are not recognized by docker-py: [u'dns-search']
+
+SOLUTION:
+NOT SURE! Try stopping the docker containers and the docker service and starting back up with the salt state.
+Seems to be a proxy issue.
+working server...
+[dev]root@ip-10-97-10-248:~:# docker exec portal wget portal
+--2020-04-30 17:44:37--  http://portal/
+Resolving proxy.msoc.defpoint.local (proxy.msoc.defpoint.local)... 10.96.101.188
+Connecting to proxy.msoc.defpoint.local (proxy.msoc.defpoint.local)|10.96.101.188|:80... connected.
+Proxy request sent, awaiting response... 503 Service Unavailable
+2020-04-30 17:44:38 ERROR 503: Service Unavailable.
+broken server...
+[dev]root@ip-10-97-9-59:~:# docker exec portal wget portal
+--2020-04-30 17:27:45--  http://portal/
+Resolving proxy.msoc.defpoint.local (proxy.msoc.defpoint.local)... failed: Name or service not known.
+wget: unable to resolve host address 'proxy.msoc.defpoint.local'
+
+docker exec portal wget portal
+
+
+
+sha256:598168ec922e79106fa3f8af35dd33313aa32ae859e77673b65d52ce93852810
+
+
+

+ 104 - 0
MDR Portal WAF Notes.txt

@@ -0,0 +1,104 @@
+Reference OWASP whitepaper
+https://d0.awsstatic.com/whitepapers/Security/aws-waf-owasp.pdf
+
+
+portal-generic-restrict-sizes DONE
+Filters in portal-generic-size-restrictions
+The length of the Body is greater than 4096.
+The length of the Query string is greater than 1024.
+The length of the Header 'cookie' is greater than 4093.
+The length of the URI is greater than 512.
+/complete/saml
+
+
+TEST
+failing
+is it The length of the Header 'cookie' is greater than 4093. ? nope
+is it The length of the URI is greater than 512. ? nope
+The length of the Query string is greater than 1024. ? nope
+The length of the Body is greater than 4096. ? YES!
+
+trying
+The length of the Body is greater than 8000. Nope
+The length of the Body is greater than 12000. YES!
+The length of the Body is greater than 11168. sometimes!
+The length of the Body is greater than 12288. YES!
+
+***try to exclude the saml URI****
+URI starts with: "/complete/saml" after decoding as URL.
+TODO
+
+Add URL filter for the rule portal-generic-restrict-sizes NOT complete/saml
+URI starts with: "/complete/saml" after decoding as URL.
+
+portal-generic-match-admin-company-url
+portal-generic-match-api-url
+URI starts with: "/api/issue/" after decoding as URL.
+/api/issue/
+
+
+
+portal-generic-enforce-csrf 
+https://stackoverflow.com/questions/38485028/what-is-the-difference-between-set-cookie-and-cookie
+The length of the Header 'cookie' is equal to 118.
+HTTP method matches exactly to: "post" after converting to lowercase.
+/complete/saml
+
+TEST
+test is using set-cookie
+Csrftoken cookie is size 172
+csrftoken=0aHJ5IjG7jegZikOds5IFWRya2k60UuN7qvyqAXsJ4W2DkwKdr1e8oguzwywmgS3; expires=Wed, 03 Feb 2021 16:28:19 GMT; HttpOnly; Max-Age=31449600; Path=/; SameSite=Lax; Secure
+csrftoken=bEGgb6Z8ggr4q4Urxw4a9J7JEHWhwTAecBWpXYlxo82FEpZlpLYTnHnej98ff5ex; expires=Mon, 01 Feb 2021 23:18:07 GMT; HttpOnly; Max-Age=31449600; Path=/; SameSite=Lax; Secure
+sessionid=9b3azu262faw7n16e94zwiwijwsyycf5; HttpOnly; Path=/; SameSite=Lax; Secure
+sessionid=29b3rlsvbbijp64jcnrzn78ctzqvlm8d; HttpOnly; Path=/; SameSite=Lax; Secure
+Cookie header does NOT exist
+
+***see if we can use the equal not less than***
+***can we use an OR option?***
+
+PROD
+prod using cookie
+Cookie header does exist
+
+
+portal-generic-detect-admin-access
+
+TEST
+Have the whole page locked down, so there is no need for a difference between test/prod; the 0.0.0.0/0 should give access. 
+    "12.245.107.250/32",   # DPS Office Legato
+    "12.204.167.162/32",   # DPS Office San Antonio
+    "54.86.98.62/32",      # DPS AWS User VPN
+    "75.138.227.80/32",    # Duane Waddle
+    "24.11.231.98/32",     # George Starcher
+    "99.151.37.185/32",    # Wesley Leonard
+    "70.106.200.157/32",   # John Reuther
+    "108.243.20.48/32",    # Ryan Plas
+    "73.10.53.113/32",     # Rick Page Home
+    "50.21.207.50/32",     # Brad Poulton
+    "70.160.60.248/32",    # Brandon Naughton 
+    "173.71.212.4/32",     # Ryan Howard
+
+
+PROD
+have admin page locked down to whitelisted IPs
+73.10.53.113/32
+99.151.37.185/32
+170.248.173.247/32
+170.248.173.245/32
+
+TEST
+***TEST WAF rule should be 0.0.0.0 for ADMIN access; SG will provide protection****
+
+10.* only for /admin
+
+    cidr_blocks = ["${lookup(local.workspace-default-portal-cidrs,terraform.workspace,"")}"]  <-SG No changes needed
+    
+    
+    
+admin_remote_ipset <- WAF changes needed test = 0.0.0.0 PROD = these IPs
+73.10.53.113/32
+99.151.37.185/32
+170.248.173.247/32
+170.248.173.245/32
+
+

+ 15 - 0
MDR Qualys Notes.txt

@@ -0,0 +1,15 @@
+How to purge results
+Assets -> Assets Search -> Actions -> Purge
+
+List of tickets that are associated with POAMs that are being ignored/suppressed. (Chris is the owner of the tickets, probably because he is the asset owner.)
+Remediation -> tickets
+
+-------------
+Scanner types
+https://aws.amazon.com/marketplace/pp/Qualys-Inc-Qualys-Virtual-Scanner-Appliance-Pre-Au/B01BLHOYPW
+
+the "pre-authorized" only understands how to scan using the EC2 connector workflow
+the not-pre-authorized can (in theory) scan any IP, but cannot use the EC2 connector workflow
+the pre-authorized scanner can only scan EC2 instances, nothing more and nothing less
+the not-pre-authorized scanner can never scan EC2 instances, full stop
+and for the purposes of this discussion ... an RDS instance is not an EC2 instance

+ 21 - 0
MDR RedHat Notes.txt

@@ -0,0 +1,21 @@
+Review commands run on the command line:
+sudoreplay or "ausearch -ua <username>"
+
+---------------------------
+Redhat.com
+Prod Subscription Account Number: 6195362
+Test : 6076020
+
+TEST!
+subscription-manager register --activationkey=packerbuilder  --org=11696629
+
+Pillar for RHEL subscription
+salt/pillar/dev/rhel_subs.sls
+salt/pillar/prod/rhel_subs.sls
+
+
+----------------------------
+System emails are being sent to Moose Splunk. 
+index=junk sourcetype=_json "headers.Subject"="*rotatelogs.sh"
+
+-----------------------------

+ 19 - 0
MDR Reposerver Notes.txt

@@ -0,0 +1,19 @@
+How to add a new package to the reposerver (which we want to move to S3)
+
+drop it in /var/www/html/redhat/msoc/Packages
+make sure it's owned by apache
+sudo -u apache /bin/bash
+cd /var/www/html/redhat && createrepo msoc
+exit
+restorecon -R /var/www/html/redhat/
+
+#oneliner
+chown -R apache:apache /var/www/html/redhat/msoc/Packages/ && cd /var/www/html/redhat/ && sudo -u apache createrepo msoc && restorecon -R /var/www/html/redhat/
+
+#clean out the cache on the target server
+yum clean all
+
+#view the available packages
+yum --disablerepo="*" --enablerepo="msoc-repo" list available
+
+yum install savinstpkg

+ 38 - 0
MDR Salt Notes.txt

@@ -0,0 +1,38 @@
+Deploying Salt event monitoring for Splunk
+1. push new git files
+2. sync_all
+3. refresh_pillar
+4. salt state for updating minions config
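+Roughly, from the salt master (targets and the step-4 state name are placeholders):
+salt '*' saltutil.sync_all
+salt '*' saltutil.refresh_pillar
+salt '*' state.sls <minion_config_state>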
+    
+    
+--------
+Custom grains
+_grains/mdr_environment.py
+This file discovers which AWS account the EC2 instance is in.
+The grain is called mdr_environment,
+but it is broken on the salt master; the minions have a static file, /etc/salt/grains.
+saltutil.sync_grains
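+Spelled out from the master, to resync the custom grain and check it on the minions:
+salt '*' saltutil.sync_grains
+salt '*' grains.item mdr_environment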
+
+
+--------
+Cron job for state.apply <DISABLED DURING REFACTOR>
+salt manages a cron job on the master 
+
+
+--------
+salt-minion reactor
+when a salt-minion restarts the reactor kicks off a state.apply. This causes a notification when the salt-minion starts up and you try to apply a state. 
+
+
+
+--------
+gitfs lock file
+/var/cache/salt/master/gitfs/gitfs-base-msoc/.git/update.lk
+
+
+--------
+Switch branch on test salt-master for testing
+salt-run fileserver.update
+salt-run fileserver.file_list | grep mystuff
+
+

+ 21 - 0
MDR Salt Splunk Whitelisting FedRAMP Notes.txt

@@ -0,0 +1,21 @@
+Notes from talking with Fred
+Salt State -> Push cron job + bash script to Minions -> Bash script writes to file -> Splunk UF reads file and indexes it -> Splunk creates a lookup file, which is compared to a baseline lookup file. Differences between the two are displayed on a dashboard and can be "approved". The approve button runs a search that merges the two lookups and updates the baseline. 
+
+Prelinking needs to be turned off
+https://access.redhat.com/solutions/61691
+
+proc f
+
+
+Dashboard is broken; need to fix it. Remove the blacklist variable and it will start working. 
+
+app uses SHA256 hashes
+
+Splunk search containing whitelist
+|inputlookup ProcessLookup
+|inputlookup ProcessLookup | search process=*splunk*
+|inputlookup ProcessLookup | search process=*splunk* | dedup file_hash
+
+Don't look for salt as a process. It is started with the python process. 
+
+

+ 161 - 0
MDR Salt Upgrade.txt

@@ -0,0 +1,161 @@
+MDR Salt Upgrade.txt
+https://jira.mdr.defpoint.com/browse/MSOCI-1164 
+
+Done when:
+
+All salt minions are running same version (2018)
+All server minions are pegged to specific version (that can be changed at upgrade time)
+Remove yum locks for minion
+
+Notes:
+
+Packer installs 2019 repo (packer/scripts/add-saltstack-repo.sh & packer/scripts/provision-salt-minion.sh) , then os_modifications ( os_modifications.repo_update) overwrites the repo with 2018. This leaves the salt minion stuck at the 2019 version without being able to upgrade. 
+
+#salt master (two salt repo files)
+
+/etc/yum.repos.d/salt.repo (salt/fileroots/os_modifications/minion_upgrade.sls)
+
+[salt-2018.3]
+name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
+baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
+failovermethod=priority
+enabled=1
+/etc/yum.repos.d/salt-2018.3.repo
+
+[salt-2018.3]
+name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
+baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
+failovermethod=priority
+enabled=1
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/saltstack-signing-key, file:///etc/pki/rpm-gpg/centos7-signing-key
+
+#reposerver.msoc.defpoint.local
+/etc/yum.repos.d/salt.repo
+
+[salt-2018.3]
+name=SaltStack 2018.3 Release Channel for Python 2 RHEL/Centos $releasever
+baseurl=https://repo.saltstack.com/yum/redhat/7/$basearch/2018.3
+failovermethod=priority
+enabled=1
+gpgcheck=0
+There are two repo files in salt, both 2018.3; one has proxy=none and the other doesn't. The salt_rhel.repo is just for RHEL and the other is for CentOS. 
+
+salt/fileroots/os_modifications/files/salt.repo (salt/fileroots/os_modifications/repo_update.sls uses this file and it is actively pushed to CENTOS minions)
+
+salt/fileroots/os_modifications/files/salt_rhel.repo  (salt/fileroots/os_modifications/repo_update.sls uses this file and it is actively pushed to RHEL minions)
+
+
+
+/etc/yum.repos.d/salt-2018.3.repo ( not sure how this file is being pushed. possibly pushed from Chris fixing stuff )
+
+
+STEPS
+1. remove /etc/yum.repos.d/salt-2018.3.repo from test
+1.2 remove yum versionlock in test (if there are any; None found)
+1.3 yum clean all ; yum makecache fast
+2. use git to update os_modifications/files/salt_rhel.repo file to 2019.2.2 ( match salt master)
+2.1 use salt + repo to update minion to 2019.2.2
+2.5 salt minion cmd.run 'rm -rf /etc/yum.repos.d/salt-2018.3.repo'
+2.5.1 salt minion cmd.run 'ls /etc/yum.repos.d/salt*'
+2.6 salt salt-master* state.sls os_modifications.repo_update
+2.7 salt salt-master* cmd.run 'yum clean all ; yum makecache fast'
+2.8 salt minion cmd.run 'yum update salt-minion -y' 
+2.9 salt minion cmd.run 'yum remove salt-repo -y'
+3. upgrade salt master to 2019.2.3 using repo files as a test
+4. upgrade salt minions to 2019.2.3 using repo files as a test
+5. push to prod. 
+
+
+
+
+
+PROBLEMS
+bastion.msoc.defpoint.local
+error: unpacking of archive failed on file /var/log/salt: cpio: lsetfilecon
+mailrelay.msoc.defpoint.local
+pillar broken
+
+
+PROD
+
+1. remove dup repos
+1.1 remove /etc/yum.repos.d/salt-2018.3.repo from environment (looks like it was installed with a RPM) 
+1.1.1 salt minion cmd.run 'yum remove salt-repo -y' (does not remove the proper salt.repo file)
+1.1.2 salt minion cmd.run 'rm -rf /etc/yum.repos.d/salt-2018.3.repo'   (just to make sure)
+1.2 remove yum versionlock
+ yum versionlock list
+1.2.1 salt minion cmd.run 'yum versionlock delete salt-minion'
+1.2.2 salt minion cmd.run 'yum versionlock delete salt'
+1.2.3 salt minion cmd.run 'yum versionlock delete salt-master'
+2. use salt + repo to update master/minion to 2019.2.2
+2.1 use git to update os_modifications/files/salt_rhel.repo file to 2019.2.2 pin to minor release (match TEST)(https://repo.saltstack.com/yum/redhat/$releasever/$basearch/archive/2019.2.2)
+2.2 Check for environment grain ( needed for repo_update state file. )
+2.2.1 salt minion grains.item environment
+2.6 salt salt-master* state.sls os_modifications.repo_update
+2.7 salt salt-master* cmd.run 'yum clean all ; yum makecache fast'
+2.7.5 salt minion cmd.run 'yum check-update | grep salt'
+2.8 salt minion cmd.run 'yum update salt-minion -y' 
+OR salt minion pkg.upgrade name=salt-minion
+  salt minion pkg.upgrade name=salt-minion fromrepo=salt-2019.2.4
+2.9 salt master cmd.run 'yum update salt-master -y'
+3. ensure salt master and minions are at that minor version. 
+3.1 salt * test.version
+6. upgrade test and prod to 2019.2.3 via repo files to ensure upgrade process works properly. 
+6.5 fix permissions on master to allow non-root users to be able to run ( or run highstate )
+6.5.1 chmod 700 /etc/salt/master.d/
+6.5.2 then restart master
+7. never upgrade salt again. 
+
+PROBLEMS
+The pillar depends on a custom grain, and the custom grain depends on specific python modules. The moose servers seem to have python module issues. 
+These commands helped fix them (python packages from yum vs. pip): 
+ImportError: cannot import name certs
+pip list | grep requests
+yum list installed | grep requests
+sudo pip uninstall requests
+sudo pip uninstall urllib3
+sudo yum install python-urllib3
+sudo yum install python-requests
+pip install boto3 (this installs urllib3 via pip as a dependency!)
+pip install boto
+
+
+slsutil.renderer salt://os_modifications/repo_update.sls
+If the grain is wrong on the salt master but correct with salt-call, restart the minion. 
+
+salt moose* grains.item environment
+cmd.run 'salt-call grains.get environment'
+cmd.run 'salt-call -ldebug --local grains.get environment'
+cmd.run 'salt-call -lerror --local grains.get environment'
+
+boto3 issue
+on indexers python3 is installed and pip points to python3 not python2
+/usr/local/lib/python3.6/site-packages/pip
+
+Salt root is setup with python3
+salt moose-splunk-indexer-1* cmd.run 'pip install boto3'
+salt 'moose*indexer*' cmd.run 'pip install boto3'
+
+salt-call is different connecting to python2
+/bin/bash: pip: command not found
+salt 'moose*indexer*' cmd.run "salt-call cmd.run 'pip install boto3'"
+
+
+resolution steps
+Duane will remove /usr/local/bin/pip which is pointing to python3
+pip should be at /usr/bin/pip
+yum --enablerepo=epel -y reinstall python2-pip
+
+to proceed:
+1. install boto3 via pip
+2. salt '*.local' cmd.run 'pip install --upgrade urllib3'
+
+
+Permissions issue? Run this command as root:
+salt salt* state.sls salt_master.salt_posix_acl
+
+
+

+ 104 - 0
MDR ScaleFT Notes.txt

@@ -0,0 +1,104 @@
+OKTA owns scaleft
+
+------------
+To add a user to a ScaleFT group, just add them to the matching group in OKTA; ScaleFT will automagically query OKTA to pull the new user in via a service account. 
+
+-------------
+Client Setup
+Download and install Sft. 
+
+https://www.scaleft.com/docs/setup/enrolling-a-client/
+
+Enroll a new client
+sft enroll --team mdr
+
+#this will configure your local ssh config file. Add !User as shown below.
+sft ssh-config
+$HOME/.ssh/config
+
+sft list-servers
+
+use a bastion host with scaleft
+sft ssh -bastion dev-bastion dev-salt-master
+
+resolve server (get ID)
+sft resolve proxy
+
+ssh into id of the server
+ssh d430bf67-c655-4280-b8ab-9b8bd90ec074
+
+~/.ssh/config FOR MACS 
+#SFT configuration. Add the !User centos to ssh using the msoc_build key
+Match exec "/usr/local/bin/sft resolve -q  %h" !User centos
+    ProxyCommand "/usr/local/bin/sft" proxycommand  %h
+    UserKnownHostsFile "/Users/bradpoulton/Library/Application Support/ScaleFT/proxycommand_known_hosts"
+
+
+SCP push a file works with scaleFT
+scp deleteme.txt dev-bastion:~/deleteme
+scp junk_index_new.tar.gz dev-bastion:~/junk_index_new.tar.gz
+
+SCP pull a file
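+#pull should just be the reverse of the push (untested sketch)
+scp dev-bastion:~/deleteme ./deleteme.txt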
+
+Duane's script
+
+#!/usr/bin/env bash
+#
+#
+# sftp_as2 afs-splunk-sh splunk
+REMOTE_HOST=$1
+REMOTE_USER=$2
+SFTP_SUBSYSTEM="/usr/libexec/openssh/sftp-server"
+function usage {
+echo "sftp_as afs-splunk-sh splunk"
+}
+if [[ $# -ne 2 ]]; then
+        usage
+        exit 1
+fi
+sftp -s "sudo -i -u $REMOTE_USER $SFTP_SUBSYSTEM" $REMOTE_HOST
+
+#usage
+./sftp_as2 dev-saf-splunk-indexer-1 brad_poulton
+
+-------------
+Agent/Server Setup
+
+Salt pushes out token and agent then starts the agent. The agent connects to ScaleFT and updates the webpage. 
+
+Re-enroll the agent if it is not showing up on the scaleft.com website. 
+systemctl restart sftd
+
+Install dir
+/etc/sft
+
+enrollment token (gets deleted after server is enrolled successfully)
+/etc/sft/enrollment.token
+
+remove the server's auth token to force them to reauth with scaleft.com (use this if you have deleted the server in the webpage)
+rm -rf /var/lib/sftd/device.token
+
+Configuration file
+cat /etc/sft/sftd.yaml
+
+Salt grain/pillar is used to determine if dev or prod
+
+salt '' state.sls os_modifications.scaleft
+
+Troubleshooting 
+level=error msg="task init failed" err="Server is deleted" task=refreshServerToken
+remove device.token, place the enrollment.token and restart
+
+Temporarily change the name (salt state currently is not working on the name for dev-salt-master)
+Change the name and canonical name in sftd.yaml
+vim /etc/sft/sftd.yaml
+
+restart the service
+systemctl restart sftd
+
+
+---------------
+Projects 
+
+servers belong to projects
+people / groups can be granted access to projects, which gives access to the related servers

+ 88 - 0
MDR Sensu Notes.txt

@@ -0,0 +1,88 @@
+See the MDR Migration to Sensu Go.txt file for more details
+
+In version 5.16 the default admin password was removed in favor of sensu-backend init with environment variables. 
+
+Sen$uP@ssw0rd!
+
+systemctl start sensu-backend
+export SENSU_BACKEND_CLUSTER_ADMIN_USERNAME=YOUR_USERNAME
+export SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD=YOUR_PASSWORD
+sensu-backend init
+
+
+sensuctl create --file filename.json
+
+---
+type: oidc
+api_version: authentication/v2
+metadata:
+  name: oidc_okta
+spec:
+  additional_scopes:
+  - groups
+  client_id: <nope>
+  client_secret: <nope>
+  redirect_uri: https://sensu.msoc.defpoint.local:8000/api/enterprise/authentication/v2/oidc/callback
+  server: https://mdr-multipass.okta.com
+  groups_claim: groups
+  groups_prefix: 'okta'
+  username_claim: email
+  username_prefix: 'okta'
+  
+#cluster role binding for okta
+sensuctl cluster-role-binding create okta --cluster-role=cluster-admin --group=okta:mdr-admins
+sensuctl cluster-role-binding create mdr-admin --cluster-role=mdr-admin --group=ldap:mdr-admins
+  
+sensuctl cluster-role-binding list
+
+
+type: ClusterRoleBinding
+api_version: core/v2
+metadata:
+  name: cluster-admin
+spec:
+  role_ref:
+    name: cluster-admin
+    type: ClusterRole
+  subjects:
+  - name: okta:group
+    type: Group
+    
+    
+
+running ldap search with basedn \"ou=groups, dc=mdr-multipass, dc=okta, dc=com\" and filter \"(\u0026(objectclass=groupOfNames)(uniqueMember=uid=brad.poulton,ou=users,dc=mdr-multipass,dc=okta,dc=com))\"
+ldapsearch -x -H ldaps://mdr-multipass.ldap.okta.com -b dc=mdr-multipass,dc=okta,dc=com -D "uid=ldap.read@defpoint.com,dc=mdr-multipass,dc=okta,dc=com" -W
+
+
+# brad.poulton, users, mdr-multipass.okta.com
+dn: uid=brad.poulton,ou=users,dc=mdr-multipass,dc=okta,dc=com
+objectClass: top
+objectClass: person
+objectClass: organizationalPerson
+objectClass: inetOrgPerson
+uid: brad.poulton
+uniqueIdentifier: 00u22ymdgdKPTDyR5297
+organizationalStatus: ACTIVE
+givenName: Brad
+sn: Poulton
+cn: Brad Poulton
+mail: brad.poulton@accenturefederal.com
+mobile: 4355126342
+
+
+# mdr-admins, groups, mdr-multipass.okta.com
+dn: cn=mdr-admins,ou=groups,dc=mdr-multipass,dc=okta,dc=com
+objectClass: top
+objectClass: groupofUniqueNames
+cn: mdr-admins
+uniqueIdentifier: 00g1m5jakrmiDwISV297
+uniqueMember: uid=chris.lynch,ou=users,dc=mdr-multipass,dc=okta,dc=com
+uniqueMember: uid=ryan.damour,ou=users,dc=mdr-multipass,dc=okta,dc=com
+uniqueMember: uid=duane.waddle,ou=users,dc=mdr-multipass,dc=okta,dc=com
+uniqueMember: uid=brad.poulton,ou=users,dc=mdr-multipass,dc=okta,dc=com
+
+
+Jan 14 23:48:51 sensu sensu-backend: {"component":"authentication/v2","level":"debug","msg":"running ldap search with basedn \"ou=groups, dc=mdr-multipass, dc=okta, dc=com\" and filter \"(\u0026(objectclass=groupOfNames)(uniqueMember=uid=brad.poulton,ou=users,dc=mdr-multipass,dc=okta,dc=com))\"","time":"2020-01-14T23:48:51Z"}
+
+brad-test
+SensuA123

+ 75 - 0
MDR Splunk MSCAS Notes.txt

@@ -0,0 +1,75 @@
+For the Smart and Final customer
+
+https://jira.mdr.defpoint.com/browse/MSOCI-890
+https://docs.microsoft.com/en-us/cloud-app-security/siem
+https://splunkbase.splunk.com/app/3110/
+
+
+https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/syslog/files/customers/afs/conf.d/010-mcas.conf 
+
+sourcetype=microsoft:cas
+index=app_mscas sourcetype="microsoft:cas"
+
+/opt/syslog-ng/mcas/afssplhf103.us.accenturefederal.com/log
+/opt/syslog-ng/mcas/afssplhf103.us.accenturefederal.com/log/2019-09-11/afsspaf101.us.accenturefederal.com/afsspaf101.us.accenturefederal.com/security.log
+
+
+
+start EC2 instance
+then build docker container
+is this just a HF or is it syslog-ng also? Two docker containers:
+one docker container for java and one for Splunk HF
+
+java agent to send to syslog-ng
+
+ec2 instance
+ms-cas
+t2.small
+install docker
+add java docker container
+add java code to container
+
+------------------------------
+Going to try openjdk because oracle java requires login to pull the images
+https://hub.docker.com/_/openjdk
+docker pull openjdk
+
+JAVA Command
+java -jar mcas-siemagent-0.87.20-signed.jar [--logsDirectory DIRNAME] [--proxy ADDRESS[:PORT]] --token TOKEN &
+
+Docker commands
+cd 
+docker image build -t customjava .
+docker run -d --name customjava --volume /root/java:/logs -t customjava
+
+
+FROM openjdk:12
+COPY . /usr/src/myapp
+WORKDIR /usr/src/myapp
+RUN mkdir /logs
+VOLUME /logs
+RUN echo "This is the place" > /logs/thisisit.txt
+CMD java -jar mcas-siemagent-0.111.126-signed.jar --token yourmom --logsDirectory /logs
+
+
+Because we are using a custom docker image, we would like it to be stored in the docker registry. This is the headache. 
+
+DUANE! 
+MS CAS has a dumb little java agent
+It has to connect to MS servers, and output CAS data via syslog - it can't do anything else
+[ and they don't publicize the API it uses ]
+so, our approach was to run dumb little java agent in a docker container, on customer premises in the POP
+
+we have a POP node in smart and final called the "data collection node"  (dcn)
+(and evolution just for crap like this)
+but it's become a huge yak that needs shaved
+to run the agent I need a container
+so I made a container
+to run the container I need to upload the container to a registry
+so I uploaded it to our registry
+now to give the nodes on customer prem access to the registry they need AWS API credentials
+to give them API creds I need to be able to distribute said creds from the salt master
+to distribute them from the master, I need <something>, and decided on GPG encrypted pillars
+before I could enable encrypted pillars I needed to clean up the salt master config files
+which is done
+so I can finish the encrypted pillars, give creds to the DCN node, connect it to the registry, get the container running

+ 95 - 0
MDR Splunk NGA Data Pull Request.txt

@@ -0,0 +1,95 @@
+Stand up a new "search head" that just has splunk installed on it; no need to configure the splunk instance. The splunk instance will query the actual search head and pull the data out. See the Hurricane Labs python script. 
+
+https://jira.mdr.defpoint.com/browse/MSOCI-1013
+
+vpc-05e0cf38982e048db
+
+subnet-0a2384bce743cf303
+
+MSOC_RedHat_Minion_201807250350 (ami-01c2c25dc719d3546) USED CENTOS 7 AWS AMI 
+
+m4.large
+
+generated SSH key pair bradp.pem
+ 
+nga-splunk-searches
+
+username is centos
+
+delete key pair when done from AWS and the bastion host! bradp
+
+delete svc-searches from nga splunk SH when done
+
+delete 1TB EBS volume when done
+
+
+
+search "index=network sourcetype=qos_syslog CA98C333-F830-0B45-A543-4450CDFDA84A 1571414560 Accept 47048" -output rawdata -maxout 0 -max_time 0 -uri https://10.2.2.122:8089
+
+
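+Rough sketch of how the chunked pull can be scripted (reconstructed, not the actual script; the SPL
+filter, window size and credentials are placeholders -- the prod runs below look like ~15 minute slices):
+i=0
+win=900   # slice width in seconds
+while [ $i -lt 1200 ]; do
+    /opt/splunk/bin/splunk search \
+        "index=network sourcetype=qos_syslog earliest=-$(( (i+1)*win ))s latest=-$(( i*win ))s" \
+        -output rawdata -maxout 0 -max_time 0 \
+        -uri https://10.2.2.122:8089 -auth svc-searches:PASSWORD \
+        > "${i}_$((i+1))export.raw"
+    i=$((i+1))
+done
+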
+
+start fail
+1019_1020export.raw
+1018_1019 times:
+head - 2019-09-15T09:14:59
+tail - 2019-09-15T09:09:31
+
+end fail
+1091_1092export.raw
+1093_1094 times:
+head - 2019-09-14T14:14:59
+tail - 2019-09-14T14:00:00
+
+
+i=5000
+start time 2019-09-15T09:14:59
+stop time 2019-09-14T14:00:00
+
+
+start fail
+784_785export.raw
+783_784 times:
+head - 2019-09-17T19:59:59
+tail 2019-09-17T19:46:54
+
+end fail
+857_858export.raw
+859_860 times:
+head  2019-09-17T00:29:59
+tail 2019-09-17T00:15:00
+
+i=6000
+start time 2019-09-17T20:00:00
+stop time 2019-09-17T00:15:00
+
+start fail
+909_910export.raw
+907_908 times:
+head - 2019-09-16T12:59:59
+tail - 2019-09-16T12:45:00
+
+end fail
+982_983export.raw
+985_986 times:
+head - 2019-09-15T17:29:59
+tail - 2019-09-15T17:15:00
+
+i=7000
+start time 2019-09-15T17:30:00
+stop time 2019-09-16T12:45:00
+
+
+
+
+
+#from my mac
+aws s3 ls s3://nga-mdr-data-pull
+aws s3 cp nga-splunk-pull.zip s3://nga-mdr-data-pull
+aws --profile=mdr-prod s3 presign s3://nga-mdr-data-pull/nga-splunk-pull.zip --expires-in 86400
+
+aws --profile=mdr-prod s3 presign s3://nga-mdr-data-pull/nga-splunk-pull.zip --expires-in 604800
+
+https://nga-mdr-data-pull.s3.amazonaws.com/nga-splunk-pull.zip?AWSAccessKeyId=ASIAW6MA4LDMBGUOE7Q6&Signature=6WZ9KdHfH4rj28Ey5hrTib8HcHM%3D&x-amz-security-token=FQoGZXIvYXdzEFIaDCbQsc24x7kkQnhLQSL%2FAV4UBSVowGvhyMyS41rQtbtnmznvrbIu5Y9CCrxJ65RP%2BMeHz7Jkwu8BFEzNeeIT5M6Dfcd1NdFkqXBjE54y6G6HujSSLPk8gp2UqGDKkqMDE3qzrXfHRKaIlMInkACQi6VPpRDjFYGnnILS8vO5gjzqr9HUAsIgfVwpEuVf%2FPBbEcuUH87kZS6FqyQHTBc%2BcPk8KetsX2IuLmpOVAysip3IGgx2duVETNqKH0uXOM%2FUBygyJ7gD3DLoQWqCHQvxG0AfO0vEkRAZxgLKSDm6E2c8d9mJ5I6yXl2xBK7ii5bKWmhWtnPGYrErVFTxhfqeI6SHwzJOsLlNdkAC6nSKRyi1wMztBQ%3D%3D&Expires=1572625186
+
+
+tail -1 1018_1019export.raw

+ 72 - 0
MDR Splunk Notes.txt

@@ -0,0 +1,72 @@
+MDR Splunk Notes
+
+
+------------
+Change user to Splunk
+
+sudo -iu splunk
+
+
+
+---
+How to apply the git changes to the CM or customer DS. Be patient, it is splunk. Review logs in salt
+----
+Chris broke Jenkins, but he moved the splunk git repo to gitfs
+
+1. add your changes to the appropriate git repo (msoc-moose-cm) 
+2. then use the salt state to push the changes and apply the new bundle
+    salt 'moose-splunk-cm*' state.sls splunk.master.apply_bundle_master
+    salt 'afs-splunk-cm*' state.sls splunk.master.apply_bundle_master
+
+Apply the git changes to the splunk UFs (Salt Deployment Server)
+
+Moose DS has a salt file for pushing apps out directly to UFs. 
+
+Customer DS
+salt 'afs-splunk-ds*' state.sls splunk.deployment_server.reload_ds
+
+to view the splunk command output look at the logs in splunk under the return.cmd_...changes.stdout or stderr
+index=salt sourcetype=salt_json fun="state.sls"
+
+------
+Splunk CM is the license master and the salt master is used to push out a new license. Each customer has its own license. 
+
+------
+
+TEST SPLUNK CM admin password
+admin
+6VB^8V3CFjbaiZ4Q#hLjNW3a1
+
+
+    
+    
+##############
+#
+#SEARCHES
+#
+###############
+| tstats values(sourcetype) where index=* by index
+
+
+#aws cloudtrail 
+index=app_aws sourcetype=aws:cloudtrail
+
+#proxy
+index=web sourcetype=squid:access:json
+
+CLI search
+/opt/splunk/bin/splunk search 'index=bro' -earliest_time '-5m' output=raw > test.text
+
+#NGA data request for checkpoint logs
+index=network sourcetype=qos_syslog (service=443 OR service=80) NOT src=172.20.109.16 NOT src=172.20.109.17 NOT dst=172.20.109.16 NOT dst=172.20.109.17 NOT (action=Drop src=172.20.8.3)
+
+updated
+index=network sourcetype=qos_syslog (service=443 OR service=80) NOT (action=Drop src=172.20.8.3)
+
+#Vault
+index=app_vault
+
+| rest /services/data/indexes/
+| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te
+
+

+ 364 - 0
MDR Splunk SAF Offboarding Notes.txt

@@ -0,0 +1,364 @@
+MDR Splunk SAF Data Offboarding.txt
+
+Currently a 3 node multi-site cluster. Possible solution: set the search and rep factors to 3 and 3, then pull the index files off one of the indexers to a new instance. On the new instance, set up a multi-site cluster with one site and see if you can read the indexed files.
+
+
+https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite
+https://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Multisitedeploymentoverview
+
+1 - cluster master
+1 - indexer with search 
+
+
+
+/opt/splunkdata/hot/normal_primary/
+
+indexes:
+app_mscas
+app_o365
+dns
+forescout
+network
+security
+te
+
+
+File paths
+#Where in salt are the search / rep factors
+salt/pillar/saf_variables.sls 
+splunk:
+  cluster_name: saf
+  license_master: saf-splunk-cm
+  idxc:
+    label: "saf_index_cluster"
+    pass4SymmKey: "$1$ekY601SK1y5wfbd2ogCNRIhn+gPeQ+UYKzY3MMAnPmmz"
+    rep_factor: 2
+    search_factor: 2
+
+#where in splunk are the configs written to?
+/opt/splunk/etc/system/local/server.conf
+
+[clustering]
+mode = master
+multisite = true
+replication_factor = 2
+search_factor = 2
+max_peer_sum_rep_load = 15
+max_peer_rep_load = 15
+max_peer_build_load = 6
+summary_replication = true
+site_search_factor = origin:1, total:2
+site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
+available_sites = site1,site2,site3
+cluster_label = afs_index_cluster
+
+
+
+Steps
+1. change /opt/splunk/etc/system/local/server.conf  site_search_factor to origin:1,site1:1,site2:1,site3:1,total:3 This will ensure we have a searchable copy of all the buckets on all the sites. Should I change site_replication_factor to origin:1, total:1? this would reduce the size of the index. 
+2. restart CM ( this will apply the site_search_factor )
+3. send data to junk index (oneshot)
+3.1 /opt/splunk/bin/splunk add oneshot /opt/splunk/var/log/splunk/splunkd.log -sourcetype splunkd -index junk
+4. stop one indexer and copy index to new cluster. 
+5. on new cluster, setup CM and 1 indexer in multisite cluster. the clustermaster will be a search head in the same site
+6. setup new cluster to have site_mappings = default:site1
+7. attempt to search on new cluster
+
+
+made the new junk index on test saf
+number of events: 64675
+latest = 02/21/20 9:32:01 PM UTC
+earliest = 02/19/20 2:32:57 PM UTC
+
+Before copying the buckets, ensure they are ALL WARM buckets; HOT buckets may be deleted on startup. 
+
+#check on the buckets 
+| dbinspect index=junk
+
+uploaded brad_LAN key pair to AWS for new instances. 
+
+vpc-041edac5e3ca49e4d
+subnet-0ca93c00ac57c9ebf
+sg-0d78af22d0afd0334
+
+saf-offboarding-cm-deleteme
+saf-offboarding-indexer-1-deleteme
+
+CentOS 7 (x86_64) - with Updates HVM
+
+t2.medium (2 CPU 4 GB RAM)
+100 GB drive
+
+msoc-default-instance-role
+
+saf-offboarding-ssh security group <- delete this, not needed; just SSH from the bastion host
+
+splunk version 7.0.3
+
+setup proxy for yum and wget
+vi /etc/yum.conf
+proxy=http://proxy.msoc.defpoint.local:80
+yum install vim wget
+vim /etc/wgetrc
+http_proxy = http://proxy.msoc.defpoint.local:80
+https_proxy = http://proxy.msoc.defpoint.local:80
+
+Download Splunk
+wget -O splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm 'https://www.splunk.com/page/download_track?file=7.0.3/linux/splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm&ac=&wget=true&name=wget&platform=Linux&architecture=x86_64&version=7.0.3&product=splunk&typed=release'
+
+install it
+yum localinstall splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm
+
+#setup https
+vim /opt/splunk/etc/system/local/web.conf
+
+[settings]
+enableSplunkWebSSL = 1
+
+#start it
+/opt/splunk/bin/splunk start --accept-license
+
+#CM
+https://10.1.2.170:8000/en-US/app/launcher/home
+
+#Indexer
+https://10.1.2.236:8000/en-US/app/launcher/home
+
+Change password for admin user
+/opt/splunk/bin/splunk edit user admin -password Jtg0BS0nrAyD -auth admin:changeme
+
+Turn on distributed search in the GUI
+
+#on CM
+/opt/splunk/etc/system/local/server.conf
+[general]
+site = site1
+
+[clustering]
+mode = master
+multisite = true
+replication_factor = 2
+search_factor = 2
+max_peer_sum_rep_load = 15
+max_peer_rep_load = 15
+max_peer_build_load = 6
+summary_replication = true
+site_search_factor = origin:1,site1:1,site2:1,site3:1,total:3
+site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
+available_sites = site1,site2,site3
+cluster_label = saf_index_cluster
+pass4SymmKey = password
+site_mappings = default:site1
+
+#on IDX
+/opt/splunk/etc/system/local/server.conf
+[general]
+site = site1
+
+[clustering]
+master_uri = https://10.1.2.170:8089
+mode = slave
+pass4SymmKey = password
+[replication_port://9887]
+
+***ensure networking is allowed between the hosts***
+
+The indexer will show up in the Cluster master 
+
+#create this file on the indexer
+/opt/splunk/etc/apps/saf_all_indexes/local/indexes.conf
+
+[junk]
+homePath       = $SPLUNK_DB/junk/db
+coldPath       = $SPLUNK_DB/junk/colddb
+thawedPath     = $SPLUNK_DB/junk/thaweddb
+
+#copy the index over to the indexer
+cp junk_index.targz /opt/splunk/var/lib/splunk/
+tar -xzvf junk_index.targz
+
+
+###################################################################################
+PROD testing Notes
+
+SAF PROD Cluster testing with the te index.
+The indexers do not have the space to move to search/rep factor 3/3. Duane suggests keeping the current 2/3 and letting the temp splunk cluster make the buckets searchable. According to the monitoring console:
+
+te index gathered on Feb 26
+total index size: 3.1 GB
+total raw data size uncompressed: 10.37 GB
+total events: 12,138,739
+earliest event: 2019-05-17 20:40:00
+latest event: 2020-02-26 16:43:32
+
+| dbinspect index=te | stats count by splunk_server
+count of buckets
+indexer1: 105
+indexer2: 103
+indexer3: 104
+
+| dbinspect index=te | search state=hot
+currently 6 hot buckets
+
+index=te | stats count    (run over All Time, fast mode)
+6069419
+
+size on disk
+1.1 GB
+
+size of tarball
+490 MB
+
+
+Allow instance to write to S3 bucket
+
+{
+    "Id": "Policy1582738262834",
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "Stmt1582738229969",
+            "Action": [
+                "s3:PutObject"
+            ],
+            "Effect": "Allow",
+            "Resource": "arn:aws:s3:::mdr-saf-off-boarding/*",
+            "Principal": {
+                "AWS": [
+                    "arn:aws:iam::477548533976:role/msoc-default-instance-role"
+                ]
+            }
+        }
+    ]
+}
+
+./aws s3 cp rst2odt.py s3://mdr-saf-off-boarding
+./aws s3 cp /opt/splunkdata/hot/normal_primary/saf_te_index.tar.gz s3://mdr-saf-off-boarding
+
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 604800
+
+uploaded brad_LAN key pair to AWS for new instances. 
+
+vpc-0202aedf3d0417cd3
+subnet-01bc9f77742ff132d
+sg-03dcc0ecde42fc8c2, sg-077ca2baaca3d8d97
+
+saf-offboarding-splunk-cm
+saf-offboarding-splunk-indexer
+
+CentOS 7 (x86_64) - with Updates HVM
+
+t2.medium (2 CPU 4 GB RAM)
+100 GB drive for te index test
+
+msoc-default-instance-role
+
+tag instances 
+Client saf     
+
+use the msoc_build key
+
+#CM
+ip-10-1-3-72
+
+#indexer-1
+ip-10-1-3-21
+
+#indexer-2
+ip-10-1-3-24
+
+#indexer-3
+ip-10-1-3-40
+
+
+use virtualenv to grab awscli
+
+export https_proxy=http://proxy.msoc.defpoint.local:80
+sudo -E ./pip install awscli
+
+./aws s3 cp s3://mdr-saf-off-boarding/saf_te_index.tar.gz /opt/splunk/var/lib/splunk/saf_te_index.tar.gz
+
+Move the index definition to the CM; the replicated buckets are not expanding to searchable buckets.
+
+1. rm -rf saf_all_indexes
+2. create it on the CM
+2.1 mkdir -p /opt/splunk/etc/master-apps/saf_all_indexes/local/
+2.2 vim /opt/splunk/etc/master-apps/saf_all_indexes/local/indexes.conf
+[te]
+homePath      = $SPLUNK_DB/te/db
+coldPath      = $SPLUNK_DB/te/colddb
+thawedPath    = $SPLUNK_DB/te/thaweddb
+repFactor=auto
+
+2.3 cluster bundle push
+2.3.1 /opt/splunk/bin/splunk list cluster-peers
+2.3.2 splunk validate cluster-bundle
+2.3.3 splunk apply cluster-bundle
+
+
+
+###################
+#
+# Actual PROD offboarding!
+#
+##################
+
+#estimate size and age
+
+| rest /services/data/indexes/
+| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te
+| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >=1, currentDBSizeMB/1024, null())
+| eval elapsedTime = now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")
+| eval dataAge = ceiling(elapsedTime / 86400)
+| stats sum(indexSizeGB) AS totalSize max(dataAge) as oldestDataAge by title
+| eval totalSize = if(isnotnull(totalSize), round(totalSize, 2), 0)
+| eval oldestDataAge = if(isNum(oldestDataAge), oldestDataAge, "N/A")
+| rename title as "Index" totalSize as "Total Size (GB)" oldestDataAge as "Oldest Data Age (days)"
+
+
+1. adjust CM and push out new data retention limits per customer email
+2. allow indexers to prune old data
+3. stop splunk on one indexer
+4. tar up splunk directory
+5. upload to s3
+6. download from s3 to temp indexers and extract to ensure data is readable
+7. repeat for all indexes
+
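+Rough per-index sketch of steps 3-5 (run on the chosen indexer; the paths, index list and bucket are
+the ones used elsewhere in these notes, the awscli path from the virtualenv is a guess):
+/opt/splunk/bin/splunk stop
+cd /opt/splunkdata/hot/normal_primary/
+for idx in app_mscas app_o365 dns forescout network security te; do
+    tar czvf "saf_${idx}_index.tar.gz" "${idx}/"
+    ~/venv/bin/aws s3 cp "saf_${idx}_index.tar.gz" "s3://mdr-saf-off-boarding/saf_${idx}_index.tar.gz"
+done
+#then presign each tarball for the customer as shown further down
+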
+
+prune data based on time
+Updated Time
+1/6/2020, 1:59:50 PM
+Active Bundle ID?
+73462849B9E88F1DB2B9C60643A06F67
+Latest Bundle ID?
+73462849B9E88F1DB2B9C60643A06F67
+Previous Bundle ID?
+FF9104B61366E1841FEDB1AF2DE901C2
+
+
+4
+With compression
+tar cvzf saf_myindex_index.tar.gz myindex/
+
+without compression
+tar cvf /hubble.tar hubble/
+
+trying this: https://github.com/jeremyn/s3-multipart-uploader
+
+use virtualenv 
+
+bin/python s3-multipart-uploader-master/s3_multipart_uploader.py -h
+
+bucket name mdr-saf-off-boarding
+
+bin/aws s3 cp /opt/splunkdata/hot/saf_te_index.tar.gz s3://mdr-saf-off-boarding/saf_te_index.tar.gz
+
+DID NOT NEED TO USE THE MULTIPART uploader! 
+
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_mscas_index.tar.gz --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_o365_index.tar.gz --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_dns_index.tar.gz --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_forescout_index.tar.gz --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_network_index.tar --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_security_index.tar.gz --expires-in 86400
+aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 86400

+ 74 - 0
MDR Terraform Notes.txt

@@ -0,0 +1,74 @@
+------------------
+workspaces are being used to break up environments. 
+
+terraform workspace list
+terraform workspace select test
+
+
+Strange errors? Unexpected results? try this
+rm -rf .terraform
+terraform init
+
+State issues
+terraform state show aws_ami.msoc_base
+terraform refresh -target=data.aws_ami.msoc_base
+
+Terraform also has a DynamoDB State lock (msoc-terraform-lock). This will prevent terraform state breakage. 
+
+------------------
+View TF code
+https://github.com/terraform-aws-modules
+
+
+-------------------
+Modules
+
+We are using the aws ec2-instance module
+
+https://registry.terraform.io/modules/terraform-aws-modules/ec2-instance/aws/2.13.0
+https://github.com/terraform-aws-modules/terraform-aws-ec2-instance
+
+
+var.something means this is a module input that needs the variable to run; your code fills in the variable. 
+data is a read-only terraform object that queries the provider or generates something on the localhost
+locals are variables that can refer to variables or other locals
+variables - expect data from somewhere else (module inputs)
+provider - an instance of the API
+
+
+
+
+--------------------
+IAM Role 
+
+get this error?
+aws_iam_policy.nga_instance_policy: Error creating IAM policy nga_instance_tag_read: AccessDenied:
+
+add this
+  provider = "aws.iam_admin"
+  
+-------------------
+
+In terraform .tf files, when self = true on a security group rule, the rule puts the security group into itself, i.e. the security group is added as the source of its own rule. 
+
+The terraform is set up in folders. Each folder is a project, and apply should be run in that folder. Common is the exception, as some of the projects depend on that folder. 
+
+role and policy have to be done in the IAM terraform
+
+
+iam_data.tf
+
+02-msoc_vpc/lambda.tf with security groups
+
+terraform plan -target=
+terraform plan -target=module.sensu_go_server.aws_instance.this -target=module.sensu_go_server.aws_route53_record.private
+
+terraform apply -target=module.sensu_server.aws_route53_record.private -target=module.sensu_server.aws_instance.this
+
+terraform apply -target=aws_security_group_rule.outbound_to_sensu -target=module.sensu_servers_sg.aws_security_group_rule.ingress_with_cidr_blocks[0] -target=module.sensu_servers_sg.aws_security_group_rule.ingress_with_cidr_blocks[1]
+
+terraform apply -target=module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_8081 -target=aws_security_group_rule.vault_server_to_sensu -target=module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_5672
+
+
+terraform apply -target=module.afs_cluster.module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_5672 -target=module.afs_cluster.module.vpc_default_security_groups.aws_security_group_rule.typical_host_outbound_to_sensu_8081
+

+ 170 - 0
MDR Terraform Splunk ASG Notes.txt

@@ -0,0 +1,170 @@
+module.moose_cluster.module.indexer_cluster.module.indexer2.aws_launch_configuration.splunk_indexer
+
+
+module.moose_cluster.module.indexer_cluster.module.indexer2.aws_autoscaling_group.splunk_indexer_asg
+
+
+
+terraform destroy -target=module.moose_cluster.module.indexer_cluster.module.indexer1.aws_launch_configuration.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer1.aws_autoscaling_group.splunk_indexer_asg -target=module.moose_cluster.module.indexer_cluster.module.indexer2.aws_launch_configuration.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer2.aws_autoscaling_group.splunk_indexer_asg -target=module.moose_cluster.module.indexer_cluster.module.indexer0.aws_launch_configuration.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer0.aws_autoscaling_group.splunk_indexer_asg
+
+
+terraform destroy -target=module.moose_cluster.module.indexer_cluster.module.indexer1.aws_launch_template.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer1.aws_autoscaling_group.splunk_indexer_asg -target=module.moose_cluster.module.indexer_cluster.module.indexer2.aws_launch_template.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer2.aws_autoscaling_group.splunk_indexer_asg -target=module.moose_cluster.module.indexer_cluster.module.indexer0.aws_launch_template.splunk_indexer -target=module.moose_cluster.module.indexer_cluster.module.indexer0.aws_autoscaling_group.splunk_indexer_asg
+
+
+Current moose subnet: subnet-07312c554fb87e4b5  (main-infrastructure-public-us-east-1c)
+ASG subnet: subnet-0b1e9d82bcd8c0a2c (main-infrastructure-public-us-east-1a)
+
+
+
+
+
+moose-splunk-indexer-i-0a30e6cbd4d7461ba.msoc.defpoint.local:
+    Data failed to compile:
+----------
+    The function "state.apply" is running as PID 11878 and was started at 2020, Apr 28 14:58:18.662679 with jid 20200428145818662679
+moose-splunk-indexer-i-08151d5e4b73af430.msoc.defpoint.local:
+    Data failed to compile:
+----------
+    The function "state.apply" is running as PID 11879 and was started at 2020, Apr 28 14:58:18.677356 with jid 20200428145818677356
+moose-splunk-indexer-i-0e7519cfe60483af1.msoc.defpoint.local:
+    Data failed to compile:
+----------
+    The function "state.apply" is running as PID 11817 and was started at 2020, Apr 28 14:58:19.465731 with jid 20200428145819465731
+
+
+
+
+
+
+
+resource "aws_launch_configuration" "splunk_indexer" {
+    name                        = "${var.launch_conf_name}"
+    instance_type               = "${var.idx_instance_type}"
+    image_id                    = "${var.ami}"
+    user_data                   = "${var.user_data}"
+    security_groups             = ["${var.indexer_security_group_ids}"]
+    associate_public_ip_address = false
+    key_name                    = "${var.key_name}"
+    iam_instance_profile        = "${var.iam_instance_profile}"
+    root_block_device           = "${var.root_block_device}"
+    ebs_block_device            = "${local.ebs_block_device}"
+    ebs_optimized               = true
+    ephemeral_block_device = [
+        {
+        device_name  = "xvdaa"
+        virtual_name = "ephemeral0"
+        },
+        {
+        device_name = "xvdab"
+        virtual_name = "ephemeral1"
+        },
+        {
+        device_name = "xvdac"
+        virtual_name = "ephemeral2"
+        },
+        {
+        device_name = "xvdad"
+        virtual_name = "ephemeral3"
+        },
+        {
+        device_name = "xvdae"
+        virtual_name = "ephemeral4"
+        },
+        {
+        device_name = "xvdaf"
+        virtual_name = "ephemeral5"
+        },
+        {
+        device_name = "xvdag"
+        virtual_name = "ephemeral6"
+        },
+        {
+        device_name = "xvdah"
+        virtual_name = "ephemeral7"
+        },
+        {
+        device_name = "xvdai"
+        virtual_name = "ephemeral8"
+        },
+        {
+        device_name = "xvdaj"
+        virtual_name = "ephemeral9"
+        },
+        {
+        device_name = "xvdak"
+        virtual_name = "ephemeral10"
+        },
+        {
+        device_name = "xvdal"
+        virtual_name = "ephemeral11"
+        },
+        {
+        device_name = "xvdam"
+        virtual_name = "ephemeral12"
+        },
+        {
+        device_name = "xvdan"
+        virtual_name = "ephemeral13"
+        },
+        {
+        device_name = "xvdao"
+        virtual_name = "ephemeral14"
+        },
+        {
+        device_name = "xvdap"
+        virtual_name = "ephemeral15"
+        },
+        {
+        device_name = "xvdaq"
+        virtual_name = "ephemeral16"
+        },
+        {
+        device_name = "xvdar"
+        virtual_name = "ephemeral17"
+        },
+        {
+        device_name = "xvdas"
+        virtual_name = "ephemeral18"
+        },
+        {
+        device_name = "xvdat"
+        virtual_name = "ephemeral19"
+        },
+        {
+        device_name = "xvdau"
+        virtual_name = "ephemeral20"
+        },
+        {
+        device_name = "xvdav"
+        virtual_name = "ephemeral21"
+        },
+        {
+        device_name = "xvdaw"
+        virtual_name = "ephemeral22"
+        },
+        {
+        device_name = "xvdax"
+        virtual_name = "ephemeral23"
+        },
+    ]
+    lifecycle {
+        create_before_destroy = true
+    }
+}
+
+
+ERROR:
+* module.moose_cluster.module.indexer_cluster.module.indexer0.aws_autoscaling_group.splunk_indexer_asg: 1 error(s) occurred:
+
+* aws_autoscaling_group.splunk_indexer_asg: "moose-splunk-asg-0": Waiting up to 10m0s: Need at least 1 healthy instances in ASG, have 0. Most recent activity: {
+  ActivityId: "71d5c796-f6b8-7b06-600c-167c09da9b56",
+  AutoScalingGroupName: "moose-splunk-asg-0",
+  Cause: "At 2020-05-05T16:49:03Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.",
+  Description: "Launching a new EC2 instance.  Status Reason: The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.",
+  Details: "{\"Subnet ID\":\"subnet-0b1e9d82bcd8c0a2c\",\"Availability Zone\":\"us-east-1a\"}",
+  EndTime: 2020-05-05 16:49:05 +0000 UTC,
+  Progress: 100,
+  StartTime: 2020-05-05 16:49:05.566 +0000 UTC,
+  StatusCode: "Failed",
+  StatusMessage: "The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed."
+}
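+
+This failure usually means the chosen instance type is not offered in that AZ; a quick way to check
+(a sketch -- the instance type below is a guess, substitute the real indexer type):
+aws --profile=mdr-prod ec2 describe-instance-type-offerings \
+    --location-type availability-zone \
+    --filters Name=instance-type,Values=i3.4xlarge \
+    --region us-east-1 --output table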

+ 170 - 0
MDR Vault Notes.txt

@@ -0,0 +1,170 @@
+Vault is set up with DynamoDB as the backend. Vault has 3 nodes in a cluster and an AWS ALB as the frontend. The vault is unsealed with AWS KMS instead of the usual master key.
+
+the vault binary is located at /usr/local/bin/vault
+
+
+1. change made to the service file
+Unknown lvalue 'StartLimitIntervalSec' in section 'Service'
+
+Failed to parse capability in bounding/ambient set, ignoring: CAP_IPC_LOCK,CAP_NET_BIND_SERVICE
+
+Oct 30 13:31:32 vault-1 systemd: [/etc/systemd/system/vault.service:16] Failed to parse capability in bounding/ambient set, ignoring: CAP_IPC_LOCK,CAP_NET_BIND_SERVICE
+
+
+
+TEST VAULT
+
+https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/tree/master/salt/fileroots/vault
+
+1. stop vault service from salt on all vault instances
+1.1 salt vault* cmd.run 'systemctl stop vault'
+2. wipe DynamoDB (select items -> actions -> delete) until there are no more items (BE SURE TO BACKUP FIRST!)
+3. start vault
+3.1 run salt state to ensure it is in the correct state with all policies on disk. 
+3.2 salt vault* state.sls vault
+4. on vault-1, init vault. RUN on the server, not via salt (to keep the recovery keys out of the logs)
+4.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault operator init -tls-skip-verify=true -recovery-shares=5 -recovery-threshold=2
+5. login 
+5.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault login -tls-skip-verify=true -method=token
+5.2 Do yourself a favor and setup some Bash Variables or run commands from salt 
+    export VAULT_ADDR=https://vault.mdr-test.defpoint.com
+    export VAULT_ADDR=https://127.0.0.1
+    export VAULT_ADDR=https://vault.mdr.defpoint.com
+    export VAULT_SKIP_VERIFY=1
+    
+
+6. setup okta auth
+6.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth enable okta
+6.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="api_token_here"
+6.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write -tls-skip-verify=true auth/okta/config base_url="okta.com" organization="mdr-multipass" token="$( cat ~/.okta-token )"
+6.3 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth list
+6.4 set the TTL for the okta auth method
+6.4.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth tune -default-lease-ttl=3h -max-lease-ttl=3h okta/
+
+
+7. Enable/add Policies
+7.1 vault policy write -tls-skip-verify=true admins /etc/vault/admins.hcl
+7.2 vault policy write -tls-skip-verify=true engineers /etc/vault/engineers.hcl
+7.3 vault policy write -tls-skip-verify=true clu /etc/vault/clu.hcl
+7.4 vault policy write -tls-skip-verify=true onboarding /etc/vault/onboarding.hcl
+7.5 vault policy write -tls-skip-verify=true portal /etc/vault/portal.hcl
+7.6 vault policy write -tls-skip-verify=true soc /etc/vault/soc.hcl
+7.7 vault policy write salt-master /etc/vault/salt-master.hcl
+7.8 vault policy write saltstack/minions /etc/vault/salt-minions.hcl
+
+8 Add external groups
+8.1 vault write identity/group name="admins" policies="admins" type="external"
+8.2 vault write identity/group name="mdr-engineers" policies="engineers" type="external"
+8.3 vault write identity/group name="vault-admins" policies="admins" type="external"
+8.4 vault write identity/group name="soc-lead" policies="soc" type="external"
+8.5 vault write identity/group name="soc-tier-3" policies="soc" type="external"
+
+9 add alias through the GUI. (use the root token to login or a temp root token (better))
+9.1 Access -> Groups -> admins -> Aliases -> Create alias -> mdr-admins
+9.2 Access -> Groups -> mdr-engineers -> Aliases -> Create alias -> mdr-engineers
+9.3 Access -> Groups -> vault-admins -> Aliases -> Create alias -> vault-admin
+9.4 Access -> Groups -> soc-lead -> Aliases -> Create alias -> Analyst-Shift-Lead
+9.5 Access -> Groups -> soc-tier-3 -> Aliases -> Create alias -> Analyst-Tier-3 
+
+groups              alias               policy
+admins              mdr-admins          admins
+mdr-engineers       mdr-engineers       engineers
+vault-admins        vault-admin         admins
+soc-lead            Analyst-Shift-Lead  soc
+soc-tier-3          Analyst-Tier-3      soc
+
+10 enable the file audit 
+10.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault audit enable -tls-skip-verify=true file file_path=/var/log/vault.log
+
+11 enable the aws & approle auth
+11.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault auth enable -tls-skip-verify=true aws
+11.2 setup approle auth using the salt-master policy
+11.2.1 vault auth enable approle
+11.2.2 vault write auth/approle/role/salt-master token_max_ttl=3h token_policies=salt-master
+
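+A possible follow-up to 11.2 (not in the original runbook): fetch the AppRole creds the salt master
+will authenticate with, using the standard approle endpoints:
+vault read auth/approle/role/salt-master/role-id
+vault write -f auth/approle/role/salt-master/secret-id
+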
+12 configure the aws policies on the role (clu and portal) UPDATE THE AWS ACCOUNT!!!
+12.1  VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write auth/aws/role/portal auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/portal-instance-role policies=portal max_ttl=24h
+12.2 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault write auth/aws/role/clu auth_type=iam bound_iam_principal_arn=arn:aws:iam::527700175026:role/clu-instance-role policies=clu max_ttl=24h
+
+
+13 Create the kv V2 secret engines
+VAULT_ADDR=https://vault.mdr-test.defpoint.com ~/Documents/MDR/Vault/vault secrets enable -path=engineering kv-v2
+vault secrets enable -path=engineering kv-v2
+vault secrets enable -path=ghe-deploy-keys kv-v2
+vault secrets enable -path=jenkins kv-v2
+vault secrets enable -path=onboarding kv-v2
+vault secrets enable -path=onboarding-afs kv-v2
+vault secrets enable -path=onboarding-gallery kv-v2
+vault secrets enable -path=onboarding-saf kv-v2
+vault secrets enable -path=portal kv-v2
+vault secrets enable -path=soc kv-v2
+vault secrets enable -version=1 -path=salt kv 
+
+vault write salt/pillar_data auth="abc123"
+
+
+14 export the secrets (be sure to export your bash variable for VAULT_TOKEN DON'T Use ROOT TOKEN!)
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export engineering/data/ -metadata engineering/metadata/ -file engineering-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export ghe-deploy-keys/data/ -metadata ghe-deploy-keys/metadata/ -file ghe-deploy-keys-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export jenkins/data/ -metadata jenkins/metadata/ -file jenkins-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export onboarding/data/ -metadata onboarding/metadata/ -file onboarding-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export onboarding-afs/data/ -metadata onboarding-afs/metadata/ -file onboarding-afs-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export onboarding-gallery/data/ -metadata onboarding-gallery/metadata/ -file onboarding-gallery-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export onboarding-saf/data/ -metadata onboarding-saf/metadata/ -file onboarding-saf-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -export portal/data/ -metadata portal/metadata/ -file portal-secrets.json -ver 2
+
+
+15 import the json secret files back into vault
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import engineering/ -file engineering-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import ghe-deploy-keys/ -file ghe-deploy-keys-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import jenkins/ -file jenkins-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import onboarding/ -file onboarding-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import onboarding-afs/ -file onboarding-afs-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import onboarding-gallery/ -file onboarding-gallery-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import onboarding-saf/ -file onboarding-saf-secrets.json -ver 2
+
+/Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import portal/ -file portal-secrets.json -ver 2
+
+
+
+
+
+
+AWS auth 
+the vault instances have access to AWS IAM Read. 
+
+curl -v --header "X-Vault-Token:$VAULT_TOKEN" --request LIST \
+    https://vault.mdr.defpoint.com:443/v1/auth/aws/roles --insecure
+
+
+
+
+
+
+
+8. map okta to policies ( not needed )
+8.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault policy write -tls-skip-verify=true auth/okta/groups/mdr-admins policies=admins
+
+
+Vault Logs
+
+cat 0c86fda6-1139-7914-fef5-6b7532e9fb5a | grep -v -F '"operation":"list"' | grep -v -F '"operation":"read"'
+cat c3c0b50b-9429-355d-8c8f-038e093c3e4b | grep -v -F '"operation":"list"' | grep -v -F '"operation":"read"'
+
+entity_34d6c410 <- nothing in logs    
+"entity_id":"c3c0b50b-9429-355d-8c8f-038e093c3e4b
+entity_ba27bb07 <- nothing in logs
+0c86fda6-1139-7914-fef5-6b7532e9fb5a

+ 11 - 0
MDR Vault Prod Refresh Notes.txt

@@ -0,0 +1,11 @@
+Vault Prod Refresh Notes
+
+1. backup all vault secret engines
+2. import secrets into test 
+3. stop vault service
+4. wipe dynamoDB
+5. Follow MDR Vault Notes
+
+
+salt vault-1* cmd.run "/usr/local/bin/vault status -tls-skip-verify=true" env='{"VAULT_ADDR": "https://127.0.0.1:443"}'
+

+ 1 - 0
MDR VictorOps Notes.txt

@@ -0,0 +1 @@
+Collectd -> Splunk -> VictorOps -> My Phone

+ 164 - 0
MDR salt_splunk_HEC Notes.txt

@@ -0,0 +1,164 @@
+On the HF, had to uninstall requests and reinstall it to get a new version and get rid of the certs import error:
+ImportError: cannot import name certs
+pip list | grep requests
+yum list installed | grep requests
+sudo pip uninstall requests
+sudo pip uninstall urllib3
+sudo yum install python-urllib3
+sudo yum install python-requests
+pip install boto3
+
+yum list installed | grep ssl_match
+ssl_match_hostname is not getting picked up by requests and is crashing the HF returner to splunk
+next step: deploy to all minions and see if the HF is unique or not. 
+***ERROR: python import requests ***
+
+salt salt* pillar.item splunk_http_forwarder
+
+
+index=salt | spath fun | search fun="grains.items"
+
+salt saf*local state.sls salt_minion.salt_minion_configs
+network.connect iratemoses.mdr.defpoint.com 8088
+cmd.run 'tail -50 /var/log/salt/minion'
+pillar.item splunk_http_forwarder
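+
+Quick HEC sanity check from a minion (standard HEC endpoint; the token is a placeholder -- the real
+one lives in the splunk_http_forwarder pillar):
+curl -k https://iratemoses.mdr.defpoint.com:8088/services/collector/event \
+    -H "Authorization: Splunk <HEC_TOKEN>" \
+    -d '{"event": "hec connectivity test"}'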
+
+moose-splunk-indexer-3 | 10.80.103.198 | vpc-0b676c4efd7fad548
+
+not working 
+ip-10-81-9-10.msoc.defpoint.local   | customer-portal   | vpc-0f45bf3132d4e25f3 | subnet-0de23b03ea0a6bf1d
+ip-10-81-8-205.msoc.defpoint.local  | customer-portal   | vpc-0f45bf3132d4e25f3 | subnet-0c173d841b5b59a24
+Connection timed out; need to update terraform. Git PR submitted, waiting on review/approval
+
+----
+saf-splunk-ds-1 10.1.10.161                     |  ddps01.corp.smartandfinal.com
+saf-splunk-syslog-1 10.1.10.163                 |  ddps03.corp.smartandfinal.com 
+saf-splunk-syslog-2 10.1.10.164                 |  ddps04.corp.smartandfinal.com
+saf-splunk-dcn-1 10.1.10.162                    |  ddps02.corp.smartandfinal.com
+ERRORs:
+ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer'))
+
+
+Name mismatch
+afssplds100.us.accenturefederal.com     | afs-splunk-ds-2
+afssplds102.us.accenturefederal.com     | ...
+afssplhf101.us.accenturefederal.com
+afssplhf102.us.accenturefederal.com
+afssplhf103.us.accenturefederal.com
+afssplhf104.us.accenturefederal.com
+
+aws-splnks1-tts.nga.gov
+aws-syslog1-tts.nga.gov
+aws-syslog2-tts.nga.gov
+
+
+
+[prod]brad_poulton@salt-master:~:$ salt '*' network.get_hostname
+moose-splunk-indexer-2.msoc.defpoint.local:
+    moose-splunk-indexer-2.msoc.defpoint.local
+saf-splunk-indexer-3.msoc.defpoint.local:
+    saf-splunk-indexer-3.msoc.defpoint.local
+afs-splunk-indexer-1.msoc.defpoint.local:
+    afs-splunk-indexer-1.msoc.defpoint.local
+phantom.msoc.defpoint.local:
+    phantom.msoc.defpoint.local
+moose-splunk-hf.msoc.defpoint.local:
+    moose-splunk-hf.msoc.defpoint.local
+nga-splunk-sh.msoc.defpoint.local:
+    nga-splunk-sh.msoc.defpoint.local
+saf-splunk-cm.msoc.defpoint.local:
+    saf-splunk-cm.msoc.defpoint.local
+openvpn.msoc.defpoint.local:
+    openvpn.msoc.defpoint.local
+ip-10-81-8-205.msoc.defpoint.local:
+    ip-10-81-8-205.msoc.defpoint.local
+afs-splunk-indexer-2.msoc.defpoint.local:
+    afs-splunk-indexer-2.msoc.defpoint.local
+saf-splunk-indexer-2.msoc.defpoint.local:
+    saf-splunk-indexer-2.msoc.defpoint.local
+moose-splunk-indexer-3.msoc.defpoint.local:
+    moose-splunk-indexer-3.msoc.defpoint.local
+reposerver.msoc.defpoint.local:
+    reposerver.msoc.defpoint.local
+nga-splunk-indexer-2.msoc.defpoint.local:
+    nga-splunk-indexer-2.msoc.defpoint.local
+afs-splunk-hf.msoc.defpoint.local:
+    afs-splunk-hf.msoc.defpoint.local
+nga-splunk-indexer-3.msoc.defpoint.local:
+    nga-splunk-indexer-3.msoc.defpoint.local
+afs-splunk-syslog-2:
+    afssplhf104.us.accenturefederal.com
+moose-splunk-indexer-1.msoc.defpoint.local:
+    moose-splunk-indexer-1.msoc.defpoint.local
+moose-splunk-sh.msoc.defpoint.local:
+    moose-splunk-sh.msoc.defpoint.local
+sensu.msoc.defpoint.local:
+    sensu.msoc.defpoint.local
+moose-splunk-cm.msoc.defpoint.local:
+    moose-splunk-cm.msoc.defpoint.local
+saf-splunk-indexer-1.msoc.defpoint.local:
+    saf-splunk-indexer-1.msoc.defpoint.local
+ip-10-81-9-10.msoc.defpoint.local:
+    ip-10-81-9-10.msoc.defpoint.local
+afs-splunk-sh.msoc.defpoint.local:
+    afs-splunk-sh.msoc.defpoint.local
+nga-splunk-indexer-1.msoc.defpoint.local:
+    nga-splunk-indexer-1.msoc.defpoint.local
+clu.msoc.defpoint.local:
+    clu.msoc.defpoint.local
+dps-idm-1.msoc.defpoint.local:
+    dps-idm-1.msoc.defpoint.local
+vault-3.msoc.defpoint.local:
+    vault-3.msoc.defpoint.local
+nga-splunk-cm.msoc.defpoint.local:
+    nga-splunk-cm.msoc.defpoint.local
+afs-splunk-ds-1:
+    afssplds102.us.accenturefederal.com
+afs-splunk-syslog-1:
+    afssplhf103.us.accenturefederal.com
+afs-splunk-syslog-3:
+    afssplhf101.us.accenturefederal.com
+bastion.msoc.defpoint.local:
+    bastion.msoc.defpoint.local
+saf-splunk-dcn-1:
+    ddps02.corp.smartandfinal.com
+saf-splunk-sh.msoc.defpoint.local:
+    saf-splunk-sh.msoc.defpoint.local
+saf-splunk-hf.msoc.defpoint.local:
+    saf-splunk-hf.msoc.defpoint.local
+salt-master.msoc.defpoint.local:
+    salt-master.msoc.defpoint.local
+splunk-mc.msoc.defpoint.local:
+    splunk-mc.msoc.defpoint.local
+nga-splunk-hf.msoc.defpoint.local:
+    nga-splunk-hf.msoc.defpoint.local
+jira-server.msoc.defpoint.local:
+    jira-server.msoc.defpoint.local
+mailrelay.msoc.defpoint.local:
+    mailrelay.msoc.defpoint.local
+proxy.msoc.defpoint.local:
+    proxy.msoc.defpoint.local
+afs-splunk-syslog-4:
+    afssplhf102.us.accenturefederal.com
+nga-splunk-syslog-1:
+    aws-syslog1-tts.nga.gov
+vault-1.msoc.defpoint.local:
+    vault-1.msoc.defpoint.local
+saf-splunk-syslog-2:
+    ddps04.corp.smartandfinal.com
+afs-splunk-ds-2:
+    afssplds100.us.accenturefederal.com
+afs-splunk-cm.msoc.defpoint.local:
+    afs-splunk-cm.msoc.defpoint.local
+afs-splunk-indexer-3.msoc.defpoint.local:
+    afs-splunk-indexer-3.msoc.defpoint.local
+nga-splunk-ds-1:
+    aws-splnks1-tts.nga.gov
+nga-splunk-syslog-2:
+    aws-syslog2-tts.nga.gov
+vault-2.msoc.defpoint.local:
+    vault-2.msoc.defpoint.local
+saf-splunk-syslog-1:
+    ddps03.corp.smartandfinal.com
+saf-splunk-ds-1:
+    ddps01.corp.smartandfinal.com

+ 8 - 0
MDR sftp Notes.txt

@@ -0,0 +1,8 @@
+https://github.mdr.defpoint.com/MDR-Content/mdr-content/wiki/MDR-Project-Onboarding
+
+get = downloads a file
+put = uploads a file
+cd = change directory on server
+lcd = change local directory (on client) 
+pwd = print working directory on server
+ls = list directory contents on server
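+
+Example session (a sketch; hostname and file names are placeholders):
+sftp somecustomer-sftp-host
+sftp> pwd
+sftp> cd uploads
+sftp> lcd ~/Downloads
+sftp> put report.txt
+sftp> get results.tar.gz
+sftp> bye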

+ 4 - 0
MSR Sudo Replay Notes.txt

@@ -0,0 +1,4 @@
+Sudo Replay Notes
+
+/var/log/sudo-io 
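+
+Sessions recorded there can be listed and replayed with sudoreplay (a sketch):
+sudoreplay -d /var/log/sudo-io -l                 # list recorded sessions and their TSIDs
+sudoreplay -d /var/log/sudo-io <TSID>             # replay a single session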
+

+ 2 - 0
README.md

@@ -0,0 +1,2 @@
+# Brad's All-Encompassing Compendium of the Entirety of XDR Knowledge
+