
Adds DGI AWS Account and other changes

Brad Poulton 4 years ago
parent
commit
6da16065f3
5 changed files with 160 additions and 90 deletions
  1. AWS New Account Setup Notes.md ( +11 / -7 )
  2. GovCloud Notes.md ( +0 / -4 )
  3. New Customer Setup Notes - GovCloud.md ( +142 / -78 )
  4. OpenVPN Notes.md ( +1 / -1 )
  5. files/config ( +6 / -0 )

+ 11 - 7
AWS New Account Setup Notes.md

@@ -78,7 +78,7 @@ Region should be `us-gov-east-1` or `us-east-1`.
 ```
 CUSTOMERPREFIX=<customer-prefix>
 INITIALS=bp
-TICKET=MSOCI-1726
+TICKET=MSOCI-<ticket number>
 # cd to xdr-terraform-live folder
 git checkout master
 git fetch --all
@@ -104,22 +104,26 @@ For Accounts that will be used ( e.g. GovCloud ).
 cp -r 000-skeleton/ prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 cd prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 vim README.md # Add a description of the account
-vim account.hcl # Fill in all "TODO" items, but leave "LATER" items (such as qualys) to be completed later.
+vim account.hcl # Fill in all "TODO" items, but leave "LATER" items (such as qualys) to be completed later. If you don't know the LCP IPs yet, comment out the splunk_data_sources cidr.  
+# if needed, cd to the commercial dir for the next steps
+cd ../../../aws/mdr-prod-${CUSTOMERPREFIX}
 ```
 
 These steps should be run on both Commercial and GovCloud accounts. Start with the Commercial account to use the AWS keys.
 
-1. cd into the IAM directory
+cd into the IAM directory
 `cd 005-iam`
 
-1. Double-check / fix the profile
-
+Double-check / fix the profile
 ```
 vim terragrunt.hcl
 # Check TODO items, make sure the profile (tmp) listed is right / matches what you have in above step
 ```
 
-1. Apply the configuration: 
+Update the tag reference to the latest.
+TODO add better notes here. 
+
+Apply the configuration: 
 ```
 saml2aws -a commercial login
 saml2aws -a govcloud login
@@ -130,7 +134,7 @@ terragrunt apply
 
 If the `terragrunt apply` takes forever and doesn't do anything, you need to authenticate with aws-mfa again. 
 
-1. Comment-out the provisioning provider block and validate that terragrunt can be applied with the normal `xdr-terraformer` roles from `root` account
+Comment-out the provisioning provider block and validate that terragrunt can be applied with the normal `xdr-terraformer` roles from `root` account
 ```
 vim terragrunt.hcl
 # comment out the provider generation parts

+ 0 - 4
GovCloud Notes.md

@@ -1,4 +0,0 @@
-What services are needed in GovCloud?
-ECR
-codebuild
-

+ 142 - 78
New Customer Setup Notes - GovCloud.md

@@ -35,7 +35,7 @@ IMPORTANT: Each time you run this, it will generate new passwords. So make sure
 
 Do you have a Splunk license yet? No? Can you use a temp/dev license until the real one shows up? I hate doing that, but not much of a choice. 
 
-Do you know what size Splunk Indexers to use? Ask Wes to provide this information. Or look in the finance folder ( XDR Finance Private Group > Documents > General > Cost build up ).
+Do you know what size Splunk Indexers to use? Ask Wes to provide this information. Or look in the finance folder in OneDrive ( XDR Finance Private Group > Documents > General > Cost build up ). You might not have permissions to view it. 
 
 Commands were tested on macOS and may not (probably won't) work on Windows/Linux.
 
@@ -60,7 +60,7 @@ ESSJOBSHASH="`echo $ESSJOBSPASS | python3 -c "from passlib.hash import sha512_cr
 
 Connect to production VPN
 
-Log into vault at (not yet) https://vault.pvt.xdr.accenturefederalcyber.com (legacy: https://vault.mdr.defpoint.com)
+Log into vault at https://vault.pvt.xdr.accenturefederalcyber.com
 
 Record the following into `engineering/customer_slices/${CUSTOMERPREFIX}`
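If passlib isn't available locally, an equivalent sha512-crypt hash can be produced with the OpenSSL CLI (a sketch, not part of the official runbook; assumes OpenSSL 1.1.1+ which added `passwd -6`, and uses a placeholder password and salt):

```shell
# Alternative to the passlib one-liner: sha512-crypt via openssl.
# ESSJOBSPASS and the salt here are placeholders, not real secrets.
ESSJOBSPASS='example-password'
ESSJOBSHASH="$(openssl passwd -6 -salt 'saltsalt' "$ESSJOBSPASS")"
echo "$ESSJOBSHASH"   # sha512-crypt hashes start with the $6$ marker
```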
 
@@ -94,6 +94,8 @@ git checkout -b feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
 
 ## Step x: Set up Okta
 
+Notice: the CUSTOMERPREFIX will keep the case used. You might want to manually input uppercase ${CUSTOMERPREFIX} instead of using lowercase ${CUSTOMERPREFIX} here. OR add code to force uppercase in okta_app_maker.py.
+
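One way to act on the note above without touching okta_app_maker.py is to uppercase the prefix in the shell before passing it in (a sketch; the `UPPERPREFIX` variable name is made up for illustration):

```shell
# Force the customer prefix to uppercase before feeding it to okta_app_maker.py,
# so the Okta app names come out with consistent casing.
CUSTOMERPREFIX=dgi
UPPERPREFIX="$(printf '%s' "$CUSTOMERPREFIX" | tr '[:lower:]' '[:upper:]')"
echo "$UPPERPREFIX"   # DGI
```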
 ```
 cd tools/okta_app_maker
 
@@ -114,7 +116,7 @@ okta:
 {% endif %}
 ```
 
-Substite `REPLACEME` with `${CUSTOMERPREFIX}-splunk-sh`, `-cm`, or `-hf` and *record them*.. You will need all 3.
+Substitute `<REPLACEME>` with `${CUSTOMERPREFIX}-splunk-sh`, `-cm`, or `-hf` and *record them*. You will need all 3.
 
 Add permissions for the okta apps:
 1) Log into the okta webpage (https://mdr-multipass.okta.com/)
@@ -144,7 +146,7 @@ cd ../../salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
 # If license is not yet available, ... ? Not sure. For testing, I copied something in there but that's not a good practice.
 ```
 
-## Step x: Set up Scaleft
+## Step x: Set up Scaleft ( LEGACY: you can skip this )
 
 * Add the "project" using the CUSTOMERPREFIX as the name
 * Assign groups to the project
@@ -191,7 +193,7 @@ vim salt_master.sls
 
 # Add customer prefix to ACL
 vim ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
-:%s/frtib\*/frtib\* or ca-c19\*/
+:%s/ca-c19\*/ca-c19\* or dgi\*/g
 
 # Add Account number to xdr_asset_inventory.sh under GOVCLOUDACCOUNTS
 vim ../fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh
@@ -242,11 +244,12 @@ Repeat for pop repo, unless customer will not have pop nodes.
 	* onboarding - Write
 
 
-Clone and modify the password in the CM repo (TODO: Just take care of this in salt):
+Clone and modify the password in the CM repo:
 
 ```
 mkdir ~/tmp
 cd ~/tmp
+# alternate cd ../../../
 git clone git@github.xdr.accenturefederalcyber.com:mdr-engineering/msoc-${CUSTOMERPREFIX}-cm.git
 cd msoc-${CUSTOMERPREFIX}-cm
 sed -i "" "s#ADMINHASH#${ADMINHASH}#" passwd
@@ -275,24 +278,28 @@ During the bootstrap process, you copied the skeleton across. Review the variabl
 
 ```
 cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
+# cd ../xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 vim account.hcl # Fill in all "TODO" items. Leave the "LATER" variables for later steps.
 ```
-1. Update the ref version in all the terragrunt.hcl files to match latest tag on modules git repo. Replace v1.XX.XX with the current tag. 
+1. Update the ref version in all the terragrunt.hcl files to match the latest tag on the modules git repo: https://github.xdr.accenturefederalcyber.com/mdr-engineering/xdr-terraform-modules/tags. Replace v1.XX.XX with the current tag.
+`../../../bin/update_refs --newtag v1.XX.XX`
+IGNORE THE MANUAL COMMANDS BELOW, JUST USE THE SCRIPT! ^^^
+```
 2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.21.0/?ref=v1.x.x/ {} \;`
 2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.0.0/?ref=v1.x.x/ {} \;`
 Did you get them all? Don't forget about the subfolders in account_standards_regional. 
 `cat */terragrunt.hcl | grep ref | grep -v 1.xx.xx`
 `cat */*/terragrunt.hcl | grep ref`
-
+```
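If you do fall back to the manual find/sed, the substitution can be rehearsed on a throwaway file first (a local sketch; the paths and tag numbers below are placeholders, and note macOS sed wants `-i ''` where GNU sed takes bare `-i`):

```shell
# Dry-run the ref bump on a scratch tree before touching the real repo.
mkdir -p /tmp/refbump/005-iam
printf 'source = "git::git@example//modules/iam?ref=v1.21.0"\n' \
  > /tmp/refbump/005-iam/terragrunt.hcl
find /tmp/refbump -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" \
  -exec sed -i 's/?ref=v1\.21\.0/?ref=v1.30.0/' {} \;   # macOS: sed -i ''
grep ref /tmp/refbump/005-iam/terragrunt.hcl   # should now show v1.30.0
```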
 
 ## Step x: Add account to global variables, and apply necessary prerequisites
 
 1. Add the account number to `account_map["prod"]` in :
   * `~/xdr-terraform-live/prod/aws-us-gov/partition.hcl` OR `vim ../partition.hcl`
-  * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl` OR `vim ../../../common/aws-us-gov/partition.hcl`
+  * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl` OR `vim ../../../common/aws-us-gov/partition.hcl` ( same file, just a symlink )
 2. `cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2` OR `cd ../mdr-prod-c2/`
 2. Create a PR and get the changes approved; the commit message should be, "Adds ${CUSTOMERPREFIX} customer"
-3. Apply the modules: (* draft: are there more external requirements? *)
+3. Apply the modules: 
 
 Copy and paste these commands into the command line and run them. 
 ```
@@ -310,7 +317,7 @@ oneliner
 4. `cd ../../../common/aws-us-gov/afs-mdr-common-services-gov/`
 5. Apply the modules:
 ```
-for module in 008-xdr-binaries 010-shared-ami-key 
+for module in 008-xdr-binaries 010-shared-ami-key 050-lcp-ami-sharing
 do
   pushd $module
   terragrunt apply
@@ -319,12 +326,14 @@ done
 ```
 
 
+
 ## Step x: Share the AMI with the new account
 
 The new AWS account needs permissions to access the AMIs before trying to create EC2 instances. Replace the aws-account-id in the below command. 
 
 ```
-cd ~/xdr-terraform-live/bin/ # OR cd ../../../bin/
+cd ~/xdr-terraform-live/bin/ 
+# OR cd ../../../bin/
 # Dump a list of AMIs matching the filter just to get a good looky-loo
 AWS_PROFILE=mdr-common-services-gov update-ami-accounts 'MSOC*'
 
@@ -347,25 +356,13 @@ Also add the new account number to the packer build so that when new
 AMIs get built they are shared automatically with this account.
 
 ```
-cd ~/msoc-infrastructure/packer or cd ../../msoc-infrastructure/packer
+cd ~/msoc-infrastructure/packer 
+#or cd ../../msoc-infrastructure/packer
 vi Makefile
 # Add the account(s) to GOVCLOUD_ACCOUNTS / COMMERCIAL_ACCOUNTS
 # as needed.  PR it and exit
-cd cd ../../xdr-terraform-live/bin/
+cd ../../xdr-terraform-live/bin/
 ```
-## Step x: Allow Access from Customer Accounts
-
-Ask Fred for clarification. This section may need to be moved. 
-The new AWS account needs permissions to access the XDR Trumpet S3 bucket. If the LCPs are going to be in commercial put in common/aws/partition.hcl. 
-
-vim common/aws/partition.hcl
-#GC LCP servers. 
-vim common/aws-us-gov/partition.hcl
-
-Update the customer_accounts variable. 
-
-Run `terragrunt apply` to apply the changes to `xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/300-s3-xdr-trumpet` and `us-west-1/300-s3-xdr-trumpet`.
-
 
 ## Step x: Apply the Terraform in order
 
@@ -374,7 +371,8 @@ The `xdr-terraform-live/bin` directory should be in your path. You will need it
 (IMPORTANT: if you are _certain_ everything is good to go, you can do a `yes yes |` before the `terragrunt-apply-all` to bypass prompts. This does not leave you an out if you make a mistake, however, because it is difficult to break out of terragrunt/terraform without causing issues.)
 
 ```
-cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} # OR cd ../prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
+cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} 
+# OR cd ../prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
 terragrunt-apply-all --skipqualys --notlocal
 ```
 
@@ -421,6 +419,14 @@ cd ~/xdr-terraform-live/bin/
 AWS_PROFILE=mdr-common-services-gov update-ami-accounts 'MSOC*' <aws-account-id>
 ```
 
+Error: "x.x.x.x/32" is not a valid CIDR block: invalid CIDR address: x.x.x.x/32
+Issue: the splunk_data_sources variable in account.hcl is not filled out correctly. 
+Solution: If you don't have the LCP IPs yet, just comment this line out.
+
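A malformed `splunk_data_sources` entry can be caught before terragrunt ever runs by validating the CIDR with the Python standard library (a quick local check, not part of the official runbook; the CIDR shown is a placeholder):

```shell
# Validate a candidate CIDR before adding it to account.hcl.
# A bad value (e.g. "x.x.x.x/32") raises ValueError instead of printing ok.
python3 -c "import ipaddress,sys; ipaddress.ip_network(sys.argv[1]); print('ok')" 10.20.30.0/24
```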
+Error: No RAM Resource Share (arn:aws-us-gov:ram:us-gov-east-1:721817724804:resource-share/b116e73a-82c4-4f84-910f-2b3a53bf458e) invitation found
+Issue: The resource share on the mdr-prod-c2 side is messed up. 
+Solution: Try manually applying /xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/008-transit-gateway-hub, then run the customer apply again.
+
 
 ## Step x: Connect to Qualys
 
@@ -448,9 +454,22 @@ Push the changes in xdr-terraform-live for a PR in git.
 
 ## Setup Vulnerability and Compliance Scanning
 
-TODO
+### Qualys Vulnerability Scanning
+Head back to Qualys and set up a new vulnerability scan:
+VMDR > Scans > Schedules > New EC2 Scan
+Title: ${CUSTOMERPREFIX}-mdr-prod-gov Daily Scan
+Option Profile: DPS Base Option Profile (default)
+Connector: ${CUSTOMERPREFIX}-mdr-prod-gov
+Platform: EC2-VPC (All VPCs in Region)
+Available Regions: US Gov Cloud (US-East)
+Scanner Appliance: XDR_Prod_Govcloud_Preauthorized
+Include Host Tags: AWS_Prod AWS_PROD Auto:tag xdr-gc-prod
+Exclude tags: Purged
+Scheduling: Daily at an unused hour
+
 
-### Qualys
+### Qualys Compliance Scanning
 
 I hate that I have to make this section when Tenable is so close. 
 
@@ -480,24 +499,24 @@ Substitute environment variables here:
 ```
 ssh gc-prod-salt-master
 CUSTOMERPREFIX=<customer-prefix>
-sudo salt-key -L | grep $CUSTOMERPREFIX # Wait for all 6 servers to be listed (cm, sh, hf, and 3 idxs)
+salt-key | grep $CUSTOMERPREFIX # Wait for all 6 servers to be listed (cm, sh, hf, and 3 idxs)
 sleep 300 # Wait 5 minutes
-salt ${CUSTOMERPREFIX}\* test.ping
+salt ${CUSTOMERPREFIX}-\* test.ping
 # Repeat until 100% successful
-salt ${CUSTOMERPREFIX}\* saltutil.sync_all
-salt ${CUSTOMERPREFIX}\* saltutil.refresh_pillar
-salt ${CUSTOMERPREFIX}\* saltutil.refresh_modules
-salt ${CUSTOMERPREFIX}\* grains.get environment
-salt ${CUSTOMERPREFIX}\* state.highstate --output-diff
+salt ${CUSTOMERPREFIX}-\* saltutil.sync_all
+salt ${CUSTOMERPREFIX}-\* saltutil.refresh_pillar
+salt ${CUSTOMERPREFIX}-\* saltutil.refresh_modules
+salt ${CUSTOMERPREFIX}-\* grains.get environment
+salt ${CUSTOMERPREFIX}-\* state.highstate --output-diff
 # Review changes from above. I've seen indexers get hung; if they do, see the note below
 # splunk_service may fail, this is expected (it's waiting for port 8000)
-salt ${CUSTOMERPREFIX}\* test.version
-salt ${CUSTOMERPREFIX}\* pkg.upgrade ( this may break connectivity if there is a salt minion upgrade! )
-salt ${CUSTOMERPREFIX}\* system.reboot
+salt ${CUSTOMERPREFIX}-\* test.version
+salt ${CUSTOMERPREFIX}-\* pkg.upgrade  # this may break connectivity if there is a salt minion upgrade!
+salt ${CUSTOMERPREFIX}-\* system.reboot
 # Wait 5+ minutes
-salt ${CUSTOMERPREFIX}\* test.ping
+salt ${CUSTOMERPREFIX}-\* test.ping
 # Apply the cluster bundle
-salt ${CUSTOMERPREFIX}\*-cm\* state.sls splunk.master.apply_bundle_master --output-diff
+salt ${CUSTOMERPREFIX}-\*-cm\* state.sls splunk.master.apply_bundle_master --output-diff
 exit
 ```
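The `-` added to the targets above matters: salt matches minion IDs with shell-style globs, so a bare `${CUSTOMERPREFIX}\*` could also catch a similarly named customer's minions. A local illustration with hypothetical minion names (the `match` helper is made up for the demo):

```shell
# Shell-style globbing, as salt uses for minion targeting:
# 'dgi-*' hits only the dgi customer; bare 'dgi*' is too broad.
match() { case "$1" in $2) echo yes ;; *) echo no ;; esac; }
match dgi-splunk-sh-1     'dgi-*'   # yes
match dgitest-splunk-sh-1 'dgi-*'   # no  (different customer, not matched)
match dgitest-splunk-sh-1 'dgi*'    # yes (too broad!)
```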
 
@@ -554,16 +573,28 @@ Note from Brad: Donkey! ( see Shrek 2 Dinner scene. https://www.youtube.com/watc
 
 ## Portal Lambda Env Var
 
-TODO: Add new customer to Customer Portal Lambda
-Add the customer to the portal lambda env vars. base/customer_portal_lambda/main.tf
+TODO: improve these steps by working with Wes. 
+Add the customer to the portal lambda env vars. prod/aws-us-gov/mdr-prod-c2/205-customer-portal-lambda/terragrunt.hcl
+
+Also, in Prod Vault, update the Customer Portal Vars
+Vault > portal > lambda_sync_env
+
+The Token is generated by Wes??
+
+Is this accurate??? On the customer Splunk SH, create the svc-portal-data-sync-lambda user:
+Name: svc-portal-data-sync-lambda
+Full Name: Portal Data Sync
+
+
 
 ## Splunk configuration
 
-* Install ES on the search head Version 6.2 ( NOT 6.4.0 ! Until George tells us otherwise! )
+* Install ES on the search head, version 6.4.1 ( unless George wants a different version )
 
 
-1. Download ES app from Splunk using your splunk creds
-1. Check hash on your laptop `shasum -a 256 splunk-enterprise-security_620.spl`
+1. Download ES app from Splunk using your AFS splunk creds
+1. Check hash on your laptop `shasum -a 256 splunk-enterprise-security_641.spl`
 1. Temporarily modify the etc/system/local/web.conf to allow large uploads
   ```
   [settings]
@@ -571,49 +602,57 @@ Add the customer to the portal lambda env vars. base/customer_portal_lambda/main
   ```
 
 On the salt master...
-1. `CUSTOMERPREFIX=modelclient`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'ls -larth /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'touch /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'chown splunk: /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "[settings]" > /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "max_upload_size = 1024" >> /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
+```
+#CUSTOMERPREFIX=modelclient
+salt ${CUSTOMERPREFIX}-splunk-sh* test.ping
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'ls -larth /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'touch /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'chown splunk: /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "[settings]" > /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "max_upload_size = 1024" >> /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'
+```
 1. Upload via the GUI ( takes a long time to upload )
 1. Choose "Set up now" and "Start Configuration Process"
-1. ES should complete app actions on its own, then prompt for a restart
+1. ES should complete app actions on its own, then prompt for a restart ( or need a manual restart; check messages)
 
 ### remove the web.conf file
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'rm -rf /opt/splunk/etc/system/local/web.conf'`
-1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
-
+```
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'rm -rf /opt/splunk/etc/system/local/web.conf'
+salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'
+```
 
-## Monitoring Console ( skip if demo cluster )
+## Monitoring Console and FM Shared Search ( skip if demo cluster )
 
 Note: Once the Legacy Monitoring Console has moved to GC, the SGs will need to be fixed.
 
-1. Add master_uri and pass4symmkey to salt/pillar/mc_variables.sls
+1. Add master_uri and pass4symmkey to salt/pillar/mc_variables.sls and salt/pillar/fm_shared_search.sls
 1. `echo $PASS4KEY` 
-1. Commit to git with the message, "Adds variables for Monitoring Console" and once approved, highstate the Moose MC.
-1. `sudo salt-run fileserver.update`
-1. `salt splunk-mc* state.sls splunk.monitoring_console --output-diff test=true`
-1. Splunk should restart and the new Splunk CMs will show up in the MC ( Settings -> Indexer Clustering )
+1. Commit to git with the message, "Adds variables for Monitoring Console and FM SH" and once approved, apply the changes.
+```
+# SSH to the salt master
+sudo salt-run fileserver.update
+sudo salt-run git_pillar.update
+salt splunk-mc* state.sls splunk.monitoring_console --output-diff test=true
+salt fm-shared-search-0* state.sls splunk.fm_shared_search --output-diff test=true
+```
+
+
+In both MC and FM SH do the following
 1. After applying the code, pull the encrypted values of the pass4symmkey out of the Splunk config file and replace them in the salt state. Then create a PR for git. Set the git commit message to, "Swaps pass4symmkey with encrypted value".  
 1. `salt splunk-mc* cmd.run 'cat /opt/splunk/etc/apps/connected_clusters/local/server.conf'`
 1. Ensure new cluster is showing up in the Settings -> Indexer Clustering ( should see 4 check marks and at least 3 peers ). If not, verify firewall rules. 
+
+Make the following changes on just the Monitoring Console SH:
+
 1. Add CM as a search peer by going to Settings -> Distributed Search -> Search Peers
 1. Input the Peer URI (echo https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8089) and remote admin credentials. For the CM, the remote admin credentials are in Vault at engineering -> customer_slices -> ${CUSTOMERPREFIX} or `echo $ADMINPASS`
 1. Repeat for SH and HF and use the correct Splunk creds. `salt ${CUSTOMERPREFIX}-splunk-sh* pillar.get secrets:splunk_admin_password`
 1. Verify all customer instances are connected to the search peer by searching for customer prefix in the search peers webpage. 
 1. Update MC topology ( settings -> Monitoring Console -> Settings -> General Setup -> Apply Changes )
 
-## FM Shared Search Console ( skip if demo cluster )
-
-TODO: improve these notes
-1. Update  salt/pillar/fm_shared_search.sls 
-1. add the master_uri and pass4SymmKey to the pillar file
-1. Apply the salt state salt/fileroots/splunk/fm_shared_search/init.sls 
 
 ## Create New Vault KV Engine for Customer for Feed Management
 1. Log into Vault
@@ -622,11 +661,6 @@ TODO: improve these notes
 Naming Scheme: onboarding-<customer-name>
 Example: onboarding-la-covid
 
-
-## Keep George Happy and push out maxmind
-`salt -C '*splunk-indexer* or *splunk-idx* or *splunk-sh* or *splunk-hf*' state.sls splunk.maxmind.pusher --state-verbose=False --state-output=terse`
-
-
 ## Create the LCP Build Sheet if the customer needs LCP nodes
 
 Go to https://afs365.sharepoint.com/sites/MDR-Documentation/Shared%20Documents/Forms/AllItems.aspx?viewid=76d97d05%2Dab42%2D455a%2D8259%2D24b51862b35e&id=%2Fsites%2FMDR%2DDocumentation%2FShared%20Documents%2FOnboarding%2FCustomer%20Onboarding 
@@ -636,7 +670,7 @@ Do you see a customer folder already created? Put the Build Sheet in there. If n
 Copy the Blank Template LCP Build Sheet, rename it with the customer prefix, and find-and-replace the prefix inside.
 
-## Got POP nodes? 
+## Got LCP nodes? 
 
 Got customer public IPs after you were done standing up the Splunk cluster? This section is for you!
 
@@ -652,6 +686,36 @@ index=app_aws_flowlogs sourcetype="aws:cloudwatchlogs:vpcflow" vpcflow_action=RE
 index=app_aws_flowlogs eni-017d2e433b9f821d8 dest_port IN (4505,4506) |  timechart count by src_ip
 ```
 
+### Got LCP Nodes in Customer AWS Account?
+
+Need to share the LCP AMIs with a customer's own AWS account ( NOTICE: NOT the XDR customer slice AWS account! )? Ask Duane for clarification if needed.
+This section might need to be moved.
+
+```
+#Share the AMI with the customer's AWS account
+AWS_PROFILE=mdr-common-services-gov update-ami-accounts '*LCP*' <customeraccountID>
+
+#Update the packer files to share future AMIs with the customer's AWS account
+# update one or the other for commercial/govcloud
+~/msoc-infrastructure/packer/lcp/aws/vars-commercial-prod.hcl
+~/msoc-infrastructure/packer/lcp/aws/vars-govcloud-prod.hcl
+```
+
+### Got Customer AWS account that needs to be monitored?
+
+The new customer-owned ( NOTICE: NOT customer slice! ) AWS account needs permissions to access the XDR Trumpet S3 bucket, which allows the CloudFormation template to run in the customer's AWS account. This only needs to be done if the customer has their own AWS account. 
+
+Edit the common/*/partition.hcl file based on where the customer AWS account lives: commercial = aws, govcloud = aws-us-gov. If the customer has both, edit both files. 
+
+```
+vim ../common/aws/partition.hcl
+# OR/AND
+vim ../common/aws-us-gov/partition.hcl
+```
+
+Update the customer_accounts variable. NOT the account_map!!!
+
+Run `terragrunt apply` to apply the changes to `xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/300-s3-xdr-trumpet` and `us-west-1/300-s3-xdr-trumpet`.
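A hypothetical sketch of that partition.hcl edit (the account IDs below are placeholders; per the note above, `customer_accounts` holds customer-owned accounts, while `account_map` is for the XDR slice accounts and must not be touched here):

```hcl
# Sketch only: append the customer-owned account ID to customer_accounts.
# Do NOT edit account_map - that holds the XDR customer slice accounts.
customer_accounts = [
  "111111111111",  # existing customer-owned account (placeholder ID)
  "222222222222",  # new customer-owned account (placeholder ID)
]
```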
 
 ### Steps to allow LCP nodes through SG
 

+ 1 - 1
OpenVPN Notes.md

@@ -18,7 +18,7 @@ ldap.read@defpoint.com is the okta user that openvpn uses to auth to okta. the l
 0. Be on prod VPN.
 1. Log into OKTA in an incognito window using the ldap.read username and the current password from Vault (engineering/root). Brad's phone is currently set up with the Push notification for the account. MFA is required for the account. To change the password without Brad, remove MFA with your account in OKTA and set it up on your own phone. 
 2. Once the password has been updated, update vault in this location, engineering/root with a key of ldap.read@defpoint.com. You will have to create a new version of engineering/root to save the password. 
-3. Store the new password and the creds for openvpn and drop off the VPN. Log into the openVPN web GUI (https://openvpn.mdr.defpoint.com/admin/  -  https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for ldap.read. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the test environment (https://openvpn.mdr-test.defpoint.com/admin/  https://openvpn.xdrtest.accenturefederalcyber.com/admin/ ) 
+3. Store the new password and the creds for openvpn and drop off the VPN. Log into the openVPN web GUI (https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for ldap.read. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the test environment ( https://openvpn.xdrtest.accenturefederalcyber.com/admin/ ) 
 4. Verify that you are able to login to the VPN. 
 5. Set reminder in your calendar to reset the password in less than 60 days. 
 

+ 6 - 0
files/config

@@ -215,6 +215,12 @@ region = us-gov-east-1
 color = ff1a1a
 source_profile = govcloud
 
+[profile mdr-prod-dgi-gov]
+role_arn = arn:aws-us-gov:iam::455571784901:role/user/mdr_terraformer
+region = us-gov-east-1
+color = ff1a1a
+source_profile = govcloud
+
 ;
 ;CYBERRANGE
 ;