@@ -58,7 +58,7 @@ ESSJOBSHASH="`echo $ESSJOBSPASS | python3 -c "from passlib.hash import sha512_cr
Connect to production VPN
-Log into vault at https://vault.pvt.xdr.accenturefederalcyber.com (legacy: https://vault.mdr.defpoint.com)
+Log into vault at https://vault.pvt.xdr.accenturefederalcyber.com (not yet live; legacy: https://vault.mdr.defpoint.com)
Record the following into `engineering/customer_slices/${CUSTOMERPREFIX}`
@@ -117,7 +117,7 @@ Substitute `REPLACEME` with `${CUSTOMERPREFIX}-splunk-sh`, `-cm`, or `-hf` and *re
Add permissions for the okta apps:
1) Log into the okta webpage (https://mdr-multipass.okta.com/)
1) Go to Admin->Applications
-1) for each `${CUSTOMERPREFIX}` site, click 'Assign to Groups' and add the following groups:
+1) for each `${CUSTOMERPREFIX}` application, click 'Assign to Groups' and add the following groups:
* For Search heads:
* Analyst
* mdr-admins
@@ -149,7 +149,7 @@ cd ../../salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
* mdr-admins: admin / sync group yes
* mdr-engineers: user / sync group yes
* Create an enrollment token with a description of "salt"
-* Put the enrollment token `~/msoc-infrastructure/salt/pillar/os_settings.sls` or `vim ../../../../../pillar/os_settings.sls`, under the jinja if/else
+* Put the enrollment token in `~/msoc-infrastructure/salt/pillar/os_settings.sls` (or `vim ../../../../../pillar/os_settings.sls`), under the jinja if/else. Use "y" to yank in vim and "p" to paste.
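For reference, the pillar entry usually ends up shaped roughly like this (a hypothetical sketch only -- the actual keys, grain names, and branch contents in os_settings.sls will differ; paste the real token where indicated):

```
{# pillar/os_settings.sls -- illustrative shape only #}
{% if grains['environment'] == 'prod' %}
enrollment_token: 'PASTE-THE-SALT-ENROLLMENT-TOKEN-HERE'
{% else %}
enrollment_token: 'existing-lower-environment-token'
{% endif %}
```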
## Step x: Set up the pillars
@@ -180,15 +180,16 @@ cat >> ${CUSTOMERPREFIX}_variables.sls
```
Review the file to make sure everything looks good.
-`vim frtib_variables.sls`
+`vim ${CUSTOMERPREFIX}_variables.sls`
Add to gitfs pillars and allow salt access:
```
-# Copy one of the customer_repos and update with the new customer prefix. Update both the CM repo and the DS repo, unless you know there will not be LCP/POP nodes.
+# In the salt_master.sls file, copy one of the customer_repos and update with the new customer prefix. Update both the CM repo and the DS repo (deployment_servers), unless you know there will not be LCP/POP nodes.
vim salt_master.sls
# Add customer prefix to ACL
vim ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
+# Example vim substitution, adding the new prefix ca-c19: :%s/frtib\*/frtib\* or ca-c19\*/
# Add Account number to xdr_asset_inventory.sh under GOVCLOUDACCOUNTS
vim ../fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh
@@ -275,8 +276,11 @@ cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
vim account.hcl # Fill in all "TODO" items. Leave the "LATER" variables for later steps.
```
1. Update the ref version in all the terragrunt.hcl files to match latest tag on modules git repo. Replace v1.XX.XX with the current tag.
-2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.10.17/?ref=v1.xx.xx/ {} \;`
-did you get them all? `cat */terragrunt.hcl | grep ref | grep -v 1.xx.xx`
+2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' 's/?ref=v1.21.0/?ref=v1.x.x/' {} \;`
+2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' 's/?ref=v1.0.0/?ref=v1.x.x/' {} \;`
+Did you get them all? Don't forget about the subfolders in account_standards_regional.
+`cat */terragrunt.hcl | grep ref | grep -v v1.x.x`
+`cat */*/terragrunt.hcl | grep ref`
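Before running the sed over the live tree, you can rehearse the same find/sed shape in a throwaway directory (GNU sed shown with plain `-i`; on macOS/BSD use `-i ''` as above -- the path and version numbers below are made up for the demo):

```shell
#!/bin/sh
# Rehearse the ref bump on a scratch copy before touching the real repo.
tmp=$(mktemp -d)
mkdir -p "$tmp/010-vpc-splunk"
printf 'source = "git::ssh://example//modules/vpc?ref=v1.21.0"\n' \
  > "$tmp/010-vpc-splunk/terragrunt.hcl"
# Same find/sed structure as the real command above.
find "$tmp" -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" \
  -exec sed -i 's/?ref=v1\.21\.0/?ref=v1.22.0/' {} \;
grep ref "$tmp"/*/terragrunt.hcl   # every ref should now read v1.22.0
rm -rf "$tmp"
```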
## Step x: Add account to global variables, and apply necessary prerequisites
@@ -297,6 +301,9 @@ do
popd
done
```
+Or as a one-liner:
+`for module in 005-account-standards-c2 008-transit-gateway-hub; do pushd $module; terragrunt apply; popd; done`
+
4. `cd ~/xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/`
4. `cd ../../../common/aws-us-gov/afs-mdr-common-services-gov/`
5. Apply the modules:
@@ -334,16 +341,15 @@ region = us-gov-east-1
color = ff0000
```
-Optionally, also add the new account number to the packer build so that when new
+Also add the new account number to the packer build so that when new
AMIs get built they are shared automatically with this account.
```
-
-cd ~/msoc-infrastructure/packer
+cd ~/msoc-infrastructure/packer  # or: cd ../../msoc-infrastructure/packer
vi Makefile
# Add the account(s) to GOVCLOUD_ACCOUNTS / COMMERCIAL_ACCOUNTS
# as needed. PR it and exit
-
+cd ../../xdr-terraform-live/bin/
```
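If you want to double-check the Makefile edit from the shell, a scratch-file rehearsal looks like this (the variable layout is illustrative -- inspect the real Makefile for its actual format, and the account numbers here are fake):

```shell
#!/bin/sh
# Append a new account ID to a GOVCLOUD_ACCOUNTS-style line and verify it took.
tmp=$(mktemp)
printf 'GOVCLOUD_ACCOUNTS := 111111111111 222222222222\n' > "$tmp"
# '&' in the replacement re-inserts the whole matched line (GNU sed).
sed -i 's/^GOVCLOUD_ACCOUNTS := .*/& 333333333333/' "$tmp"
grep GOVCLOUD_ACCOUNTS "$tmp"
rm -f "$tmp"
```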
@@ -358,6 +364,13 @@ cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} # OR cd ../pr
terragrunt-apply-all --skipqualys --notlocal
```
+You might run into an error when applying the module `006-account-standards`.
+
+```
+Error creating CloudTrail: InsufficientS3BucketPolicyException: Incorrect S3 bucket policy is detected for bucket: xdr-cloudtrail-logs-prod
+```
+Resolution: make sure you have run `terragrunt apply` in mdr-prod-c2/005-account-standards-c2.
+
You might run into an error when applying the VPC module `010-vpc-splunk`. Error reads as:
```
@@ -400,6 +413,7 @@ AWS_PROFILE=mdr-common-services-gov update-ami-accounts <aws-account-id>
For complete details, see https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/wiki/Qualys.
Short version:
+1. Switch the xdr-terraform-live repo to the customer's branch before making any changes to files.
1. Browse to https://mdr-multipass.okta.com/, and pick qualys
1. In Qualys Console click AssetView -> Connectors -> "Create EC2 Connector", this will pop a wizard. ( If you don't see the Connectors menu item, you don't have the correct permissions. )
1. Name the connector and pick account type based on partition
@@ -412,10 +426,11 @@ Short version:
1. Check the "Automatically Activate" buttons for VM and PC Scanning application
1. Pick these tag(s): AWS_Prod,
1. Hit "Continue", then "Finish".
-1. Should be done with the wizard now. Back in the main list view click the drop-down and pick "Run" to pull current Assets
+1. Should be done with the wizard now. Back in the main list view click the drop-down next to the customer's name and pick "Run" to pull current Assets.
-It should come back with a number of assets ( probably about 6 ), no errors, and a hourglass for a bit.
+After waiting 1-2 minutes, hit the refresh icon. It should come back with a number of assets (probably about 6), no errors, and an hourglass for a bit.
+Push the xdr-terraform-live changes and open a PR.
## Step x: Finalize the Salt
@@ -435,6 +450,7 @@ salt ${CUSTOMERPREFIX}\* grains.get environment
salt ${CUSTOMERPREFIX}\* state.highstate --output-diff
# Review changes from above. I've seen indexers get hung; if they do, see the note below
# splunk_service may fail, this is expected (it's waiting for port 8000)
+salt ${CUSTOMERPREFIX}\* test.version
salt ${CUSTOMERPREFIX}\* pkg.upgrade ( this may break connectivity if there is a salt minion upgrade! )
salt ${CUSTOMERPREFIX}\* system.reboot
# Wait 5+ minutes
@@ -507,6 +523,8 @@ Note from Brad: Donkey! ( see Shrek 2 Dinner scene. https://www.youtube.com/watc
[settings]
max_upload_size = 1024
```
+
+On the salt master:
1. `CUSTOMERPREFIX=modelclient`
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'ls -larth /opt/splunk/etc/system/local/web.conf'`
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'touch /opt/splunk/etc/system/local/web.conf'`
@@ -557,6 +575,13 @@ Example: onboarding-la-covid
`salt -C '*splunk-indexer* or *splunk-idx* or *splunk-sh* or *splunk-hf*' state.sls splunk.maxmind.pusher --state-verbose=False --state-output=terse`
+## Create the LCP Build Sheet if the customer needs LCP nodes
+
+Go to https://afs365.sharepoint.com/sites/MDR-Documentation/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2Fsites%2FMDR%2DDocumentation%2FShared%20Documents%2FOnboarding%2FLCP%20Build%20Sheets
+
+Copy the Blank Template LCP Build Sheet and rename it with the customer prefix.
+Then find and replace the template's placeholder values with the customer's details.
+
## Got POP nodes? Ensure they are talking to Moose Splunk for Splunk UFs
Got customer public IPs after you were done standing up the Splunk cluster?