|
@@ -33,6 +33,8 @@ You will need the following. Setting environment variables will help with some o
|
|
|
|
|
|
IMPORTANT NOTE: Each time you run this, it generates new passwords, so make sure you use the same terminal window to perform all of the steps!
|
|
|
|
|
|
+Do you have a Splunk license yet? If not, use a temp/dev license until the real one shows up. Not ideal, but there is not much of a choice.
|
|
|
+
|
|
|
Commands were tested on macOS and may not (probably won't) work on Windows or Linux.
|
|
|
|
|
|
```
|
|
@@ -58,10 +60,11 @@ Connect to production VPN
|
|
|
|
|
|
Log into vault at https://vault.pvt.xdr.accenturefederalcyber.com (legacy: https://vault.mdr.defpoint.com)
|
|
|
|
|
|
-Record the following into `secrets/engineering/customer_slices/${CUSTOMERPREFIX}`
|
|
|
+Record the following into `engineering/customer_slices/${CUSTOMERPREFIX}`
|
|
|
|
|
|
```
|
|
|
echo $ADMINPASS # record as `${CUSTOMERPREFIX}-splunk-cm admin`
|
|
|
+echo "${CUSTOMERPREFIX}-splunk-cm admin"
|
|
|
```
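
A hypothetical CLI equivalent of recording the secret through the Vault UI; the exact mount path and key name may differ from what the UI shows, so treat this as a sketch only:

```
vault kv put engineering/customer_slices/${CUSTOMERPREFIX} "${CUSTOMERPREFIX}-splunk-cm admin"="${ADMINPASS}"
```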
|
|
|
|
|
|
At this time, we don't set the others on a per-account basis through salt, though it looks like the admin password has been changed for some clients.
|
|
@@ -69,6 +72,8 @@ At this time, we don't set the others on a per-account basis through salt, thoug
|
|
|
|
|
|
## Step x: Update and Branch Git
|
|
|
|
|
|
+You may have already created a new branch in xdr-terraform-live in a previous step.
|
|
|
+
|
|
|
```
|
|
|
cd ~/msoc-infrastructure
|
|
|
git checkout develop
|
|
@@ -116,10 +121,12 @@ Add permissions for the okta apps:
|
|
|
* Analyst
|
|
|
* mdr-admins
|
|
|
* mdr-engineers
|
|
|
- * For CM:
|
|
|
+ * For CM & HF:
|
|
|
* mdr-admins
|
|
|
* mdr-engineers
|
|
|
|
|
|
+1) While logged into Okta, add the Splunk logo to the apps. The logo is located at msoc-infrastructure/tools/okta_app_maker/okta-logo-splunk.png.
|
|
|
+
|
|
|
|
|
|
## Step x: Add the license file to salt
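
A minimal sketch of staging the license under the Splunk fileroots, assuming the path referenced in the `git add` step further below; the actual filename is whatever Splunk issued:

```
mkdir -p ~/msoc-infrastructure/salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
cp <your-license-file> ~/msoc-infrastructure/salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}/
```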
|
|
|
|
|
@@ -174,47 +181,23 @@ Review the file to make sure everything looks good.
|
|
|
|
|
|
Add to gitfs pillars and allow salt access:
|
|
|
```
|
|
|
-vim salt_master.conf
|
|
|
-# Copy one of the customer_repos with the new customer
|
|
|
+vim salt_master.sls
|
|
|
+# Copy one of the customer_repos with the new customer. Update both the CM repo and the DS repo, unless you know there will not be LCP/POP nodes.
|
|
|
vim ~/msoc-infrastructure/salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
|
|
|
# Add the customer prefix to the ACL
|
|
|
-git add salt_master.conf ~/msoc-infrastructure/salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
|
|
|
+
|
|
|
```
|
|
|
|
|
|
-Migrate pillars through to master branch:
|
|
|
+Migrate changes through to master branch:
|
|
|
```
|
|
|
-git add top.sls ${CUSTOMERPREFIX}_variables.sls
|
|
|
+git add ../fileroots/splunk/files/licenses/${CUSTOMERPREFIX}/<your-license-file>
|
|
|
+git add ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
|
|
|
+git add salt_master.sls top.sls ${CUSTOMERPREFIX}_variables.sls os_settings.sls
|
|
|
git commit -m "Adds ${CUSTOMERPREFIX} variables. Will promote to master immediately."
|
|
|
git push origin feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
|
|
|
```
|
|
|
|
|
|
-Follow the link to create the PR, and then submit another PR to master.
|
|
|
-
|
|
|
-
|
|
|
-## Step x: Update the salt master
|
|
|
-
|
|
|
-Once approved, update the salt master
|
|
|
-
|
|
|
-```
|
|
|
-ssh gc-prod-salt-master
|
|
|
-sudo vim /etc/salt/master.d/default_acl.conf
|
|
|
-# Grant users access to the new prefix
|
|
|
-# save and exit
|
|
|
-salt 'salt*' cmd.run 'salt-run fileserver.update'
|
|
|
-sudo service salt-master restart
|
|
|
-exit
|
|
|
-```
|
|
|
-
|
|
|
-Brad's way default_acl.conf should have been updated in git
|
|
|
-
|
|
|
-```
|
|
|
-ssh gc-prod-salt-master
|
|
|
-salt 'salt*' cmd.run 'salt-run fileserver.update'
|
|
|
-salt 'salt*' state.sls salt_master.salt_master_configs test=true
|
|
|
-sudo salt 'salt*' state.sls salt_master.salt_posix_acl test=true
|
|
|
-exit
|
|
|
-```
|
|
|
-
|
|
|
+Follow the link to create the PR, then submit another PR to master and get the changes merged into the master branch.
|
|
|
|
|
|
## Step x: Create customer repositories
|
|
|
|
|
@@ -233,7 +216,21 @@ Create a new repository using the cm template:
|
|
|
* automation - Read
|
|
|
* onboarding - Write
|
|
|
|
|
|
-Clone and modify the password (TODO: Just take care of this in salt):
|
|
|
+Repeat for the POP repo, unless the customer will not have POP nodes.
|
|
|
+
|
|
|
+1. Browse to https://github.mdr.defpoint.com/mdr-engineering/msoc_skeleton_pop
|
|
|
+2. Click "use this template"
|
|
|
+ a. Name the new repository `msoc-${CUSTOMERPREFIX}-pop`
|
|
|
+ b. Give it the description: `Splunk POP Configuration for [CUSTOMER DESCRIPTION]`
|
|
|
+ c. Set permissions to 'Private'
|
|
|
+ d. Click 'create repository from template'
|
|
|
+3. Click on 'Settings', then 'Collaborators and Teams', and add the following:
|
|
|
+ * infrastructure - Admin
|
|
|
+ * automation - Read
|
|
|
+ * onboarding - Write
|
|
|
+
|
|
|
+
|
|
|
+Clone and modify the password in the CM repo (TODO: Just take care of this in salt):
|
|
|
|
|
|
```
|
|
|
mkdir ~/tmp
|
|
@@ -246,7 +243,19 @@ git add passwd
|
|
|
git commit -m "Stored hashed passwords"
|
|
|
git push origin master
|
|
|
```
|
|
|
-Repeat for msoc-${CUSTOMERPREFIX}-pop repo
|
|
|
+
|
|
|
+
|
|
|
+## Step x: Update the salt master with new configs
|
|
|
+
|
|
|
+Now that we have the git repos created, let's update the salt master.
|
|
|
+
|
|
|
+```
|
|
|
+ssh gc-prod-salt-master
|
|
|
+salt 'salt*' cmd.run 'salt-run fileserver.update'
|
|
|
+salt 'salt*' state.sls salt_master.salt_master_configs --state-output=changes test=true
|
|
|
+sudo salt 'salt*' state.sls salt_master.salt_posix_acl --state-output=changes test=true
|
|
|
+exit
|
|
|
+```
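
If the test run looks clean, the same states can be applied for real; a sketch, the only change being that `test=true` is dropped:

```
salt 'salt*' state.sls salt_master.salt_master_configs --state-output=changes
sudo salt 'salt*' state.sls salt_master.salt_posix_acl --state-output=changes
```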
|
|
|
|
|
|
## Step x: Set up xdr-terraform-live account
|
|
|
|
|
@@ -257,17 +266,19 @@ cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
|
|
|
vim account.hcl # Fill in all "TODO" items. Leave the "LATER" variables for later steps.
|
|
|
```
|
|
|
1. Update the ref version in all the terragrunt.hcl files to match the latest tag on the modules git repo. Replace v1.xx.xx in the command below with the current tag.
|
|
|
-2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.0.0/?ref=v1.XX.XX/ {} \;`
|
|
|
+2. `find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.10.17/?ref=v1.xx.xx/ {} \;`
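
A quick sanity check, not part of the official steps, to confirm no stale refs remain after the sed:

```
grep -rn "ref=v1" --include="terragrunt.hcl" . | grep -v ".terragrunt-cache"
```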
|
|
|
|
|
|
|
|
|
## Step x: Add account to global variables, and apply necessary prerequisites
|
|
|
|
|
|
1. Add the account number to `account_map["prod"]` in:
|
|
|
- * `~/xdr-terraform-live/prod/aws-us-gov/partition.hcl`
|
|
|
- * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl`
|
|
|
-2. `cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2`
|
|
|
-2. Create PR and get changes approved
|
|
|
+ * `~/xdr-terraform-live/prod/aws-us-gov/partition.hcl` OR `vim ../partition.hcl`
|
|
|
+ * `~/xdr-terraform-live/common/aws-us-gov/partition.hcl` OR `vim ../../../common/aws-us-gov/partition.hcl`
|
|
|
+2. `cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2` OR `cd ../mdr-prod-c2/`
|
|
|
+2. Create a PR and get the changes approved. The commit message should be "Adds ${CUSTOMERPREFIX} customer".
|
|
|
3. Apply the modules: (* draft: are there more external requirements? *)
|
|
|
+
|
|
|
+Copy and paste these commands into the command line and run them.
|
|
|
```
|
|
|
for module in 005-account-standards-c2 008-transit-gateway-hub
|
|
|
do
|
|
@@ -277,7 +288,7 @@ do
|
|
|
done
|
|
|
```
|
|
|
4. `cd ~/xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/`
|
|
|
-4. `cd ../../../../common/aws-us-gov/afs-mdr-common-services-gov/`
|
|
|
+4. `cd ../../../common/aws-us-gov/afs-mdr-common-services-gov/`
|
|
|
5. Apply the modules:
|
|
|
```
|
|
|
for module in 008-xdr-binaries 010-shared-ami-key
|
|
@@ -294,7 +305,7 @@ done
|
|
|
The new AWS account needs permissions to access the AMIs before trying to create EC2 instances. Replace the aws-account-id in the below command.
|
|
|
|
|
|
```
|
|
|
-cd ~/xdr-terraform-live/bin/
|
|
|
+cd ~/xdr-terraform-live/bin/ # OR cd ../../../bin/
|
|
|
AWS_PROFILE=mdr-common-services-gov update-ami-accounts <aws-account-id>
|
|
|
```
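
To spot-check that the launch permission actually landed, something like the following should list the new account ID; the `<ami-id>` placeholder is hypothetical, use one of the shared AMIs:

```
AWS_PROFILE=mdr-common-services-gov aws ec2 describe-image-attribute --image-id <ami-id> --attribute launchPermission
```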
|
|
|
|
|
@@ -305,7 +316,7 @@ The `xdr-terraform-live/bin` directory should be in your path. You will need it
|
|
|
(n.b., if you are _certain_ everything is good to go, you can put a `yes yes |` before the `terragrunt-apply-all` to bypass prompts. This does not leave you an out if you make a mistake, however, because it is difficult to break out of terragrunt/terraform without causing issues.)
|
|
|
|
|
|
```
|
|
|
-cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
|
|
|
+cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} # OR cd ../prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
|
|
|
terragrunt-apply-all --skipqualys --notlocal
|
|
|
```
|
|
|
|
|
@@ -327,6 +338,7 @@ Workaround is:
|
|
|
cd 010-vpc-splunk
|
|
|
terragrunt apply -target module.vpc
|
|
|
terragrunt apply
|
|
|
+cd ..
|
|
|
```
|
|
|
You might run into an error when applying the test instance module `025-test-instance`.
|
|
|
Error reads as:
|
|
@@ -362,9 +374,10 @@ Short version:
|
|
|
1. Pick the Regions that should be in scope (all of them), hit "Continue"
|
|
|
1. Check the "Automatically Activate" buttons for VM and PC Scanning application
|
|
|
1. Pick these tag(s): AWS_Prod,
|
|
|
+1. Hit "Continue", then "Finish".
|
|
|
1. You should be done with the wizard now. Back in the main list view, click the drop-down and pick "Run" to pull current Assets
|
|
|
|
|
|
-It should come back with a number of assets, no errors, and a hourglass for a bit.
|
|
|
+It should come back with a number of assets (probably about 6), no errors, and an hourglass for a bit.
|
|
|
|
|
|
|
|
|
## Step x: Finalize the Salt
|
|
@@ -399,8 +412,9 @@ System hangs appear to be because of a race condition with startup of firewalld
|
|
|
|
|
|
|
|
|
|
|
|
-## Update the salt pillars with the encrypted forms
|
|
|
+## TODO: Update the salt pillars with the encrypted forms
|
|
|
|
|
|
+Because we are not managing the splunk.secret, the pass4SymmKey gets encrypted into different values on each of the indexers. This causes the file containing the pass4SymmKey to be updated by Splunk on every Salt highstate. To resolve this, we would need to manage the splunk.secret file.
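
A quick way to confirm the secrets really do differ across the cluster; `/opt/splunk/etc/auth/splunk.secret` is the Splunk default location, adjust the target pattern as needed:

```
salt "${CUSTOMERPREFIX}-splunk*" cmd.run 'md5sum /opt/splunk/etc/auth/splunk.secret'
```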
|
|
|
TODO: Document a step of updating the `pillars/${CUSTOMERPREFIX}_variables.sls` with encrypted forms of the passwords.
|
|
|
|
|
|
|
|
@@ -408,7 +422,7 @@ TODO: Document a step of updating the `pillars/${CUSTOMERPREFIX}_variables.sls`
|
|
|
|
|
|
Log into https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com
|
|
|
`echo "https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com"`
|
|
|
-`echo "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com"`
|
|
|
+`echo "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8000"`
|
|
|
|
|
|
It should "just work".
|
|
|
|
|
@@ -442,7 +456,7 @@ Note from Duane: Should work anywhere. Main goal was to see that the cluster b
|
|
|
|
|
|
## Splunk configuration
|
|
|
|
|
|
-* Install ES on the search head Version 6.2 ( NOT 6.4.0 ! Until George tells us otherwise )
|
|
|
+* Install ES on the search head, version 6.2 (NOT 6.4.0, until George tells us otherwise!)
|
|
|
|
|
|
|
|
|
1. Download ES app from Splunk using your splunk creds
|
|
@@ -459,24 +473,40 @@ Note from Duane: Should work anywhere. Main goal was to see that the cluster b
|
|
|
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "[settings]" > /opt/splunk/etc/system/local/web.conf'`
|
|
|
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "max_upload_size = 1024" >> /opt/splunk/etc/system/local/web.conf'`
|
|
|
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
|
|
|
-1. Restart SH via the GUI and upload via the GUI ( takes a long time to upload )
|
|
|
+1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
|
|
|
+1. Upload via the GUI (the upload takes a long time)
|
|
|
1. Choose "Set up now" and "Start Configuration Process"
|
|
|
1. ES should complete app actions on its own, then prompt for a restart
|
|
|
|
|
|
### Remove the web.conf file
|
|
|
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'`
|
|
|
1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'rm -rf /opt/splunk/etc/system/local/web.conf'`
|
|
|
-1. restart SH
|
|
|
+1. `salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'`
|
|
|
+
|
|
|
|
|
|
+## Monitoring Console (skip if demo cluster)
|
|
|
|
|
|
-## Monitoring Console ( skip if demo cluster? )
|
|
|
+Note: Once the Legacy Monitoring Console has moved to GC, the SGs will need to be fixed.
|
|
|
|
|
|
-* Add cluster to monitoring console
|
|
|
-* Peer with CM, SH, and HF
|
|
|
-* Update MC topology
|
|
|
+1. Add master_uri and pass4symmkey to salt/pillar/mc_variables.sls (see the illustrative stanza after this list).
|
|
|
+1. Commit to git with the message "Adds variables for Monitoring Console" and, once approved, highstate the Moose MC.
|
|
|
+1. `sudo salt-run fileserver.update`
|
|
|
+1. `salt splunk-mc* state.sls splunk.monitoring_console --output-diff test=true`
|
|
|
+1. Splunk should restart and the new Splunk CMs will show up in the MC (Settings -> Indexer Clustering).
|
|
|
+1. After applying the code, pull the encrypted values of the pass4symmkey out of the Splunk config file and replace them in the salt state. Then create a PR for git. Set the git commit message to "Swaps pass4symmkey with encrypted value".
|
|
|
+1. `salt splunk-mc* cmd.run 'cat /opt/splunk/etc/apps/connected_clusters/local/server.conf'`
|
|
|
+1. Ensure the new cluster is showing up in Settings -> Indexer Clustering (you should see 4 check marks and at least 3 peers). If not, verify firewall rules.
|
|
|
+1. Add CM as a search peer by going to Settings -> Distributed Search -> Search Peers
|
|
|
+1. Input the Peer URI (https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8089) and remote admin credentials. For the CM, the remote admin credentials are in Vault at engineering -> customer_slices -> ${CUSTOMERPREFIX}
|
|
|
+1. Repeat for SH and HF and use the correct Splunk creds. `salt ${CUSTOMERPREFIX}-splunk-sh* pillar.get secrets:splunk_admin_password`
|
|
|
+1. Verify all customer instances are connected as search peers by searching for the customer prefix on the search peers page.
|
|
|
+1. Update the MC topology (Settings -> Monitoring Console -> Settings -> General Setup -> Apply Changes).
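
The server.conf stanza that connects the MC to a cluster master generally looks like the sketch below; this is illustrative only, and the real values come from mc_variables.sls and Vault:

```
[clustermaster:${CUSTOMERPREFIX}]
master_uri = https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8089
pass4SymmKey = <pass4symmkey from Vault>
```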
|
|
|
|
|
|
|
|
|
## Create New Vault KV Engine for Customer for Feed Management
|
|
|
+1. Log into Vault
|
|
|
+1. Enable a new secrets engine of type KV
|
|
|
+1. Change the path (see the naming scheme below) and enable the engine.
|
|
|
Naming Scheme: onboarding-<customer-name>
|
|
|
Example: onboarding-la-covid
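
If you prefer the CLI over the UI, the equivalent is roughly the following, shown with the example path; confirm the KV version matches the other onboarding engines:

```
vault secrets enable -path=onboarding-la-covid kv
```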
|
|
|
|
|
@@ -543,8 +573,6 @@ salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications.rhel
|
|
|
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications
|
|
|
|
|
|
|
|
|
-
|
|
|
-
|
|
|
salt ${CUSTOMERPREFIX}-splunk
|
|
|
Start with ds
|
|
|
salt ${CUSTOMERPREFIX}-splunk-ds\* state.highstate --output-diff
|
|
@@ -554,6 +582,8 @@ salt ${CUSTOMERPREFIX}-splunk-syslog-\* state.sls os_modifications
|
|
|
|
|
|
## LCP Troubleshooting
|
|
|
|
|
|
+REMEMBER: Our customers are responsible for setting up the salt minion with grains and for allowing traffic through the outbound firewall. If they have not done that yet, you will get more errors.
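
Before digging into grain issues, a quick check with standard Salt commands that the LCP minion is even talking to the salt master:

```
salt-key -L | grep ${CUSTOMERPREFIX}   # has the minion key shown up / been accepted?
salt "${CUSTOMERPREFIX}*" test.ping
```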
|
|
|
+
|
|
|
ISSUE: Help, the environment grain is not showing up!
|
|
|
SOLUTION: This command will add a static grain in /etc/salt/minion.d/cloud_init_grains.conf.
|
|
|
`salt 'target' state.sls salt_minion.salt_grains pillar='{"environment": "prod"}' test=true --output-diff`
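
Once the state is applied for real (drop `test=true`), the grain can be verified with standard Salt:

```
salt 'target' grains.item environment
```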
|