@@ -585,7 +585,9 @@ Example: onboarding-la-covid

## Create the LCP Build Sheet if the customer needs LCP nodes

-Go to https://afs365.sharepoint.com/sites/MDR-Documentation/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2Fsites%2FMDR%2DDocumentation%2FShared%20Documents%2FOnboarding%2FLCP%20Build%20Sheets
+Go to https://afs365.sharepoint.com/sites/MDR-Documentation/Shared%20Documents/Forms/AllItems.aspx?viewid=76d97d05%2Dab42%2D455a%2D8259%2D24b51862b35e&id=%2Fsites%2FMDR%2DDocumentation%2FShared%20Documents%2FOnboarding%2FCustomer%20Onboarding
+
+If a customer folder already exists, put the Build Sheet there. If not, go to Documents > Onboarding > LCP Build Sheets.

Copy the Blank Template LCP Build Sheet and rename it with the customer prefix.
Find and replace the template placeholder values with the customer prefix.
@@ -621,6 +623,9 @@ Edit the c2_services_external_ips map and be sure to add a description.

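+A minimal sketch of such a map entry, assuming a map-of-objects shape with cidr and description keys (the real variable layout in the module may differ; the address is a placeholder):
+
+```hcl
+c2_services_external_ips = {
+  modelclient = {
+    cidr        = "203.0.113.10/32"
+    description = "modelclient LCP external IP"
+  }
+}
+```
+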
Then reapply in 095-instance-sensu, 080-instance-repo-server, and 071-instance-salt-master, or run `terragrunt-apply-all`.

+Don't forget the Moose indexer SG! prod/aws-us-gov/mdr-prod-c2/160-splunk-indexer-cluster/terragrunt.hcl
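+
+For example, to reapply a single instance (sketch; assumes the prod/aws-us-gov/mdr-prod-c2 tree shown above):
+
+```
+cd prod/aws-us-gov/mdr-prod-c2/095-instance-sensu
+terragrunt apply
+```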
+
+LEGACY: NOT NEEDED...
For Legacy, update these files:
terraform/02-msoc_vpc/security-groups.tf
terraform/common/variables.tf
@@ -646,10 +651,10 @@ CUSTOMERPREFIX=modelclient
cd salt/pillar
`echo " '${CUSTOMERPREFIX}* and G@msoc_pop:True':" >> top.sls`
`echo " - match: compound" >> top.sls`
-`echo " - match: compound" >> top.sls`
+`echo " - ${CUSTOMERPREFIX}_pop_settings" >> top.sls`
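+
+With CUSTOMERPREFIX=modelclient, the resulting pillar top.sls entry should read (indentation is a sketch; the entry sits under the usual base: key):
+
+```yaml
+  'modelclient* and G@msoc_pop:True':
+    - match: compound
+    - modelclient_pop_settings
+```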

1. Add LCP nodes to the Salt top file
-cd salt/fileroots
+`cd ../fileroots/`
`echo " '${CUSTOMERPREFIX}-splunk-syslog*':" >> top.sls`
`echo " - splunk.heavy_forwarder" >> top.sls`
`echo " - splunk.pop_hf_license" >> top.sls`
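+
+And the resulting fileroots top.sls entry (same sketch caveats as above):
+
+```yaml
+  'modelclient-splunk-syslog*':
+    - splunk.heavy_forwarder
+    - splunk.pop_hf_license
+```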
@@ -665,15 +670,13 @@ salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" saltutil.refresh_pillar
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" saltutil.refresh_modules
#did the customer set the roles correctly?
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" cmd.run 'cat /etc/salt/minion.d/minion_role_grains.conf'
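+
+A hypothetical example of what that file might contain (the role value is invented for illustration; the real grains are set by the customer installer):
+
+```yaml
+grains:
+  role: splunk-heavy-forwarder
+  msoc_pop: True
+```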
-#ensure the msoc_pop grain is working properly and set to True
-salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get msoc_pop
#ensure the ec2:billing_products grain is EMPTY unless the node is in AWS. (Do we get the RH subscription from AWS? Not for LCP nodes.)
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get ec2:billing_products
#ensure the environment grain is available and set to prod
-salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get environment ( not needed if in AWS?)
-#make sure the activation-key pillar is available
+salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get environment (not needed on LCP nodes?)
+#make sure the activation-key pillar is available (VMware only)
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" pillar.get os_settings:rhel:rh_subscription:activation-key
-#LCP nodes need manual RH Subscription enrollment before removing test=true ensure the command is filled out with the pillar, unless they are in AWS?
+#VMware LCP nodes need manual RH Subscription enrollment; AWS nodes do not. Before removing test=true, ensure the command is filled out with the pillar.
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications.rhel_registration test=true
# try out the os_modifications, then try highstate
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications
@@ -682,8 +685,8 @@ salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications
Start with the deployment server (ds):
salt ${CUSTOMERPREFIX}-splunk-ds\* state.highstate --output-diff

-
-salt ${CUSTOMERPREFIX}-splunk-syslog-\* state.sls os_modifications
+Finish with the syslog servers:
+salt ${CUSTOMERPREFIX}-splunk-syslog-\* state.highstate --output-diff

## Configure the Customer LCP/POP Git Repository

@@ -699,7 +702,7 @@ DSADMINHASH="`echo $DSADMINPASS | python3 -c "from passlib.hash import sha512_cr
echo $DSADMINHASH
echo ":admin:${DSADMINHASH}::Administrator:admin:changeme@example.com:::50000" > passwd
```
-Store the DSADMINPASS in Vault in the engineering/customer_slices/$CUSTOMERPREFIX secret. Create new version with key called "$CUSTOMERPREFIX-splunk-ds admin".
+Store the DSADMINPASS in Vault in the engineering/customer_slices/$CUSTOMERPREFIX secret. Create a new version with a key named `${CUSTOMERPREFIX}-splunk-ds admin` (run `echo $CUSTOMERPREFIX-splunk-ds admin` to see the expanded name).
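+
+A sketch of the same step via the Vault CLI (assumes a KV v2 secrets engine mounted at engineering/ and an already-authenticated session):
+
+```
+vault kv patch engineering/customer_slices/${CUSTOMERPREFIX} "${CUSTOMERPREFIX}-splunk-ds admin"="${DSADMINPASS}"
+```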

Grab the Salt Minion user password
```
@@ -708,24 +711,68 @@ echo $MINIONPASS
MINIONHASH="`echo $MINIONPASS | python3 -c "from passlib.hash import sha512_crypt; print(sha512_crypt.hash(input(), rounds=5000))"`"
echo $MINIONHASH
echo ":minion:${MINIONHASH}::Salt Minion:saltminion::::50000" >> passwd
+cat passwd
```
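+
+The resulting passwd file should contain two lines shaped like this (hashes elided):
+
+```
+:admin:$6$<salt>$<hash>::Administrator:admin:changeme@example.com:::50000
+:minion:$6$<salt>$<hash>::Salt Minion:saltminion::::50000
+```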

-Put these values in the passwd file in the Customer DS git repo (msoc-$CUSTOMERREFIX-pop) in the root directory. Use the below command to help verify the password hashed correctly.
+Put these values in the passwd file in the Customer DS git repo (msoc-$CUSTOMERPREFIX-pop) in the root directory. Use the command below to verify the password hashed correctly (OPTIONAL); take the salt from the stored hash (the `<salt>` portion between the second and third `$`).

```
echo $MINIONPASS | python3 -c "from passlib.hash import sha512_crypt; print(sha512_crypt.hash(input(), salt='<YOUR-SALT-HERE>', rounds=5000))"
```

-1. Add the appropriate apps to the Customer DS git repo (msoc-CUSTOMERPREFIX-pop). Double check with Duane/Brandon to ensure correct apps are pushed to the DS! The minimum apps are $CUSTOMERPREFIX_hf_outputs, xdr_pop_minion_authorize, xdr_pop_ds_summaries.
+1. Add the appropriate apps to the Customer DS git repo (msoc-CUSTOMERPREFIX-pop). Double-check with Duane/Brandon to ensure the correct apps are pushed to the DS! The minimum apps are cust_hf_outputs, xdr_pop_minion_authorize, xdr_pop_ds_summaries.
+
+Update the cust_hf_outputs app (the sed command below is macOS-specific):
+`sed -i '' -e 's/CUSTOMER/'"${CUSTOMERPREFIX}"'/g' deployment-apps/cust_hf_outputs/local/outputs.conf`
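+
+On Linux (GNU sed), the equivalent command drops the empty backup-suffix argument:
+`sed -i -e 's/CUSTOMER/'"${CUSTOMERPREFIX}"'/g' deployment-apps/cust_hf_outputs/local/outputs.conf`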
+
+Commit the changes to the git repo.
+```
+git add passwd
+git add deployment-apps/cust_hf_outputs/
+git commit -m "Adds ${CUSTOMERPREFIX} LCP variables. Will promote to master immediately."
+git push origin master
+```

-- Rename the CUST_ouputs_hf app from the skeleton git repo.
+1. Add the ServerClass.conf to the Customer DS git repo. LET Feed Management DO THIS!

-1. Add the ServerClass.conf to the Customer DS git repo. Feed Management may do this???
+Move the files to the LCP. You can highstate the minions.

-Use salt to move the files to the LCP. You can highstate the minion
+```
+sudo salt-run fileserver.update
+salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.highstate --output-diff
+
-`salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.highstate`
+# then patch and reboot
+salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" pkg.upgrade
+salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" system.reboot
+```
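+
+Optionally confirm the minions reconnect after the reboot:
+```
+salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" test.ping
+```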

+## Verify Splunk Connectivity
+Can you see the DS logs in the customer slice Splunk?
+
+Customer Slice
+`index=_internal NOT host="*.pvt.xdr.accenturefederalcyber.com" source="/opt/splunk/var/log/splunk/splunkd.log" earliest=-1h`
+
+Moose
+`index=_internal earliest=-1h host=<host-from-previous-command>`
+
+
+## Email Feed Management
+
+SUBJECT: ${CUSTOMERPREFIX} LCP Servers Ready
+
+```
+Hello,
+
+This is a notification that the ${CUSTOMERPREFIX} LCP servers are ready for Feed Management to configure for customer use.
+
+Successfully Completed Tasks
+- Salt highstate completed successfully
+- Servers fully patched and rebooted successfully
+- Servers connecting to Splunk customer slice successfully
+- Servers connecting to Splunk Moose successfully
+- Servers connecting to Sensu successfully
+
+```

## LCP Troubleshooting