Legacy Information - For Reference Only
This is in no specific order YET
Shameful, but I just add it via the GitHub UI.
Normally named msoc-<customer>-cm. For collaborators/teams:
* infrastructure - Admin
* automation - Read
* onboarding - Write
Then I clone it and copy from an existing customer. We need a template-type-thing for this. If you copy from another customer, remove any HEC tokens.
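None of this is the official process, but if you want to skip the UI clicking, here is a rough sketch with the gh CLI. The org name and the copied-from customer are placeholders; the teams and permissions are the ones listed above (Admin=admin, Write=push, Read=pull in API terms).

```bash
# Rough sketch only -- "example-org" and the copied-from customer are placeholders.
ORG=example-org
CUSTOMER=la-c19
REPO="msoc-${CUSTOMER}-cm"

gh repo create "${ORG}/${REPO}" --private

# Teams/permissions from the list above
gh api -X PUT "orgs/${ORG}/teams/infrastructure/repos/${ORG}/${REPO}" -f permission=admin
gh api -X PUT "orgs/${ORG}/teams/automation/repos/${ORG}/${REPO}" -f permission=pull
gh api -X PUT "orgs/${ORG}/teams/onboarding/repos/${ORG}/${REPO}" -f permission=push

# Seed from an existing customer, then hunt for leftover HEC tokens before committing
gh repo clone "${ORG}/${REPO}" && cd "${REPO}"
cp -r ../msoc-some-other-customer-cm/* .
grep -ri "hec" . || true
```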
Duane has some terrible things he uses for this: a script plus a Docker image. The output looks something like:
[dwaddle@DPS0591 msoc]$ gen-splunk-passwd.sh
PASSWORD: `v9!T^\6RVU6C^BF%34c
HASH : $6$Ly1tn9ZbftJ$CzTH0uQI5bAfTHBnSK8hZsRwmMg/qrSW1tZg3zEcqwJK2cGoRiPkrRvNM1Kj/dUO2bwp87eBdPpwY/aWB2zht/
Once you have the plaintext, put it in Vault. Once you have the hash, put it in the passwd file that is in the repo.
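If the script/container isn't handy, the same style of $6$ (sha512-crypt) hash can be produced with stock tools. This is a minimal sketch, and the Vault path here is a guess, not the team's real layout:

```bash
# Minimal stand-in for gen-splunk-passwd.sh -- the Vault path is a guess.
CUSTOMER=la-c19

PASSWORD="$(openssl rand -base64 18)"        # the plaintext
HASH="$(openssl passwd -6 "${PASSWORD}")"    # sha512-crypt, same $6$... format as above

vault kv put "secret/${CUSTOMER}/splunk-admin" password="${PASSWORD}"   # plaintext -> Vault
echo "${HASH}"                               # this is what goes in the repo's passwd file
```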
Normally named msoc-<customer>-pop. For collaborators/teams:
* infrastructure - Admin
* automation - Read
* onboarding - Write
Then I clone it and copy from an existing customer. We need a template-type-thing for this.
Make the Okta apps for the CM, SH, and HF. You'll need each app's entityId and Sign-on URL for the pillars.
xxx-Splunk-CM
https://xxx-splunk-cm.msoc.defpoint.local:8000
Under the "Sign On" tab, click 'Edit'.
splunk-role-.*
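As a side note (not from the original doc), you can sanity-check which Okta groups that filter will actually match, assuming you have an API token for the mdr-multipass tenant in $OKTA_TOKEN:

```bash
# Lists Okta groups whose names start with "splunk-role-", i.e. what the regex above targets.
# Assumes an API token in $OKTA_TOKEN and the tenant seen in the script output below.
curl -s -H "Authorization: SSWS ${OKTA_TOKEN}" \
     "https://mdr-multipass.okta.com/api/v1/groups?q=splunk-role-" |
  python3 -c 'import json,sys; [print(g["profile"]["name"]) for g in json.load(sys.stdin)]'
```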
Use Duane's script to make the Okta apps like so:
[dwaddle@DPS0591 okta_app_maker(feature/MSOCI-1266)]$ ./okta_app_maker.py 'LA-C19 Splunk CM' 'https://la-c19-splunk-cm.msoc.defpoint.local:8000'
2020-05-12 17:10:20,215 DEBUG main len=3 args=['./okta_app_maker.py', 'LA-C19 Splunk CM', 'https://la-c19-splunk-cm.msoc.defpoint.local:8000']
2020-05-12 17:10:20,216 INFO main Creating app name="LA-C19 Splunk CM" for login url="https://la-c19-splunk-cm.msoc.defpoint.local:8000/saml/acs"
2020-05-12 17:10:20,216 INFO handle_app Checking for existence of app="LA-C19 Splunk CM"
2020-05-12 17:10:20,235 DEBUG _new_conn Starting new HTTPS connection (1): mdr-multipass.okta.com
2020-05-12 17:10:21,083 DEBUG _make_request https://mdr-multipass.okta.com:443 "GET /api/v1/apps?q=LA-C19+Splunk+CM HTTP/1.1" 200 None
2020-05-12 17:10:21,089 DEBUG handle_app Response code 200
2020-05-12 17:10:21,092 DEBUG _new_conn Starting new HTTPS connection (1): mdr-multipass.okta.com
2020-05-12 17:10:22,780 DEBUG _make_request https://mdr-multipass.okta.com:443 "POST /api/v1/apps HTTP/1.1" 200 None
2020-05-12 17:10:22,787 DEBUG create_saml_app Response code: 200
2020-05-12 17:10:22,787 DEBUG handle_app Getting the IDP metadata for app="LA-C19 Splunk CM"
2020-05-12 17:10:22,791 DEBUG _new_conn Starting new HTTPS connection (1): mdr-multipass.okta.com
2020-05-12 17:10:23,629 DEBUG _make_request https://mdr-multipass.okta.com:443 "GET /api/v1/apps/0oa3o3rmh6Ak4Ramo297/sso/saml/metadata HTTP/1.1" 200 2339
2020-05-12 17:10:23,635 DEBUG get_things_from_metadata Response code 200
2020-05-12 17:10:23,649 DEBUG handle_app entityId=http://www.okta.com/exk3o3rmh5x90JIhg297 ssoUrl=https://mdr-multipass.okta.com/app/mdrmultipass_lac19splunkcm_1/exk3o3rmh5x90JIhg297/sso/saml
2020-05-12 17:10:23,650 INFO update_app Updating app_id="0oa3o3rmh6Ak4Ramo297"
2020-05-12 17:10:23,654 DEBUG _new_conn Starting new HTTPS connection (1): mdr-multipass.okta.com
2020-05-12 17:10:24,815 DEBUG _make_request https://mdr-multipass.okta.com:443 "PUT /api/v1/apps/0oa3o3rmh6Ak4Ramo297 HTTP/1.1" 200 None
2020-05-12 17:10:24,821 DEBUG update_app Response code 200
MDR pillar format:
--------------------------------------------------------
{% if grains['id'].startswith('<REPLACEME>') %}
auth_method: "saml"
okta:
# This is the entityId / IssuerId
uid: "http://www.okta.com/exk3o3rmh5x90JIhg297"
# Login URL / Signon URL
login: "https://mdr-multipass.okta.com/app/mdrmultipass_lac19splunkcm_1/exk3o3rmh5x90JIhg297/sso/saml"
{% endif %}
Save the results of this into a temp file for stuffing into Pillars. Run it for the CM, SH, and each HF. This does not assign the apps to groups (yet!)
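A small loop works for that. This is just a sketch following the LA-C19 naming from the example above; real customers may have more than one HF.

```bash
# Run the app maker for each instance and keep the output for the pillar edits.
# Names/URLs follow the LA-C19 example above; extend the list for extra HFs.
CUSTOMER=la-c19
for role in CM SH HF; do
  lower=$(echo "${role}" | tr '[:upper:]' '[:lower:]')
  ./okta_app_maker.py "LA-C19 Splunk ${role}" \
      "https://${CUSTOMER}-splunk-${lower}.msoc.defpoint.local:8000" \
      | tee "/tmp/okta_${CUSTOMER}_${lower}.log"
done
```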
Other files that need attention:
* os_settings.sls has an ugly jinja if/else
* default_acl.conf so that Salt cmds are available to people
* msoc-infrastructure/salt/fileroots/splunk/files/licenses: ... matters.
* pillar/top.sls
* fileroots/top.sls (this may not be needed?)

Everything up to this point is pre-setup / staging. Now you need to merge your changes into develop, and kick the tires on it.
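The merge itself is just normal git; a sketch, with a made-up feature branch name (use a PR if that's the team convention):

```bash
# Branch name is made up -- the point is everything lands on develop before the salt steps below.
git checkout develop
git pull
git merge --no-ff feature/onboard-la-c19
git push origin develop
```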
sudo salt-run fileserver.update
salt 'salt-master*' state.apply --state-verbose=False
We're updating the master configs so it knows the new hotness.
sudo systemctl restart salt-master
sudo salt-run fileserver.file_list
Here we are checking to make sure our new gitfs got picked up. Should see the new files from the new repo(s).
cp ../106-ma-c19/common.tf .
cp ../106-ma-c19/customer.tf .
ln -s ../common/variables.tf .
ln -s ../common/amis.tf .
Update terraform files:
* terraform block in common.tf for the new S3 state bucket and new proj_id variable
* customer.tf, especially the name in the locals block
* variables.tf to add the CIDR. Use the next logical /22.

terraform init
terraform workspace new test
terraform apply
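Optionally, a plan-first variant of that apply if you want to review what's about to be created (plain Terraform CLI, nothing customer-specific):

```bash
# Make sure the new workspace is still selected, then review before applying.
terraform workspace show
terraform plan -out=tf.plan
terraform apply tf.plan
```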
salt 'la-c19*' saltutil.sync_all
salt 'la-c19*' saltutil.refresh_grains
salt 'la-c19*' saltutil.refresh_pillar
salt 'la-c19*' state.apply --state-verbose=False
salt 'la-c19-splunk-cm*' state.sls splunk.master.apply_bundle_master
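Not in the original notes, but a quick way to confirm the bundle push actually took, assuming a standard /opt/splunk install and that you have admin creds to hand:

```bash
# Spot-check the bundle from the cluster master. /opt/splunk and the admin
# credential are assumptions -- substitute whatever is real for this customer.
salt 'la-c19-splunk-cm*' cmd.run \
    '/opt/splunk/bin/splunk show cluster-bundle-status -auth admin:CHANGEME'
```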
Check the index storage paths on the indexers with a search like:
| rest /services/data/indexes splunk_server=*indexer*
| stats values(homePath_expanded) as home, values(coldPath_expanded) as cold, values(tstatsHomePath_expanded) as tstats by title
| sort home
Naming Scheme: onboarding- Example: onboarding-la-covid
salt -C '*splunk-indexer* or *splunk-sh* or *splunk-hf*' state.sls splunk.maxmind.pusher --state-verbose=False --state-output=terse
salt '<customer-name>*' grains.get environment
If the grain is not there, follow troubleshooting steps in Salt Upgrade Notes.md