How to set up a new customer in GovCloud
Assumes your GitHub repos are checked out directly under your ~ directory; adjust paths accordingly.
Assumes this is a fresh account.
Assumes you're on macOS.
# There may be more that I just already had. If you find them, add them:
pip3 install passlib
pip3 install requests
pip3 install dictdiffer
If you don't have an OKTA API key then you should go get one.
Follow the instructions in (AWS New Account Setup Notes.md) to bootstrap the account.
You will need the following. Setting environment variables will help with some of the future steps, but manual substitution can be done, too.
IMPORTANT: Each time you run this, it will generate new passwords. So make sure you use the same window to perform all steps!
Do you have a Splunk license yet? No? Can you use a temp/dev license until the real one shows up? I hate doing that, but not much of a choice.
Commands tested on OSX and may not (probably won't) work on windows/linux.
export OKTA_API_TOKEN=<YOUR OKTA API KEY>
INITIALS=bp
TICKET=MSOCI-1550
# prefix should have hyphens
CUSTOMERPREFIX=modelclient
PASS4KEY=`uuidgen | tr '[:upper:]' '[:lower:]'`
DISCOVERYPASS4KEY=`uuidgen | tr '[:upper:]' '[:lower:]'`
ADMINPASS="`openssl rand -base64 24`"
MINIONPASS="`openssl rand -base64 24`"
ESSJOBSPASS="`openssl rand -base64 24`"
# If the below doesn't work for you, generate your SHA-512 hashes for splunk however you'd like
ADMINHASH="`echo $ADMINPASS | python3 -c "from passlib.hash import sha512_crypt; print(sha512_crypt.hash(input(), rounds=5000))"`"
MINIONHASH="`echo $MINIONPASS | python3 -c "from passlib.hash import sha512_crypt; print(sha512_crypt.hash(input(), rounds=5000))"`"
ESSJOBSHASH="`echo $ESSJOBSPASS | python3 -c "from passlib.hash import sha512_crypt; print(sha512_crypt.hash(input(), rounds=5000))"`"
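Optional: if you want to confirm the hashes actually correspond to the generated passwords before recording anything, a quick check with the same passlib you installed above (each command should print True):
python3 -c "from passlib.hash import sha512_crypt; import sys; print(sha512_crypt.verify(sys.argv[1], sys.argv[2]))" "$ADMINPASS" "$ADMINHASH"
python3 -c "from passlib.hash import sha512_crypt; import sys; print(sha512_crypt.verify(sys.argv[1], sys.argv[2]))" "$MINIONPASS" "$MINIONHASH"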
Connect to production VPN
Log into Vault at https://vault.pvt.xdr.accenturefederalcyber.com (not yet available; use the legacy URL for now: https://vault.mdr.defpoint.com)
Record the following into engineering/customer_slices/${CUSTOMERPREFIX}
echo $ADMINPASS # record as `${CUSTOMERPREFIX}-splunk-cm admin`
echo "${CUSTOMERPREFIX}-splunk-cm admin"
At this time, we don't set the others on a per-account basis through salt, though it looks like admin password has been changed for some clients.
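If you prefer the vault CLI to the UI, something along these lines records the same secret. This is a sketch only: it assumes engineering is a KV mount and that field names with spaces match how other customer slices are stored, so mirror an existing slice rather than trusting this verbatim.
vault kv put engineering/customer_slices/${CUSTOMERPREFIX} "${CUSTOMERPREFIX}-splunk-cm admin"="${ADMINPASS}"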
You may have already created a new branch in xdr-terraform-live in a previous step.
cd ~/msoc-infrastructure
git checkout develop
git fetch --all
git pull origin develop
git checkout -b feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
#if needed...
cd ~/xdr-terraform-live
git checkout master
git fetch --all
git pull origin master
git checkout -b feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
cd tools/okta_app_maker
./okta_app_maker.py ${CUSTOMERPREFIX}' Splunk SH [Prod] [GC]' "https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com"
./okta_app_maker.py ${CUSTOMERPREFIX}' Splunk CM [Prod] [GC]' "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8000"
./okta_app_maker.py ${CUSTOMERPREFIX}' Splunk HF [Prod] [GC]' "https://${CUSTOMERPREFIX}-splunk-hf.pvt.xdr.accenturefederalcyber.com:8000"
Each run of okta_app_maker.py will generate output similar to:
{% if grains['id'].startswith('<REPLACEME>') %}
auth_method: "saml"
okta:
  # This is the entityId / IssuerId
  uid: "http://www.okta.com/exk5kxd31hsbDuV7m297"
  # Login URL / Signon URL
  login: "https://mdr-multipass.okta.com/app/mdr-multipass_modelclientsplunkshtestgc_1/exk5kxd31hsbDuV7m297/sso/saml"
{% endif %}
Substitute REPLACEME with ${CUSTOMERPREFIX}-splunk-sh, -cm, or -hf and record all three entries; you will need all 3.
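For example, with CUSTOMERPREFIX=modelclient the search head entry's first line becomes:
{% if grains['id'].startswith('modelclient-splunk-sh') %}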
Add permissions for the okta apps:
1) Log into the okta webpage (https://mdr-multipass.okta.com/)
1) Go to Admin->Applications
1) For each ${CUSTOMERPREFIX} application, click 'Assign to Groups' and add the following groups:
1) While logged into Okta, add the Splunk logo to the apps. It is located in msoc-infrastructure/tools/okta_app_maker/okta-logo-splunk.png
mkdir ../../salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
cd ../../salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
# Copy license into this directory.
# Rename license to match this format trial-<license-size>-<expiration-date>.lic
# e.g. trial-15gb-20210305.lic
# If license is not a trial, match this format SO<sales order number>_PO<purchase order number>.lic
# e.g. SO180368_PO7500026902.lic
# If license is not yet available, ... ? Not sure. For testing, I copied something in there but that's not a good practice.
Add the new customer to ~/msoc-infrastructure/salt/pillar/os_settings.sls (or vim ../../../../../pillar/os_settings.sls from here), under the jinja if/else; copying an existing customer block is easiest (use "y" to yank and "p" to paste in vim). Each customer gets a pillar file for its own variables. If you are setting up the syslog servers with Splunk, you will need to replace the FIXME value in the deployment_server pillar. The correct value of the deployment_server pillar is a customer-provided DNS address pointing to the IP of the LCP deployment server.
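As an illustration only (the real key lives in the customer pillar skeleton, so follow its structure), the deployment_server value ends up looking something like this, where the hostname is a placeholder for the customer-provided DNS name of their LCP deployment server:
deployment_server: lcp-ds.customer.example.com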
IMPORTANT: In your sed commands, DISCOVERYPASS4KEY must be replaced before PASS4KEY, because PASS4KEY is a substring of DISCOVERYPASS4KEY; replacing PASS4KEY first would corrupt the DISCOVERYPASS4KEY placeholder and the second substitution would never match.
#cd ~/msoc-infrastructure/salt/pillar/
cd ../../../../../pillar/
# Append the customer variables to a topfile
echo " '${CUSTOMERPREFIX}*':" >> top.sls
echo " - ${CUSTOMERPREFIX}_variables" >> top.sls
# Generate the password file
cat customer_variables.sls.skeleton \
| sed s#PREFIX#${CUSTOMERPREFIX}#g \
| sed s#DISCOVERYPASS4KEY#${DISCOVERYPASS4KEY}#g \
| sed s#PASS4KEY#${PASS4KEY}#g \
| sed s#MINIONPASS#${MINIONPASS}#g \
| sed s#ESSJOBSPASS#${ESSJOBSPASS}#g \
> ${CUSTOMERPREFIX}_variables.sls
# Append okta configuration
cat >> ${CUSTOMERPREFIX}_variables.sls
# Paste the 3 okta entries here, and finish with ctrl-d
Review the file to make sure everything looks good.
vim ${CUSTOMERPREFIX}_variables.sls
Add to gitfs pillars and allow salt access:
# In the salt_master.sls file, copy one of the customer_repos and update with the new customer prefix. Update both the CM repo and the DS repo (deployment_servers), unless you know there will not be LCP/POP nodes.
vim salt_master.sls
# Add customer prefix to ACL
vim ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
# Example substitution (swap in your customer prefix for ca-c19):
:%s/frtib\*/frtib\* or ca-c19\*/
# Add Account number to xdr_asset_inventory.sh under GOVCLOUDACCOUNTS
vim ../fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh
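A quick sanity check that both edits landed (run from salt/pillar, matching the relative paths above):
grep -n "${CUSTOMERPREFIX}" ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
grep -n -A3 "GOVCLOUDACCOUNTS" ../fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh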
Migrate changes through to master branch:
git add ../fileroots/splunk/files/licenses/${CUSTOMERPREFIX}/<your-license-file>
git add ../fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
git add ../fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh
git add salt_master.sls top.sls ${CUSTOMERPREFIX}_variables.sls os_settings.sls
git commit -m "Adds ${CUSTOMERPREFIX} variables. Will promote to master immediately."
git push origin feature/${INITIALS}_${TICKET}_CustomerSetup_${CUSTOMERPREFIX}
Follow the link to create the PR, and then submit another PR to master and get the changes merged in to master branch.
For now, we only use a repository for the CM and POP. Clearly, we need one for the others.
Create a new repository using the cm template:
a. Name it msoc-${CUSTOMERPREFIX}-cm
b. Give it the description: Splunk Cluster Master Configuration for [CUSTOMER DESCRIPTION]
c. Set permissions to 'Private'
d. Click 'create repository from template'
Repeat for the pop repo, unless the customer will not have pop nodes:
a. Name it msoc-${CUSTOMERPREFIX}-pop
b. Give it the description: Splunk POP Configuration for [CUSTOMER DESCRIPTION]
c. Set permissions to 'Private'
d. Click 'create repository from template'
Clone and modify the password in the CM repo (TODO: just take care of this in salt):
mkdir ~/tmp
cd ~/tmp
git clone git@github.xdr.accenturefederalcyber.com:mdr-engineering/msoc-${CUSTOMERPREFIX}-cm.git
cd msoc-${CUSTOMERPREFIX}-cm
sed -i "" "s#ADMINHASH#${ADMINHASH}#" passwd
sed -i "" "s#MINIONHASH#${MINIONHASH}#" passwd
git add passwd
git commit -m "Stored hashed passwords"
git push origin master
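Optionally, confirm the placeholders were actually replaced (ideally before the push); if the grep finds nothing, the sed substitutions worked:
grep -E 'ADMINHASH|MINIONHASH' passwd || echo 'placeholders replaced'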
Now that we have the git repos created, let's update the salt master.
ssh gc-prod-salt-master
salt 'salt*' cmd.run 'salt-run fileserver.update'
salt 'salt*' state.sls salt_master.salt_master_configs --output-diff test=true
sudo salt 'salt*' state.sls salt_master.salt_posix_acl --output-diff test=true
exit
During the bootstrap process, you copied the skeleton across. Review the variables.
cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
vim account.hcl # Fill in all "TODO" items. Leave the "LATER" variables for later steps.
find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.21.0/?ref=v1.x.x/ {} \;
find . -name "terragrunt.hcl" -not -path "*/.terragrunt-cache/*" -exec sed -i '' s/?ref=v1.0.0/?ref=v1.x.x/ {} \;
Did you get them all? Don't forget about the subfolders in account_standards_regional.
cat */terragrunt.hcl | grep ref | grep -v 1.xx.xx
cat */*/terragrunt.hcl | grep ref
Add the new account to account_map["prod"] in both of the following files:
~/xdr-terraform-live/prod/aws-us-gov/partition.hcl (OR vim ../partition.hcl)
~/xdr-terraform-live/common/aws-us-gov/partition.hcl (OR vim ../../../common/aws-us-gov/partition.hcl)
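After editing, a quick grep confirms the new entry made it into both files (this assumes the map entry contains the customer prefix; grep for the account number instead if that is what the existing entries use):
grep -n "${CUSTOMERPREFIX}" ~/xdr-terraform-live/prod/aws-us-gov/partition.hcl ~/xdr-terraform-live/common/aws-us-gov/partition.hcl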
cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2
OR cd ../mdr-prod-c2/
Copy and paste these commands into the command line and run them.
for module in 005-account-standards-c2 008-transit-gateway-hub
do
pushd $module
terragrunt apply
popd
done
Or as a one-liner:
for module in 005-account-standards-c2 008-transit-gateway-hub; do pushd $module; terragrunt apply; popd; done
cd ~/xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/
cd ../../../common/aws-us-gov/afs-mdr-common-services-gov/
Apply the modules:
for module in 008-xdr-binaries 010-shared-ami-key
do
pushd $module
terragrunt apply
popd
done
The new AWS account needs permissions to access the AMIs before trying to create EC2 instances. Replace the aws-account-id in the below command.
cd ~/xdr-terraform-live/bin/ # OR cd ../../../bin/
# Dump a list of AMIs matching the filter just to get a good looky-loo
AWS_PROFILE=mdr-common-services-gov update-ami-accounts 'MSOC*'
# Now do the actual sharing of the AMIs with your new account
AWS_PROFILE=mdr-common-services-gov update-ami-accounts 'MSOC*' <aws-account-id>
One common problem here: you may need to add region= to your $HOME/.aws/config entry for mdr-common-services-gov, like so:
[profile mdr-common-services-gov]
source_profile = govcloud
role_arn = arn:aws-us-gov:iam::701290387780:role/user/mdr_terraformer
region = us-gov-east-1
color = ff0000
Also add the new account number to the packer build so that when new AMIs get built they are shared automatically with this account.
cd ~/msoc-infrastructure/packer # OR cd ../../msoc-infrastructure/packer
vi Makefile
# Add the account(s) to GOVCLOUD_ACCOUNTS / COMMERCIAL_ACCOUNTS
# as needed. PR it and exit
cd ../../xdr-terraform-live/bin/
The xdr-terraform-live/bin directory should be in your path. You will need it for this step.
(IMPORTANT: if you are certain everything is good to go, you can put yes yes | in front of the terragrunt-apply-all to bypass prompts. This does not leave you an out if you make a mistake, however, because it is difficult to break out of terragrunt/terraform without causing issues.)
cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX} # OR cd ../prod/aws-us-gov/mdr-prod-${CUSTOMERPREFIX}
terragrunt-apply-all --skipqualys --notlocal
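If you do take the non-interactive route described above, it looks like this (same flags as the interactive run):
yes yes | terragrunt-apply-all --skipqualys --notlocal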
You might run into an error when applying the module 006-account-standards.
Error creating CloudTrail: InsufficientS3BucketPolicyException: Incorrect S3 bucket policy is detected for bucket: xdr-cloudtrail-logs-prod
Resolution: Did you run terragrunt apply in mdr-prod-c2/005-account-standards-c2 ???
You might run into an error when applying the VPC module 010-vpc-splunk. The error reads as:
Error: Invalid for_each argument
on tgw.tf line 26, in resource "aws_route" "route_to_10":
26: for_each = toset(concat(module.vpc.private_route_table_ids, module.vpc.public_route_table_ids))
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
Workaround is:
cd 010-vpc-splunk
terragrunt apply -target module.vpc
terragrunt apply
cd ..
You might run into an error when applying the test instance module 025-test-instance. The error reads as:
Error: Your query returned no results. Please change your search criteria and try again.
Workaround is:
You forgot to share the AMI with the new account. See the instructions above and run this command in the appropriate folder and replace the aws-account-id.
cd ~/xdr-terraform-live/bin/
AWS_PROFILE=mdr-common-services-gov update-ami-accounts <aws-account-id>
For complete details, see https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/wiki/Qualys.
Short version:
1) Set qualys_connector_externalid in account.hcl (search for 'LATER').
2) cd 021-qualys-connector-role
3) terragrunt-local apply (it will output qualys_role_arn)
After waiting 1-2 minutes, hit the refresh icon. It should come back with a number of assets (probably about 6), no errors, and an hourglass for a bit.
Push the xdr-terraform-live changes and open a PR in git.
Substitute environment variables here:
ssh gc-prod-salt-master
CUSTOMERPREFIX=<enter customer prefix>
sudo salt-key -L | grep $CUSTOMERPREFIX # Wait for all 6 servers to be listed (cm, sh, hf, and 3 idxs)
sleep 300 # Wait 5 minutes
salt ${CUSTOMERPREFIX}\* test.ping
# Repeat until 100% successful
salt ${CUSTOMERPREFIX}\* saltutil.sync_all
salt ${CUSTOMERPREFIX}\* saltutil.refresh_pillar
salt ${CUSTOMERPREFIX}\* saltutil.refresh_modules
salt ${CUSTOMERPREFIX}\* grains.get environment
salt ${CUSTOMERPREFIX}\* state.highstate --output-diff
# Review changes from above. I've seen indexers get hung; if they do, see the note below
# splunk_service may fail, this is expected (it's waiting for port 8000)
salt ${CUSTOMERPREFIX}\* test.version
salt ${CUSTOMERPREFIX}\* pkg.upgrade  # This may break connectivity if there is a salt minion upgrade!
salt ${CUSTOMERPREFIX}\* system.reboot
# Wait 5+ minutes
salt ${CUSTOMERPREFIX}\* test.ping
# Apply the cluster bundle
salt ${CUSTOMERPREFIX}\*-cm\* state.sls splunk.master.apply_bundle_master --output-diff
exit
Note: Systems may hang on their bootup highstate. This appears to be a race condition between the startup of firewalld and its configuration; there is no known solution at this time. If a system hangs, reboot it and apply a highstate again.
Because we are not managing the splunk.secret, the pass4SymmKey gets encrypted into different values on each of the indexers. This causes the file containing the pass4SymmKey to be updated by Splunk on every Salt highstate. To resolve this, we would need to manage the splunk.secret file.
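If you want to see the churn for yourself, compare the encrypted values across the indexers; each will show a different ciphertext for the same plaintext key. This is a sketch only, assuming the clustering pass4SymmKey lives in system/local/server.conf on the peers; the exact file may differ in this deployment.
salt "${CUSTOMERPREFIX}-splunk-i*" cmd.run 'grep -A2 "\[clustering\]" /opt/splunk/etc/system/local/server.conf'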
TODO: Document a step for updating pillars/${CUSTOMERPREFIX}_variables.sls with encrypted forms of the passwords.
Log into https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com
echo "https://${CUSTOMERPREFIX}-splunk.pvt.xdr.accenturefederalcyber.com"
echo "https://${CUSTOMERPREFIX}-splunk-cm.pvt.xdr.accenturefederalcyber.com:8000"
It should "just work".
Should see 3 indexers:
index=_* | stats count by splunk_server
Should see all but the HF ( you might see the HF ):
index=_* | stats count by host
Note from Fred: I'm leaving this next one here, copied from the legacy instructions, but I'm not sure where it's supposed to be run. My test on the search head didn't have any results.
Note from Duane: Should work anywhere. Main goal was to see that the cluster bundle got pushed correctly and all the indexes we were expecting to see were listed. I should probably improve this search at some point.
Note from Brad: Donkey! ( see Shrek 2 Dinner scene. https://www.youtube.com/watch?v=rmpFmJfEZXs ). You should see non-default indexes such as app_* from the below search.
| rest /services/data/indexes splunk_server=*splunk-i*
| stats values(homePath_expanded) as home, values(coldPath_expanded) as cold, values(tstatsHomePath_expanded) as tstats by title
| sort home
shasum -a 256 splunk-enterprise-security_620.spl
Temporarily modify the etc/system/local/web.conf to allow large uploads
[settings]
max_upload_size = 1024
On the salt master...
CUSTOMERPREFIX=modelclient
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'ls -larth /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'touch /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'chown splunk: /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "[settings]" > /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'echo "max_upload_size = 1024" >> /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'cat /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'rm -rf /opt/splunk/etc/system/local/web.conf'
salt ${CUSTOMERPREFIX}-splunk-sh* cmd.run 'systemctl restart splunk'
Note: Once the Legacy Monitoring Console has moved to GC, the SGs will need to be fixed.
echo $PASS4KEY
sudo salt-run fileserver.update
salt splunk-mc* state.sls splunk.monitoring_console --output-diff test=true
salt splunk-mc* cmd.run 'cat /opt/splunk/etc/apps/connected_clusters/local/server.conf'
echo $ADMINPASS
salt ${CUSTOMERPREFIX}-splunk-sh* pillar.get secrets:splunk_admin_password
salt -C '*splunk-indexer* or *splunk-idx* or *splunk-sh* or *splunk-hf*' state.sls splunk.maxmind.pusher --state-verbose=False --state-output=terse
Copy the Blank Template LCP Build Sheet, rename it with the customer prefix, and find-and-replace as needed.
Got customer public IPs after you were done standing up the Splunk cluster? Add the IPs to account.hcl and reapply 160-splunk-indexer-cluster to add the customer IPs for the splunk environment.
The IPs also need to be allowed for the salt-master, sensu, etc.
vim xdr-terraform-live/globals.hcl
Edit the c2_services_external_ips map and be sure to add a description.
Then reapply 095-instance-sensu, 080-instance-repo-server, and 071-instance-salt-master, or use terragrunt-apply-all.
For Legacy, update these files: terraform/02-msoc_vpc/security-groups.tf and terraform/common/variables.tf, then reapply 02-msoc_vpc. This should update the salt master and repo. You can use --target, I won't tell on you.
These commands will add the pop settings pillar.
Copy an existing ${CUSTOMERPREFIX}_pop_settings.sls and rename it.
Add LCP nodes to the pillar top file:
cd salt/pillar
echo " '${CUSTOMERPREFIX}* and G@msoc_pop:True':" >> top.sls
echo " - match: compound" >> top.sls
echo " - ${CUSTOMERPREFIX}_pop_settings" >> top.sls
Add LCP nodes to the salt top file:
cd salt/fileroots
echo " '${CUSTOMERPREFIX}-splunk-syslog*':" >> top.sls
echo " - splunk.heavy_forwarder" >> top.sls
echo " - splunk.pop_hf_license" >> top.sls
Commit all the changes to git and open PR. Once the settings are in the master branch, come back and run these commands.
CUSTOMERPREFIX=modelclient
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" test.ping
# Are the LCP images up-to-date on the salt minion version? See Salt Upgrade Notes.md.
# Upgrade salt minions before syncing ec2_tags; it needs py3.
# Make sure the environment grain is set before trying to upgrade salt.
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" test.version
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" saltutil.sync_all
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" saltutil.refresh_pillar
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" saltutil.refresh_modules
# Ensure the msoc_pop grain is working properly and set to True
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get msoc_pop
# Ensure the ec2:billing_products grain is EMPTY (Do we get the RH subscription from AWS? Not for LCP nodes)
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get ec2:billing_products
# Ensure the environment grain is available and set to prod
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" grains.get environment
# Make sure the activation-key pillar is available
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" pillar.get os_settings:rhel:rh_subscription:activation-key
# LCP nodes need manual RH Subscription enrollment; before removing test=true, ensure the command is filled out with the pillar
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications.rhel_registration test=true
salt -C "${CUSTOMERPREFIX}* and G@msoc_pop:True" state.sls os_modifications
Highstate the ${CUSTOMERPREFIX}-splunk LCP nodes, starting with the DS:
salt ${CUSTOMERPREFIX}-splunk-ds* state.highstate --output-diff
salt ${CUSTOMERPREFIX}-splunk-syslog-* state.sls os_modifications
REMEMBER: Our Customers are responsible for setting up the salt minion with grains and allow traffic through the outbound firewall. If they have not done that yet, you will get more errors.
ISSUE: Help, the environment grain is not showing up!
SOLUTION: This command will add a static grain in /etc/salt/minion.d/cloud_init_grains.conf.
salt 'target' state.sls salt_minion.salt_grains pillar='{"environment": "prod"}' test=true --output-diff
salt 'target' cmd.run 'rm -rf /var/cache/salt/minion/extmods/grains/ec2_tags.py'
Then restart the minion and refresh pillars:
salt 'target' service.restart salt-minion
salt 'target' saltutil.refresh_pillar
ISSUE: [ERROR ][2798] Failed to import grains ec2_tags, this is due most likely to a syntax error
SOLUTION: The ec2_tags grain needs Python 3; upgrade Salt!
ISSUE:
http://pkg.scaleft.com/rpm/repodata/repomd.xml: [Errno 12] Timeout on http://pkg.scaleft.com/rpm/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
SOLUTION: Fix connectivity issues to scaleft
TEMP FIX: yum --disablerepo=okta_asa_repo_add pkg.upgrade
cmd.run 'yum install python-virtualenv -y --disablerepo=okta_asa_repo_add'
ISSUE:
2021-02-16 21:25:51,126 [salt.loaded.int.module.cmdmod:854 ][ERROR ][26641] Command '['useradd', '-U', '-M', '-d', '/opt/splunk', 'splunk']' failed with return code: 9
2021-02-16 21:25:51,127 [salt.loaded.int.module.cmdmod:858 ][ERROR ][26641] stderr: useradd: group splunk exists - if you want to add this user to that group, use -g.
2021-02-16 21:25:51,127 [salt.loaded.int.module.cmdmod:860 ][ERROR ][26641] retcode: 9
2021-02-16 21:25:51,127 [salt.state :328 ][ERROR ][26641] Failed to create new user splunk
SOLUTION: Manually create user and add to splunk group OR delete group and create user+group in one command.
cmd.run 'useradd -M -g splunk -d /opt/splunk splunk'
ISSUE:
splunk pkg.install
Public key for splunk-8.0.5-a1a6394cc5ae-linux-2.6-x86_64.rpm is not installed
Retrieving key from https://docs.splunk.com/images/6/6b/SplunkPGPKey.pub
GPG key retrieval failed: [Errno 14] curl#35 - "TCP connection reset by peer"
TEMP FIX: cmd.run 'yum --disablerepo=okta_asa_repo_add -y --nogpgcheck install splunk'