
Splunk Notes


Change user to Splunk

sudo -iu splunk


How to apply git changes to the CM or customer DS. Be patient, it is Splunk. Review the logs in the salt index.

Chris broke Jenkins, but he moved the splunk git repo to gitfs

  1. Add your changes to the appropriate git repo (msoc-moose-cm)
  2. Then use the salt state to push the changes and apply the new bundle (a verification sketch follows the commands):

    salt 'moose-splunk-cm*' state.sls splunk.master.apply_bundle_master
    salt 'afs-splunk-cm*' state.sls splunk.master.apply_bundle_master
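
To confirm the new bundle actually took, you can check bundle status on the CM itself. A minimal sketch, assuming you can run the Splunk CLI as the splunk user and authenticate when prompted:

    sudo -iu splunk
    /opt/splunk/bin/splunk show cluster-bundle-status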
    

Apply the git changes to the splunk UFs (Salt Deployment Server)

Moose DS has a salt file for pushing apps out directly to UFs.

Customer DS:

    salt 'afs-splunk-ds*' state.sls splunk.deployment_server.reload_ds
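
The reload state presumably just wraps the Splunk CLI reload. If you need to do it by hand, or want to confirm the UFs are checking in afterwards, run these on the DS as the splunk user:

    /opt/splunk/bin/splunk reload deploy-server
    /opt/splunk/bin/splunk list deploy-clients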

To view the splunk command output, look at the logs in Splunk under
return.cmd_...changes.stdout or stderr:

index=salt sourcetype=salt_json fun="state.sls"
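
A variant of that search that pulls the stdout/stderr into a table (the exact field paths depend on the salt returner output, so treat them as assumptions):

index=salt sourcetype=salt_json fun="state.sls"
| table _time host return.*.changes.stdout return.*.changes.stderr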

Splunk License

Splunk CM is the license master and the salt master is used to push out a new license. Each customer has its own license.

Updating Splunk License

Update the license file at salt/fileroots/splunk/files/licenses/<customer>/. Use the scripts in that folder to remove expired licenses, or view license details.

salt '*cm*' state.sls splunk.license_master --output-diff

To remove expired licenses, remove the license from the salt code and push the changes to the master branch. Then use salt to remove the licenses. This will cause Splunk to restart and the license to be removed from the license master.
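
To double-check which licenses the license master ends up with, a quick REST search works (field names are from the licenser endpoint and worth verifying):

| rest /services/licenser/licenses
| table label, status, expiration_time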

SEARCHES

#Splunk
| tstats values(sourcetype) where index=* by index

#collectd
| mstats count WHERE index=collectd metric_name=* by host, metric_name


#aws cloudtrail 
index=app_aws sourcetype=aws:cloudtrail

#proxy
index=web sourcetype=squid:access:json

#Okta
index=auth sourcetype="OktaIM2:log"

CLI search
/opt/splunk/bin/splunk search 'index=bro' -earliest_time '-5m' -output rawdata > test.text

#NGA data request for checkpoint logs
index=network sourcetype=qos_syslog (service=443 OR service=80) NOT src=172.20.109.16 NOT src=172.20.109.17 NOT dst=172.20.109.16 NOT dst=172.20.109.17 NOT (action=Drop src=172.20.8.3)

Updated:
index=network sourcetype=qos_syslog (service=443 OR service=80) NOT (action=Drop src=172.20.8.3)

#Vault
index=app_vault

#Splunk
| rest /services/data/indexes/
| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te

#are we freezing data off?
index=_internal asyncfreezer freeze succeeded NOT ( _internaldb OR _introspection OR lastchance ) source=*splunkd.log

coldToFrozenScript

Yes, this is a mess. Moose is running a version of Splunk that breaks when the coldToFrozen script is pushed from the CM in an app. To get around this, I moved the script to /usr/local/bin. The other customers have the script in the app.

ERROR: running coldToFrozen gives a SyntaxError.
SOLUTION: upgrade the awscli with pip3 (run the splunk.indexer state).
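
If you need to do the awscli fix by hand instead of re-running the state, it is roughly the following; the script name and bucket path below are placeholders, use whatever actually lives in /usr/local/bin:

    sudo pip3 install --upgrade awscli
    python3 /usr/local/bin/coldToFrozen.py /opt/splunk/var/lib/splunk/<index>/colddb/<bucket>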

Make Splunk Proxy Settings Great Again

Read This: https://jira.xdr.accenturefederalcyber.com/browse/MSOCI-1516

Splunk HFs have an issue where the OS is configured to use the proxy. This causes the Splunk HF apps to use the proxy even when trying to reach 127.0.0.1, so the apps time out. To fix this we added extra settings to splunk-launch.conf. See salt/fileroots/splunk/remove_proxy_splunklaunch.sls. Because of this
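
For reference, the kind of thing that ends up in splunk-launch.conf is environment overrides along these lines (a sketch, not the exact contents of the salt file):

    # splunk-launch.conf -- keep Splunk processes from sending 127.0.0.1 traffic through the OS proxy
    NO_PROXY=127.0.0.1,localhost
    no_proxy=127.0.0.1,localhost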