
mac, phantom, smartstore,

Brad Poulton 3 years ago
parent
commit
b87ce9e175

+ 11 - 0
AFS Macbook Notes.md

@@ -110,3 +110,14 @@ git config --global user.email "frederick.t.damstra@accenturefederal.com"
 Run 'Keychain Access'
 Import files/mdr\ root\ ca.crt
 Set certificate as trusted
+
+
+### Local admin for your user with BeyondTrust installed
+
+Become root (e.g. with `sudo -i`), then:
+
+dscl . -append /Groups/admin GroupMembership duane.e.waddle
+
+To undo:
+
+dscl . -delete /Groups/admin GroupMembership duane.e.waddle
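+
+To verify the change took effect (a quick check, not part of the original steps):
+
+dscl . -read /Groups/admin GroupMembership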

+ 8 - 1
Phantom Upgrade Notes.md

@@ -54,7 +54,7 @@ Phantom is shutting down for an update in 5 minutes!
 Stop Phantom 
 `/opt/phantom/bin/stop_phantom.sh`
 
-Take an AWS snapshot OF ALL DRIVES in addition to the automatic snapshots! Phantom uses the /tmp directory in addition to the /opt directory. Be sure to include the EBS volume that is storing the /opt data. It is a 500 GB volume (prod) or a 60 GB volume (TEST).
+Take an AWS snapshot OF ALL DRIVES in addition to the automatic snapshots! Phantom uses the /tmp directory in addition to the /opt directory. Be sure to include the EBS volume that is storing the /opt data. It is a 1000 GB volume (prod) or a 60 GB volume (TEST).
 
 Update the profile, InstanceId, and tag and run this command to create snapshots of all volumes.
 ```
@@ -128,6 +128,8 @@ NOTE: You should ignore the "Complete!" messages. They are not indicating that t
 
 Upgrade apps after a successful upgrade. 
 
+Unsilence Sensu Phantom
+
 ## Verify that Phantom is working properly
 - create new playbook
 - run playbook
@@ -137,6 +139,11 @@ Upgrade apps after a successful upgrade.
 - Ensure you can edit an Event
 - ?
 
+# 5.1.0
+1/2022
+To allow Phantom to run on a system without IPv6 enabled, edit /etc/nginx/nginx.conf and comment out line 40 (`listen [::]:80;`). This allows nginx to start and Phantom to work again.
+Splunk case number: 2847652
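+
+After the edit, the IPv6 listener line should look roughly like this (a sketch; the exact line number and surrounding context may differ between Phantom versions):
+```
+# listen [::]:80;
+```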
+
 # 4.10.6
 08/2021
 Minor upgrade to Nginx due to Vuln scanner findings. Also removes use of TLSv1.1.

+ 129 - 5
Splunk Smartstore Thaw Notes.md

@@ -14,14 +14,138 @@ How much data are you going to be thawing? It doesn't matter which indexer you c
 
 ## Create a new index
 
-- add index to CM repo and push it to the indexers. Ensure a thawedPath is specified. Name the index something similar to the smartstore index such as nonS2_index-name. Do not specify a remotePath in the indexer.
+- Add the index to the CM repo and push it to the indexers. Ensure a thawedPath is specified. Name the index something similar to the smartstore index, such as nonS2_index-name. Do not specify a remotePath for the index. The thawedPath can NOT have volume: in it. File location: master-apps/moose_all_indexes/local/indexes.conf
+
+```
+#No Smartstore for thawing
+#/opt/splunkdata/hot/splunk_db
+[nons2_wineventlog]
+homePath   = volume:normal_primary/$_index_name/db
+coldPath   = volume:normal_primary/$_index_name/colddb
+thawedPath = $SPLUNK_DB/nons2_wineventlog/thaweddb
+tstatsHomePath = volume:normal_primary/$_index_name/datamodel_summary
+#override defaults to ensure the data doesn't go anywhere
+remotePath = 
+frozenTimePeriodInSecs = 188697600
+```
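+
+For reference: frozenTimePeriodInSecs = 188697600 works out to roughly six years, and the blank remotePath overrides any volume-level default so nothing gets uploaded back to S3 (per the comment in the stanza above).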
+
+## Restore the buckets from Glacier to S3
+
+RUN s3cmd ON YOUR LAPTOP to restore the files from Glacier, NOT the server (maybe? Does it support assume-role (STS) auth yet?). The easiest way to restore S3 objects from Glacier is to use the s3cmd script. Run these commands AFTER you set up awscli; awscli grabs the keys and s3cmd uses those keys (I think?).
+https://github.com/s3tools/s3cmd
+
+```
+wget https://github.com/s3tools/s3cmd/archive/refs/heads/master.zip
+unzip master.zip
+cd s3cmd-master
+./s3cmd --configure
+```
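+
+Note (not in the original notes): `./s3cmd --configure` writes the settings to `~/.s3cfg` by default, which the later s3cmd commands read.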
+
+s3cmd configuration
+`./s3cmd --dump-config`
+```
+  Default Region: us-gov-east-1
+  S3 Endpoint: s3.us-gov-east-1.amazonaws.com
+  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3-us-gov-east-1.amazonaws.com
+  Encryption password:
+  Path to GPG program: /bin/gpg
+  Use HTTPS protocol: True
+  HTTP Proxy server name: <blank> ( two spaces worked )
+  HTTP Proxy server port: 80
+  host_base = s3.us-gov-east-1.amazonaws.com
+  host_bucket = %(bucket)s.s3-us-gov-east-1.amazonaws.com
+  bucket_location = us-gov-east-1
+```
+Don't test access! Just save the config. 
+`./s3cmd ls`
+
+Unthaw just one bucket
+`./s3cmd restore --restore-priority=expedited --restore-days=3 --recursive s3://xdr-moose-test-splunk-frozen/app_vault/frozendb/db_1620941330_1619445054_9_D783E822-5127-4A28-B6B5-6F42F5ACEC99/`
+
+`~/s3cmd-master/s3cmd restore --restore-priority=expedited --restore-days=3 --recursive s3://xdr-moose-test-splunk-frozen/app_vault/frozendb/db_1620941733_1619458749_12_E9AE582F-FAA8-4D0C-974F-148229DEB8A1/`
+
+Unthaw one whole index only. This will take roughly 12-15 seconds per bucket (1000 buckets ≈ 12,000-15,000 seconds, or about 3-4 hours). Use tmux to keep the session alive!!! Note this will not thaw rb_* buckets if you use the exclude.
+`./s3cmd restore --restore-priority=expedited --restore-days=14 --recursive --exclude="frozendb/rb_*" s3://xdr-afs-prod-splunk-frozen/wineventlog/`
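+
+To check whether a restore has finished before copying, run head-object against any file inside the bucket, wherever awscli is configured (a hedged example; the key below is illustrative):
+`aws s3api head-object --bucket xdr-afs-prod-splunk-frozen --key wineventlog/frozendb/<bucket_dir>/rawdata/journal.gz`
+The `Restore` field reports `ongoing-request="false"` once the object is available.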
 
 ## Copy the buckets from S3 into the thawpath in the new index
 
-`aws s3`
+Ensure the thaweddb folder exists after pushing out the new bundle to the indexers, then copy the bucket into that dir. The aws cli is probably not installed, so see the Splunk ma-c19 offboarding notes for creating a python3 venv.
+
+
+`aws s3 ls`
+This one worked AFTER the files were restored:
+
+`~/awscli/bin/aws s3 cp s3://xdr-moose-test-splunk-frozen/app_vault/frozendb/db_1620941330_1619445054_9_D783E822-5127-4A28-B6B5-6F42F5ACEC99/ /opt/splunkdata/hot/splunk_db/nons2_app_vault/thaweddb/db_1620941330_1619445054_9_D783E822-5127-4A28-B6B5-6F42F5ACEC99/ --recursive --force`
+
+`~/awscli/bin/aws s3 cp s3://xdr-moose-test-splunk-frozen/app_vault/frozendb/db_1620941733_1619458749_12_E9AE582F-FAA8-4D0C-974F-148229DEB8A1/ /opt/splunkdata/hot/splunk_db/nons2_app_vault/thaweddb/db_1620941733_1619458749_12_E9AE582F-FAA8-4D0C-974F-148229DEB8A1/ --recursive --force`
+
+### Setup for the zztop.sh script
+
+Once you can pull an individual bucket, use this to pull multiple buckets.
+
+Make a list of ALL buckets in each index (use this for multiple indexes).
+
+Make a list of indexes
+`aws s3 ls s3://mdr-afs-prod-splunk-frozen | awk '{ print $2 }' > foo1`
+
+`for i in $(cat foo1| egrep -v ^_); do aws s3 ls s3://mdr-afs-prod-splunk-frozen/${i}frozendb/ | egrep "db" | awk -v dir=$i '{ printf("s3://mdr-afs-prod-splunk-frozen/%sfrozendb/%s\n",dir,$2)}' ; done > bucketlist`
+
+Use this for ONE index
+`~/awscli/bin/aws s3 ls s3://xdr-afs-prod-splunk-frozen/wineventlog/frozendb/ | egrep "db_16(1|2|3)" | awk -v dir=wineventlog '{ printf("s3://xdr-afs-prod-splunk-frozen/%s/frozendb/%s\n",dir,$2)}' > bucketlist`
+
+Break up the list (10 indexers in this case)
+`cat bucketlist | awk '{ x=NR%10 }{print >> "indexerlist"x}'`
+
+Break up the list (3 indexers in this case)
+`cat bucketlist | awk '{ x=NR%3 }{print >> "indexerlist"x}'`
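+
+Optional sanity check that the split came out roughly even, each indexerlistX holding a similar share of bucketlist:
+`wc -l bucketlist indexerlist*`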
+
+Move the files to the salt master to distribute them, or just manually put them on the different indexers (a salt sketch follows below):
+- install the awscli on the indexers
+- move the indexerlistX file to each indexer
+- copy zztop.sh to each indexer
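+
+A minimal salt sketch for the distribution step (the minion targets 'splunk-idx*' / 'splunk-idx-01*' are placeholders; match your indexer minion IDs, and note each indexer gets its own indexerlistX file):
+```
+salt-cp 'splunk-idx*' zztop.sh /root/zztop.sh
+salt-cp 'splunk-idx-01*' indexerlist0 /root/indexerlist0
+salt 'splunk-idx*' cmd.run 'chmod +x /root/zztop.sh'
+```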
+
+zztop.sh
+```
+#!/bin/bash
+# $1 is one s3:// bucket path from indexerlistX; field 6 of the path is the Splunk bucket directory name
+DEST=$( echo "$1" | awk -F/ '{ print "/opt/splunkdata/hot/splunk_db/nons2_wineventlog/thaweddb/"$6 }' )
+mkdir -p "$DEST"
+/root/awscli/bin/aws s3 cp "$1" "$DEST" --recursive --force-glacier-transfer --no-progress
+```
+
+Try out one line to ensure the DEST is correct.
+`egrep -h "*" indexerlist* | head -1 |  awk -F/ '{ print "/opt/splunkdata/hot/splunk_db/nons2_wineventlog/thaweddb/"$6 }'`
+
+Try one Splunk bucket
+`egrep -h "*" indexerlist* | head -1 | xargs -P 10 -n 1 ./zztop.sh`
+
+Go for it. Use tmux to avoid session timeout. Adjust `-P #` to increase the number of parallel threads.
+`egrep -h "*" indexerlist* | xargs -P 3 -n 1 ./zztop.sh`
+
+Change ownership to ensure folders are owned by the splunk user.
+`chown -R splunk: *`
+
+Not sure why, but it named the buckets with an "inflight-" prefix.
+
+fix_names.sh
+```
+#!/bin/bash
+# Strip the "inflight-" prefix left on the thawed bucket directories.
+prefix="inflight-"
+for i in *
+do
+	if [[ $i == "$prefix"* ]]; then
+	  echo "$i"
+	  k=${i#"$prefix"}
+	  echo "$k"
+	  mv "$i" "$k"
+	fi
+done
+```
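+
+Run it from inside the thaweddb directory on each indexer, e.g. (path from the index stanza above; adjust to wherever you copied the script):
+`cd /opt/splunkdata/hot/splunk_db/nons2_wineventlog/thaweddb && bash ~/fix_names.sh`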
+
+## Restart the Indexers
 
-## Rebuild the buckets to make them searchable
+Just do a rolling restart?
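+
+One option (a sketch; assumes these indexers are managed by the cluster master referenced above) is to trigger the rolling restart from the CM:
+`splunk rolling-restart cluster-peers`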
 
-`splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1181756465_1162600547_1001`
+## Datamodel Acceleration
 
-## restart the indexer
+Please note that restoring the data will not add it to the current datamodels. Once the data is restored, Splunk will start accelerating it, assuming acceleration is configured and the restored data falls within what the datamodels are configured to accelerate.

+ 1 - 1
Splunk ma-c19 Offboarding Notes.md

@@ -109,7 +109,7 @@ aws --version
 cd ~ && python3 -m venv awscli && source awscli/bin/activate && cd awscli/bin && pip install awscli && chmod +x aws && aws --version
 ```
 
-The aws cli should be able to use the IAM instance role to connect to S3. No need to add AWS keys but you will need to configure the region. 
+The aws cli should be able to use the IAM instance role to connect to S3. No need to add AWS keys but you will need to configure the region (us-gov-east-1). 
 
 ```
 aws configure