
some of dis, some of dat

Brad Poulton committed 3 years ago
commit 3134dd449b

+ 2 - 6
Decommission Customer Notes.md

@@ -182,13 +182,9 @@ https://moose-splunk.pvt.xdr.accenturefederalcyber.com/en-US/app/SplunkEnterpris
 
 
 ### Request AWS account be fully terminated
-help desk ticket with camrs. Or have Soofi, Osman <osman.soofi@accenturefederal.com> submit a CAMRS disconnect ticket. not sure which one is the best method yet. IMPORTANT: After the account is closed, AWS allows users to login for 90 days. 
+Create a Jira ticket in the Jira PMO project for Soofi, Osman <osman.soofi@accenturefederal.com> to submit a CAMRS disconnect ticket. IMPORTANT: After the account is closed, AWS still allows users to log in for 90 days.
 
-```
-AFS.Help <afs.help@accenturefederal.com>; XDR-Engineering <xdr.eng@accenturefederal.com>
-```
-
-SUBJECT: Decommission CAMRS AWS Account
+Summary: Decommission CAMRS AWS Account
 
 ```
 Hello,

+ 1 - 1
Splunk AFS Thaw Request Notes.md

@@ -3,7 +3,7 @@
 This documents the process for searching the frozen data that is stored in S3.
 
 Plan:
-- charge time to CIRT Ops Support     SPROJ.061
+- charge time to CIRT Ops Support     SPROJ.061 S&ID CIRT Ops Support_A
 - Don't use TF to manage the servers. 
- stand up multiple (minimum 3?) CentOS 7 servers with large EBS disks (see the sketch below)
- stand up one SH (search head) for the indexers
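
The plan skips Terraform, so the servers could be launched straight from the AWS CLI; a sketch, with the AMI ID, instance type, and volume size as placeholders to confirm:

```
# launch 3 CentOS 7 indexers with large gp3 EBS root volumes (all values are assumptions)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --count 3 \
    --instance-type m5.2xlarge \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":1000,"VolumeType":"gp3"}}]'
```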

+ 0 - 0
ExtremeRules.md → Splunk ExtremeRules.md


+ 27 - 0
Splunk Smartstore Thaw Notes.md

@@ -0,0 +1,27 @@
+# Splunk Smartstore Thaw Notes
+
+https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Restorearchiveddata#Thaw_a_4.2.2B_archive
+
+## Thawing Frozen Data
+
+DO NOT thaw an archived (frozen) bucket into a SmartStore index!
+
+Create a separate, "classic" index that does not utilize SmartStore (no `remotePath`) and thaw the buckets into the `thawedPath` of that index. If you plan to thaw buckets frequently, you might want to create a set of non-SmartStore indexes that parallel the SmartStore indexes in name. For example, "nonS2_main".
+
+## Check disk size
+
+How much data are you going to thaw? It doesn't matter which indexer you copy the buckets to. Start with the first indexer, fill its drive to an acceptable level, then start copying buckets to the next indexer. If you run out of acceptable space on all the indexers, use TF (Terraform) to create more indexers and copy the buckets to those.
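+
+A quick way to size the job and check headroom, assuming the frozen buckets live under an S3 prefix like `s3://example-frozen-bucket/frozendb/`:
+
+```
+# total size of the frozen buckets (bucket and prefix are assumptions)
+aws s3 ls s3://example-frozen-bucket/frozendb/ --recursive --summarize --human-readable
+# free space on the indexer's Splunk volume (mount point is an assumption)
+df -h /opt/splunk
+```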
+
+## Create a new index
+
+- Add the index to the CM repo and push it to the indexers. Ensure a `thawedPath` is specified and do not specify a `remotePath`, so the index stays out of SmartStore. Name it something similar to the SmartStore index, such as `nonS2_index-name`; a stanza sketch is below.
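+
+A minimal `indexes.conf` stanza sketch for such an index (the name is the placeholder from above; paths follow the usual `$SPLUNK_DB` layout):
+
+```
+[nonS2_index-name]
+# classic (non-SmartStore) index: no remotePath
+homePath   = $SPLUNK_DB/nonS2_index-name/db
+coldPath   = $SPLUNK_DB/nonS2_index-name/colddb
+thawedPath = $SPLUNK_DB/nonS2_index-name/thaweddb
+```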
+
+## Copy the buckets from S3 into the thawedPath of the new index
+
+`aws s3`
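+
+For example (the bucket name and destination path are assumptions, not this environment's; the bucket ID is the one rebuilt below):
+
+```
+# copy one frozen bucket from S3 into the index's thawedPath
+aws s3 cp s3://example-frozen-bucket/frozendb/db_1181756465_1162600547_1001 \
+    $SPLUNK_HOME/var/lib/splunk/nonS2_index-name/thaweddb/db_1181756465_1162600547_1001 \
+    --recursive
+```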
+
+## Rebuild the buckets to make them searchable
+
+`splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1181756465_1162600547_1001`
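+
+Each thawed bucket has to be rebuilt individually, so with many buckets a loop helps; the index path below is an assumption:
+
+```
+# rebuild every thawed bucket in the (assumed) non-SmartStore index
+for b in $SPLUNK_HOME/var/lib/splunk/nonS2_index-name/thaweddb/db_*; do
+    $SPLUNK_HOME/bin/splunk rebuild "$b"
+done
+```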
+
+## Restart the indexer
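+
+Per the thaw procedure, restart the indexer so the rebuilt buckets become searchable:
+
+```
+$SPLUNK_HOME/bin/splunk restart
+```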