@@ -2,15 +2,14 @@
Currently a three-node multisite cluster. Possible solution: set the search and replication factors to 3 and 3, then pull the index files off one of the indexers onto a new instance. On the new instance, set up a multisite cluster with a single site and see if the indexed files can be read.
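As a sketch, the CM `server.conf` clustering stanza for the proposed 3/3 factors might look like this (values assembled from the factors and labels noted elsewhere in these notes, not copied from a live config):

```
[clustering]
mode = master
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,site1:1,site2:1,site3:1,total:3
cluster_label = afs_index_cluster
```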
+[Splunk Enterprise 8.0.2 - "Managing Indexers and Clusters of Indexers" - Decommission a site in a multisite indexer cluster](https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite)

+[Splunk Enterprise 7.0.3 - "Managing Indexers and Clusters of Indexers" - Multisite indexer cluster deployment](https://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Multisitedeploymentoverview)
-https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite

-https://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Multisitedeploymentoverview

-

-1 - cluster master
+1 - cluster master

1 - indexer with search
-
+```
/opt/splunkdata/hot/normal_primary/

indexes:
|
|
@@ -52,17 +51,17 @@ site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
available_sites = site1,site2,site3
cluster_label = afs_index_cluster
-
+```

Steps
-1. change /opt/splunk/etc/system/local/server.conf site_search_factor to origin:1,site1:1,site2:1,site3:1,total:3 This will ensure we have a searchable copy of all the buckets on all the sites. Should I change site_replication_factor to origin:1, total:1? this would reduce the size of the index.
-2. restart CM ( this will apply the site_search_factor )
-3. send data to junk index (oneshot)
-3.1 /opt/splunk/bin/splunk add oneshot /opt/splunk/var/log/splunk/splunkd.log -sourcetype splunkd -index junk
-4. stop one indexer and copy index to new cluster.
-5. on new cluster, setup CM and 1 indexer in multisite cluster. the clustermaster will be a search head in the same site
-6. setup new cluster to have site_mappings = default:site1
-7. attempt to search on new cluster
+1. Change `site_search_factor` in `/opt/splunk/etc/system/local/server.conf` to `origin:1,site1:1,site2:1,site3:1,total:3`. This will ensure we have a searchable copy of all the buckets on all the sites. Should I change `site_replication_factor` to `origin:1,total:1`? This would reduce the size of the index.
+2. Restart the CM (this will apply the `site_search_factor`).
+3. Send data to the junk index (oneshot).
+   - 3.1 `/opt/splunk/bin/splunk add oneshot /opt/splunk/var/log/splunk/splunkd.log -sourcetype splunkd -index junk`
+4. Stop one indexer and copy the index to the new cluster.
+5. On the new cluster, set up the CM and one indexer in a multisite cluster. The cluster master will be a search head in the same site.
+6. Set up the new cluster to have `site_mappings = default:site1`.
+7. Attempt to search on the new cluster.
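A minimal sketch of what steps 5 and 6 imply for the new cluster's CM `server.conf` (single site, with the old sites mapped onto site1; every value here is an assumption, not a tested config):

```
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1
site_replication_factor = origin:1,total:1
site_search_factor = origin:1,total:1
site_mappings = default:site1
```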
Made the new junk index on test SAF.
@@ -71,7 +70,7 @@ latest = 02/21/20 9:32:01 PM UTC
earliest = 02/19/20 2:32:57 PM UTC

Before copying the buckets, ensure they are ALL WARM buckets; HOT buckets may be deleted on startup.
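One way to spot leftover hot buckets before the copy, assuming Splunk's default bucket directory naming (`hot_*` for hot, `db_*` for warm). The paths below are throwaway stand-ins, not the real index location:

```shell
# Create a throwaway directory that mimics an index db/ directory with one
# warm bucket (db_*) and one hot bucket (hot_*).
db=$(mktemp -d)
mkdir -p "$db/db_1582300000_1582200000_3" "$db/hot_v1_4"

# Count hot buckets; anything non-zero means roll or stop Splunk before copying.
hot_count=$(find "$db" -maxdepth 1 -type d -name 'hot_*' | wc -l)
echo "hot buckets remaining: $hot_count"
rm -rf "$db"
```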
-
+```
#check on the buckets
| dbinspect index=junk
@@ -95,18 +94,20 @@ saf-offboarding-ssh Security group <- delete this not needed just SSH from Basti
splunk version 7.0.3

-setup proxy for yum and wget
+#setup proxy for yum and wget
vi /etc/yum.conf
proxy=http://proxy.msoc.defpoint.local:80
yum install vim wget
vim /etc/wgetrc
http_proxy = http://proxy.msoc.defpoint.local:80
https_proxy = http://proxy.msoc.defpoint.local:80
+```
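The same proxy can also be supplied via environment variables, which most CLI tools (including the `pip` and `aws` commands used later in these notes) respect; proxy host taken from the notes above:

```shell
# Export proxy settings for the current shell session.
export http_proxy=http://proxy.msoc.defpoint.local:80
export https_proxy=http://proxy.msoc.defpoint.local:80
echo "$https_proxy"
```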
-Download Splunk
+#Download Splunk
+```
wget -O splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm 'https://www.splunk.com/page/download_track?file=7.0.3/linux/splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm&ac=&wget=true&name=wget&platform=Linux&architecture=x86_64&version=7.0.3&product=splunk&typed=release'

-install it
+#install it
yum localinstall splunk-7.0.3-fa31da744b51-linux-2.6-x86_64.rpm

#setup https
@@ -124,11 +125,14 @@ https://10.1.2.170:8000/en-US/app/launcher/home
#Indexer
https://10.1.2.236:8000/en-US/app/launcher/home

-Change password for admin user
+#Change password for admin user
/opt/splunk/bin/splunk edit user admin -password Jtg0BS0nrAyD -auth admin:changeme

+```
+
Turn on distributed search in the GUI
+```
#on CM
/opt/splunk/etc/system/local/server.conf
[general]
@@ -160,11 +164,12 @@ master_uri = https://10.1.2.170:8089
mode = slave
pass4SymmKey = password
[replication_port://9887]
+```

***ensure networking is allowed between the hosts***

The indexer will show up in the cluster master.
-
+```
#create this file on the indexer
/opt/splunk/etc/apps/saf_all_indexes/local/indexes.conf
@@ -176,14 +181,14 @@ thawedPath = $SPLUNK_DB/junk/thaweddb
#copy the index over to the indexer
cp junk_index.targz /opt/splunk/var/lib/splunk/
tar -xzvf junk_index.targz
+```
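A runnable sketch of the pack, copy, and unpack flow with a checksum check, using throwaway directories in place of the real index paths (all names here are stand-ins):

```shell
# Build a stand-in index directory and pack it, as on the source indexer.
src=$(mktemp -d)
mkdir -p "$src/junk/db"
echo "bucket-data" > "$src/junk/db/rawdata"
tar -C "$src" -czf "$src/junk_index.tar.gz" junk/
sum_before=$(sha256sum "$src/junk_index.tar.gz" | cut -d' ' -f1)

# "Copy" to the destination and verify the transfer before unpacking.
dest=$(mktemp -d)
cp "$src/junk_index.tar.gz" "$dest/"
sum_after=$(sha256sum "$dest/junk_index.tar.gz" | cut -d' ' -f1)
tar -C "$dest" -xzf "$dest/junk_index.tar.gz"
extracted=$(cat "$dest/junk/db/rawdata")
echo "$sum_before $sum_after $extracted"
rm -rf "$src" "$dest"
```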
-
-###################################################################################
+###################################################################################

PROD testing Notes
-SAF PROD Cluster testing with the te index.
-The indexers do not have the space to move to search/rep factor 3/3. Duane suggests keeping the current 2/3 and letting the temp splunk cluster make the buckets searchable. according to the monitoring console:
-
+SAF PROD Cluster testing with the te index.
+The indexers do not have the space to move to search/rep factor 3/3. Duane suggests keeping the current 2/3 and letting the temp Splunk cluster make the buckets searchable. According to the monitoring console:
+```
te index gathered on Feb 26
total index size: 3.1 GB
total raw data size uncompressed: 10.37 GB
@@ -208,10 +213,10 @@ size on disk
size of tarball
490 MB
-
+```
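Rough integer arithmetic on the figures above (3.1 GB on disk against a 490 MB tarball), just to sanity-check the ratio:

```shell
# 3.1 GB on disk is roughly 3174 MB; the tarball is 490 MB.
index_mb=3174
tarball_mb=490
ratio=$((index_mb / tarball_mb))
echo "disk-to-tarball ratio: roughly ${ratio}:1"
```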
Allow instance to write to S3 bucket

-
+```
{
"Id": "Policy1582738262834",
"Version": "2012-10-17",
@@ -231,14 +236,16 @@ Allow instance to write to S3 bucket
}
]
}
-
+```
+```
./aws s3 cp rst2odt.py s3://mdr-saf-off-boarding
./aws s3 cp /opt/splunkdata/hot/normal_primary/saf_te_index.tar.gz s3://mdr-saf-off-boarding

aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 604800
+```
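The `--expires-in` value is in seconds; 604800 is exactly 7 days, and the later presign commands use 86400, i.e. 1 day:

```shell
# Presigned-URL lifetimes used in these notes, expressed in seconds.
week=$((7 * 24 * 60 * 60))
day=$((24 * 60 * 60))
echo "week=$week day=$day"
```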
-uploaded brad_LAN key pair to AWS for new instances.
-
+Uploaded `brad_LAN` key pair to AWS for new instances.
+```
vpc-0202aedf3d0417cd3
subnet-01bc9f77742ff132d
sg-03dcc0ecde42fc8c2, sg-077ca2baaca3d8d97
@@ -269,31 +276,32 @@ ip-10-1-3-24
#indexer-3
ip-10-1-3-40
+```
-
-use virtualenv to grab awscli
-
+Use `virtualenv` to grab awscli.
+```
export https_proxy=http://proxy.msoc.defpoint.local:80
sudo -E ./pip install awscli

./aws s3 cp s3://mdr-saf-off-boarding/saf_te_index.tar.gz /opt/splunk/var/lib/splunk/saf_te_index.tar.gz
+```
Move the index definition to the CM; replicated buckets are not expanding into searchable buckets.

1. `rm -rf saf_all_indexes`
2. Create it on the CM
-2.1 mkdir -p /opt/splunk/etc/master-apps/saf_all_indexes/local/
-2.2 vim /opt/splunk/etc/master-apps/saf_all_indexes/local/indexes.conf
+ - 2.1 `mkdir -p /opt/splunk/etc/master-apps/saf_all_indexes/local/`
+ - 2.2 `vim /opt/splunk/etc/master-apps/saf_all_indexes/local/indexes.conf`
[te]
homePath = $SPLUNK_DB/te/db
coldPath = $SPLUNK_DB/te/colddb
thawedPath = $SPLUNK_DB/te/thaweddb
repFactor=auto
-2.3 cluster bundle push
-2.3.1 /opt/splunk/bin/splunk list cluster-peers
-2.3.1 splunk validate cluster-bundle
-2.3.2 splunk apply cluster-bundle
+ - 2.3 cluster bundle push
+ - 2.3.1 `/opt/splunk/bin/splunk list cluster-peers`
+ - 2.3.2 `splunk validate cluster-bundle`
+ - 2.3.3 `splunk apply cluster-bundle`
@@ -303,17 +311,17 @@ repFactor=auto
#
##################

-#estimate size and age
+#estimate size and age
-| rest /services/data/indexes/
-| search title=app_mscas OR title = app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=Te
-| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >=1, currentDBSizeMB/1024, null())
-| eval elapsedTime = now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")
-| eval dataAge = ceiling(elapsedTime / 86400)
-| stats sum(indexSizeGB) AS totalSize max(dataAge) as oldestDataAge by title
-| eval totalSize = if(isnotnull(totalSize), round(totalSize, 2), 0)
-| eval oldestDataAge = if(isNum(oldestDataAge), oldestDataAge, "N/A")
-| rename title as "Index" totalSize as "Total Size (GB)" oldestDataAge as "Oldest Data Age (days)"
+| rest /services/data/indexes/
+| search title=app_mscas OR title=app_o365 OR title=dns OR title=forescout OR title=network OR title=security OR title=te
+| eval indexSizeGB = if(currentDBSizeMB >= 1 AND totalEventCount >= 1, currentDBSizeMB/1024, null())
+| eval elapsedTime = now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")
+| eval dataAge = ceiling(elapsedTime / 86400)
+| stats sum(indexSizeGB) AS totalSize max(dataAge) as oldestDataAge by title
+| eval totalSize = if(isnotnull(totalSize), round(totalSize, 2), 0)
+| eval oldestDataAge = if(isnum(oldestDataAge), oldestDataAge, "N/A")
+| rename title as "Index" totalSize as "Total Size (GB)" oldestDataAge as "Oldest Data Age (days)"
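A worked example of the two conversions the search performs (MB to GB rounded to two places, and elapsed seconds to whole days via ceiling); the sample numbers are made up:

```shell
# currentDBSizeMB/1024 -> GB, rounded to 2 places as the search does.
size_gb=$(awk 'BEGIN { printf "%.2f", 3174 / 1024 }')
# ceiling(elapsed/86400) -> age in whole days (3500000 s elapsed here).
age_days=$(awk 'BEGIN { print int((3500000 + 86399) / 86400) }')
echo "size_gb=$size_gb age_days=$age_days"
```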
1. Adjust the CM and push out new data retention limits per customer email.
@@ -343,18 +351,19 @@ tar cvzf saf_myindex_index.tar.gz myindex/
without compression
tar cvf /hubble.tar hubble/
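The difference between the two tar lines above is the `z` flag, which is gzip compression (not encryption); a quick demo on stand-in data:

```shell
# Pack the same highly compressible data with and without the z flag.
src=$(mktemp -d)
yes "log line" | head -n 10000 > "$src/data.log"
tar -C "$src" -cf "$src/plain.tar" data.log
tar -C "$src" -czf "$src/comp.tar.gz" data.log
plain=$(wc -c < "$src/plain.tar")
comp=$(wc -c < "$src/comp.tar.gz")
echo "plain=$plain gzip=$comp"
rm -rf "$src"
```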
-trying this: https://github.com/jeremyn/s3-multipart-uploader
+Trying this: [GitHub repo for s3-multipart-uploader](https://github.com/jeremyn/s3-multipart-uploader)

Use virtualenv.

-
+```
bin/python s3-multipart-uploader-master/s3_multipart_uploader.py -h

#bucket name: mdr-saf-off-boarding

bin/aws s3 cp /opt/splunkdata/hot/saf_te_index.tar.gz s3://mdr-saf-off-boarding/saf_te_index.tar.gz
+```
DID NOT NEED TO USE THE MULTIPART UPLOADER!

-
+```
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_mscas_index.tar.gz --expires-in 86400
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_app_o365_index.tar.gz --expires-in 86400
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_dns_index.tar.gz --expires-in 86400
@@ -362,3 +371,4 @@ aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_forescout_index.
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_network_index.tar --expires-in 86400
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_security_index.tar.gz --expires-in 86400
aws --profile=mdr-prod s3 presign s3://mdr-saf-off-boarding/saf_te_index.tar.gz --expires-in 86400
+```