
Splunk,openvpn,tenable

Brad Poulton 3 years ago
parent
current commit
5d3656c5a0
7 files changed, 61 insertions and 24 deletions
  1. Decommission Customer Notes.md ( +39 -10 )
  2. Okta Notes.md ( +1 -1 )
  3. OpenVPN Notes.md ( +7 -7 )
  4. Phantom Upgrade Notes.md ( +5 -0 )
  5. Splunk Notes.md ( +1 -1 )
  6. Splunk ma-c19 Offboarding Notes.md ( +3 -2 )
  7. Tenable Notes.md ( +5 -3 )

+ 39 - 10
Decommission Customer Notes.md

@@ -34,19 +34,30 @@ Update TF code and remove whitelisted SG IPs and/or rules to remove access from
 
 - Silence instances in Sensu to avoid notifications
 - Disable termination protection in AWS console
-- Destroy the AWS objects with the `terragrunt destroy` command in all folders except 005-iam. Ignore error deleting S3 bucket BucketNotEmpty in 006-account-standards. (170-splunk-searchhead, 180-splunk-heavy-forwarder, 150-splunk-cluster-master, 160-splunk-indexer-cluster, 140-splunk-frozen-bucket, 010-vpc-splunk,072-salt-master-inventory-role, 021-qualys-connector-role, 007-backups, 006-account-standards-regional, 006-account-standards) 
+- Destroy the AWS objects with the `terragrunt destroy` command in all folders except 005-iam. Ignore the BucketNotEmpty error when deleting the S3 bucket in 006-account-standards. A scripted sketch follows this list. 
+    180-splunk-heavy-forwarder
+    170-splunk-searchhead
+    165-splunk-legacy-hec ( only for accounts that were migrated from Legacy; might error! )
+    160-splunk-indexer-cluster
+    150-splunk-cluster-master
+    140-splunk-frozen-bucket ( use the console to empty the bucket before TF can remove it )
+    072-salt-master-inventory-role
+    021-qualys-connector-role
+    010-vpc-splunk
+    007-backups
+    006-account-standards-regional ( might have nothing to destroy )
+    006-account-standards
 - Create new git branch in XDR-Terraform-Live
-- Remove the folders that were just destoryed ( NOT 005-iam or account.hcl ) to ensure the instances can not be created again
+- Remove the folders that were just destroyed ( NOT 005-iam or account.hcl ) to ensure the instances can not be created again
 - Ensure the customer vpc is fully deleted in the AWS console
 - Remove AWS Account from the partition.hcl file in the account_map["prod"] variable  ( common/aws-us-gov/partition.hcl )
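
A minimal shell sketch of the destroy pass ( assuming you are in the customer's account directory in xdr-terraform-live; the loop structure and the bucket name are illustrative, the folder order comes from the list above ):

```
# tear down in order, skipping 005-iam; terragrunt will prompt for approval in each folder
for dir in \
    180-splunk-heavy-forwarder \
    170-splunk-searchhead \
    165-splunk-legacy-hec \
    160-splunk-indexer-cluster \
    150-splunk-cluster-master \
    140-splunk-frozen-bucket \
    072-salt-master-inventory-role \
    021-qualys-connector-role \
    010-vpc-splunk \
    007-backups \
    006-account-standards-regional \
    006-account-standards; do
  ( cd "$dir" && terragrunt destroy ) || echo "destroy failed in $dir, review manually"
done

# optional CLI alternative to emptying the frozen bucket in the console ( bucket name is a placeholder )
aws s3 rm s3://<customer-prefix>-splunk-frozen-bucket --recursive
```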
 
 
 #### Remove references to LCP nodes in the globals.hcl file. 
 
-
 - Remove customer IPs from C&C IP whitelisting in xdr-terraform-live/globals.hcl in the c2_services_external_ips variable
 - Remove customer IPs from Moose SG whitelisting in xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/account.hcl in the splunk_data_sources variable 
-- Remove customer from Portal Lambda customer_vars variable in xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/205-customer-portal-lambda/terragrunt.hcl 
+- Remove customer from Portal Lambda customer_vars variable in xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/205-customer-portal-lambda/terragrunt.hcl
 - Delete the sensu entities and resolve any alerts
 - On the salt master, delete the salt minion keys `sudo salt-key -d <CUSTOMER-PREFIX>*`
 - On ScaleFT website, delete the project and servers
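
For the "Delete the sensu entities" step above, a hedged sketch using sensuctl and jq ( assumes both tools are available and that the entity names start with the customer prefix ):

```
# list the customer's Sensu entities, then delete each one without the confirmation prompt
sensuctl entity list --format json | jq -r '.[].metadata.name' | grep '^<CUSTOMER-PREFIX>' |
while read -r entity; do
  sensuctl entity delete "$entity" --skip-confirm
done
```
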
@@ -60,13 +71,14 @@ Update TF code and remove whitelisted SG IPs and/or rules to remove access from
     - prod/aws-us-gov/mdr-prod-c2/095-instance-sensu
     - prod/aws-us-gov/mdr-prod-c2/080-instance-repo-server
     - prod/aws-us-gov/mdr-prod-c2/071-instance-salt-master
-    - prod/aws-us-gov/mdr-prod-c2/008-transit-gateway-hub
-    - prod/aws-us-gov/mdr-prod-c2/005-account-standards-c2
+    - prod/aws-us-gov/mdr-prod-c2/008-transit-gateway-hub ( don't worry about aws_ram_principal_association.share_with_accounts for other accounts )
+    - prod/aws-us-gov/mdr-prod-c2/005-account-standards-c2 ( many changes are made and it looks scary )
 
 
-### Remove the GovCloud and Commercial AWS account ID from Packer and Salt 
+### Remove the GovCloud and Commercial AWS account ID from Packer and Salt
 
 https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/wiki/Cloud-Accounts 
+Search the msoc-infrastructure.wiki repo for the customer short name and remove any references. 
 
 - Create new git branch in msoc_infrastructure
 - Remove Packer AWS accounts in packer/Makefile
@@ -85,18 +97,21 @@ Remove references of the customer from these places:
 - Salt okta auth in salt/pillar/os_settings.sls 
 - Salt gitfs pillar in salt/pillar/salt_master.sls 
 - Salt FM Shared Search in salt/pillar/fm_shared_search.sls
+- Salt ACL in salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
 
+Search for the customer short name to ensure nothing is missed. 
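
A quick way to run that search from the msoc-infrastructure repo root ( the short name is a placeholder ):

```
# case-insensitive search for any lingering customer references
grep -rni '<customer-short-name>' packer/ salt/
```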
  
-Apply changes in salt to remove references to the old customer. 
+Open a git PR, get it approved and merged into the master branch, then apply the changes in salt to remove references to the old customer. 
 
-Update salt master  
+Update salt master 
+`sudo salt-run fileserver.update` 
 `salt salt* state.sls salt_master --output-diff test=true`
 
 Update the FM search head and monitoring console
 `salt splunk-mc-0* state.sls splunk.monitoring_console --output-diff test=true`
 `salt fm-shared-search-0* state.sls splunk.fm_shared_search --output-diff test=true`
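
The `test=true` runs above only preview the changes; a sketch of the preview-then-apply pattern ( same targets as above ):

```
# refresh the salt fileserver, preview, then apply by dropping test=true
sudo salt-run fileserver.update
salt 'salt*' state.sls salt_master --output-diff test=true
salt 'salt*' state.sls salt_master --output-diff
# repeat the same preview-then-apply pattern for splunk.monitoring_console and splunk.fm_shared_search
```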
 
-Disable the instances in the Monitoring Console webpage ( how to delete the instances? )
+Disable the instances in the Monitoring Console webpage ( how to delete the instances? ), then save the changes. 
 Verify the search peers have been removed from the distributed search in the FM Shared Searchhead and the monitoring console.
 
 https://splunk-mc.pvt.xdr.accenturefederalcyber.com/en-US/manager/search/search/distributed/peers?sort_dir=desc&sort_key=health_status&search=Down&count=100&api.mode=extended
@@ -109,10 +124,24 @@ Each customer should have three applications. Deactive the app, then delete it.
 <CUSTOMER-PREFIX> Splunk SH
 
 
+### Moose HF Cleanup
+Remove the account from the Moose HF AWS app. 
+
+- Log into moose-splunk-hf
+- Go to Apps -> Splunk Add-on for AWS
+- Go to Inputs, filter on the customer prefix, disable then remove each input.
+- Go to Configuration->IAM Role, remove the role for the account.
+
+
+
 ### Qualys Cleanup
 Go to Qualys Dashboard -> Cloud Agent -> Activation Keys
 Disable the key; it is not clear how to delete it. Perhaps you have to wait a period of time?
 
+### Tenable Cleanup
+The vuln data will age out over time. 
+The agents are automatically removed from the Nessus Manager after 30 days, or they can be deleted manually.
+
 ### Archive Customer Git Repos
 Do this after the Salt Master gitfs has been updated to avoid any error messages. 
 

+ 1 - 1
Okta Notes.md

@@ -28,7 +28,7 @@ terragrunt apply
 
 ## Okta Rate Limiting
 
-Okta will rate limit us if we hit the API to frequently. This causes users to not be able to VPN in because the OpenVPN server cannot connect to the OKTA API in a timely manner. To see if this is happening you can log into OKTA and look for a banner indicating the rate limiting. We also pull logs into Moose Splunk via the OKTA API so you can run this Splunk search on Moose to see if we are getting errors. Finally if you log into the OpenVPN and see timeout errors that is an indicator that OKTA is rate limiting us on the OKTA API. 
+Okta will rate limit us if we hit the API too frequently. This causes users to not be able to VPN in because the OpenVPN server cannot connect to the OKTA API in a timely manner. To see if this is happening, you can log into OKTA and look for a banner indicating the rate limiting. We also pull logs into Moose Splunk via the OKTA API, so you can run this Splunk search on Moose to see if we are getting errors. Finally, if you log into the OpenVPN server and see timeout errors, that is an indicator that OKTA is rate limiting us on the OKTA API. 
 
 ```
 index=_internal host=moose-splunk-hf* source=*okta* rate limit pausing operations

+ 7 - 7
OpenVPN Notes.md

@@ -14,12 +14,12 @@ There is a strict dependency that OpenVPN be started after `firewalld`.
 
 `ldap.read@defpoint.com` is the Okta user that OpenVPN uses to auth to Okta. The `ldap.read` account's password expires after 60 days. To see when the password will expire, go to [Reports -> Okta Password Health](https://mdr-multipass-admin.okta.com/reports). Don't open with EXCEL! Add 60 days to the date in the last column.  
 
-1. Be on prod VPN.
-1. Log into OKTA in an Incognito window using the `ldap.read` username and the current password from Vault (`engineering/root`). Brad's phone is currently setup with the Push notification for the account. The MFA is required for the account. To change the password without Brad, remove MFA with your account in OKTA and set it up on your own phone. 
-2. Once the password has been updated, update Vault in this location, `engineering/root` with a key of `ldap.read@defpoint.com`. You will have to create a new version of engineering/root to save the password. 
-3. Store the new password and the creds for openvpn and drop off the VPN. Log into the [OpenVPN web GUI](https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for `ldap.read`. Authentication -> ldap -> update password -> Save Settings. Then update running server. Repeat this for the [Dev Environment](https://openvpn.xdrtest.accenturefederalcyber.com/admin/) 
-4. Verify that you are able to login to the VPN. 
-5. Set reminder in your calendar to reset the password in less than 60 days. 
+- Be on prod VPN.
+- Log into OKTA in an Incognito window using the `ldap.read` username and the current password from Vault (`engineering/root`). Three failed logins will lock the user. Brad's phone is currently set up with the Push notification for the account. MFA is required for the account. To change the password without Brad, remove MFA with your account in OKTA and set it up on your own phone. 
+- Once the password has been updated, update Vault at `engineering/root` with the key `ldap.read@defpoint.com`. You will have to create a new version of engineering/root to save the password ( see the sketch after this list ). 
+- Store the new password and the openvpn creds, then drop off the VPN. Log into the [OpenVPN web GUI](https://openvpn.xdr.accenturefederalcyber.com/admin/) as the openvpn user (password in Vault) and update the credentials for `ldap.read`: Authentication -> ldap -> update password -> Save Settings. Then update the running server. Repeat this for the [Dev Environment](https://openvpn.xdrtest.accenturefederalcyber.com/admin/). 
+- Verify that you are able to log in to the VPN. 
+- Set a reminder in your calendar to reset the password in less than 60 days. 
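
For the Vault step above, a sketch assuming `engineering/root` lives on a KV v2 mount ( `kv patch` writes a new version while keeping the other keys; the password value is a placeholder ):

```
# create a new version of engineering/root, changing only the ldap.read key
vault kv patch engineering/root 'ldap.read@defpoint.com=<NEW-PASSWORD>'
# confirm the new version landed
vault kv get engineering/root
```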
 
 
 ------------
@@ -99,7 +99,7 @@ For example, to disconnect a user who fails to transmit/receive at least 75,000
 
 # 15 minutes
 /usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_isec --value 900 UserPropPut
-/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_ibytes --value 37500 UserPropPut
+/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_ibytes --value 5000 UserPropPut
 
 #verify the setting is in place
 /usr/local/openvpn_as/scripts/confdba -us -p __DEFAULT__

+ 5 - 0
Phantom Upgrade Notes.md

@@ -104,8 +104,13 @@ use the rpm command to upgrade the repo package. ( RPM preferred )
 ```
 rpm -Uvh https://repo.phantom.us/phantom/<major version.minor version>/base/7Server/x86_64/phantom_repo-<major version.minor version.release.build number>-1.x86_64.rpm
 
+REDHAT ONLY
 rpm -Uvh https://repo.phantom.us/phantom/5.0/base/7Server/x86_64/phantom_repo-5.0.1.66250-1.x86_64.rpm
 rpm -Uvh https://repo.phantom.us/phantom/5.1/base/7Server/x86_64/phantom_repo-5.1.0.70187-1.x86_64.rpm
+
+CENTOS ONLY (CAASP)
+rpm -Uvh https://repo.phantom.us/phantom/5.0/base/7/x86_64/phantom_repo-5.0.1.66250-1.x86_64.rpm
+rpm -Uvh https://repo.phantom.us/phantom/5.1/base/7/x86_64/phantom_repo-5.1.0.70187-1.x86_64.rpm
 ```
 
 ## Upgrade

+ 1 - 1
Splunk Notes.md

@@ -39,7 +39,7 @@ Splunk CM is the license master and the salt master is used to push out a new li
 
 Update the license file at `salt/fileroots/splunk/files/licenses/<customer>/`. Use the scripts in that folder to remove expired licenses, or view license details. 
 
-`salt-run` 
+`salt-run fileserver.update` 
 `salt *cm* state.sls splunk.license_master --output-diff`
 
 To remove expired licenses, remove the license from the salt code and push the changes to master branch. Then use salt to remove the licenses. This will cause Splunk to restart and the license to be removed from the license master. 

+ 3 - 2
Splunk ma-c19 Offboarding Notes.md

@@ -70,15 +70,16 @@ index=_internal host=ma-c19-splunk-cm* source="/opt/splunk/var/log/splunk/licens
 /opt/splunkdata/hot/normal_primary
                                Uncompressed   Compressed
 (5 parts uploaded) app_aws          368GB        246GB
-(4 parts uploaded) junk             223GB        163GB
 (3 parts uploaded) salesforce       185GB        112GB
+(4 parts uploaded) junk             223GB        163GB
 (done)             azure             47GB         36GB
 (done)             app_o365         4.8GB        3.3GB
 (done)             defaultdb         12MB        7.1MB
 (done)             audit             30GB         21GB
 total: 635GB
 total Compressed: 581 GB
-Progress Bar: 2/12
+Progress Bar: 12/12
+total progress bar: 16/16
 
 File sizes `du -sh * | sort -h`
 

+ 5 - 3
Tenable Notes.md

@@ -28,8 +28,9 @@ sudo /opt/nessus_agent/sbin/nessuscli -v
 
 - Download the latest RPM from [Tenable Download - Nessus](https://www.tenable.com/downloads/nessus)
 - Check the sha256 on your mac with `shasum -a 256 Nessus-8.15.1-es7.x86_64.rpm`
-- Use teleport to scp the file to the test and prod repo server; See [How to add a new package to the Reposerver](Reposerver%20Notes.md)
+- Use teleport web UI to upload the file to the test and prod repo server; See [How to add a new package to the Reposerver](Reposerver%20Notes.md)
 - Update the tenable repo per the Reposerver Notes above
+- Stop the service and take an EBS snapshot as a backup ( see below for details )
 - Note: You can upgrade all three Nessus servers at the same time with `salt nessus* cmd.run`
 - Run `yum clean all && yum makecache fast` on the appropriate server or `salt nessus-scan* pkg.upgrade name=Nessus` on salt-master to update the software from the repo server
 - For Nessus, you need to start the software after the upgrade with `systemctl start nessusd.service`
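
A consolidated sketch of the upgrade commands above, run from the salt master ( adjust the target globs if your minion names differ ):

```
# refresh yum metadata on the Nessus servers, then pull the new package from the repo server
salt 'nessus*' cmd.run 'yum clean all && yum makecache fast'
salt 'nessus-scan*' pkg.upgrade name=Nessus
# Nessus does not come back up on its own after the RPM upgrade
salt 'nessus*' cmd.run 'systemctl start nessusd.service'
```
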
@@ -40,9 +41,10 @@ NOTE: The Tenable Agents upgrade themselves through the Nessus Manager.
 
 ### Security Patches
 Occasionally Tenable will release patches for Tenable.sc. These patches need to be installed on the commandline and not through the reposerver.
-- Download the security patch
+- Download the security patch to your Mac
 - Check the hash against the tenable provided one
-- Use teleport to scp the file directly to the Tenable.sc server
+    - `shasum -a 256 SC-202110.1-5.x-rh7-64.tgz`
+- Use teleport web UI to upload the file directly to the Tenable.sc server
 - Stop Tenable.sc and take a backup via snapshots
     - `systemctl stop SecurityCenter`
     - Use the AWS cli to take a snapshot of all EBS volumes
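
For the snapshot step, a hedged AWS CLI sketch using multi-volume snapshots ( the instance ID is a placeholder; run it only after SecurityCenter is stopped ):

```
# crash-consistent snapshots of every EBS volume attached to the Tenable.sc instance
aws ec2 create-snapshots \
  --instance-specification InstanceId=<tenable-sc-instance-id> \
  --description "Tenable.sc pre-patch backup" \
  --copy-tags-from-source volume
```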