
Customer Decommission Notes

Follow these steps to permanently decommission a customer.

Remove the Customer POP/LCP Nodes

5/18/2020

Shut down Splunk and disable it so no new data is sent to the cluster.

salt saf-splunk-syslog-* cmd.run 'systemctl stop splunk'
salt saf-splunk-syslog-* cmd.run 'systemctl disable splunk'

salt -C 'saf-splunk-* and not *.local' cmd.run 'systemctl stop splunk'
salt -C 'saf-splunk-* and not *.local' cmd.run 'rm -rf /opt/*'

salt -C 'saf-splunk-* and not *.local' cmd.run 'rm -rf /var/log/*'
salt -C 'saf-splunk-* and not *.local' cmd.run 'rm -rf /etc/salt/minion && shutdown now'

salt saf-splunk-syslog-* cmd.run 'systemctl stop syslog-ng'
salt saf-splunk-syslog-* cmd.run 'systemctl disable syslog-ng'
salt saf-splunk-dcn-* cmd.run 'docker stop mdr-syslog-ng'

Follow these steps to terminate a customer slice

05/3/2021

See Splunk SAF Offboarding Notes.md for notes on pulling data off an indexer to give to the customer.

Terraform, Sensu, SFT Removal

Update the Terraform code to remove whitelisted security group IPs and/or rules that allow access from the POP to C&C, the Salt master, and the Splunk indexers. These are stored in globals.hcl or account.hcl.
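
A quick way to find the entries to remove, assuming you are at the root of the xdr-terraform-live repo ( <CUSTOMER-PREFIX> is the customer short name; also grep for the known LCP node IPs, since not every rule is labeled with the prefix ):

grep -rn '<CUSTOMER-PREFIX>' globals.hcl prod/aws-us-gov/*/account.hcl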

  • Silence instances in Sensu to avoid notifications
  • Disable termination protection in AWS console
  • Destroy the AWS objects with the terragrunt destroy command in all of the folders below except 005-iam ( see the sketch after this list ). Ignore the BucketNotEmpty error when deleting the S3 bucket in 006-account-standards.
    • 180-splunk-heavy-forwarder
    • 170-splunk-searchhead
    • 165-splunk-legacy-hec ( only for accounts that were migrated from Legacy, might error! )
    • 160-splunk-indexer-cluster
    • 150-splunk-cluster-master
    • 140-splunk-frozen-bucket ( use the console to empty the bucket before TF will remove it )
    • 072-salt-master-inventory-role
    • 021-qualys-connector-role
    • 010-vpc-splunk
    • 007-backups
    • 006-account-standards-regional ( might be nothing )
    • 006-account-standards
  • Create new git branch in XDR-Terraform-Live
  • Remove the folders that were just destroyed ( NOT 005-iam or account.hcl ) to ensure the instances cannot be created again
  • Ensure the customer vpc is fully deleted in the AWS console
  • Remove AWS Account from the partition.hcl file in the account_map["prod"] variable ( common/aws-us-gov/partition.hcl )
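
A minimal sketch of the destroy pass, assuming the account folder is prod/aws-us-gov/mdr-prod-<CUSTOMER-PREFIX> in xdr-terraform-live ( the folder name is an assumption; trim the list to the folders that actually exist in the account ):

cd prod/aws-us-gov/mdr-prod-<CUSTOMER-PREFIX>
for dir in 180-splunk-heavy-forwarder 170-splunk-searchhead 165-splunk-legacy-hec 160-splunk-indexer-cluster 150-splunk-cluster-master 140-splunk-frozen-bucket 072-salt-master-inventory-role 021-qualys-connector-role 010-vpc-splunk 007-backups 006-account-standards-regional 006-account-standards; do
  ( cd "$dir" && terragrunt destroy )
done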

Remove references to LCP nodes in the globals.hcl file.

  • Remove customer IPs from C&C IP whitelisting in xdr-terraform-live/globals.hcl in the c2_services_external_ips variable
  • Remove customer IPs from Moose SG whitelisting in xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/account.hcl in the splunk_data_sources variable
  • Remove customer from Portal Lambda customer_vars variable in xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/205-customer-portal-lambda/terragrunt.hcl
  • Delete the sensu entities and resolve any alerts
  • On the salt master, delete the salt minion keys: sudo salt-key -d <CUSTOMER-PREFIX>*
  • On the ScaleFT website, delete the project and servers
  • On the Red Hat website, remove the entitlements. Check for LCP nodes that used an entitlement

  • Commit the changes to the xdr-terraform-live repo and get them merged into master

  • After the changes have been merged in git, apply the changes to remove the IPs from the security groups and the AWS account from the transit gateway ( see the sketch after this list )

    • prod/aws-us-gov/mdr-prod-c2/275-nessus-security-managers
    • prod/aws-us-gov/mdr-prod-c2/205-customer-portal-lambda
    • prod/aws-us-gov/mdr-prod-c2/160-splunk-indexer-cluster
    • prod/aws-us-gov/mdr-prod-c2/095-instance-sensu
    • prod/aws-us-gov/mdr-prod-c2/080-instance-repo-server
    • prod/aws-us-gov/mdr-prod-c2/071-instance-salt-master
    • prod/aws-us-gov/mdr-prod-c2/008-transit-gateway-hub ( don't worry about aws_ram_principal_association.share_with_accounts for other accounts )
    • prod/aws-us-gov/mdr-prod-c2/005-account-standards-c2 ( many changes are made and looks scary )
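
A minimal sketch of the apply pass over those folders, run from the root of xdr-terraform-live after the merge ( run terragrunt plan first in any folder where you want to review the diff ):

cd prod/aws-us-gov/mdr-prod-c2
for dir in 275-nessus-security-managers 205-customer-portal-lambda 160-splunk-indexer-cluster 095-instance-sensu 080-instance-repo-server 071-instance-salt-master 008-transit-gateway-hub 005-account-standards-c2; do
  ( cd "$dir" && terragrunt apply )
done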

Remove the GovCloud and Commercial AWS account ID from Packer and Salt

You can look up the account numbers here, but DO NOT remove them from the wiki. https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/wiki/Cloud-Accounts

  • Create new git branch in msoc_infrastructure
  • Remove Packer AWS accounts in packer/Makefile
  • Remove AWS accounts in salt/fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh

Be sure to check for both GovCloud and Commercial AWS accounts.
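
A quick check for leftovers, assuming you are at the root of the msoc_infrastructure repo and substitute the real account IDs for the placeholders:

grep -rn -e '<AWS-ACCOUNT-ID-GOV>' -e '<AWS-ACCOUNT-ID-COMMERCIAL>' packer/Makefile salt/fileroots/salt_master/files/xdr_asset_inventory/xdr_asset_inventory.sh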

Remove the Customer from the Salt Code

Remove references of the customer from these places:

  • Splunk Monitoring Console - salt/pillar/mc_variables.sls
  • Salt master configs in salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf
  • Delete Salt Splunk files - salt/pillar/${CUSTOMERPREFIX}_variables.sls and salt/pillar/${CUSTOMERPREFIX}_pop_settings.sls
  • Delete folder for Splunk license - salt/fileroots/splunk/files/licenses/${CUSTOMERPREFIX}
  • Salt top files - salt/fileroots/top.sls and salt/pillar/top.sls
  • Salt okta auth in salt/pillar/os_settings.sls
  • Salt gitfs pillar in salt/pillar/salt_master.sls
  • Salt FM Shared Search in salt/pillar/fm_shared_search.sls
  • Salt ACL in salt/fileroots/salt_master/files/etc/salt/master.d/default_acl.conf

Search for the customer short name to ensure nothing is missed.
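
For example, from the root of the msoc_infrastructure repo ( <CUSTOMER-PREFIX> is the customer short name ):

grep -rni '<CUSTOMER-PREFIX>' salt/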

Open a git PR, get it approved and merged into the master branch, then apply the changes in Salt to remove references to the old customer.

Update the salt master:

sudo salt-run fileserver.update
salt salt* state.sls salt_master --output-diff test=true

Update the FM search head and monitoring console:

salt splunk-mc-0* state.sls splunk.monitoring_console --output-diff test=true
salt fm-shared-search-0* state.sls splunk.fm_shared_search --output-diff test=true
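
test=true is only a dry run; once the diffs look correct, rerun the same states without test=true to apply them:

salt salt* state.sls salt_master --output-diff
salt splunk-mc-0* state.sls splunk.monitoring_console --output-diff
salt fm-shared-search-0* state.sls splunk.fm_shared_search --output-diff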

Disable the instances in the Monitoring Console webpage ( how to delete the instances? ), then save the changes. Verify the search peers have been removed from the distributed search on the FM Shared Search head and the monitoring console.

https://splunk-mc.pvt.xdr.accenturefederalcyber.com/en-US/manager/search/search/distributed/peers?sort_dir=desc&sort_key=health_status&search=Down&count=100&api.mode=extended

Deactivate OKTA Apps

Each customer should have three applications: Splunk CM, Splunk HF, and Splunk SH. Deactivate each app, then delete it.

Moose HF Cleanup

Remove the account from the Moose HF AWS app.

  • Log into moose-splunk-hf
  • Go to Apps -> Splunk Add-on for AWS
  • Go to Inputs, filter on the customer prefix, then disable and remove each input.
  • Go to Configuration -> IAM Role and remove the role for the account.

Qualys Cleanup

Go to Qualys Dashboard -> Cloud Agent -> Activation Keys. Disable the key; it is not clear how to delete it. Perhaps you have to wait a period of time?

Tenable Cleanup

The vulnerability data will age out over time. The agents will be automatically removed from the Nessus Manager after 30 days, or they can be manually deleted.

Archive Customer Git Repos

Do this after the Salt Master gitfs has been updated to avoid any error messages.

Git > Settings > Options > Archive this repository. Archive both customer repos: msoc--cm and msoc--pop.

Clean Up Vault Passwords

  • Delete engineering/customer_slices/
  • Disable onboarding-
  • Remove the customer from the Vault variables portal/lambda_sync_env
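
If the secrets live in a KV engine, a hedged sketch with the Vault CLI ( the mount and exact path suffixes are assumptions; <CUSTOMER-PREFIX> is the customer short name ):

vault kv list engineering/customer_slices/
vault kv delete engineering/customer_slices/<CUSTOMER-PREFIX>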

Report the Decommissioned Hosts to the ISSO/AFCC Team

Look in the Splunk inventory for the Splunk names, or look for emails indicating the logs are no longer being sent.

TO: Accenture Federal Cyber Center <afcc@accenturefederal.com>; Nair, Asha A. <asha.a.nair@accenturefederal.com>

SUBJECT: Decommissioned XDR Devices

Hello,

The below instances have been decommissioned from the environment and should be removed from any reports or inventories. 

<list full splunk UF name of instances>


This lookup also needs to be edited. https://moose-splunk.pvt.xdr.accenturefederalcyber.com/en-US/app/SplunkEnterpriseSecuritySuite/ess_lookups_edit?namespace=SA-IdentityManagement&transform=simple_asset_lookup

Request AWS account be fully terminated

Create a Jira ticket for Osman Soofi ( osman.soofi@accenturefederal.com ) to submit a CAMRS disconnect ticket in the Jira PMO project. IMPORTANT: After the account is closed, AWS allows users to log in for 90 days.

Summary: Decommission CAMRS AWS Account

Hello,

Please inform the CAMRS team that these AWS Accounts for <CUSTOMER-PREFIX> are no longer needed and can be decommissioned.

<AWS-ACCOUNT-ID-GOV>
<AWS-ACCOUNT-ID-COMMERCIAL>

Remove From Browser Plugin

Inform the team to remove the AWS account from the browser plugin. Post this in xdr-engineering-actual.

<CUSTOMER-PREFIX> has been decommissioned. Please manually remove the customer config from your AWS Extend Switch Roles browser plugin, and pull the updated AWS config from the infrastructure-notes git repo now that the customer has been removed.

Update the AWS Configuration

Update files/config in the infrastructure-notes repo.

Mark the AWS account as decommissioned in the wiki once the email to decommission the AWS account has been sent. Keep the AWS account numbers in case they are needed in the future. https://github.xdr.accenturefederalcyber.com/mdr-engineering/msoc-infrastructure/wiki/Cloud-Accounts

Remove Account from Terraform

IMPORTANT: After the account is closed, AWS allows users to log in for 90 days. After the AWS account has been decommissioned by the CAMRS team, run terragrunt destroy in the 005-iam folder to prevent users from assuming a role into the account. Then remove the mdr-prod- folder from the xdr-terraform-live git repo.
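
A minimal sketch of that final cleanup, assuming the account folder is named mdr-prod-<CUSTOMER-PREFIX> ( the suffix is left blank above, so treat the name as an assumption ):

cd prod/aws-us-gov/mdr-prod-<CUSTOMER-PREFIX>/005-iam
terragrunt destroy
cd ../..
git rm -r mdr-prod-<CUSTOMER-PREFIX>
git commit -m 'Remove decommissioned customer account folder'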