# Phantom Notes

Stop and start the services:

```
/opt/phantom/bin/stop_phantom.sh
/opt/phantom/bin/start_phantom.sh
```

- Postgres log location: `/opt/phantom/data/db/pg_log`
- Phantom nginx log location: `/var/log/nginx`
- Phantom log location: `/var/log/phantom`
- Restart just pgbouncer: `systemctl restart pgbouncer`

## How Do I View Moose Events in Phantom?

Drop down > Cases > Filter on TENANT > MOOSE

## How Do I Promote an Analyst to Shift Lead?

When the user is already in Splunk SOAR, making the change in Okta should be sufficient. If Splunk SOAR hates you, it might not be enough; verify the role assignment directly in the database with `psql -d phantom -h /tmp` (as the postgres user):

```
phantom=# select c.is_active, c.username, a.name
          from role a
          join role_users b on a.id=b.role_id
          join ph_user c on b.phuser_id=c.id
          where c.username='duane.waddle' and c.is_active='t';
 is_active |   username   |      name
-----------+--------------+-----------------
 t         | duane.waddle | Administrator
 t         | duane.waddle | MDR Shift Leads
(2 rows)
```

## Installation in Unprivileged + FIPS Mode (08/2022)

FOLLOW the steps in the DOCS!

- Ensure FIPS is enabled: `cat /proc/sys/crypto/fips_enabled`
- Set the home directory to `/opt/splunksoar` and the port to 8443.
- Download the .tar.gz file to your home dir and extract it as your XDR user.
- Run `sudo splunk-soar/soar-prepare-system --no-spinners --splunk-soar-home /opt/splunksoar --https-port 8443`
  - No GlusterFS (we are not using an external file share)
  - No ntpd (we are using chronyd)
  - Yes to basic firewall
  - No to https redirect (let the LB do that)
  - Yes to create the phantom user
- Run `splunk-soar/soar-install --splunk-soar-home /opt/splunksoar --https-port 8443 --ignore-warnings`; ignore warnings about space issues if in TEST.
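The FIPS check is just reading a kernel flag from /proc. A minimal sketch of turning it into a pre-install guard; `check_fips` and the temp-file demo are my own illustration (not part of the Splunk SOAR installer), and on a real host you would pass `/proc/sys/crypto/fips_enabled`:

```shell
# Hypothetical guard: refuse to proceed unless FIPS mode is active.
# The flag file contains "1" when FIPS is enabled.
check_fips() {
    if [ "$(cat "$1" 2>/dev/null)" = "1" ]; then
        echo "FIPS enabled"
    else
        echo "FIPS disabled"
    fi
}

# Demo against a temp file so the sketch runs anywhere.
tmp=$(mktemp)
echo 1 > "$tmp"
check_fips "$tmp"     # -> FIPS enabled
rm -f "$tmp"
```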
## Phantom pgbouncer Issue (legacy)

```
[gc-prod]root@phantom-0:~:# ps -ef | grep pgbouncer | wc -l
96
```

`/var/log/pgbouncer/pgbouncer.log`:

```
2021-06-03 02:18:20.981 UTC [3034] WARNING C-0x7f66adca0ae8: (nodb)/(nouser)@unix(11235):6432 pooler error: no more connections allowed (max_client_conn)
```

`/var/log/phantom/wsgi.log`:

```
/var/log/phantom/wsgi.log.4:psycopg2.OperationalError: ERROR: no more connections allowed (max_client_conn)
```

There's a config file, `/etc/pgbouncer/pgbouncer.ini`. I bumped some limits in there last night from 750 to 2000:

```
[gc-prod]root@phantom-0:~:# egrep "750|2000" /etc/pgbouncer/pgbouncer.ini
;max_client_conn = 750
max_client_conn = 2000
;default_pool_size = 750
default_pool_size = 2000
;max_db_connections = 750
;max_user_connections = 750
max_db_connections = 2000
max_user_connections = 2000
```

## Salesforce App Needs Outbound 443

When setting up a new "asset" (Salesforce instance), Greg has to go through a "Connectivity Test" that uses OAuth, which does not work well through our outbound proxy. While he's doing this (it's only needed during setup/test), go into the AWS console in legacy-mdr-prod and update sg-04de5c2a4a1ce3445: add an outbound rule to 0.0.0.0/0 port 443. Remove it when he's done.

## TLS Version 1.1 Vuln

Phantom (v4.9) is allowing TLS version 1.1. This is a Qualys finding.

`openssl s_client -connect 10.80.101.221:443 -tls1_1`

`grep ssl_protocols /etc/nginx/conf.d/default.conf`

## 2021-04-21 Backup Issue - FTD

While trying to migrate to GovCloud, backups were unable to be taken.
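(Spoiler from the FIX further down: the WAL errors trace back to `/opt/phantom/data/ibackup` not being owned by postgres.) A sketch of the underlying check — scan a path for files not owned by the user the service runs as. `expected_user` and the temp-dir demo are illustrative; on the host you'd scan `/opt/phantom/data/ibackup` against `postgres`:

```shell
# List anything under a path not owned by the expected service user -
# exactly the condition a `chown -R` fix resolves.
expected_user=$(id -un)     # on the Phantom host this would be "postgres"
repo=$(mktemp -d)
touch "$repo/archive.info"
find "$repo" ! -user "$expected_user" -print   # prints nothing when ownership is correct
echo "ownership scan done"
rm -rf "$repo"
```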
```
$ sudo phenv python3 /opt/phantom/bin/ibackup.pyc --setup
[pid: 26829] [12/Apr/2021 16:30:09] ibackup.py:293 INFO: Running ibackup.pyc - details will be logged to /var/log/phantom/backup/ibackup_2021-04-12T16:30:09.231947Z.log
Setup will temporarily stop phantom
If you wish to continue, enter yes to proceed: yes
[pid: 26829] [12/Apr/2021 16:30:12] phproc.py:146 WARNING: unable to open log file '/var/log/phantom/backup/phantom-stanza-create.log': Permission denied
NOTE: process will continue without log file.
[pid: 26829] [12/Apr/2021 16:31:14] phproc.py:146 WARNING: ERROR [082]: : could not find WAL segment 00000001000000EC00000054 after 60 second(s)
HINT: is archive_command configured correctly?
HINT: use the check command to verify that PostgreSQL is archiving.
ERROR [082]: : could not find WAL segment 00000001000000EC00000054 after 60 second(s)
HINT: is archive_command configured correctly?
HINT: use the check command to verify that PostgreSQL is archiving.
WAL segment 00000001000000EC00000054 did not reach the archive:11-1
HINT: Check the archive_command to ensure that all options are correct (especially --stanza).
HINT: Check the PostgreSQL server log for errors.
Traceback (most recent call last):
  File "../setup/ibackup.py", line 377, in <module>
  File "../setup/ibackup.py", line 319, in main
  File "../pycommon/phantom_common/backup/backup_manager.py", line 1204, in setup
  File "../pycommon/phantom_common/backup/pgbackrest.py", line 576, in setup
  File "../pycommon/phantom_common/backup/pgbackrest.py", line 607, in create
  File "../pycommon/phantom_common/backup/pgbackrest.py", line 706, in _run_pgbackrest_cmd
  File "../pycommon/phantom_common/phproc.py", line 249, in run_command
  File "../pycommon/phantom_common/phproc.py", line 157, in communicate
phantom_common.phproc.PhCalledProcessError: Command 'pgbackrest --stanza=phantom --config=/opt/phantom/etc/pgbackrest.conf --log-level-console=info --log-level-file=info check' returned non-zero exit status 82.
Output:
2021-04-12 16:30:12.084 P00 INFO: check command begin 2.15: --config=/opt/phantom/etc/pgbackrest.conf --log-level-console=info --log-level-file=info --log-path=/var/log/phantom/backup --pg1-path=/opt/phantom/data/db --pg1-socket-path=/tmp --repo1-path=/opt/phantom/data/ibackup/repo/pg --stanza=phantom
2021-04-12 16:31:14.140 P00 INFO: check command end: aborted with exception [082]
Error output:
ERROR [082]: : could not find WAL segment 00000001000000EC00000054 after 60 second(s)
HINT: is archive_command configured correctly?
HINT: use the check command to verify that PostgreSQL is archiving.
ERROR [082]: : could not find WAL segment 00000001000000EC00000054 after 60 second(s)
HINT: is archive_command configured correctly?
HINT: use the check command to verify that PostgreSQL is archiving.
WARN: WAL segment 00000001000000EC00000054 did not reach the archive:11-1
HINT: Check the archive_command to ensure that all options are correct (especially --stanza).
HINT: Check the PostgreSQL server log for errors.
```

Logfile `/var/log/phantom/backup/ibackup_2021-04-12T16:03:51.468153Z.log`:

```
$ sudo cat /var/log/phantom/backup/ibackup_2021-04-12T16:03:51.468153Z.log
[pid: 8104] [12/Apr/2021 16:03:51] ibackup.py:288 DEBUG: Command: /opt/phantom/bin/ibackup.pyc --setup
[pid: 8104] [12/Apr/2021 16:03:51] ibackup.py:289 DEBUG: Initializing BackupManager
[pid: 8104] [12/Apr/2021 16:03:51] ibackup.py:293 INFO: Running ibackup.pyc - details will be logged to /var/log/phantom/backup/ibackup_2021-04-12T16:03:51.468153Z.log
[pid: 8104] [12/Apr/2021 16:03:54] backup_manager.py:1177 INFO: Exiting setup
```

FIX:

```
chown -R postgres: /opt/phantom/data/ibackup
```

## Migration to GovCloud

### Prep / Installation Notes

1. Stand it up:

```
cd ~/xdr-terraform-live/test/aws-us-gov/mdr-test-c2/250-phantom
terragrunt apply
```
2. Highstate it:

```
ssh gc-dev-salt-master
salt 'phantom-0.pvt.xdr.accenturefederalcyber.com' state.highstate --output-diff
salt 'phantom-0.pvt.xdr.accenturefederalcyber.com' state.highstate --output-diff
salt 'phantom-0.pvt.xdr.accenturefederalcyber.com' pkg.upgrade
exit
```

3. Disable FIPS:

```
ssh gc-dev-phantom-0
sudo yum remove dracut-fips*
sudo cp -p /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).backup.beforeremovingfips
sudo dracut -f
sudo vim /etc/default/grub     # Change "fips=1" to "fips=0"
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo shutdown -r now
cat /proc/sys/crypto/fips_enabled
```

4. Enable the optional repo:

```
sudo vim /etc/yum.repos.d/redhat-rhui.repo    # Find rhel-7-server-rhui-optional-rpms and change 'enabled' to 1
sudo yum update
```

Add the phantom user to cron.allow (see salt/cis/rhel7-3_1_1/parameters/id/phantom-0.pvt.xdr.accenturefederalcyber.com.yaml in the xdr-cis-benchmarks git repo): `vim /etc/cron.allow` and add `phantom`.

5. Install the installer.

NOTE: To install a particular version, you have to use the offline installer steps, available here: https://docs.splunk.com/Documentation/Phantom/4.10.2/Install/InstallOffline

NOTE: See BRAD's WAY below.

```
ssh dev-phantom
# Find the current version:
sudo yum list installed | grep phantom.x86
```

```
# dev
VERSION=4.9.37880
# prod
VERSION=4.9.35731
wget https://download.splunk.com/products/phantom/release/linux/${VERSION}/phantom_offline_setup_rhel7-${VERSION}.tgz
sudo mkdir -p /usr/local/src/upgrade-${VERSION}
sudo chmod 755 /usr/local/src/upgrade-${VERSION}
cd /usr/local/src/upgrade-${VERSION}
sudo tar xvzf ~/phantom_offline_setup_rhel7-${VERSION}.tgz
cd phantom_offline_setup_rhel7-${VERSION}
sudo ./phantom_offline_setup_rhel.sh install    # answer 'y'
```

BRAD's WAY: don't use the offline installer; just install the specific version directly with the RPM-based installer.
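For the offline route, the tarball URL is derived mechanically from the version string. A sanity-check sketch (echo only; nothing is downloaded, and the URL pattern is copied from the wget above):

```shell
# Build the offline installer URL for a given version.
VERSION=4.9.37880
tarball="phantom_offline_setup_rhel7-${VERSION}.tgz"
url="https://download.splunk.com/products/phantom/release/linux/${VERSION}/${tarball}"
echo "$url"
```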
```
/opt/phantom/bin/phantom_setup.sh install --version=4.10.3.51237-1 --no-space-check
```

## If you're installing, you're good. If you're migrating, continue:

6. Enable cross-system ssh:

```
ssh gc-dev-phantom-0
ssh-keygen
cat ~/.ssh/id_rsa.pub
exit
ssh dev-phantom
mkdir .ssh
cat > .ssh/authorized_keys    # paste from above, then ctrl-d
exit
ssh gc-dev-phantom-0
ssh phantom.msoc.defpoint.local    # validate that you can log in
```

7. Run the initial backup:

```
ssh dev-phantom
time sudo phenv python3 /opt/phantom/bin/ibackup.pyc --backup
sudo ls -l /opt/phantom/data/backup/
```

8. Copy to the new system:

```
ssh gc-dev-phantom-0
sudo mkdir -p /opt/phantom/data/restore
# copy only changed files
time sudo rsync -r --progress \
    -e "ssh -i /home/frederick_t_damstra/.ssh/id_rsa" \
    --rsync-path="sudo rsync" \
    frederick_t_damstra@phantom.msoc.defpoint.local:/opt/phantom/data/backup/ \
    /opt/phantom/data/restore/
sudo chown -R postgres:postgres /opt/phantom/data/backup /opt/phantom/data/restore
sudo ls -l /opt/phantom/data/restore
```

9. Prep the new system for restore:

```
# set up backups (required for restore)
# This will fail the first time, but it has to be done
sudo phenv python3 /opt/phantom/bin/ibackup.pyc --setup
# fix errors
sudo chown -R postgres: /opt/phantom/data/ibackup    # fixes the WAL error
sudo chmod 644 /opt/phantom/etc/pgbackrest.conf      # second fix for the WAL error
sudo find /opt/phantom/data/ -type d -exec chmod o+rx {} \;
sudo find /opt/phantom/data/db -type d -exec chmod o-rx {} \;
# Disable WAL archiving
sudo vim /opt/phantom/data/db/postgresql.phantom.conf    # change 'archive_mode' to 'off'
# restart postgres
sudo /opt/phantom/bin/phsvc restart postgresql-11
# set up backups (required for restore) - should work this time
sudo phenv python3 /opt/phantom/bin/ibackup.pyc --setup
```

### Final cutover
1. Stop phantom and create the last backup:

```
ssh dev-phantom
time sudo phenv python3 /opt/phantom/bin/ibackup.pyc --backup
sudo ls -l /opt/phantom/data/backup/
sudo /opt/phantom/bin/stop_phantom.sh
sudo systemctl disable phantom_watchdogd
exit
```

2. Copy the backup across:

```
ssh gc-dev-phantom-0
time sudo rsync -r --progress \
    -e "ssh -i /home/frederick_t_damstra/.ssh/id_rsa" \
    --rsync-path="sudo rsync" \
    frederick_t_damstra@phantom.msoc.defpoint.local:/opt/phantom/data/backup/ \
    /opt/phantom/data/restore/
sudo chown -R postgres:postgres /opt/phantom/data/backup /opt/phantom/data/restore
sudo ls -l /opt/phantom/data/restore
```

3. Restore the backup:

```
cd /opt/phantom/bin/
sudo ls -l /opt/phantom/data/restore/
# Specify the latest backup file:
time sudo phenv python3 /opt/phantom/bin/ibackup.pyc --restore /opt/phantom/data/restore/TODO
```

4. Reset the admin password:

```
sudo bash
cd /opt/phantom/www
phenv python3 manage.py changepassword admin    # set password
```

5. Restart phantom:

```
sudo /opt/phantom/bin/stop_phantom.sh
sudo /opt/phantom/bin/start_phantom.sh
```
6. Fix settings: log in to the website, go to administration->app settings, update the proxy to `http://proxy.pvt.xdrtest.accenturefederalcyber.com:80`, and click save changes.

Then go to administration->user management->authentication->saml2 and record the original values:

```
SSO Url:   https://mdr-multipass.okta.com/app/mdrmultipass_mdrphantom_1/exk1m6x7ri1WgvXCB297/sso/saml
New URL:
Issuer ID: http://www.okta.com/exk1m6x7ri1WgvXCB297
New ID:
Base URL:  https://phantom.msoc.defpoint.local
New URL:
Metadata:
MIIDqjCCApKgAwIBAgIGAWrbB00GMA0GCSqGSIb3DQEBCwUAMIGVMQswCQYDVQQGEwJVUzETMBEG
A1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzENMAsGA1UECgwET2t0YTEU
MBIGA1UECwwLU1NPUHJvdmlkZXIxFjAUBgNVBAMMDW1kci1tdWx0aXBhc3MxHDAaBgkqhkiG9w0B
CQEWDWluZm9Ab2t0YS5jb20wHhcNMTkwNTIxMTUzMzA5WhcNMjkwNTIxMTUzNDA5WjCBlTELMAkG
A1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDTAL
BgNVBAoMBE9rdGExFDASBgNVBAsMC1NTT1Byb3ZpZGVyMRYwFAYDVQQDDA1tZHItbXVsdGlwYXNz
MRwwGgYJKoZIhvcNAQkBFg1pbmZvQG9rdGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAjVWbGnlG3G858/K0b8jVw5OFAef+eFWNmjD6eAfGMgOzQ3ZhJmZ5TAFxaUaH15Q7Vi10
p/zKHo8rZAurh31r35ED9JT+45J/IsDtOUK55quSEeh4d0Ih7NTBXgP5yEsSa7YVqBL4mI450JRr
8BTTfatUP0/TRxSx92QxlNhLi0jYmGtgzQ/3TeTEWIzZKntTkX7Arn42Dt7JkCdI+ElEfcNQYV3l
//Olv0TEVFasbmIb8iNgVOi+ssq5UyqAjoWYJOc2VvkerUE9FDs7DkC3S1/sXR72vpTfXpz1fW+x
/aHJjgwXgB2SW9fZk8CQjqEI5s6QCMBsHSOhU+xDkbzAnwIDAQABMA0GCSqGSIb3DQEBCwUAA4IB
AQCKqio8wrvhbkGRptCD6sEnRmC7/NBE133tIv7Z3R/Cve8DgO3GcKKrCUh+gZJLFV3eWw95FTWW
MY7KrYEd353mKP8hL7mEc+qSmWuwfFw+6JePHsNDiFKCY2PfzbWgsG9nX7T6H7n8cn2hzVn4gBmb
8TAXei+x0id9h24oSvtISZhMg+ED72c0BbO4wPZOQeisXPO4vugdRdbyB5wvIU2ILHb7WJnDNSai
XSHqKUBigvQua2KSjh+GW7fMlvRbDkYxq3okj6sZlyCLN79IM4NZgKfCC4t8FoUA9ofIDUV9u70G
+Utb6eeVogPzFlv4LuMRAEKbnV9G3yyDbxYsEcpY
urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
```

Update SAML with settings from the SAML provider metadata (available from Okta: application, login settings).

Log out and log
back in via Okta. Run the backup prep.

---

I got "500: Server Error". Things I did:

- Tried accepting the EULA at https://phantom.pvt.xdrtest.accenturefederalcyber.com/eula/
- Double-checked the SAML config
- Set the hostname and fqdn in administration->company settings
- Started phantom:

```
sudo /opt/phantom/bin/stop_phantom.sh
sudo /opt/phantom/bin/start_phantom.sh
```
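Cutover step 3 above says to specify the latest backup file by hand; picking the newest file in the restore directory can be scripted. A sketch demoed against a temp dir of dummy files (on the host the directory would be `/opt/phantom/data/restore`):

```shell
# Select the most recently modified file in a directory (ls -t sorts newest first).
dir=$(mktemp -d)
touch -t 202104010000 "$dir/backup_old.tgz"
touch -t 202104120000 "$dir/backup_new.tgz"
latest=$(ls -t "$dir" | head -n 1)
echo "$latest"    # -> backup_new.tgz
rm -rf "$dir"
```

You would then pass `"$dir/$latest"` as the argument to `ibackup.pyc --restore`.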