Browse Source

Major Clean UP

Brad Poulton 5 years ago
parent
commit
d53fe8445e

+ 170 - 0
AWS NVME Notes.md

@@ -0,0 +1,170 @@
+# AWS NVME Notes
+
+In Nitro-based instances (m5, i3en, etc.), AWS presents all drives as NVMe volumes. This
+provides better I/O performance, but can make life complicated when mapping drives from
+Terraform or the web console to the device the machine actually sees.
+
+## Determining EBS Mapping
+The EBS mapping is stored in the NVMe metadata. You can grab it by running:
+```
+nvme id-ctrl --raw-binary /dev/nvmeXXX | cut -c 3073-3104 | sed 's/ //g'
+```
+
+## Useful Scripts
+
+### /usr/local/bin/ebs-nvme-mapping:
+```
+#!/bin/bash
+#
+# Creates symbolic links from the nvme drives to /dev/sdX or /dev/xvdX.
+# This may not be compatible with instances that have local storage, such
+# as i3's
+PATH="${PATH}:/usr/sbin"
+
+for blkdev in $( nvme list | awk '/^\/dev/ { print $1 }' ) ; do
+	mapping=$(nvme id-ctrl --raw-binary "${blkdev}" 2>/dev/null | cut -c3073-3104 | sed 's/ //g')
+	if [[ "${mapping}" == xvd* ]]; then
+		( test -b "${blkdev}" && test -L "/dev/${mapping}" ) || ln -s "${blkdev}" "/dev/${mapping}"
+		for partition in ${blkdev}p* ; do
+			test -b "${partition}" || continue
+			ln -s "${partition}" "/dev/${mapping}${partition/${blkdev}p/}"
+		done
+	fi
+done
+```
+
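If the /dev/xvdX links need to exist before filesystems are mounted by those names, one option is a oneshot systemd unit that calls the script early in boot. This unit is a sketch and is not part of the original setup:

```
# Hypothetical unit (not from the original notes): create the
# /dev/xvdX symlinks before local filesystems are mounted.
[Unit]
Description=Create /dev/xvdX symlinks for NVMe EBS volumes
DefaultDependencies=no
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ebs-nvme-mapping

[Install]
WantedBy=local-fs-pre.target
```

Enable it with `systemctl enable` as usual.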
+### /root/cf/define_logical_volume.sh:
+```
+#!/bin/bash
+#
+# Provides a simple way to create and format logical volumes for
+# nitro-based AWS instances based off their configured mount point.
+# NOTE: It does not create the fstab entry
+#
+# Syntax: define_logical_volume.sh <LABEL> <VOLGROUP> <LOGICALVOL> <DEVICE>
+# Sample: define_logical_volume.sh SPLUNKFROZEN vg_frozen lv_frozen xvdj
+LABEL=$1
+VOLGRP=$2
+LOGVOL=$3
+DEVICE=$4
+# Iterate over all the nvme devices, looking for those in /dev
+for blkdev in $( nvme list | awk '/^\/dev/ {print $1 }' ); do
+	# For each device grab the desired device name from the vendor data
+	mapping=$(nvme id-ctrl --raw-binary "${blkdev}" 2>/dev/null | cut -c3073-3104 | sed 's/ //g')
+	# If the desired device name is one of those currently requested
+	if echo "$*" | grep -qE "\<${mapping}\>"; then
+		# Repoint our device variable to the real device
+		DEVICE="$blkdev"
+		# Then partition it for use
+		parted $DEVICE --script -- mklabel gpt
+		parted -a optimal $DEVICE mkpart primary 0% 100%
+		partprobe
+		sleep 1
+	fi
+done
+vgcreate $VOLGRP ${DEVICE}p1
+lvcreate -l 100%FREE -n ${LOGVOL} ${VOLGRP}
+mkfs.ext4 -L $LABEL /dev/mapper/${VOLGRP}-${LOGVOL}
+```
+ 
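Since define_logical_volume.sh stops short of fstab, here is a hedged follow-up sketch; the mount point and options below are assumptions for illustration, not from the original scripts:

```
#!/bin/bash
# Hypothetical follow-up to define_logical_volume.sh: add the fstab
# entry it intentionally omits, then mount by label. The mount point
# here is only an example.
LABEL=SPLUNKFROZEN
MOUNTPOINT=/opt/splunk/frozen
mkdir -p "${MOUNTPOINT}"
# Append the entry only if the label is not already present
grep -q "^LABEL=${LABEL}[[:space:]]" /etc/fstab || \
	echo "LABEL=${LABEL} ${MOUNTPOINT} ext4 defaults,nofail 0 2" >> /etc/fstab
mount "${MOUNTPOINT}"
```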
+### /root/cf/define_swap_volume.sh:
+```
+#!/bin/bash
+# Create a simple way to prepare and initialize a swap
+# volume on AWS nitro based instances.
+# NOTE: Unlike create_logical_volume, this script DOES
+#       create an fstab entry.
+#
+# Syntax: define_swap_volume.sh <LABEL> <DEVICE>
+# Sample: define_swap_volume.sh SWAP xvdb
+LABEL=$1
+DEVICE=$2
+# Iterate over all the nvme devices, looking for those in /dev
+for blkdev in $( nvme list | awk '/^\/dev/ {print $1 }' ); do
+	# For each device grab the desired device name from the vendor data
+	mapping=$(nvme id-ctrl --raw-binary "${blkdev}" 2>/dev/null | cut -c3073-3104 | sed 's/ //g')
+	# If the desired device name is one of those currently requested
+	if echo "$*" | grep -qE "\<${mapping}\>"; then
+		# Repoint our device variable to the real device
+		DEVICE="$blkdev"
+		# Then partition it for use
+		parted $DEVICE --script -- mklabel gpt
+		parted -a optimal $DEVICE mkpart primary 0% 100%
+		partprobe
+		sleep 1
+		# Use the partition just created, not the whole device
+		mkswap -L $LABEL ${DEVICE}p1
+		swapon ${DEVICE}p1
+		echo "LABEL=$LABEL         swap                swap   defaults,nofail       0       0" >> /etc/fstab
+	fi
+done
+```
+ 
+### /usr/local/sbin/initialize_nvme_storage.sh:
+```
+#!/bin/bash
+#
+# This needs to be called every boot, before Splunk starts. It
+# initializes local storage as RAID-0.
+#
+# NOTE: This determines which NVMe drives are local based on
+#       whether they are 2.5TB or 7.5TB! THIS IS NOT A GOOD
+#       WAY, but works in a pinch. If you create 2.5TB EBS
+#       volumes, you're in for some trouble.
+if [ ! -b /dev/md0 ]; then
+	# We are fresh or on new hardware. Recreate the RAID.
+	rm -f /etc/mdadm.conf 2> /dev/null
+	DEVICES=$( nvme list | grep -E "[72].50  TB" | awk '{print $1}' )
+	NUM=$( echo ${DEVICES} | wc -w )
+	mdadm --create --force --verbose /dev/md0 --level=0 --name=SMARTSTORE_CACHE --raid-devices=${NUM} ${DEVICES}
+	mkfs -t xfs /dev/md0
+	mkdir -p /opt/splunk/var/lib/splunk 2> /dev/null
+	chown splunk:splunk /opt/splunk 2> /dev/null
+	mdadm --verbose --detail --scan | tee -a /etc/mdadm.conf
+fi
+# Alternatively, could be mounted to /opt/splunk/var
+mount /dev/md0 /opt/splunk/var/lib/splunk
+```
+ 
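The size-based match above is fragile, as the NOTE admits. A possibly safer sketch, keying off the NVMe model string instead, is below. This is an alternative to verify against `nvme list` output on your instance types, not what the script currently does; EBS volumes report `Amazon Elastic Block Store` and instance-store drives report `Amazon EC2 NVMe Instance Storage`:

```
#!/bin/bash
# Hypothetical alternative device selection: match the NVMe model string
# instead of the drive size. Verify the model names on your instances.
DEVICES=$( nvme list | awk '/Amazon EC2 NVMe Instance Storage/ {print $1}' )
NUM=$( echo ${DEVICES} | wc -w )
echo "Found ${NUM} instance-store device(s): ${DEVICES}"
```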
+### /etc/systemd/system/ephemeral-init.service:
+```
+# Configures the splunk initialization script to run on
+# boot.
+# Because splunk is started by an init.d script, we
+# cannot set dependencies in here, and instead must
+# also modify the splunk init script.
+[Unit]
+#DefaultDependencies=no
+#After=sysinit.target local-fs.target
+#Before=base.target
+RequiresMountsFor=/opt/splunk
+
+[Service]
+ExecStart=/usr/local/sbin/initialize_nvme_storage.sh
+
+[Install]
+WantedBy=default.target
+```
+
+### /etc/init.d/splunkd
+The splunk init script needs to be modified to wait for
+the instance storage RAID array to become active. To do
+so, modify the `splunk_start()` function in the script
+to begin with:
+```
+# WaitForMount BEGIN
+  echo Waiting for mount point...
+  count=0
+  while ! mountpoint -q /opt/splunk/var/lib/splunk
+  do
+    echo "Mount point not ready. Sleep 1 second..."
+    sleep 1
+    count=`expr $count + 1`
+    if test $count -eq 90
+    then
+      echo "timed out!"
+      exit 1
+    fi
+  done
+# WaitForMount END
+  echo Starting Splunk...
+```
+

+ 0 - 15
AWS New User Setup Notes.md

@@ -1,15 +0,0 @@
-AWS New User Setup Notes.md
-
-
-
-https://gpgtools.org/ download and install
-use gpg keychain to generate pub/private keys
-something something terraform
-echo "wcFMA2sXDKYLpzaU<redacted>bf6clQ043oDkHIrcWK509UIy5GUpEqBV/WLmuCMHkXUgnxy12HY8qBErF58vB7/VXs5pCKp4SDYWEtK73fKmYZ5wJDW6j6OHkpYI4USZXjVYb+Utt56Qprk4KiT6VlFNNPo00r2YDABDdtxPJS3N9REzHqp+7oR2SQkiyEhcF3ZwILk2fH4mc1VQUiFu68RCqbt+QfmDt3OHIRZVPvrS4AHkCbj2fdgkbAaRMJ/21TBn8OE8WuDR4NHh5w/gWeK5m6754DzkjVLxDpsvPG2UR9ErwANEo+BI4upil2vgT+S63PIVsAmTew/7QpPavttP4rUBM47h5cMA"|base64 -D  | gpg -d
-    
-
-Export in one line and base64
-gpg --export | base64
-
-    
-

+ 13 - 1
AWS Notes.md

@@ -61,7 +61,19 @@ systemctl start amazon-ssm-agent
 
 ----------------------------------------------
 
-## config and credentials files
+## AWS User Password Distribution (legacy)
+
+https://gpgtools.org/ download and install
+use gpg keychain to generate pub/private keys
+something something terraform
+echo "wcFMA2sXDKYLpzaU<redacted>bf6clQ043oDkHIrcWK509UIy5GUpEqBV/WLmuCMHkXUgnxy12HY8qBErF58vB7/VXs5pCKp4SDYWEtK73fKmYZ5wJDW6j6OHkpYI4USZXjVYb+Utt56Qprk4KiT6VlFNNPo00r2YDABDdtxPJS3N9REzHqp+7oR2SQkiyEhcF3ZwILk2fH4mc1VQUiFu68RCqbt+QfmDt3OHIRZVPvrS4AHkCbj2fdgkbAaRMJ/21TBn8OE8WuDR4NHh5w/gWeK5m6754DzkjVLxDpsvPG2UR9ErwANEo+BI4upil2vgT+S63PIVsAmTew/7QpPavttP4rUBM47h5cMA"|base64 -D  | gpg -d
+    
+
+Export in one line and base64
+gpg --export | base64
+
+
+## Config and credentials files
 2020-05-07
 
 To set up appropriate aliases for terraform:

+ 0 - 7
Annual FedRAMP Assessment Answers/Readme.md

@@ -1,7 +0,0 @@
-# Annual Assessment Answers
-
-If you are here, then it is that time of year again to get poked and proded by Coalfire. This section of the notes is dedicated to the answers given to the auditors. This can be used as a reference for future years. Becuase the audit will occur every year, it is possible that these notes might save the future engineer some time. Not all controls will be relevant to the engineers, so only notes for relevant controls will be included here. Included in the notes, should be how to preform the assessment of the control. 
-
-## Rules
-
-1. All files MUST follow the folder and naming scheme. Each control family as its own folder. Inside the folder, each control has a file. Subcontrols are contained inside the parent control's file. For example, the control, AC-2(1), will be located in the AC folder in the file AC-2.md.  

+ 70 - 0
Asset Inventory Notes.md

@@ -0,0 +1,70 @@
+# Asset Inventory
+
+The XDR asset inventory is a set of scripts to gather data about XDR assets and store them in the MOOSE KV Store. From there, the data is generated into a report for the compliance team and converted to a CSV (via a saved search) for ES purposes.
+
+## Code
+
+At present, the code is stored as part of the [msoc_infrastructure](https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/tree/master/salt/fileroots/salt_master/files/xdr_asset_inventory) git project.
+
+Code is written in Python 3 and distributed to the salt-master servers via a salt state.
+
+This app is supported via the [SA-Moose spunk app](https://github.mdr.defpoint.com/MDR-Content/SA-moose). See [`collections.conf`](https://github.mdr.defpoint.com/MDR-Content/SA-moose/blob/master/default/collections.conf) for all fields, and [`FIELDS.md`](https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/salt_master/files/xdr_asset_inventory/FIELDS.md) for field descriptions.
+
+## Overview
+
+There are two scripts `gather_aws.py` and `gather_salt.py`. Each runs separately, and gathers data from the respective source. It is assumed that future scripts will be added for additional sources of data.
+
+Each script operates independently. It:
+  1) Gathers the information from its data source.
+  2) Grabs the existing record, if any, from the Splunk KV Store
+  3) Combines the information. For AWS, the data from AWS wins; for Salt, the data already present in the KV store wins (see [Bugs and Known Issues]).
+
+## Accessing in Splunk
+
+The data may be accessed in Moose by using `| inputlookup xdr_assets_lookup`.
+
+The following searches may be useful:
+### Nicely Formatted Assets
+```
+| inputlookup xdr_assets_lookup 
+| where lastseen>relative_time(now(), "-30d")
+| fieldformat firstseen=strftime(firstseen, "%+") 
+| fieldformat lastseen=strftime(lastseen, "%+") 
+| table resource name fqdn ip mac owner priority category state firstseen lastseen
+```
+
+### Assets not seen in 30 days
+```
+| inputlookup xdr_assets_lookup 
+| where lastseen<relative_time(now(), "-30d") 
+| fieldformat firstseen=strftime(firstseen, "%+") 
+| fieldformat lastseen=strftime(lastseen, "%+") 
+| table age resource name fqdn ip mac owner priority category state firstseen lastseen
+```
+
+### Assets Detected by Salt or AWS but not both
+```
+| inputlookup xdr_assets_lookup 
+| search NOT(category=salt category=aws) 
+| table name ip resource category firstseen lastseen 
+| fieldformat firstseen=strftime(firstseen, "%+") 
+| fieldformat lastseen=strftime(lastseen, "%+")
+```
+
+## The KV Store Key
+
+Every unique resource is stored. To calculate a unique key, the sha256 of a unique "resource id" is generated. This resource id is:
+* For instances in aws, the full arn of the resource (whether detected via `gather_aws.py` or `gather_salt.py`).
+* For instances in salt, a unique id of the format `salt://{salt-master}/lifecycle:{lifecycle}/{minionid}/{serialnumber}`
+
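The key derivation can be reproduced from the shell. This is a sketch; the exact encoding in the Python scripts may differ, and the ARN below is made up for illustration:

```
#!/bin/bash
# Hypothetical reproduction of a KV Store key: sha256 over the resource id.
# The ARN is illustrative only, not a real asset.
resource_id="arn:aws-us-gov:ec2:us-gov-east-1:123456789012:instance/i-0abc123def456"
key=$( printf '%s' "${resource_id}" | sha256sum | awk '{print $1}' )
echo "${key}"
```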
+## The "Category" Field
+
+The categories are always merged together with existing data. This means that categories can be added but never removed (if they need to be removed, you will need to do so manually). This allows categories to be added via salt or terraform (AWS tags), whichever is more appropriate.
+
+## AWS Permissions
+
+The `gather_aws.py` script must be able to `assumeRole` into the `service/salt-master-inventory-role` role in the account being inventoried. This is managed by Terraform.
+
+## Bugs and Known Issues
+* `gather_salt.py` always prefers the existing data. This means that salt information won't override information gathered via AWS, which is desired. But it also means that data gathered from salt will never override existing data. This is not ideal.
+* Category tags are never removed, even if they are removed from the source data.

+ 3 - 6
Atlantis Notes.md

@@ -1,3 +1,5 @@
+# Atlantis Notes
+
 Atlantis allows for applying your TF code from a github comment. 
 Atlantis Lock is NOT a Terraform Lock. Atlantis lock is only for two Git PRs. 
 
@@ -14,13 +16,8 @@ It is 100% aware of modules and workspaces so it will do the needful
 there is no such thing as a authorize list tho so ANYONE who leaves the comment "atlantis apply" will trigger it
 
 
-
-
-
-
-
 --------------------
-How to delete locks
+## How to delete locks
 
 #1 option,
 If atlantis runs a plan and doesn't unlock terraform, delete the fargate docker and rebuild it (should be a quick action)

+ 3 - 2
Collectd Notes.md

@@ -13,5 +13,6 @@ Collectd is used to tracking hard drive space and cpu usage. The data is collect
 
 Currently a bug in collectd where it writes the response from HEC into
 the system log `/var/log/messages`.  There's a github issue, 
-https://github.com/collectd/collectd/issues/3105.  Duane has a PR in to
-fix it, in theory - https://github.com/collectd/collectd/pull/3263
+https://github.com/collectd/collectd/issues/3105.  Duane has a PR in to fix it, in theory - https://github.com/collectd/collectd/pull/3263
+
+Duane's PR has been merged. 

+ 5 - 1
FedRAMP Notes.md

@@ -3,7 +3,11 @@ Okta fedRAMP notes:
 https://www.okta.com/resources/whitepaper/configuring-okta-for-fedramp-compliance/
 
 
- 
+AC-12 Jira 15 minute timeout
+
+docker exec -it  jira2 /bin/bash
+cd /var/atlassian/jira/customisations/atlassian-jira/WEB-INF
+cat web.xml | grep -5 "<session-config"
 
 CM-6(a)-2 CIS SCAP Checklist
 Update parameter field to indicate the the CIS QA checklist is stored in Qualys and is SCAP compatible. 

+ 109 - 0
Jira Notes.md

@@ -0,0 +1,109 @@
+# Jira Notes
+
+# TLS Setup for RDS
+
+First need to update `dbconfig.xml` to tell it to use TLS and what root certs to use:
+
+```
+    <url><![CDATA[jdbc:postgresql://jira.cm5pc4cb8hlj.us-east-1.rds.amazonaws.com:5432/jira?sslmode=verify-full&sslrootcert=/opt/atlassian/jira/rds-root-chain.pem]]></url>
+
+```
+
+Then in `/opt/atlassian/jira/rds-root-chain.pem` you need the root cert(s) for RDS.  Use something like this:
+
+```
+#!/bin/bash
+
+URLS="https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem"
+URLS="${URLS} https://s3.amazonaws.com/rds-downloads/rds-ca-2015-root.pem"
+URLS="${URLS} https://s3-us-gov-west-1.amazonaws.com/rds-downloads/rds-ca-us-gov-east-1-2017-root.pem"
+URLS="${URLS} https://s3-us-gov-west-1.amazonaws.com/rds-downloads/rds-ca-us-gov-west-1-2017-root.pem"
+
+rm rds-root-chain.pem
+
+for i in $URLS; do
+        echo "# `basename $i`"
+        curl -s $i
+done >> rds-root-chain.pem
+
+
+```
+See <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html>
+
+There is mention of ways, with newer versions of the PostgreSQL JDBC driver, to use the
+standard Java keystore for root certs.  This does not work with the version of the JDBC
+driver shipping with Jira version 7.13, as the class needed is missing.  (There's no
+DefaultJavaSSLFactory in `postgresql-9.4.1212.jar`)
+
+One handy trick:
+
+```
+openssl s_client -starttls postgres -connect my.postgres.host:5432 # etc...
+```
+
+
+# Proxy setup
+
+In `JIRA_HOME/bin/setenv.sh`
+
+```
+JVM_SUPPORT_RECOMMENDED_ARGS=" -Dhttp.proxyHost=proxy.msoc.defpoint.local -Dhttp.proxyPort=80 -Dhttps.proxyHost=proxy.msoc.defpoint.local -Dhttps.proxyPort=80 -Dhttp.nonProxyHosts='*.defpoint.local|localhost|127.0.0.1|169.254.169.254|*.amazonaws.com'"
+```
+
+Without this, JIRA cannot download new plugins and things from the Atlassian repositories.
+
+# Okta stuff
+
+Okta appears to have provided their own SAML implementation for JIRA, which is weird; I
+expected JIRA to have their own.
+
+<https://saml-doc.okta.com/Provisioning_Docs/Okta_Jira_Authenticator_Configuration_Guide.html>
+
+There's a config file in `/opt/atlassian/jira/atlassian-jira/WEB-INF/classes/seraph-config.xml`
+that refers to another config file `/opt/docker/okta-config-jira.xml`.  That is where the actual
+SAML magic is stored.
+
+# Load Balancer Stuff
+
+There's stuff in web.xml that tells it that it's in front of a load balancer.  The 
+proxyName and proxyPort settings matter, because they will cause redirects when
+you connect to the wrong name.  Note that in the current config, the load balancer 
+terminates TLS and sends plain HTTP back to JIRA itself.
+
+
+```
+        <Connector
+        port="8080"
+        relaxedPathChars="[]|"
+        relaxedQueryChars="[]|{}^\`&quot;&lt;&gt;"
+        maxThreads="150"
+        minSpareThreads="25"
+        connectionTimeout="20000"
+        enableLookups="false"
+        maxHttpHeaderSize="8192"
+        protocol="HTTP/1.1"
+        useBodyEncodingForURI="true"
+        redirectPort="443"
+        acceptCount="100"
+        disableUploadTimeout="true"
+        bindOnInit="false"
+        proxyName="jira.mdr-test.defpoint.com"
+        proxyPort="443"
+        scheme="https"
+        secure="true"
+    />
+```
+
+
+# Useful links
+
+<https://confluence.atlassian.com/adminjiraserver085/setting-properties-and-options-on-startup-981155694.html>
+<https://confluence.atlassian.com/jirakb/change-the-base-url-of-jira-server-in-the-database-733940375.html>
+
+
+# Undockerizing
+
+* Fix the split attachments dir
+* Move attachments out to something like EFS
+* Load balancer expects to connect to port 80, which is being forwarded by docker to 8080 inside the container.
+

+ 3 - 1
New Customer Setup.md → New Customer Setup Notes.md

@@ -1,4 +1,6 @@
-# This is in no specific order YET
+# New Customer Setup Notes
+
+***This is in no specific order YET***
 
 ## Git Repos to be made
 

+ 51 - 13
OpenVPN Notes.md

@@ -1,11 +1,13 @@
+# OpenVPN Notes
 To admin openvpn, SSH into the openvpn server and use the admin user that is located in Vault. 
 
 the admin username is openvpn
 
+Helpful...
+https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/
 
 
-----------
-Reset ldap.read
+## How to Reset ldap.read 
 
 ldap.read@defpoint.com is the okta user that openvpn uses to auth to okta. the ldap.read account's password expires after 60 days. to see when the password will expire, go to Reports -> Okta Password Health. Don't open with EXCEL!
 
@@ -30,9 +32,7 @@ and put into viscosity your password as  password,123456
 clearly your password should have no commas in it
 
 
-
--------------
-LDAP config
+### LDAP config
 
 Primary server: mdr-multipass.ldap.okta.com
 Bind Anon? NO
@@ -45,17 +45,42 @@ uid=ldap.read@defpoint.com, dc=mdr-multipass, dc=okta, dc=com
 BASE DN for Users
 ou=users, dc=mdr-multipass, dc=okta, dc=com
 
-Usernaem Attribute
+Username Attribute
 uid
 
 
-------------
-OpenVPN License
+## OpenVPN License
 
 TEST -> YOLO via web interface. This means i did not take the time to reconfigure the Salt states to handle a prod and test license. 
 
 
-## Timeout
+## CLI
+
+OpenVPN can also be configured via CLI. 
+
+The `confdba` tool is used to view the configurations DB.
+
+Show all configurations
+`/usr/local/openvpn_as/scripts/confdba -s`
+
+Show all configurations in the User database
+`/usr/local/openvpn_as/scripts/confdba -us`
+
+The `sacli` tool is used to interact with the OpenVPN API.
+
+`/usr/local/openvpn_as/scripts/sacli Version`
+
+View Configurations
+If a configuration doesn't show up, it is set to the default.
+
+`/usr/local/openvpn_as/scripts/sacli ConfigQuery`
+`/usr/local/openvpn_as/scripts/sacli UserPropGet`
+
+`/usr/local/openvpn_as/scripts/sacli ConfigQuery --pfilt=vpn.server.tls_version_min`
+
+## Timeouts
+
+https://openvpn.net/vpn-server-resources/openvpn-tunnel-session-management-options/
 
 Fedramp SC-10
 
@@ -66,11 +91,24 @@ Control with the following user/group properties:
 
 prop_isec:	(int, number of seconds over which to sample bytes in/out)
 prop_ibytes:	(int, minimum number of in/out bytes over prop_isec seconds to allow connection to continue)
-For example, to disconnect a user who fails to transmit/receive at least 75,000 bytes during a 15 minute period:
+For example, to disconnect a user who fails to transmit/receive at least 75,000 bytes during a 30 minute period:
 
 #default user applies to all users. 
-./sacli --user __DEFAULT__ --key prop_isec --value 900 UserPropPut
-./sacli --user __DEFAULT__ --key prop_ibytes --value 75000 UserPropPut
+`/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_isec --value 1800 UserPropPut`
+`/usr/local/openvpn_as/scripts/sacli --user __DEFAULT__ --key prop_ibytes --value 75000 UserPropPut`
 
 #verify the setting is in place
-./confdba -us -p __DEFAULT__
+`/usr/local/openvpn_as/scripts/confdba -us -p __DEFAULT__`
+
+## Configure TLS on OpenVPN
+
+Make a certificate like you would any other, using openssl commands and our CA.  Then to install:
+
+```
+../scripts/sacli --key "cs.openssl_ciphersuites" --value 'TLSv1.2+FIPS:kRSA+FIPS:!eNULL:!aNULL:!3DES:!SHA' ConfigPut
+../scripts/sacli --key "cs.ca_bundle" --value_file=bundle.pem ConfigPut
+../scripts/sacli --key "cs.cert" --value_file=openvpn.pem ConfigPut
+../scripts/sacli --key "cs.priv_key" --value_file=openvpn.key ConfigPut
+```
+
+See openvpn docs https://openvpn.net/vpn-server-resources/managing-settings-for-the-web-services-from-the-command-line/#selecting-ssl-and-tls-levels-on-the-web-server

+ 2 - 0
Packer Notes.md

@@ -1,3 +1,5 @@
+# Packer Notes
+
 Used to create the AWS AMI. run this on your local laptop. Part of the process is on the local laptop and part is in AWS. 
 https://packer.io/
 create a symlink to the DVD iso so Git doesn't try to commit it. 

+ 2 - 0
Full Drive Notes.md → RedHat Full Drive Notes.md

@@ -1,3 +1,5 @@
+# RedHat Full Drive Notes
+
 sudo: unable to mkdir /var/log/sudo-io/00/00/08: No space left on device
 sudo: error initializing I/O plugin sudoers_io
 

+ 4 - 0
ScaleFT Notes.md

@@ -43,6 +43,10 @@ Match exec "/usr/local/bin/sft resolve -q  %h" !User centos
     ProxyCommand "/usr/local/bin/sft" proxycommand  %h
     UserKnownHostsFile "/Users/bradpoulton/Library/Application Support/ScaleFT/proxycommand_known_hosts"
 
+SSH using the msoc_build key:
+
+ssh -i msoc_build_fips centos@10.80.101.126
+
 ### Troubleshooting SFT Client
 
 Review the cache file: /var/lib/sftd/osync

+ 3 - 1
Migration to Sensu Go Notes.md → Sensu Go Migration Notes.md

@@ -1,4 +1,6 @@
-# Migration to Sensu Go
+# Sensu Go Migration
+
+***Legacy***
 
 Currently sensu is installed, going to migrate us to Sensu Go
 

+ 1 - 1
Sensu Notes.md

@@ -1,6 +1,6 @@
 # Sensu Notes.md
 
-## See (Migration to Sensu Go.md) file for more details
+## See (Sensu Go Migration Notes.md) file for more details
 
 ## Sensu Upgrade
 08/03/2020

+ 3 - 4
Splunk Notes.md

@@ -52,16 +52,15 @@ TEST SPLUNK indexer-* admin password
 | mstats count WHERE index=collectd metric_name=* by host, metric_name
 
 
-
-
-
-
 #aws cloudtrail 
 index=app_aws sourcetype=aws:cloudtrail
 
 #proxy
 index=web sourcetype=squid:access:json
 
+#Okta
+index=auth sourcetype="OktaIM2:log"
+
 CLI search
 /opt/splunk/bin/splunk search 'index=bro' -earliest_time '-5m' output=raw > test.text
 

+ 5 - 1
Salt Splunk Whitelisting FedRAMP Notes.md → Splunk Process List Whitelisting FedRAMP Notes.md

@@ -1,3 +1,7 @@
+# Splunk Process List Whitelisting FedRAMP Notes
+
+***Only Used to Fulfill CM-7(5)***
+
 Notes from talking with Fred
 Salt State -> Push cron job + bash script to Minions -> Bash script writes to file -> Splunk UF reads file and indexes it. -> Splunk creates lookup file which compares to a baseline lookup file. Differneces between the two are displayed on a dashboard and can be "approved". the approve button runs a search that will merge the two lookups and updates the baseline. 
 
@@ -7,7 +11,7 @@ https://access.redhat.com/solutions/61691
 proc f
 
 
-Dashboard is broken need to fix it. Remove the blacklist variable and it will start working. 
+Dashboard was broken and needed to be fixed. Remove the blacklist variable and it will start working. 
 
 app uses SHA256 hashes
 

+ 24 - 0
VMRay Notes.md

@@ -0,0 +1,24 @@
+# VMRay Notes
+
+**DRAFT**
+
+VMRay Deployment is currently in progress. Information below is subject to change.
+
+# Summary
+VMRay Analyzer is a tool to detonate malware in a controlled environment.
+
+## Generalized Architecture
+
+VMRay Analyzer consists of a VMRay Server, which coordinates the use of other systems, and one or more _bare metal_ worker machines on which malware detonates. The systems run Ubuntu 18.04 LTS.
+
+The system is deployed in its own VPC in the GovCloud C&C accounts (one for prod, one for test).
+
+## Documentation
+
+* [On-Prem_Hardware_Sizing_Estimate.xlsx](files/vmray/On-Prem_Hardware_Sizing_Estimate.xlsx)
+* [VMRay v3.3.0 Admin Guide](files/vmray/vmray-onprem-admin-guide-v3.3.0.pdf)
+
+## Integrations
+
+Integrated with Phantom
+

+ 20 - 12
Vault Notes.md

@@ -4,6 +4,16 @@ Vualt is setup with dynamoDB as the backend. Vault has 3 nodes in a cluster and
 
 the vault binary is located at /usr/local/bin/vault
 
+Additional Notes are located here: https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/blob/master/salt/fileroots/vault/README.md 
+
+## How to log into the CLI on the Vault server
+
+1. Log in to the web interface
+2. Copy the token
+3. On vault-1, run `vault login`
+4. Paste the token to log in
+
+Auth error? Try setting the environment variable: `export VAULT_ADDR=https://vault.mdr-test.defpoint.com`
 
 1. change made to the service file
 Unknown lvalue 'StartLimitIntervalSec' in section 'Service'
@@ -14,7 +24,7 @@ Oct 30 13:31:32 vault-1 systemd: [/etc/systemd/system/vault.service:16] Failed t
 
 
 
-TEST VAULT
+## TEST VAULT Notes
 
 https://github.mdr.defpoint.com/mdr-engineering/msoc-infrastructure/tree/master/salt/fileroots/vault
 
@@ -141,27 +151,18 @@ vault write salt/pillar_data auth="abc123"
 /Users/bradpoulton/.go/src/vault-backend-migrator/vault-backend-migrator -import portal/ -file portal-secrets.json -ver 2
 
 
-
-
-
-
-AWS auth 
+## AWS Auth 
 the vault instances have access to AWS IAM Read. 
 
 curl -v --header "X-Vault-Token:$VAULT_TOKEN" --request LIST \
     https://vault.mdr.defpoint.com:443/v1/auth/aws/roles --insecure
 
 
-
-
-
-
-
 8. map okta to policies ( not needed )
 8.1 VAULT_ADDR=https://vault.mdr-test.defpoint.com vault policy write -tls-skip-verify=true auth/okta/groups/mdr-admins policies=admins
 
 
-Vault Logs
+## Vault Logs
 
 cat 0c86fda6-1139-7914-fef5-6b7532e9fb5a | grep -v -F '"operation":"list"' | grep -v -F '"operation":"read"'
 cat c3c0b50b-9429-355d-8c8f-038e093c3e4b | grep -v -F '"operation":"list"' | grep -v -F '"operation":"read"'
@@ -199,3 +200,10 @@ output "secret" {
   value = data.vault_generic_secret.palo_auth.data
 }
 ```
+
+## Vault Timeouts for FedRAMP
+
+Tune the auth method to reduce the lifetime of the tokens it issues. AC-2(5)? Sort of, but not really!
+
+`vault auth tune -default-lease-ttl=3h -max-lease-ttl=3h okta/`
+`vault auth tune -default-lease-ttl=15m -max-lease-ttl=15m okta/`

+ 8 - 1
VictorOps Notes.md

@@ -1 +1,8 @@
-Collectd -> Splunk -> VictorOps -> My Phone
+# VictorOps Notes
+
+# How does this thing work?
+Collectd -> Splunk/Sensu -> VictorOps -> My Phone
+
+# What is the Organization ID?
+It is "defpoint"
+

File diff suppressed because it is too large
+ 118 - 0
ePO Syslog Notes.md


Some files were not shown because too many files changed in this diff