Browse Source

RDS, Redhat, Salt, Splunk

Brad Poulton 3 years ago
parent
commit
b668384523
4 changed files with 50 additions and 8 deletions
  1. AWS RDS Notes.md (+2 -2)
  2. RedHat Notes.md (+7 -5)
  3. Salt Upgrade 3002.6 -> 3003.3 Notes.md (+40 -0)
  4. Splunk Upgrade Notes.md (+1 -1)

+ 2 - 2
RDS Notes.md → AWS RDS Notes.md

@@ -5,10 +5,10 @@
 According to AWS support, the auto minor version upgrade feature will only upgrade your RDS instance once AWS has fully vetted the new minor version. They will not provide an ETA for when this will happen. If you need to upgrade the RDS instance before AWS has fully vetted it, you can do so manually:
 
 - stop service of webapp that connects to RDS
-- take snapshot of RDS
+- take a snapshot of the RDS instance, e.g. `rds-pre-upgrade-backup-12-7`
 - modify the RDS to set the desired version
 - start service of webapp
-- update the TF code to match the new version. might only need to run `terragrunt apply -refresh-only`
+- update the TF state with this: `terragrunt apply -refresh-only`
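+
+For reference, a minimal AWS CLI sketch of the snapshot + version bump (the instance identifier and engine version below are placeholders, not values from these notes):
+```
+# hypothetical identifiers; substitute your own RDS instance and desired engine version
+aws rds create-db-snapshot \
+  --db-instance-identifier my-rds-instance \
+  --db-snapshot-identifier rds-pre-upgrade-backup-12-7
+aws rds wait db-snapshot-available --db-snapshot-identifier rds-pre-upgrade-backup-12-7
+
+# apply the new minor version now instead of waiting for the maintenance window
+aws rds modify-db-instance \
+  --db-instance-identifier my-rds-instance \
+  --engine-version <desired version> \
+  --apply-immediately
+
+# afterwards, refresh the Terraform state so TF matches the new version
+terragrunt apply -refresh-only
+```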
 
 ## Upgrading RDS Major Version
 

+ 7 - 5
RedHat Notes.md

@@ -53,8 +53,10 @@ index=junk sourcetype=_json "headers.Subject"="*rotatelogs.sh"
 
 Error: 'rhel-7-server-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
 
+```
 $ subscription-manager repos --list
 This system has no repositories available through subscriptions.
+```
 
 ## Expand AWS EBS (Not LVM)
 
@@ -74,11 +76,11 @@ xfs_growfs -d /opt
 
 ## Expand AWS EBS and LVM or XFS
 
-note, this is for legacy installs. New installs have separate EBS for each partition.
+Note: this is for legacy installs or LCPs. New installs have a separate EBS volume for each partition.
 
 1. Expand drive in AWS (easy)
 2. Expand partition
-What file system type is it? What partition number is it? For growpart command use the correct partition number. To find the partition number look at the output from the `lsblk` command. are there multiple partitions for the disk? The partition numbers start at 1. For example, this would be the second partition. 
+What file system type is it? What partition number is it? For the `growpart` command, use the correct partition number. To find it, look at the output of the `lsblk` command. Are there multiple partitions on the disk? Partition numbers start at 1. For example, this would be the second partition:
 ```
 nvme0n1     259:7    0  20G  0 disk
 ├─nvme0n1p1 259:8    0   1M  0 part
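+# example only: for the layout above, the second partition would be grown with
+growpart /dev/nvme0n1 2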
@@ -103,7 +105,7 @@ growpart /dev/xvda <partition number>
 Because our servers are all copies of the same server, the LVM UUID is the same across all of them. To mount an EBS volume created from a snapshot back onto the same server the snapshot came from, try these steps.
 
 1. Create the EBS volume from the snapshot and attach to the EC2 instance. 
-2. try to mount the drive. `mount /dev/xvdh3 /mnt/backup` probably will get error about LVM file type `mount: unknown filesystem type 'LVM2_member'`
+2. Try to mount the drive: `mount /dev/xvdh3 /mnt/backup`. You will probably get an LVM-related error such as `mount: unknown filesystem type 'LVM2_member'` OR `mount: wrong fs type, bad option, bad superblock on /dev/nvme9n1`. If the drive doesn't use LVM, skip to step 10 to fix issues with XFS. 
 
 3. Try pvscan to see if duplicate UUID
 ```
@@ -146,9 +148,9 @@ mount: wrong fs type, bad option, bad superblock on /dev/mapper/recover-opt,
 [288992.103137] XFS (dm-8): Filesystem has duplicate UUID c0027f10-6007-42d5-8680-7bbfb5f2e6dc - can't mount
 ```
 
-11. Repair the volume to prep for UUID change. `xfs_repair -L /dev/mapper/recover-opt`
+11. Repair the volume to prep for UUID change. `xfs_repair -L /dev/mapper/recover-opt` OR `xfs_repair -L /dev/nvme9n1`
 
-12. Change UUID `xfs_admin -U generate /dev/mapper/recover-opt`
+12. Change UUID `xfs_admin -U generate /dev/mapper/recover-opt` OR `xfs_admin -U generate /dev/nvme9n1`
 
 13. Mount filesystem `mount /dev/mapper/recover-opt /mnt/backup/`
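+
+Putting the non-LVM (plain XFS) path together, a minimal sketch; the device name `/dev/nvme9n1` is just the example used in step 2, so check `lsblk` for yours:
+```
+# snapshot volume that refuses to mount because its XFS UUID matches the live filesystem
+xfs_repair -L /dev/nvme9n1          # zero the log to prep for the UUID change
+xfs_admin -U generate /dev/nvme9n1  # give the filesystem a new random UUID
+mount /dev/nvme9n1 /mnt/backup/     # the duplicate-UUID check now passes
+```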
 

+ 40 - 0
Salt Upgrade 3002.6 -> 3003.3 Notes.md

@@ -0,0 +1,40 @@
+### Salt Upgrade 3002.6 -> 3003.3 Notes.md
+
+
+
+Upgrade the Salt master first, then the minions (the master should always be at least as new as its minions).
+
+Update the pillar in git: in `salt/pillar/dev/yumrepos.sls`, bump `yumrepos:salt:version` to the target release (3003.3), then pull it onto the master and confirm:
+```
+salt salt* cmd.run 'salt-run fileserver.update'
+salt salt* cmd.run 'salt-run git_pillar.update'
+salt salt* saltutil.refresh_pillar
+salt salt* pillar.get yumrepos:salt:version
+```
+
+Update Salt on the salt master
+```
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* state.sls os_modifications.repo_update test=true --output-diff
+salt salt* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt salt* cmd.run 'yum clean all ; yum makecache fast'
+salt salt* cmd.run 'yum check-update | grep salt'
+salt salt* pkg.upgrade name=salt-master
+sudo salt salt* state.sls salt_master.salt_posix_acl --output-diff
+```
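+
+Optionally verify the master package actually landed on the new version (a quick check mirroring the `cmd.run` pattern above):
+```
+salt salt* cmd.run 'rpm -q salt-master'
+salt salt* cmd.run 'salt --version'
+```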
+
+Update salt minions
+```
+salt sensu* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt sensu* state.sls os_modifications.repo_update test=true --output-diff
+salt sensu* cmd.run 'cat /etc/yum.repos.d/salt.repo'
+salt sensu* cmd.run 'yum clean all ; yum makecache fast'
+salt sensu* cmd.run 'yum check-update | grep salt'
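+# systemd-run --scope keeps the yum transaction alive while the salt-minion service restarts
+# mid-upgrade; the sleeps give yum time to finish before the daemon-reload and restart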
+salt sensu* cmd.run_bg 'systemd-run --scope yum update salt-minion -y && sleep 20 && systemctl daemon-reload && sleep 20 && systemctl start salt-minion'
+salt sensu* test.version
+```
+
+Did you miss any minions still on the old version?
+`salt -G saltversion:3002.6 test.ping`
+
+Repeat for PROD.

+ 1 - 1
Splunk Upgrade Notes.md

@@ -138,7 +138,7 @@ max_upload_size = 1024
     - Ensure you have room to take a backup
         - `cmd.run 'df -h /opt'`
 - Stop Splunk and take a backup
-    - `systemctl stop splunk`
+    - `cmd.run 'systemctl stop splunk'`
     - `cmd.run 'tar -czf /opt/opt-splunk-backup-8.0.5.tar.gz /opt/splunk'`
     - Worried about space?
         - `cmd.run 'tar -czf /opt/syslog-ng/opt-splunk-backup-8.0.5.tar.gz /opt/splunk'`