RedHat Notes.md

Notes about RedHat subscriptions and OS configurations

Command Line

Review commands run on the command line with sudoreplay or ausearch -ua. See Sudo Replay Notes.md for more details.
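A minimal sketch of both lookups, wrapped in a function you could source; the user name, TSID, and UID are placeholders, not values from these notes:

```shell
# Sketch only: "someuser", the TSID, and UID 1000 are placeholders.
audit_cli_history() {
  sudoreplay -l user someuser   # list recorded sudo sessions for one user
  sudoreplay 00/00/02           # replay one session by its TSID from the list
  ausearch -ua 1000 -i          # audit records for audit UID 1000, interpreted
}
```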

Subscriptions

Redhat.com has TWO different accounts to access Test and Prod. Prod subscription account number: 6195362. Test: 6076020.

https://access.redhat.com/management/systems

TEST: subscription-manager register --activationkey=packerbuilder --org=11696629

Pillar for RHEL subscription salt/pillar/dev/rhel_subs.sls salt/pillar/prod/rhel_subs.sls

System Emails

System emails are being sent to Moose Splunk.

index=junk sourcetype=_json "headers.Subject"="*rotatelogs.sh"

ERRORs

Error: 'rhel-7-server-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.

    $ subscription-manager repos --list
    This system has no repositories available through subscriptions.
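One common recovery path (an assumption, not the confirmed fix for this error) is to re-register with the activation key from the Subscriptions section above and then re-enable the repo:

```shell
# Sketch of a re-registration recovery path (assumption; TEST org/key from above).
fix_no_repos() {
  subscription-manager unregister || true
  subscription-manager register --activationkey=packerbuilder --org=11696629
  subscription-manager refresh
  subscription-manager repos --list                      # should now show repos
  subscription-manager repos --enable=rhel-7-server-rpms
}
```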

Expand AWS EBS (Not LVM)

  1. Record drive size in xdr-terraform-live or xdr-terraform-modules
  2. Take a snapshot of the volume, or an image of the system (if image, remember to check the 'do not shutdown' box!)
  3. Expand drive in AWS GUI (at this time, terragrunt apply will not update drive sizes)
  4. Verify the filesystem:

    df -hT
    lsblk
    
  5. Grow the filesystem. xfs_growfs takes the mount point, not the device:

    xfs_growfs -d /opt
    
  6. Validate with df -h
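The command-line steps above collapse to a few lines; sketched as a function you could source, with /opt assumed as the mount point on the resized volume:

```shell
# Non-LVM case: the EBS volume was already resized in the AWS console.
expand_xfs_no_lvm() {
  mnt=/opt               # assumed mount point on the resized volume
  df -hT "$mnt"          # confirm the filesystem type is xfs
  lsblk                  # confirm the block device shows the new size
  xfs_growfs -d "$mnt"   # grow the data section to fill the device
  df -h "$mnt"           # validate the new size
}
```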

Expand AWS EBS and LVM

Note: this is for legacy installs. New installs have a separate EBS volume for each partition.

  1. Expand drive in AWS (easy)
  2. Expand the partition. First check the filesystem type and layout, then grow the partition:

    df -hT
    lsblk
    growpart /dev/xvda 3

  3. Extend the logical volume (lvextend grows the LV, not the volume group; run pvresize /dev/xvda3 first if the PV has not picked up the new size):

    lvextend -l +100%FREE /dev/mapper/vg_root-opt /dev/xvda3

  4. Expand the filesystem:

    xfs_growfs -d /opt
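The whole LVM path can be sketched as one function; the device, VG, and LV names are the ones from these notes and may differ per host:

```shell
# LVM case: partition 3 of /dev/xvda backs vg_root, with LV "opt" mounted at /opt.
expand_xfs_lvm() {
  growpart /dev/xvda 3                           # grow the partition
  pvresize /dev/xvda3                            # let the PV see the new size
  lvextend -l +100%FREE /dev/mapper/vg_root-opt  # grow the logical volume
  xfs_growfs -d /opt                             # grow the filesystem
}
```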

Mount Snapshot Drives

Because our servers are all copies of the same server, the UUID of the LVM is the same across all the servers. To mount an EBS volume of a snapshot on the same server that the snapshot came from, try these steps.

  1. Create the EBS volume from the snapshot and attach to the EC2 instance.
  2. Try to mount the drive: mount /dev/xvdh3 /mnt/backup. This will probably fail with an LVM filesystem-type error: mount: unknown filesystem type 'LVM2_member'

  3. Run pvscan to check for a duplicate PV UUID:

    [prod]root@nga-splunk-hf:/mnt:# pvscan
    WARNING: found device with duplicate /dev/xvdh3
    WARNING: Disabling lvmetad cache which does not support duplicate PVs.
    WARNING: Scan found duplicate PVs.
    WARNING: Not using lvmetad because cache update failed.
    WARNING: Not using device /dev/xvdh3 for PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE.
    WARNING: PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE prefers device /dev/xvda3 because device is used by LV.
    PV /dev/xvda3   VG vg_root         lvm2 [<97.00 GiB / 8.82 GiB free]
    Total: 1 [<97.00 GiB] / in use: 1 [<97.00 GiB] / in no VG: 0 [0   ]
    
    
  4. Run vgdisplay to confirm the duplicate PV UUID:

    [prod]root@nga-splunk-hf:/mnt:# vgdisplay
    WARNING: Not using lvmetad because duplicate PVs were found.
    WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
    WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
    WARNING: Not using device /dev/xvdh3 for PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE.
    WARNING: PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE prefers device /dev/xvda3 because device is used by LV.
    
  5. Run vgimportclone --basevgname recover /dev/xvdh3 to rename the cloned VG to "recover" and give it new UUIDs.

  6. Try to mount: mount /dev/xvdh3 /mnt/backup or mount /dev/mapper/recover-opt /mnt/backup/

  7. See if the VG is active with lvscan

  8. Activate the LVM VG with vgchange -ay

  9. Try to mount again: mount /dev/xvdh3 /mnt/backup or mount /dev/mapper/recover-opt /mnt/backup/

  10. If you see the following, you have a duplicate XFS filesystem UUID:

    mount: wrong fs type, bad option, bad superblock on /dev/mapper/recover-opt,
       missing codepage or helper program, or other error
    
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
    
    [288992.103137] XFS (dm-8): Filesystem has duplicate UUID c0027f10-6007-42d5-8680-7bbfb5f2e6dc - can't mount
    
  11. Repair the volume to prepare for the UUID change (-L zeroes the XFS log): xfs_repair -L /dev/mapper/recover-opt

  12. Change UUID xfs_admin -U generate /dev/mapper/recover-opt

  13. Mount filesystem mount /dev/mapper/recover-opt /mnt/backup/
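Steps 5 through 13 above as one sequence, sketched as a function; the snapshot PV path and the "recover" VG/LV names follow these notes and are assumptions for any other host:

```shell
# Recover a snapshot clone whose LVM and XFS UUIDs collide with the live disk.
mount_snapshot_clone() {
  snap=/dev/xvdh3                                # attached snapshot PV (assumption)
  vgimportclone --basevgname recover "$snap"     # new VG name and new UUIDs
  vgchange -ay recover                           # activate the cloned VG
  xfs_repair -L /dev/mapper/recover-opt          # -L zeroes the XFS log
  xfs_admin -U generate /dev/mapper/recover-opt  # generate a new filesystem UUID
  mkdir -p /mnt/backup
  mount /dev/mapper/recover-opt /mnt/backup/
}
```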

Removing the Drive

I unmounted the drive, detached the volume in AWS, and then restarted the OS. This cleared out the LVM references to the drive.