Notes about RedHat subscriptions and OS configurations
Review commands run on the command line with sudoreplay or "ausearch -ua ". See Sudo Replay Notes.md for more details.
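A quick sketch of reviewing a user's activity with both tools; the user name, UID, and session ID below are placeholders to substitute:
sudoreplay -l user <username>   # list recorded sudo sessions for that user
sudoreplay <TSID>               # replay one session using the TSID shown in the listing
ausearch -ua <uid> -i           # audit events whose uid/euid/auid match, with interpreted output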
RedHat subscriptions are still used for on-prem LCP nodes.
Redhat.com has TWO different accounts to access Test and Prod. Prod subscription account number: 6195362. Test: 6076020.
https://access.redhat.com/management/systems
TEST! subscription-manager register --activationkey=packerbuilder --org=11696629
Pillars for the RHEL subscription: salt/pillar/dev/rhel_subs.sls and salt/pillar/prod/rhel_subs.sls
Oh no, the AWS RedHat repos broke! Try this: Get an AWS RHUI Client Package Supporting IMDSv2.
Download the RedHat repo client package, copy it over via Salt, then install it.
yumdownloader rh-amazon-rhui-client # on salt master
sudo cp rh-amazon-rhui-client-3.0.40-1.el7.noarch.rpm /var/opt/salt # put file in fileroot
#move file to LCP/minion
salt bas-splunk-ds-1 cp.get_file salt://rh-amazon-rhui-client-3.0.40-1.el7.noarch.rpm /root/rh-amazon-rhui-client-3.0.40-1.el7.noarch.rpm
#install file
rpm -U /root/rh-amazon-rhui-client-3.0.40-1.el7.noarch.rpm
yum clean all ; yum makecache fast
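To confirm the repos are healthy after the reinstall (a quick check, assuming yum on RHEL 7; the minion name is the same example used above):
yum repolist enabled                                   # locally on the node
salt bas-splunk-ds-1 cmd.run 'yum repolist enabled'    # or from the salt master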
System emails are being sent to Moose Splunk.
index=junk sourcetype=_json "headers.Subject"="*rotatelogs.sh"
Error: 'rhel-7-server-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
$ subscription-manager repos --list
This system has no repositories available through subscriptions.
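One possible fix, sketched on the assumption that this system should be using the TEST activation key shown above (Prod uses the other account): re-register, refresh, and re-check the repo list.
subscription-manager register --force --activationkey=packerbuilder --org=11696629   # re-register with the TEST key/org from above
subscription-manager refresh                                                          # pull the entitlement certificates down again
subscription-manager repos --list-enabled                                             # confirm rhel-7-server-rpms (or equivalent) is back
subscription-manager repos --enable=rhel-7-server-rpms                                # enable it if it is listed but disabled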
(Note: terragrunt apply will not update drive sizes.)
Verify the filesystem:
df -hT
lsblk
Grow the filesystem. xfs_growfs uses the mount point, not the device:
xfs_growfs -d /opt
Validate with df -h
NOTE: this is for LCPs. New installs have a separate EBS volume for each partition. For LCPs based on the VMware image you may need to install growpart: yum install cloud-utils-growpart
partprobe
Expand partition
What filesystem type is it? What partition number is it? For the growpart command, use the correct partition number. To find the partition number, look at the output of the lsblk command. Are there multiple partitions on the disk? Partition numbers start at 1. For example, this would be the second partition:
nvme0n1 259:7 0 20G 0 disk
├─nvme0n1p1 259:8 0 1M 0 part
└─nvme0n1p2 259:9 0 20G 0 part / <-------- partition number 2
and this would be the growpart command: growpart /dev/nvme0n1 2
df -hT
lsblk
growpart /dev/xvda <partition number>   # for example: growpart /dev/xvda 3
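Putting the non-LVM case together, a minimal sketch for the example layout above (device names are from that example; substitute your disk, partition number, and mount point):
lsblk /dev/nvme0n1          # confirm the partition layout and partition number
growpart /dev/nvme0n1 2     # grow partition 2 to fill the disk
partprobe /dev/nvme0n1      # re-read the partition table
xfs_growfs /                # grow the XFS filesystem at its mount point
df -hT /                    # verify the new size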
Expand the logical volume (only for LVM)
lvextend /dev/mapper/vg_root-opt /dev/xvda3
Alternative: increase by 5 GB:
lvextend -L +5G /dev/mapper/vg_root-root /dev/sda3
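If lvextend complains that there is no free space, the physical volume probably has not picked up the grown partition yet. A hedged extra step, assuming /dev/xvda3 is the PV that was just grown with growpart:
pvresize /dev/xvda3     # make the PV (and therefore the VG) see the larger partition
vgdisplay vg_root       # confirm the VG now shows free PE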
Expand file system
xfs_growfs -d /opt
If you run 'lsblk' and it looks more like this:
nvme1n1 259:0 0 300G 0 disk
`-vg_syslog-lv_syslog 253:0 0 300G 0 lvm /opt/syslog-ng
There is no partition; the PV is the drive itself. For those, follow these directions, substituting values as appropriate:
salt XXX-splunk-syslog-9 cmd.run 'nvme list'
salt XXX-splunk-syslog-9 cmd.run 'lsblk'
# Expand the physical volume
salt XXX-splunk-syslog-10 cmd.run 'pvresize /dev/nvme1n1'
# Expand the logical volume
salt XXX-splunk-syslog-10 cmd.run 'lvextend /dev/mapper/vg_syslog-lv_syslog /dev/nvme1n1'
# Expand the filesystem
salt XXX-splunk-syslog-10 cmd.run 'xfs_growfs /opt/syslog-ng'
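A quick check afterwards that the filesystem picked up the new size (same placeholder minion name):
salt XXX-splunk-syslog-10 cmd.run 'df -hT /opt/syslog-ng'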
Because our servers are all copies of the same server, the LVM UUIDs are the same across all of them. To mount an EBS volume created from a snapshot back on the same server the snapshot came from, try these steps.
Try to mount the drive: mount /dev/xvdh3 /mnt/backup
You will probably get an error about the LVM filesystem type: mount: unknown filesystem type 'LVM2_member'
OR mount: wrong fs type, bad option, bad superblock on /dev/nvme9n1
If you run dmesg it might say Filesystem has duplicate UUID.
If the drive doesn't use LVM, skip ahead to the duplicate XFS UUID fix below.
Try pvscan to see if duplicate UUID
[prod]root@nga-splunk-hf:/mnt:# pvscan
WARNING: found device with duplicate /dev/xvdh3
WARNING: Disabling lvmetad cache which does not support duplicate PVs.
WARNING: Scan found duplicate PVs.
WARNING: Not using lvmetad because cache update failed.
WARNING: Not using device /dev/xvdh3 for PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE.
WARNING: PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE prefers device /dev/xvda3 because device is used by LV.
PV /dev/xvda3 VG vg_root lvm2 [<97.00 GiB / 8.82 GiB free]
Total: 1 [<97.00 GiB] / in use: 1 [<97.00 GiB] / in no VG: 0 [0 ]
Try vgdisplay to see if duplicate UUID
[prod]root@nga-splunk-hf:/mnt:# vgdisplay
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: Not using device /dev/xvdh3 for PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE.
WARNING: PV KJjKPv-hB1d-nINV-Vocw-9ptH-LTuZ-QLYSGE prefers device /dev/xvda3 because device is used by LV.
Try vgimportclone --basevgname recover /dev/xvdh3 to change the UUID of the LVM VG.
Try to mount: mount /dev/xvdh3 /mnt/backup
or mount /dev/mapper/recover-opt /mnt/backup/
See if the VG is active with lvscan
Activate the LVM VG with vgchange -ay
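A slightly more targeted variant, assuming the clone VG was named recover by the vgimportclone step above:
lvscan | grep recover    # check whether the recovered LVs are ACTIVE or inactive
vgchange -ay recover     # activate only the recovered VG
lvs recover              # confirm its logical volumes are now active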
Try to mount again: mount /dev/xvdh3 /mnt/backup
or mount /dev/mapper/recover-opt /mnt/backup/
If you see this, then you have a duplicate XFS filesystem UUID.
mount: wrong fs type, bad option, bad superblock on /dev/mapper/recover-opt,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
[288992.103137] XFS (dm-8): Filesystem has duplicate UUID c0027f10-6007-42d5-8680-7bbfb5f2e6dc - can't mount
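A less invasive alternative, not in the original steps: XFS has a nouuid mount option that skips the UUID check, which can be enough if you only need to read data off the snapshot. If you need a writable mount with a clean UUID, continue with the repair steps below.
mount -o nouuid,ro /dev/mapper/recover-opt /mnt/backup/   # read-only mount that ignores the duplicate UUID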
Repair the volume to prep for the UUID change (note: -L zeroes the XFS log): xfs_repair -L /dev/mapper/recover-opt
OR xfs_repair -L /dev/nvme9n1
Change the UUID: xfs_admin -U generate /dev/mapper/recover-opt
OR xfs_admin -U generate /dev/nvme9n1
Mount the filesystem: mount /dev/mapper/recover-opt /mnt/backup/
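To double-check the result (assuming the same device and mount point as above):
blkid /dev/mapper/recover-opt   # should now show the regenerated UUID
df -hT /mnt/backup              # confirms the filesystem is mounted and its type/size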
I unmounted the drive, detached the volume in AWS, and then restarted the OS. This cleared out the LVM references to the drive.
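A sketch of that cleanup in command form, assuming the clone VG was named recover as above (the detach itself happens in the AWS console or CLI):
umount /mnt/backup       # unmount the recovered filesystem
vgchange -an recover     # deactivate the recovered VG before detaching the volume
pvscan --cache           # refresh the LVM cache, as the earlier warnings suggest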