Extend a Root LVM Partition – RHEL 8 / RHEL 7 / CentOS 8 / CentOS 7 – No Reboot Required

In the example below, I have a RHEL 8 server with a 100 GB OS disk; the space is distributed as follows:

  • root = 50 GB
  • home = 46.3 GB
  • swap = 2.1 GB

Filesystem: XFS, though the same procedure should work with ext4 filesystems as well.
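If you are unsure which filesystem backs a given mount point, a quick check is df with the -T flag; for example, for the root mount:

# df -Th /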

The requirement to fulfill here is to extend the root volume from 50 GB to 100 GB.

OK, let’s find the device where the root “/” partition lives:

# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0 46.3G  0 lvm  /home

#==> /dev/sda is my OS disk; the root partition is /dev/sda3

Find the device with an LVM physical volume listing:

# pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--    98.41g    0
/dev/sdb1  vgapp lvm2 a--  <200.00g    0

My server is running in a VMware vSphere environment. I have increased the disk size on the VM from 100 GB to 150 GB, and we will be claiming the new space without rebooting/restarting the server.

VM Edit Settings > increase the disk size from 100 GB to 150 GB
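As a side note, if you prefer the command line over the vSphere UI, the open-source govc CLI (from the govmomi project) can resize the disk too. This is a sketch only, assuming govc is installed and authenticated; <vm-name> and the disk label are placeholders, and you should verify the flags against your govc version:

# govc vm.disk.change -vm <vm-name> -disk.label "Hard disk 1" -size 150G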

Note: My server was originally deployed with an LVM configuration in place; we are simply leveraging LVM to extend the partition in this case.

Let’s rescan the extended disk in the OS to make the Linux kernel aware of the new size – the no-reboot method:

# echo 1 > /sys/class/block/sda/device/rescan
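The path above rescans the sda device specifically, which is what updates the capacity of an already-present, resized disk. If you had added brand-new disks instead, a common way to detect them (a sketch; host numbers vary between systems) is to trigger a scan on every SCSI host adapter:

# for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done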

Validate the new disk size on sda:

# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  150G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0 46.3G  0 lvm  /home

#==> sda size changed from 100 GB to 150 GB

Note: The partition is not adjusted automatically; it has to be resized in two steps:

  1. resize the partition
  2. make the kernel aware of the bigger partition
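For reference, the traditional tools for these two steps are fdisk and partprobe. A rough sketch only (inside fdisk you would delete and recreate partition 3 with the same start sector but a larger end, which is easy to get wrong):

# fdisk /dev/sda
# partprobe /dev/sda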

Typically we would use fdisk for the first step and a utility like partprobe (or a reboot) for the second, as sketched above. But there is now a great tool called growpart that handles both, and we will use it here. growpart is part of the cloud-utils package and should be available in your distro’s repositories. In my case it was not installed, so let’s install it.

# yum install -y cloud-utils-growpart

Let’s increase the partition now with growpart.

As identified with lsblk earlier:

Device = /dev/sda ==> the disk holding the root partition

Partition = /dev/sda3 ==> the partition where our root LV lives
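If you want to preview the change before committing to it, growpart in the cloud-utils package has a dry-run mode (-N/--dry-run) that only reports what would be done:

# growpart --dry-run /dev/sda 3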

# growpart /dev/sda 3
CHANGED: partition=3 start=3328000 old: size=206385152 end=209713152 new: size=311244767,end=314572767

# Note: there is a space between /dev/sda and 3 (3 is our partition number)

Let’s validate the new partition size:

# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root 253:0    0    50G  0 lvm  /
  ├─rhel-swap 253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0  46.3G  0 lvm  /home

#==> sda3 size changed from 98.4 GB to 148.4 GB

Let’s resize the physical volume to occupy all the new space:

# pvresize /dev/sda3
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Validate with pvs:

# pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--   148.41g 50.00g
/dev/sdb1  vgapp lvm2 a--  <200.00g     0

#==> PSize changed for /dev/sda3

Let’s check the LVM volume group status:

# vgs

VG    #PV #LV #SN Attr   VSize    VFree
rhel    1   3   0 wz--n-  148.41g 50.00g
vgapp   1   1   0 wz--n- <200.00g     0

#==> We have 50 GB of free space in the rhel VG, i.e. the VG that holds our root partition
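If you would rather not hand all of the free space to root, you can extend by a fixed amount instead; for example, to add 20 GB (an arbitrary figure for illustration):

# lvextend -r -L +20G /dev/rhel/root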

Let’s resize the logical volume to occupy all the new space:

# lvextend -r -l +100%FREE /dev/rhel/root

Size of logical volume rhel/root changed from 50.00 GiB (12800 extents) to 100.00 GiB (25600 extents).
Logical volume rhel/root successfully resized.
meta-data=/dev/mapper/rhel-root  isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 26214400

#==> generic form: # lvextend -r -l +100%FREE /dev/<volume-group-name>/root
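The -r flag tells lvextend to grow the filesystem together with the logical volume, which is why the xfs_growfs output appears above. Done as two explicit steps instead, the equivalent would be:

# lvextend -l +100%FREE /dev/rhel/root
# xfs_growfs /

(xfs_growfs applies to XFS; on ext4 you would use resize2fs /dev/rhel/root instead.)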

Validate that the root partition has now grown to 100 GB:

# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root 253:0    0   100G  0 lvm  /
  ├─rhel-swap 253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0  46.3G  0 lvm  /home

#==> rhel-root has increased in size.
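You can also confirm from the filesystem’s point of view; df should now report roughly 100 GB for /:

# df -h /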

I hope you found this article useful; share it with others if you think it’s worth it.

Have a nice day.

Multi-writer Shared / Clustered Disk for Windows or Oracle Cluster – vSphere / ESXi / vSAN 6.7.x

For the sake of this blog, it will be a two-VM use case, running on separate ESXi hosts (anti-affinity rule in place).

The steps for the second node also apply to a third or fourth node if one is introduced later.

Environment

FS-VM1 – OS: Windows Server 2019, OS Disk 1: 100 GB

FS-VM2 – OS: Windows Server 2019, OS Disk 1: 100 GB

I created a thick-provisioned, eager-zeroed storage policy since I was configuring this setup on vSAN; a thick disk is required for this setup to work as expected.

Required disks for clustering:
Shared Disk 1 = 70 GB
Shared Disk 2 = 140 GB

Configuration Steps:

Shut down the guest OS (or power off) on both FS-VM1 & FS-VM2

FS-VM1

  • Edit Settings – FS-VM1
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add New Disk
    • Size: 70 GB (1:0)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add New Disk
    • Size: 140 GB (1:1)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

Save the VM settings by pressing OK

FS-VM2

  • Edit Settings – FS-VM2
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the datastore browser
    • Select 70 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the datastore browser
    • Select 140 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

In the case of vSAN, the disk file path will show the datastore storage identifier with a folder UUID, the VM name, and a generated suffix in the .vmdk filename.

Sample: [VxRail-Virtual-SAN-Datastore-8fe79a34-32432-45d4-affb-cdf6a37dd110] 62479960-6cf5-58d0-77bb-e4434bf848f0/FS-VM1-cXNb_1.vmdk

To further validate, make sure it is the same path shown for FS-VM1’s disk file (VM Edit Settings > Hard Disk > Disk File).

Make sure the configuration is identical on both VMs: SCSI controller type, SCSI device node numbers, multi-writer sharing, independent-persistent disk mode, etc.

Save the VM settings by pressing OK

Validation Steps

Power on both Virtual Machines

Login – FS-VM1 (RDP or Console Access)

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition
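If you prefer scripting this validation over the Disk Management GUI, a minimal PowerShell sketch is below; disk number 1 is an assumption for illustration, so check the Get-Disk output for the actual numbers of your shared disks first:

PS> Get-Disk
PS> Initialize-Disk -Number 1 -PartitionStyle GPT
PS> New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS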

Login – FS-VM2 – repeat the same steps

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition

The servers are now ready for you to configure your clustering application.

Reset Root Password – RHEL 7/8 / CentOS 7/8

Reboot your server. At the bootloader screen, select the kernel you would like to boot with (usually the latest one) and hit ‘e’.

On the next screen, find the line that refers to the kernel:

  • For RHEL/CentOS 7, the line starts with ‘linux16’.
  • For RHEL/CentOS 8.x and Fedora, the line starts with ‘linux’.

Add ‘rd.break’ at the end of the kernel line and press Ctrl-X to boot.

The server will now boot into an emergency shell:

switch_root:/#

Now, remount the root partition in read/write mode:

# mount -o remount,rw /sysroot

Next, chroot into the mounted system root:

# chroot /sysroot

At this point you can change the root password:

# passwd
New password: <new password>
Retype new password: <repeat new password>

The next step tells SELinux to relabel the filesystem on the next boot, so the files we just changed (such as /etc/shadow) get the correct security contexts.

# touch /.autorelabel

The relabel runs during the next boot and can take some time, depending on the filesystem size.
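If a full relabel is too slow for a large filesystem, an alternative documented by Red Hat is to add enforcing=0 next to rd.break on the kernel line, skip the touch /.autorelabel step, and, after rebooting and logging in, restore only the shadow file’s context:

# restorecon -v /etc/shadow
# setenforce 1

Otherwise, continue with the steps below.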

Once done, exit the chroot:

# exit

Then restart the server:

# reboot

After the reboot, validate that the new password works by logging in with the root account.