Extend Root LVM Configured Partition – RHEL8 / RHEL7 / CentOS 8 / CentOS 7 – No Reboot Required

In the example below, I have a RHEL 8 server with a 100 GB OS disk. The partitions are distributed as follows:

  • root = 50 GB
  • home = 46.3 GB
  • swap = 2.1 GB

Filesystem: XFS; however, the same procedure should also work with ext4 partitions.

The requirement here is to extend the root partition from 50 GB to 100 GB.

OK, let’s find the device where the root “/” partition lives:

# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0 46.3G  0 lvm  /home

#==> /dev/sda is my OS disk; the root partition is on /dev/sda3

Find the device with an LVM PV scan:

#pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--    98.41g    0
/dev/sdb1  vgapp lvm2 a--  <200.00g    0

My server runs in a VMware vSphere environment. I have increased the disk size on the VM from 100 GB to 150 GB, and we will grow the space without rebooting/restarting the server.

VM Edit Settings > Disk Size Increase from 100 to 150 GB

Note: My server was deployed with LVM in place from the start; we are simply leveraging LVM to extend the partition here.

Let’s rescan the extended disk in the OS to make the Linux kernel aware of the new size (no-reboot method):

# echo 1 > /sys/class/block/sda/device/rescan

Validate New Disk Size on sda

#lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  150G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0 46.3G  0 lvm  /home

#==> sda size changed from 100 GB to 150 GB

Note: The partition is not automatically adjusted; it needs to be resized in two steps:

  1. resize the partition
  2. make the kernel aware of the bigger partition

Now typically we would use fdisk for the first step and a utility like partprobe (or a reboot) for the second. But there is now a great tool called growpart, which we will use here. growpart is part of the cloud-utils package and should be available in your distro’s repositories. In my case it was not installed, so let’s install it.

#yum install -y cloud-utils-growpart
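For comparison, the traditional two-step approach mentioned above would look roughly like this (a sketch, not run here; parted 3.1+ supports resizepart):

#parted -s /dev/sda resizepart 3 100%
#partprobe /dev/sda

growpart collapses both steps into a single command.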

Let’s increase the partition now with growpart.

As identified by #lsblk, our:

Device = /dev/sda ==> Disk of Root Partition

Partition = /dev/sda3 ==> Where our Root partition is.

#growpart /dev/sda 3
CHANGED: partition=3 start=3328000 old: size=206385152 end=209713152 new: size=311244767,end=314572767

#Note: there is a space between /dev/sda and 3 (3 is our partition number)

Let’s validate new partition size

#lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root 253:0    0    50G  0 lvm  /
  ├─rhel-swap 253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0  46.3G  0 lvm  /home

#==> sda3 size changed from 98.4 GB to 148.4 GB

Let’s resize the Physical Volume to occupy all the new space

#pvresize /dev/sda3
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Validate with LVM PV Scan

#pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--   148.41g 50.00g
/dev/sdb1  vgapp lvm2 a--  <200.00g     0

#==> PSize changed for /dev/sda3

Let’s check the LVM volume group status

#vgs

VG    #PV #LV #SN Attr   VSize    VFree
rhel    1   3   0 wz--n-  148.41g 50.00g
vgapp   1   1   0 wz--n- <200.00g     0

#==> We have 50 GB of free space in the rhel VG, i.e. our root partition’s VG

Let’s resize the Logical Volume to occupy all the new space

#lvextend -r -l +100%FREE /dev/rhel/root

Size of logical volume rhel/root changed from 50.00 GiB (12800 extents) to 100.00 GiB (25600 extents).
Logical volume rhel/root successfully resized.
meta-data=/dev/mapper/rhel-root  isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 26214400

#==> generic form: #lvextend -r -l +100%FREE /dev/<volume-group-name>/root (the -r flag also grows the filesystem, which is why the xfs_growfs output appears above)
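If you run lvextend without -r, grow the filesystem manually afterwards; a sketch for the two filesystem types mentioned at the start:

#xfs_growfs /
#==> for XFS, pass the mountpoint

#resize2fs /dev/mapper/rhel-root
#==> for ext4, pass the device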

Validate that the root partition has been increased to 100 GB:

# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root 253:0    0   100G  0 lvm  /
  ├─rhel-swap 253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home 253:2    0  46.3G  0 lvm  /home

#==> rhel-root increased from 50 GB to 100 GB

Hope you find this article useful; share it with others if you think it’s worth it.

Have a nice day.

Multi-writer Shared / Clustered Disk for Windows or Oracle Cluster – vSphere / ESXi / vSAN 6.7.x

For the sake of this blog, it will be a two-VM use case, running on separate ESXi hosts (anti-affinity rule in place).

The steps for the second node also apply to a third or fourth node, if one is introduced.

Environment

FS-VM1 – OS = Windows 2019, OS Disk 1 = 100 GB

FS-VM2 – OS = Windows 2019, OS Disk 1 = 100 GB

I created a thick-provisioned, eager-zeroed storage policy, as I was configuring this setup on vSAN; an eager-zeroed thick disk is required for this setup to work as expected.

Required disks for clustering:
Shared Disk 1 = 70 GB
Shared Disk 2 = 140 GB

Configuration Steps:

Shut down the guest OS (or power off) FS-VM1 & FS-VM2

FS-VM1

  • Edit Settings – FS-VM1
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add New Disk
    • Size: 70 GB (1:0)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add New Disk
    • Size: 140 GB (1:1)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

Save the VM settings by pressing OK

FS-VM2

  • Edit Settings – FS-VM2
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the datastore browser
    • Select 70 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the datastore browser
    • Select 140 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

In the case of vSAN, the path will show the datastore identifier, a folder UUID, and the VM name with a suffix in the .vmdk filename.

Sample: [VxRail-Virtual-SAN-Datastore-8fe79a34-32432-45d4-affb-cdf6a37dd110] 62479960-6cf5-58d0-77bb-e4434bf848f0/FS-VM1-cXNb_1.vmdk

To further validate, make sure it is the same path as in FS-VM1’s Disk File (VM Edit Settings > Hard Disk > Disk File).

Make sure the configuration is identical on both VMs: SCSI controller type, SCSI device number, multi-writer sharing, independent-persistent mode, etc.

Save the VM settings by pressing OK
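For reference, these settings map to .vmx entries roughly like the following (a sketch; the .vmdk filename is illustrative and will differ in your environment):

scsi1.virtualDev = "pvscsi"
scsi1.sharedBus = "physical"
scsi1:0.fileName = "FS-VM1_1.vmdk"
scsi1:0.sharing = "multi-writer"
scsi1:0.mode = "independent-persistent"

The scsi1:1 entries follow the same pattern for the 140 GB disk.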

Validation Steps

Power on both Virtual Machines

Login – FS-VM1 (RDP or Console Access)

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition

Login – FS-VM2 – Repeat same steps

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition (or use the diskpart sketch below)
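If you prefer the command line over the Disk Management GUI, the equivalent diskpart flow looks roughly like this (disk number 1 is an assumption; confirm it against the list disk output):

diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign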

The servers are now ready for you to configure your clustering application.

Reset root password – RHEL7/8 CentOS 7/8


Reboot your server. At the bootloader (GRUB) screen, select the kernel you would like to boot with (usually the latest one) and hit ‘e’.

In the next screen, find the line that refers to the kernel

  • For RHEL/CentOS 7, the line starts with ‘linux16’.
  • For RHEL/CentOS 8.x and Fedora, the line starts with ‘linux’.

Add ‘rd.break’ at the end of the kernel line and press Ctrl-X.
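For illustration, the edited kernel line might look roughly like this (the kernel version and options will differ on your system):

linux ($root)/vmlinuz-4.18.0-305.el8.x86_64 root=/dev/mapper/rhel-root ro rhgb quiet rd.break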

The server will now boot into an emergency shell from the initramfs:

switch_root:/#

Now, remount the root partition in read/write mode:

#mount -o remount,rw /sysroot

Next, chroot into the actual root filesystem:

#chroot /sysroot

At this point you can change the root password

#passwd <enter>
*<new password>*
*<repeat new password>*

The next step tells SELinux to relabel changed files on the next boot, such as the password reset in our case.

#touch /.autorelabel

The relabel will take some time during the next boot, depending on the filesystem size.

Once complete, exit the chroot:

#exit

Then restart the server:

#reboot

After the reboot, validate that the new password has been set by logging in with the root account.

Kubernetes Architecture 101 – Mind Map

I have been spending some time on K8s, digging deep to understand the components involved and how they play their part in running a successful application on the K8s architecture.

For me, learning always starts with the basics; once you are comfortable with those, any application running on top becomes easier to understand, troubleshoot, and enhance.

The previous post was a very high-level K8s mind map, and here I am posting another 101 mind map, this time for its architecture.

K8s Architecture Mind Map

Kubernetes 101 – Mind Map

I have played with Kubernetes in the past, but that was years ago and only to get started. That knowledge still helps me follow discussions related to Kubernetes.

I finally decided to get my hands dirty and created this a couple of days back; I thought I would blog it for reference, and it might be useful for someone getting started.

K8s Mind Map

Using command line kill/power off a virtual machine

VMware ESXi CLI

Using the ESXi esxcli command to power off a virtual machine

The esxcli command can be used locally or remotely to power off a virtual machine running on ESXi 5.x or later. For more information, see the esxcli vm Commands section in the vSphere Command-Line Interface Reference.

  1. Open a console session where the esxcli tool is available, either in the ESXi Shell, the vSphere Management Assistant (vMA), or the location where the vSphere Command-Line Interface (vCLI) is installed.
  2. Get a list of running virtual machines, identified by World ID, UUID, Display Name, and path to the .vmx configuration file, by running: esxcli vm process list
  3. Power off the virtual machine from the list by running: esxcli vm process kill --type=[soft,hard,force] --world-id=WorldNumber
    • Three power-off methods are available: soft is the most graceful, hard performs an immediate shutdown, and force should be used as a last resort.
    • Alternate syntax: esxcli vm process kill -t [soft,hard,force] -w WorldNumber
  4. Repeat Step 2 and validate that the virtual machine is no longer running.
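A minimal worked example (the world ID 12345 is illustrative; take the real one from the process list):

esxcli vm process list
esxcli vm process kill --type=soft --world-id=12345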

Using the ESXi command-line utility vim-cmd to power off the virtual machine

  1. On the ESXi console, enter Tech Support mode and log in as root. For more information, see Tech Support Mode for Emergency Support (1003677).
  2. Get a list of all registered virtual machines, identified by their VMID, Display Name, and path to the .vmx configuration file, by running: vim-cmd vmsvc/getallvms
  3. Get the current state of a virtual machine by running: vim-cmd vmsvc/power.getstate VMID
  4. Shut down the virtual machine using the VMID found in Step 2 by running: vim-cmd vmsvc/power.shutdown VMID
    • Note: If the virtual machine fails to shut down, run: vim-cmd vmsvc/power.off VMID
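A minimal worked example (the VMID 5 is illustrative; take the real one from getallvms):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.getstate 5
vim-cmd vmsvc/power.shutdown 5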

Note: this article is for my quick reference, VMware KB reference (https://kb.vmware.com/s/article/1014165)

Default ssh username for AMI – AWS EC2 Instances

Amazon EC2

Just for my quick reference:

  • For an Amazon Linux AMI, the user name is ec2-user.
  • For a CentOS AMI, the user name is centos.
  • For a Debian AMI, the user name is admin or root.
  • For a Fedora AMI, the user name is ec2-user or fedora.
  • For a RHEL AMI, the user name is ec2-user or root.
  • For a SUSE AMI, the user name is ec2-user or root.
  • For an Ubuntu AMI, the user name is ubuntu or root.
  • Otherwise, if ec2-user and root don’t work, check with the AMI provider.
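For example, connecting to an Amazon Linux instance (the key path and hostname are illustrative):

$ ssh -i ~/.ssh/my-key.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com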

Reference AWS Guide: 

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html#ssh-prereqs

Scan for new SCSI drives without the need to reboot the VM – LINUX

A new disk presented to a running Linux virtual machine will not show up in the operating system unless you reboot the server, or rescan the SCSI bus as shown below.

Found another way to achieve this and thought to note it down here for my future reference.

Run the following commands:

The first command returns the SCSI host in use, which in this case is host2:

grep mpt /sys/class/scsi_host/host?/proc_name

The next command performs a bus scan:

echo "- - -" > /sys/class/scsi_host/host2/scan
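If you are not sure which host needs the rescan, here is a small sketch to rescan every SCSI host (harmless for hosts with no new devices):

for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done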

fdisk can then list all the available drives on the machine.

Validate if you can view new disk:

fdisk -l

If the newly added drive is still not discovered, then unfortunately just reboot the VM.

Content Referenced article: https://www.altaro.com/vmware/managing-disk-space-linux-vm/

 

Install MHA on RHEL6 / CentOS6 for mySQL

Configure Proxy for Internet

#export http_proxy=http://proxy.xxxx.intra:00
#export https_proxy=https://proxy.xxxx.intra:00

Note: My environment uses a proxy server for Internet access; if you have direct access to the Internet, ignore this step.

Configure the Red Hat subscription for yum

#subscription-manager register --username admin-example --password secret --auto-attach

Download the epel-release package to the machine

#wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

Note: the package will be downloaded to your current working directory; verify it with the pwd command.

Install the epel-release package

#yum install -y epel-release-latest-6.noarch.rpm

Install perl packages

#yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel

Install more perl packages

#yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

Download MHA Packages (Node & Manager)

https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2

– MHA Manager 0.56 rpm RHEL6 – mha4mysql-manager-0.56-0.el6.noarch.rpm
– MHA Node 0.56 rpm RHEL6 – mha4mysql-node-0.56-0.el6.noarch.rpm

Once downloaded, copy the packages to the server via WinSCP or SSH (wget was somehow not working properly for me).

Install MHA Packages

#yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm

#yum -y install mha4mysql-manager-0.56-0.el6.noarch.rpm

Note: run the install commands from the directory where the packages were downloaded.
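Once both packages are installed, the manager ships helper scripts to sanity-check the setup; a sketch, assuming a config file at /etc/mha/app1.cnf (the path is illustrative):

#masterha_check_ssh --conf=/etc/mha/app1.cnf
#masterha_check_repl --conf=/etc/mha/app1.cnf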

Docker – Basic Cheat Sheet

docker.com

Docker had been on my learning list for a while.

So here are some quick commands for your reference and my own, as it is not a tool I use day-to-day.

Display Docker Images:

$ docker images

Run Docker Image:

$ docker run hello-world

Note: If you do not have the image on your local machine, Docker will pull it from Docker Hub (over the internet).

Write a Dockerfile:

$ mkdir dockerfolder

$ cd dockerfolder

$ vim newdockerfile

FROM ubuntu
RUN apt-get -y update && apt-get install -y telnet

Note: a Dockerfile must start with a FROM line naming the base image; here we build on ubuntu.

save and close your newdockerfile with “:wq”

Build an image from our Dockerfile:

$ docker build -t telnet-install -f newdockerfile .

Note: the period at the end sets the build context to the current directory, and -f points docker build at our newdockerfile (by default it looks for a file named Dockerfile).

If you want to run the newly built image: “docker run telnet-install”

Tag your image ID (required if you want to push to Docker Hub):

$ docker tag 693bce725149 terminaltolinux/telnet-install:latest

Note: the image ID can be found with the “docker images” command

Login to docker hub:

$ docker login --username=terminaltolinux --email=terminal@linux.com

Note: after the docker login command it will prompt for a password; you need a Docker Hub account beforehand. Newer Docker versions have dropped the --email flag.

Push docker image to docker hub account:

$ docker push terminaltolinux/telnet-install

Note: verify on your Docker Hub account that the image was pushed. Create a repo on Docker Hub beforehand.

Delete a docker image:

$ docker rmi -f terminaltolinux/telnet-install

Pull docker image from docker hub account:

$ docker run terminaltolinux/telnet-install

Note: since we deleted the image earlier, it will not be found on the local machine, so Docker fetches it online and runs it; alternatively, “docker pull terminaltolinux/telnet-install” can be used to just pull the image.

Search docker image:

$ docker search mysql

Search docker images with a minimum number of stars:

$ docker search -s 1 mysql

Note: newer Docker versions replaced -s with --filter=stars=1

Run docker image in background:

$ docker run -d mysql

Run docker image with interactive session:

$ docker run -it ubuntu

List running containers

$ docker ps

Inspect a container

$ docker inspect <container-id>

Note: container-id will be available from “docker ps”

Logs (standard output and standard error) of a container:

$ docker logs <container-id>

Commit changes to a container and save it as a separate image (tag it):

$ docker commit <container-id> nginx-ubuntu

Port binding to container

$ docker run -d -p 6379 redis

Note: -p publishes the container port on a random host port; to map it directly to the same port on the host, use -p 6379:6379, and to bind to a particular IP, use -p 127.0.0.1:6379:6379 (see the examples below).
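Putting that note into runnable form (assuming the official redis image):

$ docker run -d -p 6379:6379 redis
$ docker run -d -p 127.0.0.1:6379:6379 redis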

Binding directories

$ docker run -d -v "/home/docker/data":/data redis

Start a container

$ docker start <container-id>

Stop a container

$ docker stop <container-id>

Remove an exited container

$ docker rm <container-id>

Restart a container

$ docker restart <container-id>

Use docker with proxy:

If you want to run Docker behind your environment’s proxy, edit /etc/default/docker and amend the http_proxy entry.

TIP:

If we don’t explicitly tell Docker that we want to map a port, it will block access through that port (containers are isolated until you grant access).