Harbor with Public Wildcard Certificate via Helm Chart on Kubernetes Cluster (TKG Cluster)


Assumptions:

  • Tanzu services, mainly Tanzu Kubernetes Grid, or vCloud Director with Container Service Extension backed by TKG images
  • TKG cluster is provisioned and the kubeconfig file is available
  • A dedicated jump host or workstation with the necessary Tanzu tooling and Helm installed

Harbor Installation / Deployment and Upgrade

We will first deploy Harbor as a standard install and then upgrade the release to use a public certificate.

Let’s first set the kubeconfig file for the session.

# export KUBECONFIG=/root/kubeconfig-cluster.txt
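
Optionally, confirm that the kubeconfig points at the intended cluster before proceeding:

# kubectl config current-context

# kubectl get nodes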

Make sure Helm is available, then add the Harbor repository and fetch the chart:

# helm repo add harbor https://helm.goharbor.io

# helm fetch harbor/harbor --untar

Navigate into the harbor folder:

# cd harbor

Make a copy of the values.yaml file, which you will use for the installation:

# cp values.yaml cluster-values.yaml

Modify cluster-values.yaml with the required configuration, without the public certificate for now:

# vim cluster-values.yaml

Update the following parameters for the basic install (a sketch of where they sit in the values file follows the list):

certSource: auto

commonName: "harbor.publicURL.com"

core: harbor.publicURL.com

externalURL: https://harbor.publicURL.com

harborAdminPassword: "<Password>"
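
For reference, these keys are not all at the top level of the values file; here is a rough sketch of where they sit in the Harbor chart (exact paths can differ between chart versions, so treat this as an illustration rather than a verbatim copy of values.yaml):

externalURL: https://harbor.publicURL.com
harborAdminPassword: "<Password>"
expose:
  tls:
    certSource: auto
    auto:
      commonName: "harbor.publicURL.com"
  ingress:
    hosts:
      core: harbor.publicURL.com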

Create a namespace for harbor

# kubectl create ns harbor-system

Install harbor

# helm install harbor . -n harbor-system -f cluster-values.yaml

Wait and verify the installation

# kubectl get deployments -n harbor-system

Verify the installation by browsing to the service IP:

# kubectl get svc -n harbor-system

Once the basic install is complete, perform an upgrade to add the public certificate to the setup.

Make sure the public certificate and key are available in the working path (an optional sanity check follows the list below):

- publiccertificate.crt

- publiccertificate_pkcs8.key
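
Optionally, verify that the certificate and key actually belong together before creating the secret; assuming openssl is available on the jump host, the two hashes below should match:

# openssl x509 -noout -pubkey -in publiccertificate.crt | openssl md5

# openssl pkey -pubout -in publiccertificate_pkcs8.key | openssl md5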

Create a TLS secret from the certificate and key:

# kubectl create secret tls harbor-secret --cert=publiccertificate.crt --key=publiccertificate_pkcs8.key --namespace=harbor-system

Validate that the secret was created:

# kubectl get secret -n harbor-system

Make another copy of the cluster-values.yaml file, which you will use for the upgrade:

# cp cluster-values.yaml cert-cluster-values.yaml

Modify cert-cluster-values.yaml with the required configuration, this time referencing the public certificate:

# vim cert-cluster-values.yaml

Update the following parameters for the certificate install (a sketch follows the list):

certSource: secret

secretName: harbor-secret
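
For reference, in the chart these two settings sit under the expose.tls section, roughly like this (paths can differ between chart versions):

expose:
  tls:
    certSource: secret
    secret:
      secretName: "harbor-secret"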

Upgrade the Harbor Helm release:

# helm upgrade harbor . -n harbor-system -f cert-cluster-values.yaml

Wait and verify the upgrade

# kubectl get deployments -n harbor-system

Verify the upgrade by browsing to the service IP:

# kubectl get svc -n harbor-system

For day-to-day use, create DNS records pointing to the Harbor service IP (or, where applicable, the load balancer/DNAT IP of the TKG cluster), or add a local /etc/hosts entry as required.
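
For example, note the external IP of the exposed service and, if you are not creating DNS records yet, add a local hosts entry (the IP below is only a placeholder; use your own load balancer/DNAT or service IP):

# kubectl get svc -n harbor-system -o wide

# echo "203.0.113.10  harbor.publicURL.com" >> /etc/hosts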

Extend Root LVM Configured Partition – RHEL8 / RHEL7 / CentOS 8 / CentOS 7 – No Reboot Required

In the example below, I have a RHEL 8 server with a 100 GB OS disk; the space is distributed as follows:

  • root = 50 GB
  • home = 46.3 GB
  • swap = 2.1 GB

Filesystem: XFS; however, the same procedure should also work with ext partitions.

The requirement here is to extend the root partition from 50 GB to 100 GB.

OK, let’s find the device that holds the root “/” partition:

# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root   253:0    0   50G  0 lvm  /
  ├─rhel-swap   253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home   253:2    0 46.3G  0 lvm  /home

#==> /dev/sda is my OS disk, root partition is on /dev/sda3

Find device with LVM PV Scan

#pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--    98.41g    0
/dev/sdb1  vgapp lvm2 a--  <200.00g    0

My server is running in a VMware vSphere environment. I have increased the disk size on the VM from 100 GB to 150 GB, and we will grow the space without rebooting/restarting the server.

VM Edit Settings > Disk Size Increase from 100 to 150 GB

Note: My server was initially deployed with LVM already configured; we are simply leveraging LVM to extend the partition in this case.

Let us rescan the extended disk in the OS to make the Linux kernel aware of the new size (no reboot required):

# echo 1 > /sys/class/block/sda/device/rescan

Validate New Disk Size on sda

#lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  150G  0 disk
├─sda1            8:1    0  600M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0 98.4G  0 part
  ├─rhel-root   253:0    0   50G  0 lvm  /
  ├─rhel-swap   253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel-home   253:2    0 46.3G  0 lvm  /home

#==> sda size changed from 100 GB to 150 GB

Note: The partition is not automatically adjusted and needs to be resized in two steps:

  1. Resize the partition
  2. Make the kernel aware of the bigger partition

Typically we would use fdisk for the first step and a utility like partprobe (or a reboot) for the second. But there is a handy tool called growpart, which we will use here. growpart is part of the cloud-utils package and should be available in your distro’s repositories. In my case it was not installed, so let’s install it.

#yum install -y cloud-utils-growpart

Let’s increase the partition now with growpart.

As identified by lsblk:

Device = /dev/sda ==> the disk holding the root partition

Partition = /dev/sda3 ==> where our root partition lives

#growpart /dev/sda 3
CHANGED: partition=3 start=3328000 old: size=206385152 end=209713152 new: size=311244767,end=314572767

#Note: there is a space between /dev/sda and 3 (3 is our partition number)

Let’s validate new partition size

#lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root   253:0    0    50G  0 lvm  /
  ├─rhel-swap   253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home   253:2    0  46.3G  0 lvm  /home

#==> sda3 size changed from 98.4 GB to 148.4 GB

Let’s resize the physical volume to occupy all the new space:

#pvresize /dev/sda3
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Validate with LVM PV Scan

#pvs

PV         VG    Fmt  Attr PSize    PFree
/dev/sda3  rhel  lvm2 a--   148.41g 50.00g
/dev/sdb1  vgapp lvm2 a--  <200.00g     0

#==> PSize changed for /dev/sda3

Let’s check the LVM volume group status

#vgs

VG    #PV #LV #SN Attr   VSize    VFree
rhel    1   3   0 wz--n-  148.41g 50.00g
vgapp   1   1   0 wz--n- <200.00g     0

#==> We have free space of 50 GB for rhel VG i.e our root partition VG

Let’s resize the logical volume to occupy all the free space; the -r flag also grows the filesystem in the same step:

#lvextend -r -l +100%FREE /dev/rhel/root

Size of logical volume rhel/root changed from 50.00 GiB (12800 extents) to 100.00 GiB (25600 extents).
Logical volume rhel/root successfully resized.
meta-data=/dev/mapper/rhel-root  isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 26214400

#==> example: #lvextend -r -l +100%FREE /dev/<name-of-volume-group>/root
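
If you run lvextend without the -r flag, grow the filesystem in a separate step; a small sketch for the same rhel/root volume (use the command matching your filesystem):

#xfs_growfs /

#==> for XFS (our case); for an ext4 root, use: resize2fs /dev/rhel/root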

Validate that the root partition has now been increased to 100 GB:

# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0   600M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 148.4G  0 part
  ├─rhel-root   253:0    0   100G  0 lvm  /
  ├─rhel-swap   253:1    0   2.1G  0 lvm  [SWAP]
  └─rhel-home   253:2    0  46.3G  0 lvm  /home

#==> rhel-root increased from 50 GB to 100 GB.

Hope you find this article useful; share it with others if you feel it’s worth it.

Have a nice day.

Multi-writer Shared / Clustered Disk for Windows or Oracle Cluster – vSphere / ESXi / vSAN 6.7.x

For the sake of this blog, it is a two-VM use case, with the VMs running on separate ESXi hosts (anti-affinity rule in place).

The steps for the second node also apply to a third/fourth node if one is introduced.

Environment

FS-VM1 – OS: Windows Server 2019, OS Disk 1 = 100 GB

FS-VM2 – OS: Windows Server 2019, OS Disk 1 = 100 GB

I created a thick-provisioned, eager-zeroed storage policy since I was configuring this setup on vSAN; an eager-zeroed thick disk is required for this setup to work as expected.

Required disks for clustering:
Shared Disk 1 = 70 GB
Shared Disk 2 = 140 GB

Configuration Steps:

Shut down the guest OS or power off FS-VM1 & FS-VM2.

FS-VM1

  • Edit Settings – FS-VM1
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add New Disk
    • Size: 70 GB (1:0)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add New Disk
    • Size: 140 GB (1:1)
    • Type: Thick provisioned eager zeroed (using thick policy)
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

Save the VM settings by pressing OK.

FS-VM2

  • Edit Settings – FS-VM2
  • Add New Devices > SCSI Controller
    • Select type: VMware Paravirtual
    • Select SCSI Bus sharing mode: Physical
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the file browser
    • Select 70 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:0)
  • Add “Existing Hard Disk”
    • Browse to the FS-VM1 folder in the file browser
    • Select 140 GB .vmdk
    • Sharing: Multi-writer
    • Disk Mode: Independent – Persistent
    • Virtual Device Node: SCSI controller 1 = SCSI(1:1)

In the case of vSAN, the disk path will show the vSAN datastore identifier followed by an object UUID folder and the <VM name>-<code>.vmdk file.

Sample: [VxRail-Virtual-SAN-Datastore-8fe79a34-32432-45d4-affb-cdf6a37dd110] 62479960-6cf5-58d0-77bb-e4434bf848f0/FS-VM1-cXNb_1.vmdk

To further validate, make sure it is the same path as shown for the FS-VM1 disk file (VM Edit Settings > Hard Disk > Disk File).

Make sure the configuration is identical on both VMs: SCSI controller, SCSI number, multi-writer sharing, independent-persistent disk mode, etc.

Save the VM settings by pressing OK.

Validation Steps

Power on both Virtual Machines

Login – FS-VM1 (RDP or Console Access)

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition

Login – FS-VM2 – repeat the same steps (a command-line alternative is sketched after this list)

  • Navigate to Disk Management
  • Validate Disks are Visible
  • Rescan Disk
  • Initialize Disk
  • Create Partition
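
If you prefer the command line over the Disk Management GUI, the same rescan/initialize/partition steps can be done with diskpart from an elevated prompt; a minimal sketch, assuming the first shared disk shows up as Disk 1 (adjust the disk number, label, and drive letter for your environment):

DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs label="Shared1" quick
DISKPART> assign letter=E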

The servers are now ready for you to configure your clustering application.

Reset root password – RHEL7/8 CentOS 7/8


Reboot your server; at the bootloader (GRUB) screen, select the kernel you would like to boot with (usually the latest one) and hit ‘e’.

In the next screen, find the line that refers to the kernel

  • For RHEL/CentOS 7, the line starts with ‘linux16’.
  • For RHEL/CentOS 8 and Fedora, the line starts with ‘linux’.

Add ‘rd.break’ at the end of the kernel line and press Ctrl-x.
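
For illustration only, the edited kernel line might look roughly like this on RHEL 8 (your kernel version and boot parameters will differ):

linux ($root)/vmlinuz-4.18.0-305.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rhgb quiet rd.break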

The server will now boot into an emergency (rescue) shell with the prompt:

switch_root:/#

Now, remount the root partition in read/write mode:

#mount -o remount,rw /sysroot

Next, switch into the root filesystem with chroot:

#chroot /sysroot

At this point you can change the root password

#passwd <enter>
*<new password>*
*<repeat new password>*

The next step tells SELinux to relabel the changed files (such as /etc/shadow after our password reset) on the next boot:

#touch /.autorelabel

The relabel runs during the next boot and can take some time, depending on the filesystem size.

Once done, exit the chroot environment:

#exit

Then restart the server:

#reboot

After the reboot, validate that the new password has been set by logging in with the root account.

Using the command line to kill/power off a virtual machine

VMware ESXi CLI

Using the ESXi esxcli command to power off a virtual machine

The esxcli command can be used locally or remotely to power off a virtual machine running on ESXi 5.x or later. For more information, see the esxcli vm Commands section in the vSphere Command-Line Interface Reference.

  1. Open a console session where the esxcli tool is available, either in the ESXi Shell, the vSphere Management Assistant (vMA), or the location where the vSphere Command-Line Interface (vCLI) is installed.
  2. Get a list of running virtual machines, identified by World ID, UUID, Display Name, and path to the .vmx configuration file, by running this command: esxcli vm process list
  3. Power off the virtual machine from the list by running this command: esxcli vm process kill --type=[soft,hard,force] --world-id=WorldNumber (see the example after this list). Notes:
    • Three power-off methods are available. Soft is the most graceful, hard performs an immediate shutdown, and force should be used only as a last resort.
    • An alternate power-off command syntax is: esxcli vm process kill -t [soft,hard,force] -w WorldNumber
  4. Repeat Step 2 and validate that the virtual machine is no longer running.
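
Putting the esxcli steps together, a typical session looks like this (the world ID is a placeholder; use the value from your own process list output):

# esxcli vm process list

# esxcli vm process kill --type=soft --world-id=1234567

# esxcli vm process list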

Using the ESXi command-line utility vim-cmd to power off the virtual machine

  1. On the ESXi console, enter Tech Support mode and log in as root. For more information, see Tech Support Mode for Emergency Support (1003677).
  2. Get a list of all registered virtual machines, identified by their VMID, Display Name, and path to the .vmx configuration file, by running this command: vim-cmd vmsvc/getallvms
  3. Get the current state of a virtual machine by running this command: vim-cmd vmsvc/power.getstate VMID
  4. Shut down the virtual machine using the VMID found in Step 2 by running this command: vim-cmd vmsvc/power.shutdown VMID (see the example after this list). Note: If the virtual machine fails to shut down, run this command: vim-cmd vmsvc/power.off VMID
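
Similarly, a typical vim-cmd sequence looks like this (VMID 5 is only a placeholder taken from the getallvms output; run power.off only if the graceful shutdown fails):

# vim-cmd vmsvc/getallvms

# vim-cmd vmsvc/power.getstate 5

# vim-cmd vmsvc/power.shutdown 5

# vim-cmd vmsvc/power.off 5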

Note: this article is for my quick reference; VMware KB reference: https://kb.vmware.com/s/article/1014165

Scan for new SCSI drives without the need to reboot the VM – LINUX

A new disk presented to a running Linux virtual machine will not appear in the operating system until the SCSI bus is rescanned (or the server is rebooted).

I found a way to achieve this without a reboot and thought I would note it down here for future reference.

Run the following commands:

The first command returns the SCSI host in use, which in this case is host2:

grep mpt /sys/class/scsi_host/host?/proc_name

The next command performs a bus scan:

echo "- - -" > /sys/class/scsi_host/host2/scan
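
If you are unsure which host to target, a common variation is to rescan every SCSI host in one go (same effect, just looped over all hosts):

for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done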

fdisk is used to list all the available drives on the machine.

Validate that you can see the new disk:

fdisk -l

If the newly added drive is still not discovered, then unfortunately you will need to reboot the VM.

Content referenced from: https://www.altaro.com/vmware/managing-disk-space-linux-vm/