This step-by-step guide shows how to take a full backup of the Self-Hosted Engine (SHE) virtual machine before upgrading it. This approach is an alternative to the official backup and restore solution available for Oracle Linux Virtualization Manager: the guide below shows how to take an offline snapshot of the Oracle Linux Virtualization Manager Self-Hosted Engine virtual machine.

This document is for test and educational purposes only.

This document is still under review; sections of this document could change.


Taking an external Self-Hosted Engine backup before upgrading Oracle Linux Virtualization Manager

  • Identify the KVM host running the SHE VM in the Engine Administration portal web UI, or from the command line by running the following command on a SHE-enabled KVM host:

[root@host1 ~]# hosted-engine --vm-status | grep -B2 "Engine status"
Hostname                           : host1.example.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}

Hostname                           : host2.example.com
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}

Hostname                           : host3.example.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
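
Note: The same check can be done via libvirt on each KVM host; a quick read-only alternative is sketched below (the HostedEngine VM is listed as running only on the host currently hosting it):

[root@host1 ~]# virsh -r list | grep HostedEngine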
  • List the HostedEngine VM configuration on the KVM host:

[root@host1 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=she.example.com
vm_disk_id=636b2989-b522-4b9a-97fa-4f2721996383
vm_disk_vol_id=6f32449b-f5b6-48c1-adf4-00e566911822  <-- SHE virtual disk
vmid=8054010a-69e1-451f-ac73-1f307b9a742e
storage=192.168.0.100
nfs_version=
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=iscsi
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=f97e88d0-73f3-4331-92fc-bc8a0fabe9a6
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.0.1
bridge=ovirtmgmt
network_test=dns
tcp_t_address=
tcp_t_port=
metadata_volume_UUID=588b14d1-06e6-4c65-81bd-db2fd15ce133
metadata_image_UUID=3a67b936-3d41-4501-b4a6-0889d1b81a84
lockspace_volume_UUID=7046b61a-c921-48a1-b292-d27fa64da7da
lockspace_image_UUID=51b470fd-917e-4285-8c6b-f71f59b58676
conf_volume_UUID=daab5d91-8a79-43c1-b8b4-93e34da48f81
conf_image_UUID=140d0832-d3a4-4ad8-9801-51b2adcb9c70
# The following are used only for iSCSI storage
iqn=iqn.2021-04.com.example:she
portal=1
user=
password=
port=3260
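
Note: For the following steps, it can be convenient to read the relevant IDs into shell variables instead of copying them by hand. A minimal sketch (the variable names SHE_SD_UUID and SHE_VOL_ID are only illustrative):

[root@host1 ~]# SHE_SD_UUID=$(awk -F= '/^sdUUID/ {print $2}' /etc/ovirt-hosted-engine/hosted-engine.conf)
[root@host1 ~]# SHE_VOL_ID=$(awk -F= '/^vm_disk_vol_id/ {print $2}' /etc/ovirt-hosted-engine/hosted-engine.conf)
[root@host1 ~]# echo "$SHE_SD_UUID $SHE_VOL_ID"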
  • Confirm the Self-Hosted Engine virtual disk ID:

[root@host1 ~]# virsh -r dumpxml HostedEngine | grep "ovirt-vm:volumeID"
        <ovirt-vm:volumeID>6f32449b-f5b6-48c1-adf4-00e566911822</ovirt-vm:volumeID>
                <ovirt-vm:volumeID>6f32449b-f5b6-48c1-adf4-00e566911822</ovirt-vm:volumeID>
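
Note: The volumeID reported by virsh should match the vm_disk_vol_id value from hosted-engine.conf; with the variables sketched above, the comparison can be scripted:

[root@host1 ~]# virsh -r dumpxml HostedEngine | grep -q "$SHE_VOL_ID" && echo "volume ID matches hosted-engine.conf"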
  • Identify the running storage devices connected to the VDSM service on the KVM host:

[root@host1 ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
├── 9624d3c5-0957-4e24-b667-fb33e62248ad
└── f97e88d0-73f3-4331-92fc-bc8a0fabe9a6
    ├── 140d0832-d3a4-4ad8-9801-51b2adcb9c70
    │   └── daab5d91-8a79-43c1-b8b4-93e34da48f81 -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/daab5d91-8a79-43c1-b8b4-93e34da48f81
    ├── 14d71966-ba65-462c-bbe2-00b56165bb00
    │   └── 478030fe-83a7-4d14-9d23-2ba9c4dfb686 -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/478030fe-83a7-4d14-9d23-2ba9c4dfb686
    ├── 3a67b936-3d41-4501-b4a6-0889d1b81a84
    │   └── 588b14d1-06e6-4c65-81bd-db2fd15ce133 -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/588b14d1-06e6-4c65-81bd-db2fd15ce133
    ├── 51b470fd-917e-4285-8c6b-f71f59b58676
    │   └── 7046b61a-c921-48a1-b292-d27fa64da7da -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/7046b61a-c921-48a1-b292-d27fa64da7da
    ├── 636b2989-b522-4b9a-97fa-4f2721996383
    │   └── 6f32449b-f5b6-48c1-adf4-00e566911822 -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
    └── fdb608ba-1a1a-4476-adc5-52cece9fdfcb
        └── 7eb61d63-72e8-41c2-8db8-bc30174b71e6 -> /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/7eb61d63-72e8-41c2-8db8-bc30174b71e6
8 directories, 6 files
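
Note: In this tree, the first directory level is the storage domain UUID (sdUUID) and the second level is the image UUID; each volume is a symlink to its backing LVM device. The SHE disk can therefore also be resolved directly from the IDs recorded in hosted-engine.conf:

[root@host1 ~]# ls -l /var/run/vdsm/storage/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/636b2989-b522-4b9a-97fa-4f2721996383/

This should list the 6f32449b-f5b6-48c1-adf4-00e566911822 symlink shown above.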
  • Identify the Self-Hosted Engine virtual disk available on the KVM host:
[root@host1 ~]# find /dev/ -name "6f32449b-f5b6-48c1-adf4-00e566911822" -ls
62737    0 lrwxrwxrwx  1 root  root 8 Jul 26 08:20 /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 -> ../dm-18
  • Shut down the SHE instance and check its status:

[root@host1 ~]# hosted-engine --set-maintenance --mode=global
[root@host1 ~]# hosted-engine --vm-status | head -n 6
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@host1 ~]# hosted-engine --vm-shutdown
[root@host1 ~]# hosted-engine --vm-status | grep -B2 "Engine status"
Hostname                           : host1.example.com
Host ID                            : 1
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}

Hostname                           : host2.example.com
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}

Hostname                           : host3.example.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
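
Note: --vm-shutdown only requests a guest shutdown, so the VM may take a short while to stop. A simple way to wait for it, sketched here using the read-only libvirt view, is:

[root@host1 ~]# while virsh -r list | grep -q HostedEngine; do sleep 10; done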
  • Check the Volume Groups and Logical Volumes available on the KVM host:

[root@host1 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize    VFree
  9624d3c5-0957-4e24-b667-fb33e62248ad   1  35   0 wz--n-  838.00g <379.38g
  f97e88d0-73f3-4331-92fc-bc8a0fabe9a6   1  13   0 wz--n-   92.75g   15.00g
  ol_host1                               1   2   0 wz--n- <464.76g    4.00m
[root@host1 ~]# lvs
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  478030fe-83a7-4d14-9d23-2ba9c4dfb686 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  588b14d1-06e6-4c65-81bd-db2fd15ce133 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  6f32449b-f5b6-48c1-adf4-00e566911822 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-------  70.00g
  7046b61a-c921-48a1-b292-d27fa64da7da f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao----   1.00g
  7eb61d63-72e8-41c2-8db8-bc30174b71e6 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  daab5d91-8a79-43c1-b8b4-93e34da48f81 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  ids                                  f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao---- 128.00m
  inbox                                f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  leases                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   2.00g
  master                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao----   1.00g
  metadata                             f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  outbox                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  xleases                              f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  root                                 ol_host1                             -wi-ao---- 449.00g
  swap                                 ol_host1                             -wi-ao----  15.75g
  • Filter for the Self-Hosted Engine volumeID and show detailed Logical Volume information:

[root@host1 ~]# lvs | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  6f32449b-f5b6-48c1-adf4-00e566911822 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-------  70.00g
[root@host1 ~]# ls -l /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/
total 0
lrwxrwxrwx. 1 root root 8 Jul 26 09:20 478030fe-83a7-4d14-9d23-2ba9c4dfb686 -> ../dm-15
lrwxrwxrwx. 1 root root 8 Jul 26 10:19 588b14d1-06e6-4c65-81bd-db2fd15ce133 -> ../dm-14
lrwxrwxrwx. 1 root root 8 Jul 26 08:15 7046b61a-c921-48a1-b292-d27fa64da7da -> ../dm-13
lrwxrwxrwx. 1 root root 8 Jul 26 09:20 7eb61d63-72e8-41c2-8db8-bc30174b71e6 -> ../dm-16
lrwxrwxrwx. 1 root root 8 Jul 26 08:18 daab5d91-8a79-43c1-b8b4-93e34da48f81 -> ../dm-17
lrwxrwxrwx. 1 root root 7 Jul 26 08:15 ids -> ../dm-7
lrwxrwxrwx. 1 root root 8 Jul 26 08:20 inbox -> ../dm-11
lrwxrwxrwx. 1 root root 7 Jul 26 10:10 leases -> ../dm-8
lrwxrwxrwx. 1 root root 8 Jul 26 08:20 master -> ../dm-12
lrwxrwxrwx. 1 root root 7 Jul 26 09:20 metadata -> ../dm-6
lrwxrwxrwx. 1 root root 7 Jul 26 08:20 outbox -> ../dm-9
lrwxrwxrwx. 1 root root 8 Jul 26 08:15 xleases -> ../dm-10
  • After the Self-Hosted Engine VM is shut down, its Logical Volume is no longer visible on the KVM host:

[root@host1 ~]# find /dev/ -name "6f32449b-f5b6-48c1-adf4-00e566911822" -ls
[root@host1 ~]# ls /dev/dm-*
/dev/dm-0   /dev/dm-13  /dev/dm-19  /dev/dm-23  /dev/dm-4  /dev/dm-9
/dev/dm-1   /dev/dm-14  /dev/dm-2   /dev/dm-24  /dev/dm-5
/dev/dm-10  /dev/dm-15  /dev/dm-20  /dev/dm-25  /dev/dm-6
/dev/dm-11  /dev/dm-16  /dev/dm-21  /dev/dm-26  /dev/dm-7
/dev/dm-12  /dev/dm-17  /dev/dm-22  /dev/dm-3   /dev/dm-8
  • To manually activate the LV, first check the Volume Group:

[root@host1 ~]# vgdisplay f97e88d0-73f3-4331-92fc-bc8a0fabe9a6
  --- Volume group ---
  VG Name               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  24
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                13
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               92.75 GiB
  PE Size               128.00 MiB
  Total PE              742
  Alloc PE / Size       622 / 77.75 GiB
  Free  PE / Size       120 / 15.00 GiB
  VG UUID               dlHRgN-IfxK-FCb3-AIvR-jCyU-hTNS-RMSAfL
  • Check the Logical Volumes in the f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 Volume Group:

[root@host1 ~]# lvs f97e88d0-73f3-4331-92fc-bc8a0fabe9a6
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  478030fe-83a7-4d14-9d23-2ba9c4dfb686 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  588b14d1-06e6-4c65-81bd-db2fd15ce133 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  6f32449b-f5b6-48c1-adf4-00e566911822 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-------  70.00g
  7046b61a-c921-48a1-b292-d27fa64da7da f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao----   1.00g
  7eb61d63-72e8-41c2-8db8-bc30174b71e6 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  daab5d91-8a79-43c1-b8b4-93e34da48f81 f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  ids                                  f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao---- 128.00m
  inbox                                f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  leases                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   2.00g
  master                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-ao----   1.00g
  metadata                             f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  outbox                               f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a----- 128.00m
  xleases                              f97e88d0-73f3-4331-92fc-bc8a0fabe9a6 -wi-a-----   1.00g
  • List the Logical Volume device for the Self-Hosted Engine volumeID (lvscan reports it as "inactive"):

[root@host1 ~]# lvscan | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  inactive          '/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822' [70.00 GiB] inherit
  • Activate the /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 Logical Volume:

[root@host1 ~]# lvchange -ay /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
  • Check the Self-Hosted Engine Logical Volume status:

[root@host1 ~]# lvscan | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  ACTIVE            '/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822' [70.00 GiB] inherit
  • Locate the Self-Hosted Engine Logical Volume on the KVM host:

[root@host1 ~]# find /dev/ -name "6f32449b-f5b6-48c1-adf4-00e566911822" -ls
93561285    0 lrwxrwxrwx   1 root     root            8 Jul 26 10:29 /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 -> ../dm-18
  • Check the block device size using the Logical Volume device:

[root@host1 ~]# lsblk /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
NAME                                                                              MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
f97e88d0--73f3--4331--92fc--bc8a0fabe9a6-6f32449b--f5b6--48c1--adf4--00e566911822 252:18   0  70G  0 lvm

Note: In this example, you need 70 GB of free disk space at the destination to copy the SHE virtual disk.

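Before starting the copy, it is worth confirming the exact source size and the free space at the destination (here /tmp, as used in this example):

[root@host1 ~]# blockdev --getsize64 /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
75161927680
[root@host1 ~]# df -h /tmp
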
  • Copy the SHE virtual disk using the dd command; you can reference either the LV path or the dm device:

[root@host1 ~]# dd if=/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 of=/tmp/SHE-virtual-disk-copy.img status=progress
75147723264 bytes (75 GB) copied, 795.531140 s, 94.5 MB/s
146800640+0 records in
146800640+0 records out
75161927680 bytes (75 GB) copied, 797.69 s, 94.2 MB/s
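
Note: Optionally, verify the copy by comparing checksums of the source device and the image file; this reads both in full, so it takes roughly as long as the copy itself. The two sums must match:

[root@host1 ~]# sha256sum /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 /tmp/SHE-virtual-disk-copy.img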
  • Check the copied disk's layout:

[root@host1 ~]# fdisk -l /tmp/SHE-virtual-disk-copy.img
Disk /tmp/SHE-virtual-disk-copy.img: 75.2 GB, 75161927680 bytes, 146800640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002e6b1
                         Device Boot      Start         End      Blocks   Id  System
/tmp/SHE-virtual-disk-copy.img1   *        2048     2099199     1048576   83  Linux
/tmp/SHE-virtual-disk-copy.img2         2099200    52428799    25164800   8e  Linux LVM
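
Note: For a closer look at the copied image, it can be attached read-only through a loop device and detached again; a sketch (the loop device name depends on the first free slot, /dev/loop0 here):

[root@host1 ~]# losetup -r -fP --show /tmp/SHE-virtual-disk-copy.img
/dev/loop0
[root@host1 ~]# lsblk /dev/loop0
[root@host1 ~]# losetup -d /dev/loop0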

Note: While running the SHE upgrade, it is recommended to keep a copy of this image on storage outside the KVM host.
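
For example, the image can be moved off the host with scp (backup.example.com and /backup/ are placeholders for your own backup destination):

[root@host1 ~]# scp /tmp/SHE-virtual-disk-copy.img root@backup.example.com:/backup/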

  • Before booting the Self-Hosted Engine VM, deactivate its Logical Volume and confirm it is inactive:
[root@host1 ~]# lvchange -an /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
[root@host1 ~]# lvscan | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  inactive          '/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822' [70.00 GiB] inherit
  • Boot the Self-Hosted Engine and proceed with the upgrade via leapp:
[root@host1 ~]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting
VM in WaitForLaunch
[root@host1 ~]# hosted-engine --vm-status | grep -B2 "Engine status"
Hostname                           : host1.example.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}

Hostname                           : host2.example.com
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}

Hostname                           : host3.example.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}

Upgrade the SHE VM by following the "OLVM: Leapp Upgrade from OLVM 4.3.10 to 4.4.x (Doc ID 2900355.1)" article from My Oracle Support (MOS).
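
Once the upgrade has completed and been verified, remember that the cluster was placed in global maintenance at the start of this procedure; it can be released from a SHE-enabled KVM host with:

[root@host1 ~]# hosted-engine --set-maintenance --mode=none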


Reverting to the previous state if needed

  • Confirm the Logical Volume device for the Self-Hosted Engine volumeID:

[root@host1 ~]# lvscan | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  inactive          '/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822' [70.00 GiB] inherit
  • Activate the /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 LV and check its status:

[root@host1 ~]# lvchange -ay /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822
[root@host1 ~]# lvscan | grep "6f32449b-f5b6-48c1-adf4-00e566911822"
  ACTIVE            '/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822' [70.00 GiB] inherit
  • Ensure the Logical Volume copy created previously is available:
[root@host1 ~]# ls -l /tmp/SHE-virtual-disk-copy.img
-rw-r--r--. 1 root root 75161927680 Jul 26 10:51 /tmp/SHE-virtual-disk-copy.img
  • Restore the image to the Self-Hosted Engine Logical Volume device:
[root@host1 ~]# dd if=/tmp/SHE-virtual-disk-copy.img of=/dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 status=progress
75161784320 bytes (75 GB) copied, 1489.885368 s, 50.4 MB/s
146800640+0 records in
146800640+0 records out
75161927680 bytes (75 GB) copied, 1512.77 s, 49.7 MB/s
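
Note: Before starting the VM, make sure the written data has been flushed to storage; the restore can also be re-verified against the saved image (optional, and slow for a 70 GB volume):

[root@host1 ~]# sync
[root@host1 ~]# sha256sum /dev/f97e88d0-73f3-4331-92fc-bc8a0fabe9a6/6f32449b-f5b6-48c1-adf4-00e566911822 /tmp/SHE-virtual-disk-copy.img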
  • Start the HostedEngine VM again to ensure the restore was successful.

[root@host1 ~]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting
VM in WaitForLaunch
[root@host1 ~]# hosted-engine --vm-status | grep -B2 "Engine status"
Hostname                           : host1.example.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}

Hostname                           : host2.example.com
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}

Hostname                           : host3.example.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
  • Finally, access the Engine Administration portal to confirm the engine is working properly.
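
The engine health can also be confirmed from the command line before leaving global maintenance mode:

[root@host1 ~]# hosted-engine --check-liveliness
[root@host1 ~]# hosted-engine --set-maintenance --mode=none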