
Technical articles, news, and insights
for Oracle's Infrastructure Software offerings

Hands on Backup Utilities for Oracle VM 3.4

Simon Coter
Director of Product Management

Last updated: May 10, 2018

In this blog article you'll find several examples related to ovm-bkp v1.0 - Backup Utilities for Oracle VM 3.4.

"ovm-bkp v1.0.1" is based on an RPM for Oracle Linux 6 and 7 and needs to be installed on Oracle VM Manager:

In the meantime, feel free to leave your feedback on ovm-bkp v1.0 here.

Join the discussion

Comments (63)
  • Michal Thursday, February 22, 2018
    Dear Simon,

    Is there an option to exclude some VM virtual disks from backup?

    Thanks,
    Michal
  • Simon Wednesday, February 28, 2018
    Hi Michal,

    currently this option is not available.
    As you can see in the VM conf file, it is something that will be managed by the script.

    Simon
  • Igor Friday, March 2, 2018
    Hi Simon,
    I use the older HotClone 8a scripts, which work properly; thank you for them.
    With this older version, it is not possible to use a separate retention policy for different backup repositories (example below: one OCFS (FC) and one NFS repository).
    I'm wondering if this can be achieved with the new "ovm-bkp 1.0" script, and how to do it?

    Example : Retention (1c) is common to both repositories DS34fcRepo (ocfs) and SYNnfsBck (nfs)

    ./HotCloneVm.sh xxx localhost lldevsvn V50fcPool DS34fcRepo 1c FULL

    = FULL == 20180302 == 1049 == lldevsvn-FULL-20180302-1049 ==

    ./HotCloneVm.sh xxx localhost lldevsvn V50fcPool SYNnfsBck 1c FULL
    ======= GUEST EXPIRED AND REMOVED ========
    lldevsvn-FULL-20180302-1049
    = FULL == 20180302 == 1055 == lldevsvn-FULL-20180302-1055 ==
  • Simon Friday, March 2, 2018
    Hi Igor,

    in the latest ovm-bkp v1.0 as well, the retention is per-VM and not per repository.
    That said, here you have one more option: "preserve"; thanks to preserve, you can keep a backup even after it becomes obsolete under the retention policy.
    So you can define your retention policy and, after that, decide which backups will be preserved; when you remove the preserve option, the backup will be deleted.
    Hope it helps.

    Simon
  • Igor Friday, March 2, 2018
    The "preserve" option could be used to statically save some backup, but I would like to automate the backup procedure periodically.
    Thanks for explaining.
    Best regards, Igor
  • Cauê Freitas Tuesday, March 27, 2018
    Hi Simon.

    Do you have a paper or any tips for installing the ovm-bkp tool in an OPCA environment?

    Regards

    Cauê Freitas
  • Simon Wednesday, April 4, 2018
    Hi Cauê,

    I do not see any particular issue with running the ovm-bkp solution on the Private Cloud Appliance; only one thing has to be considered: on the PCA you have two management nodes, so you have to install the package on both.
    That said, what is installed under "/opt" should be shared between the nodes so the content is always the same.
    Let me know if you need further details.

    Simon
  • Jan Peter Vogt Wednesday, April 4, 2018
    Hi Simon,

    a question for those who still rely on Oracle Linux 6.x releases on their OVMM servers:

    Is replacing the "nmap-ncat-6.40-7" RPM with the "nc-1.84-24" RPM acceptable, or are there known issues or specific functions used that make it an unrecommended substitute? It seems to work in my test environment, but a hint would be great.

    Kind regards,
    Peter
  • Simon Wednesday, April 4, 2018
    Hi Peter,

    I've just tried to install the RPM on OL6, and the following RPMs are automatically installed by "yum" as dependencies:

    expect - x86_64 - 5.44.1.15-5.el6_4
    nc - x86_64 - 1.84-24.el6
    tcl - x86_64 - 1:8.5.7-6.el6

    So, yep "nc" is the correct one.
    Thanks

    Simon
  • Rene V Monday, April 23, 2018
    Hi Simon,
    is there a timeline for the feature to select individual disks for backup?
    I would need that feature very badly :)
    BR
    Rene
  • Damien Thursday, April 26, 2018
    Hi Simon,

    Thanks for your work.

    I made an OVA backup, and restoring it (from one repository to another) doesn't work on my platform (OVM 3.4.3).
    The VM rename step at line 345 of ovm-restore.sh fails => moveVmToRepository doesn't start => the job_id doesn't exist => the loop between lines 356 and 361 never finishes.
    If I debug it, I find this :
    OVM> edit vm name=test-11g-std-OVA-20180426-1010_---- name=test-11g-std-RESTORE-20180426-1534
    Command: edit vm name=test-11g-std-OVA-20180426-1010_---- name=test-11g-std-RESTORE-20180426-1534
    Status: Failure
    Time: 2018-04-26 15:35:14,091 CEST
    Error Msg: Couldn't find a Vm object with the identifier of test-11g-std-OVA-20180426-1010_----.
    OVM> Failed to complete command(s), error happened. Connection closed.

    The VM created by createVmFromVirtualApplianceVm doesn't have the name $backupname, but test-11g-std-OVA-20180426-1010_0004fb00-0006-0000-97c7-f8cd3dab0265 in my case.

    Do you know why?

    Another point about OVA restore: I don't think the virtual disks keep their original names after restore.
    Do you think it's possible to keep their names?

    Kind regards,
    Damien
  • Simon Thursday, April 26, 2018
    Hi Rene,

    no ETA at the moment; I'm a bit busy with other things.
    That said, the target is to make it available as soon as possible.

    Simon
  • Simon Thursday, April 26, 2018
    Hi Damien,

    please see the email I've sent you.

    Simon
  • Simon Thursday, May 10, 2018
    Hi Damien & all,

    the issue encountered while restoring from OVA has been fixed.
    The fix is part of the updated release:

    ovm-bkp-1.0.1-20180510.noarch.rpm

    Thanks

    Simon
  • Leonardo Cursi Friday, May 11, 2018
    Hi Simon,

    Is the script compatible with OVM 3.3.4.1093?

    Thanks!
  • Simon Monday, May 14, 2018
    Hi Leonardo,

    the script has been written and tested with OVM 3.4.
    That said, it should also work with OVM 3.3, but I would suggest first upgrading your Oracle VM Manager to 3.4.4 (the latest) and then evaluating the upgrade of your servers as well.
    Consider that Oracle VM Manager 3.4.2 and higher are also supported for managing older Oracle VM Server releases (like 3.2.10/3.2.11 and 3.3.x).
    Thanks

    Simon
  • Seb Friday, May 25, 2018
    Hi,

    Thank you for this tool.

    How do you guarantee a crash-consistent clone of a multi-vdisk VM without suspending the VM?

    The vdisk thin clones won't be generated in a single atomic operation, will they?
  • Simon Friday, May 25, 2018
    Hi Seb,

    crash consistency is guaranteed on multi-disk systems as well, provided you do not have volumes spanning multiple virtual disks.
    And even if you have a volume group built on several vdisks, in my experience modern filesystems let all of them recover easily on the first boot of the cloned VM.
    That said, I've also tested cloning/backing up VMs with ASM (volume manager) and multiple disks in the same disk group; at boot, the automatic recovery (ASM & Oracle Database) gave me a running service without any manual intervention.
    Obviously, for the Oracle Database, RMAN remains the best backup solution, as it avoids any possible database corruption.
    On the other side (application VMs), you usually do not have that many files open (or continuously created), and problems are really unlikely.
    Thanks

    Simon
  • allen Monday, July 16, 2018
    Hi Simon

    Could you explain what $sshovmcli is in the shell?
    I can't find it on Google XD

    thanks

    Allen
  • Simon Monday, July 16, 2018
    Hi Allen,

    "$sshovmcli" it's an alias defined in "common.sh" that executes, each time:

    # sshovmcli="ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 ${ovmmuser}@${ovmmhost} -p ${ovmmport}"
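    For illustration, the alias can be sketched as follows; the variable values are examples matching the admin@localhost -p 10000 connection used elsewhere in this thread (common.sh reads them from its own configuration):

```shell
# Illustrative values; common.sh defines these from its configuration.
ovmmuser="admin"
ovmmhost="localhost"
ovmmport="10000"

# The variable simply prefixes every OVM-CLI command with an ssh call:
sshovmcli="ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 ${ovmmuser}@${ovmmhost} -p ${ovmmport}"

# So `$sshovmcli list vm` runs "list vm" in the Oracle VM Manager CLI.
echo "$sshovmcli"
```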

    Simon
  • Daniele Bocciolini Wednesday, July 18, 2018
    Hi Simon,

    I made a FULL backup and something is wrong with the tool.
    The backup is stuck in a loop showing the message:
    Waiting for Vm moving to complete......3290 seconds
    Waiting for Vm moving to complete......3300 seconds
    Waiting for Vm moving to complete......3310 seconds
    Waiting for Vm moving to complete......3320 seconds
    Waiting for Vm moving to complete......3330 seconds
    Waiting for Vm moving to complete......3340 seconds
    Waiting for Vm moving to complete......3350 seconds

    So I looked in ovm-backup.sh and I can see:
    job_id=`$sshovmcli list job |grep -i "Move Vm" |grep ${vmname}-CLONE-${today} |cut -d ":" -f2|cut -d " " -f1`

    so I looked at the OVM jobs, and I can see the job with description "Move Vm ...." already terminated in a success state.

    So I think the backup is done but the script is now stuck in the loop.
  • Simon Wednesday, July 18, 2018
    Hi Daniele,

    What is the name of the VM?
    Which release of the scripts are you using?
    Also, can you share the entire log somewhere?
    Thanks

    Simon
  • Daniele Bocciolini Thursday, July 19, 2018
    Hi Simon,
    The name of the machine is: lsemovm
    The release is ovm-bkp-1.0.1-20180510.noarch.rpm

    Here is the full log: https://pastebin.com/JrU2scUF
  • Simon Thursday, July 19, 2018
    Daniele,

    can you please re-execute the backup with:

    # sh -x ./ovm-backup.sh [parameters]

    and then share the log available at:

    /opt/ovm-bkp/logs/ ?

    Simon
  • Daniele Bocciolini Thursday, July 19, 2018
  • Simon Thursday, July 19, 2018
    Perfect,

    can you get me the output of the following OVM-CLI command?

    # show job id=1531992756835

    Thanks

    Simon
  • daniele bocciolini Thursday, July 19, 2018
    OVM> show job id=1531992756835
    Command: show job id=1531992756835
    Status: Success
    Time: 2018-07-19 14:24:34,515 CEST
    Data:
    Run State = Success
    Summary State = Success
    Done = Yes
    Summary Done = Yes
    Job Group = No
    Username = admin
    Creation Time = Jul 19, 2018 11:32:36 am
    Start Time = Jul 19, 2018 11:32:37 am
    End Time = Jul 19, 2018 11:33:25 am
    Duration = 48s
    Id = 1531992756835 [Move VM: lsemovm-CLONE-20180719-1126 to Repository: BACKUP]
    Name = Move VM: lsemovm-CLONE-20180719-1126 to Repository: BACKUP
    Description = Move VM: lsemovm-CLONE-20180719-1126 to Repository: BACKUP
    Locked = false
  • Simon Thursday, July 19, 2018
    Hi Daniele,

    it seems that the script didn't correctly collect the job id of the "Move VM" operation.
    Can you please execute the following commands on your Oracle VM Manager instance?

    # ssh admin@localhost -p 10000 list job |grep -i "Move Vm" |grep lsemovm-CLONE-20180719-1126 |cut -d : -f2 |cut -d ' ' -f1

    If you get no output, can you please execute:

    # ssh admin@localhost -p 10000 list job |grep lsemovm-CLONE-20180719-1126

    PS: reading your logs, it seems that at line 355 of the "ovm-backup.sh" script your shell executes "cut" first and then "grep", instead of vice-versa. Is there anything particular in the environment of your user (presumably root)?

    Thanks

    Simon
  • daniele bocciolini Thursday, July 19, 2018
    I get no output from both commands.
    I have no particular environment, and yes, I use root.
  • Simon Thursday, July 19, 2018
    So, it seems that the executed job is not there.
    Can you get the entire list of jobs and share the output?

    From the OVM-CLI, just execute:

    # list job

    Simon
  • daniele bocciolini Thursday, July 19, 2018
    [root@lsovm ~]# ssh admin@localhost -p 10000
    OVM> list job
    Command: list job
    Status: Failure
    Time: 2018-07-19 21:25:12,648 CEST
    Error Msg: GEN_000000:An exception occurred during processing: No such object (level 1), cluster is null:
    Thu Jul 19 21:25:12 CEST 2018
    OVM>
  • Simon Friday, July 20, 2018
    Hi Daniele,

    The message above means that your MySQL database is logically corrupted.
    You should check the following KM note available on MOS:

    Restore Oracle VM Manager Database (Doc ID 2405023.1)

    Thanks

    Simon
  • Jure Wednesday, July 25, 2018
    Hi,

    What is the easiest way to list all available ovm-bkp backups for all VMs?
    The ovm-listbackup.sh script only lists backups for each VM separately, but I'd love to have a list of all currently available backups for all machines. Is there a way?

    Thanks.
  • Simon Wednesday, July 25, 2018
    Hi Jure,

    thanks for your feedback.
    At the moment there is no in-place solution for listing all backups for all VMs.
    What you can do is create a script that:

    - loops over the VMs under conf/vm/*
    - for each VM, executes "list backup"

    I'm going to take your request into consideration for future releases.
    Thanks

    Simon
  • Jure Thursday, July 26, 2018
    Hi,
    Thanks for the answer.

    As per your recommendation, in case somebody finds it useful, a really basic script to achieve something like this could be the following:

    for i in $(grep vmname /opt/ovm-bkp/conf/vm/* | cut -f2 -d=)
    do
      /opt/ovm-bkp/bin/ovm-listbackup.sh "$i" | grep -Ev '[=]{60,}'
      echo
    done
  • Nandini Monday, August 6, 2018
    Hi Simon,

    Thank you for the wonderful tool. I have installed the latest RPM, ovm-bkp-1.0.1-20180510.noarch.rpm, and tried OVA backup and restore. The OVA restore is not retaining virtual disk names and disk slot information: virtual disks are restored with different names and are assigned to different slots. Could you please make a change to retain virtual disk names as well as disk slot information on OVA restore?

    Also, the restore created a VM and moved it to the server pool, but it is not assigned to any of the VM servers in the pool. Hence the restored VM doesn't appear in the xm list. Starting the restored VM did assign a VM server: it appears in VM Manager as well as in the VM server's Virtual Machines folder. However, it still doesn't appear in the xm list. Stopping the restored VM actually generated a failure error, as the domain doesn't exist on the VM server. I have to do some more testing on this scenario, but this is what I have noticed so far with OVA restore.

    As for listing backups, ovm-listbackup.sh is not listing the VM clones, but ovm-backup.sh creates clone, OVA, snap and full backups. Is it possible to include this as well in your next version of the RPM?

    Thank you,
  • Simon Monday, August 6, 2018
    Hi Nandini,

    related to OVA:
    1. unfortunately the disk name cannot be preserved; this is how OVA works on Oracle VM.
    2. the order of the disks should be respected, even though the names change.
    3. when creating a VM from an OVA (as a restore), the process associates the VM with the pool, and you can then decide on which server to start the VM; just click on your OVM pool, select the "Virtual Machines" perspective, and your machine will be there.

    related to "ovm-listbackup.sh": I'll fix it asap.

    Thanks

    Simon
  • Nandini Thursday, August 9, 2018
    Hi Simon,

    Thank you for your quick response. I have checked the OVA restore one more time and see that it is not restoring the virtual disks with the actual disk slot information. In my case I have 3 physical disks in slots 0, 1, 2 and one virtual disk in slot 4. When restored from an OVA backup, the virtual disk is assigned to slot 0. However, full backups restore with the right disk slot information. Another point: if disks are shared between VMs, backups & restores from either full or OVA are losing the Shareable and Description field information of the virtual disks. Do you think it's possible to retain that information?

    thanks,
    Nandini
  • Simon Sunday, August 12, 2018
    Hi Nandini,

    regarding how OVA generation/restoration works, we cannot do anything; in the end, even if your VM has only one virtual disk, we cannot retain its slot.
    That said, the order shouldn't matter if you use LABELs for your filesystems.
    The option to retain the "Shareable" information is a good ER for the backup tool, something we can work on.

    Thanks

    Simon
  • Rolando Wednesday, October 24, 2018
    Hi Simon

    I have the following errors:

    ./ovm-backup.sh: line 204: 0 + : syntax error: operand expected (error token is "+ ")
    ./ovm-backup.sh: line 233: [: OVM>: integer expression expected

    but the script continues working. Can this error be ignored, or do you have a way to fix it?
  • Simon Wednesday, October 24, 2018
    Hi Rolando,

    is this happening with the latest release of the script? If yes, can you please share the entire output of the log somewhere?
    Thanks

    Simon
  • Rolando Thursday, October 25, 2018
    Hello Simon,

    Would you have an email address where I can send you the log? It is a bit long.

    Awaiting your prompt comments.

    Greetings,
  • Karl M Thursday, October 25, 2018
    Hi Simon, I've just discovered (but not yet tested) these VM backup tools and am wondering whether the OVA export method requires VMs to be shut down. Or do the scripts shut down and restart VMs automatically? If so, that would be a significant improvement for my clients, who are currently struggling with only semi-automated VM exports; manually unmapping and remapping physical disk slots between manual shutdowns and restarts carries a significant risk of human error.
  • Simon Friday, October 26, 2018
    Hi Rolando,

    I've just sent you an email.

    Simon
  • Simon Friday, October 26, 2018
    Hi Karl,

    the script just takes a crash-consistent snapshot of the virtual disks and, after that, exports the cloned VM into OVA format.

    Simon
  • Jonathan Thursday, December 6, 2018
    Hello Simon,

    We've been experiencing the infinite-loop issue from the previous version, as seen by Daniele above. We've noted that sometimes the backup is successful and other times it gets stuck in the loop. After checking, it does not involve the corrupted-database issue above.

    Here is the log from one of our failures. This backup worked fine later.
    =================
    Oracle VM 3.4 CLI
    =================
    =====================================================
    Adding VM test-host information to bkpinfo file /opt/ovm-bkp/bkpinfo/info-backup-test-host-FULL-20181204-2236.txt
    =====================================================
    0004fb000013000071b50dd59451de40
    0004fb000013000069966ff7f0faf694
    0004fb0000130000e4f811a0f2d39e55
    0004fb000013000071b50dd59451de40 0004fb000012000071fa6b3fc730174f.img
    0004fb000013000069966ff7f0faf694 0004fb0000120000ab893927ab463244.img
    0004fb0000130000e4f811a0f2d39e55 0004fb00001200000b7dfab73b8e719a.img
    /opt/ovm-bkp/bin/ovm-backup.sh: line 235: [: OVM>: integer expression expected
    ================================================
    Creating Clone-Customizer to get VM snapshot....
    ================================================
    OVM> create VmCloneCustomizer name=vDisks-test-host-20181204-2236 description=vDisks-test-host-20181204-2236 on Vm name=test-host
    Command: create VmCloneCustomizer name=vDisks-test-host-20181204-2236 description=vDisks-test-host-20181204-2236 on Vm name=test-host
    Status: Success
    Time: 2018-12-04 22:37:45,048 EST
    JobId: 1543981064806
    Data:
    id:0004fb0000130000ce953c541582637e name:vDisks-test-host-20181204-2236
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb000013000071b50dd59451de40 vmDiskMapping=0004fb000013000071b50dd59451de40 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb000013000071b50dd59451de40 vmDiskMapping=0004fb000013000071b50dd59451de40 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:46,052 EST
    JobId: 1543981065817
    Data:
    id:0004fb0000130000bd30b9419e7a7f28 name:vDisks-Mapping-0004fb000013000071b50dd59451de40
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb000013000069966ff7f0faf694 vmDiskMapping=0004fb000013000069966ff7f0faf694 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb000013000069966ff7f0faf694 vmDiskMapping=0004fb000013000069966ff7f0faf694 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:47,062 EST
    JobId: 1543981066816
    Data:
    id:0004fb000013000018241e0545e6a2b8 name:vDisks-Mapping-0004fb000013000069966ff7f0faf694
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb0000130000e4f811a0f2d39e55 vmDiskMapping=0004fb0000130000e4f811a0f2d39e55 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=THIN_CLONE name=vDisks-Mapping-0004fb0000130000e4f811a0f2d39e55 vmDiskMapping=0004fb0000130000e4f811a0f2d39e55 repository=TestRepo on VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:47,957 EST
    JobId: 1543981067739
    Data:
    id:0004fb00001300007747a0946154ece9 name:vDisks-Mapping-0004fb0000130000e4f811a0f2d39e55
    OVM> Connection closed.
    =======================
    Getting VM snapshot....
    =======================
    OVM> clone Vm name=test-host destType=Vm destName=test-host-CLONE-20181204-2236 ServerPool=TestPool cloneCustomizer=vDisks-test-host-20181204-2236
    Command: clone Vm name=test-host destType=Vm destName=test-host-CLONE-20181204-2236 ServerPool=TestPool cloneCustomizer=vDisks-test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:55,585 EST
    JobId: 1543981074832
    Data:
    id:0004fb00000600009e55a8c78829cc34 name:test-host-CLONE-20181204-2236
    OVM> Connection closed.
    OVM> delete VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Command: delete VmCloneCustomizer name=vDisks-test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:56,477 EST
    JobId: 1543981076051
    OVM> Connection closed.
    =====================
    Backup Type: FULL....
    =====================
    OVM> create VmCloneCustomizer name=test-host-20181204-2236 description=test-host-20181204-2236 on Vm name=test-host-CLONE-20181204-2236
    Command: create VmCloneCustomizer name=test-host-20181204-2236 description=test-host-20181204-2236 on Vm name=test-host-CLONE-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:57,103 EST
    JobId: 1543981076894
    Data:
    id:0004fb000013000011f3f561c7836c29 name:test-host-20181204-2236
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb0000130000cf1380e8f9d406bc vmDiskMapping=0004fb0000130000cf1380e8f9d406bc repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb0000130000cf1380e8f9d406bc vmDiskMapping=0004fb0000130000cf1380e8f9d406bc repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:58,273 EST
    JobId: 1543981078054
    Data:
    id:0004fb0000130000ecda515f963adfb2 name:Mapping-0004fb0000130000cf1380e8f9d406bc
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb00001300001b0bb225209af720 vmDiskMapping=0004fb00001300001b0bb225209af720 repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb00001300001b0bb225209af720 vmDiskMapping=0004fb00001300001b0bb225209af720 repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:59,067 EST
    JobId: 1543981078822
    Data:
    id:0004fb000013000010ca380dea99569f name:Mapping-0004fb00001300001b0bb225209af720
    OVM> Connection closed.
    OVM> create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb00001300002acf216e913d8081 vmDiskMapping=0004fb00001300002acf216e913d8081 repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Command: create VmCloneStorageMapping cloneType=SPARSE_COPY name=Mapping-0004fb00001300002acf216e913d8081 vmDiskMapping=0004fb00001300002acf216e913d8081 repository=Backup-Repo on VmCloneCustomizer name=test-host-20181204-2236
    Status: Success
    Time: 2018-12-04 22:37:59,845 EST
    JobId: 1543981079631
    Data:
    id:0004fb00001300001f29f243a33b0142 name:Mapping-0004fb00001300002acf216e913d8081
    OVM> Connection closed.
    =======================================================
    Moving cloned VM to target repository ....
    =======================================================
    OVM> moveVmToRepository Vm name=test-host-CLONE-20181204-2236 CloneCustomizer=test-host-20181204-2236 targetRepository=Backup-Repo
    =======================================================
    Waiting for Vm moving to complete......10 seconds
    Waiting for Vm moving to complete......20 seconds
    Waiting for Vm moving to complete......30 seconds
    Waiting for Vm moving to complete......40 seconds
    Waiting for Vm moving to complete......50 seconds
    Waiting for Vm moving to complete......60 seconds
    Waiting for Vm moving to complete......70 seconds
    Waiting for Vm moving to complete......80 seconds
    Waiting for Vm moving to complete......90 seconds
    Waiting for Vm moving to complete......100 seconds
    Waiting for Vm moving to complete......110 seconds
    Waiting for Vm moving to complete......120 seconds
    Waiting for Vm moving to complete......130 seconds
    Waiting for Vm moving to complete......140 seconds
    Waiting for Vm moving to complete......150 seconds
    Waiting for Vm moving to complete......160 seconds
    Waiting for Vm moving to complete......170 seconds
    Waiting for Vm moving to complete......180 seconds
    Waiting for Vm moving to complete......190 seconds
    Waiting for Vm moving to complete......200 seconds
    Waiting for Vm moving to complete......210 seconds
    Waiting for Vm moving to complete......220 seconds
    Command: moveVmToRepository Vm name=test-host-CLONE-20181204-2236 CloneCustomizer=test-host-20181204-2236 targetRepository=Backup-Repo
    Status: Success
    Time: 2018-12-04 22:41:55,663 EST
    JobId: 1543981090873
    OVM> Connection closed.
    Waiting for Vm moving to complete......230 seconds
    Waiting for Vm moving to complete......240 seconds
    Waiting for Vm moving to complete......250 seconds
  • Simon Friday, December 7, 2018
    Hi Jonathan,

    are you using release v1.0.1 now? If not, you should use that release.
    If you are already using v1.0.1, can you please re-execute this job with the following syntax:

    # sh -x (backup-command)

    And then upload the log somewhere?
    Thanks

    Simon
  • Jonathan Friday, December 7, 2018
    Simon,

    After more research on our end, it appears that we are running into some kind of OVM Manager bug, and your script may just be surfacing a symptom. The VM move to the backup repository shows in the logs as succeeded, but the move is not occurring. Again, sometimes the move works fine and other times it does not, but in all cases the logs show a successful move. We'll open a ticket with support.
  • Jonathan Friday, December 7, 2018
    Simon,

    After additional research, it does appear to be a timing issue in your backup code.

    I'd move the line:
    job_id=`$sshovmcli list job |grep -i "Move Vm" |grep ${vmname}-CLONE-${today} |cut -d ":" -f2|cut -d " " -f1`

    into the while loop. The issue is that, in our environment, the job_id is apparently sometimes not returned within the 10-second wait (not a long enough delay), so the loop continues indefinitely. By running that command inside the loop, the issue will not occur.
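    A minimal sketch of that change (the helper name query_job_id and the function wrapper are illustrative; the pipeline is the one quoted above from ovm-backup.sh):

```shell
# Illustrative helper wrapping the job-id pipeline from ovm-backup.sh;
# $sshovmcli, $vmname and $today are the variables the script already uses.
query_job_id() {
  $sshovmcli list job | grep -i "Move Vm" | grep "${vmname}-CLONE-${today}" \
    | cut -d ":" -f2 | cut -d " " -f1
}

# Re-query the job id on every pass of the polling loop instead of once
# before it, so a job id that only shows up later is still picked up.
wait_for_move() {
  secs=0
  job_id=""
  while [ -z "$job_id" ]; do
    sleep 10
    secs=$((secs + 10))
    echo "Waiting for Vm moving to complete......${secs} seconds"
    job_id=$(query_job_id)
  done
  echo "$job_id"
}
```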
  • Simon Monday, December 10, 2018
    Hi Jonathan,

    can you please confirm the release of the RPM installed on your system?

    Simon
  • claudio Thursday, December 13, 2018
    Hi Simon,

    I'm testing this tool to back up our VMs, but I've realized it has problems with VMs created with domain type Xen PVM. This is the error:

    VMRU_005089E VM: 0004fb000006000057fbdb1080ec5261, with operating system type, Red Hat Enterprise Linux 7, is not supported with the domain type, Xen PVM. [Thu Dec 13 11:58:45 CLST 2018]

    Is it possible to fix this, or is it necessary to change the domain type on the VMs?
  • Simon Thursday, December 13, 2018
    Hi Claudio,

    starting with OVM 3.4.6, PVM has been removed as an option:

    https://docs.oracle.com/cd/E64076_01/E99523/html/vmrs-deprecated-3.4.6.html

    Thanks

    Simon
  • claudio Thursday, December 13, 2018
    Thanks for your answer, Simon.

    Greetings from Chile.
  • Jure Thursday, January 10, 2019
    Hi Simon,

    We are preparing for the upgrade of OVM from 3.4.4 to 3.4.6.
    In current environment (3.4.4) I have ovm-bkp tool configured and we used it regularly.

    After the upgrade of OVM to 3.4.6, do I need to consider any post-upgrade steps for the ovm-bkp tool to continue working as it does now? Like running ovm-setup-ovmm.sh again, etc.?
    Will current backups work after the upgrade of OVM environment?

    Thank you.
  • Simon Monday, January 14, 2019
    Hi Jure,

    ovm-bkp should work correctly with both 3.4.4 and 3.4.6.
    The upgrade won't introduce any new requirements for the ovm-bkp tool.

    Simon
  • sergio Tuesday, January 22, 2019
    Hi Simon,

    I solved the problem with:
    "ovm-backup.sh: line 235: [: OVM>: integer expression expected"

    by adding ">/dev/null" to the output of the first command in the script "bin/common.sh", like this:

    "function get_freespace_repo {
    $sshovmcli refresh repository name=$1 >/dev/null
    $sshovmcli show repository name=$1 |grep " File System Free (GiB) = " |cut -d "=" -f2 |awk '{print int($1+0.9)}'
    }"

    Because if you trace your bash, you can see the problem:
    "+ '[' FULL '!=' SNAP ']'
    + '[' 75 -gt 'OVM>' refresh repository name=PROREPOS $'rCommand:' refresh repository name=PROREPOS $'rStatus:' Success $'rTime:' 2019-01-22 11:01:26,497 CET $'rJobId:' 1548151285793 '^MOVM>' Connection closed. 151 ']'
    /opt/ovm-bkp/bin/./ovm-backup.sh: line 235: [: OVM>: integer expression expected"

    Thank you very much for your contribution to the OVM community ;)

    Regards
  • Simon Tuesday, January 22, 2019
    Thanks for your feedback Sergio.

    Simon
  • Gaurav Mittal Monday, February 4, 2019
    Hi,

    Is this backup periodic, or does it have to be run from cron to take periodic backups?

    Gaurav
  • Simon Monday, February 4, 2019
    Hi Gaurav,

    the backup script can be scheduled (by cron locally or by an external scheduler) or run manually on the system.
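    For example, a crontab entry along these lines would schedule it weekly ("[parameters]" stands for the script's usual arguments, elided here as elsewhere in the thread; the log path is an assumption):

```shell
# Illustrative crontab entry: run the backup every Sunday at 02:00,
# appending output to a log file.
0 2 * * 0 /opt/ovm-bkp/bin/ovm-backup.sh [parameters] >> /var/log/ovm-bkp-cron.log 2>&1
```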

    Simon
  • Karl M Sunday, February 24, 2019
    Hi Simon, I found that physical disk mappings seem to work with OVA exports as an emergency backup: is this supported? I think it would be fantastic if the next release of OVMM included OVA export scheduling, where VMs can be shut down, disks remapped (if required), exported to OVA and restarted out of hours, from the web GUI and CLI. Combined with Site Guard, this may make 'whole VM' recovery easier between different OVM farms where VMs have a large number of physical disk mappings; although I appreciate that database data files are not supported on vDisks, at least the data on them would be safely mapped? Thanks for a great product!
  • Simon Monday, February 25, 2019
    Hi Karl,

    I do not see the value of exporting physical disks into an OVA file: whenever you import it, it won't write to physical disks but will just create virtual ones.
    Consider that Oracle Database datafiles are supported on virtual disks, but not suggested, for performance reasons.

    Simon
  • Luis Sanchez Sunday, April 28, 2019
    Hi

    Great tool; at last we can now run hot backups of guests.

    I wonder if this tool will be integrated into the OVM software in the future?

    Thank you
  • Simon Monday, April 29, 2019
    Hi Luis,

    currently we do not have a plan to include this tool in the official release.
    Thanks

    Simon