Oracle VM ocfs2 repositories: hot-extend in four easy steps!

Please note: starting with release 3.1.1, you can extend a repository simply by refreshing it in Oracle VM Manager.

With release 3.1.1 or above, only two simple steps are needed:

1.  Refresh the LUN containing the repository from the "Storage" tab.

2.  Refresh the repository.

Official documentation: What's new in Oracle VM 3.1.1
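
If you prefer to script these two refreshes, Oracle VM 3.2 introduced a command-line interface reachable over SSH on port 10000 of the Manager host. The snippet below is only a hedged sketch: the disk and repository names are placeholders, and the exact syntax of the refresh commands should be verified against the CLI documentation for your release.

# connect to the Oracle VM Manager CLI (available from release 3.2 onwards)
ssh -l admin your-ovm-manager -p 10000

OVM> refresh PhysicalDisk name=MyRepoLUN
OVM> refresh Repository name=MyRepo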

Usually, when our Oracle VM pool needs more disk space, we create new repositories and then create our new guests or extend our virtual disks on them.

Sometimes customers ask us to extend an existing repository instead of creating a new one.

I would like to clarify that adding a repository is always safer than extending an existing one; this blog post simply demonstrates that extending is possible and works. Remember to take a full backup of the repository before extending it.

This guide applies to release 3.0.x (starting with release 3.1.1, extending a repository through Oracle VM Manager is faster, safer and supported).

In this example we will extend a repository from 250 GB to 300 GB. Here are the four easy steps:

1. Extend the LUN on the storage array ( .... )

Identify the physical device to extend:

[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
                     262144000 235312128  26831872  90% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa 

[root@ovm01 scripts]# multipath -ll /dev/mapper/3600143801259961a0000800001170000
3600143801259961a0000800001170000 dm-47 HP,HSV360
size=250G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:29 sdhr 134:16  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:1:29 sdhs 134:32  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:2:29 sdht 134:48  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:3:29 sdhu 134:64  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:1:29 sdhw 134:96  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:29 sdhv 134:80  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:2:29 sdhx 134:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 3:0:3:29 sdhy 134:128 active ready running
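
If you start from the repository mount point instead of the device name, a small helper like the sketch below (using the repository path of this example) works back to the multipath device:

# map a repository mount point back to its multipath device
REPO=/OVS/Repositories/0004fb0000030000480c7108c43bcaaa
DEV=$(df -P "$REPO" | awk 'NR==2 {print $1}')
echo "repository $REPO is backed by $DEV"
multipath -ll "$DEV"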

Using your storage administration tool (or the Oracle VM storage plugin for your array), extend the physical device that hosts the repository.

2. Refresh the physical size of your storage device

A small one-liner that helps prepare the commands is:

# for i in `multipath -ll /dev/mapper/3600143801259961a0000800001170000| grep sd |sed -e 's:^..............::g' |awk '{print $1}'`; do echo "blockdev --rereadpt /dev/$i" ; done

You will get an output similar to the one shown below:

# blockdev --rereadpt /dev/sdhr
# blockdev --rereadpt /dev/sdhs 
# blockdev --rereadpt /dev/sdht 
# blockdev --rereadpt /dev/sdhu 
# blockdev --rereadpt /dev/sdhw 
# blockdev --rereadpt /dev/sdhv 
# blockdev --rereadpt /dev/sdhx 
# blockdev --rereadpt /dev/sdhy

Verify the output and check that the device paths are what you expect; then execute the commands on your Oracle VM Server.

Remember to execute this step on every Oracle VM Server (OVS) that is part of your pool!
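
As an alternative to generating the blockdev commands, you can rescan every SCSI path of the map through sysfs: writing 1 to a path's rescan attribute asks the SCSI layer to re-read the device, including its capacity. The sketch below is only an illustration of that approach, with the path list derived from the dm device itself (WWID taken from this example):

# rescan every SCSI path behind the multipath device via sysfs
WWID=3600143801259961a0000800001170000
DM=$(multipath -ll /dev/mapper/"$WWID" | awk 'NR==1 {print $2}')   # e.g. dm-47
for SLAVE in /sys/block/"$DM"/slaves/*; do
    echo "rescanning $(basename "$SLAVE")"
    echo 1 > "$SLAVE"/device/rescan
done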

3. Refresh the physical size of your multipath device

After refreshing all the physical paths (sd* in this case), you have to refresh and verify the new size of the multipath device; to accomplish this, execute the command below:

[root@ovm01 scripts]# multipathd -k"resize map 3600143801259961a0000800001170000"
ok


[root@ovmgv02 scripts]# multipath -ll /dev/mapper/3600143801259961a0000800001170000
3600143801259961a0000800001170000 dm-47 HP,HSV360
size=300G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:29 sdhr 134:16  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:1:29 sdhs 134:32  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:2:29 sdht 134:48  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:3:29 sdhu 134:64  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:1:29 sdhw 134:96  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:29 sdhv 134:80  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:2:29 sdhx 134:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 3:0:3:29 sdhy 134:128 active ready running

Remember to execute this step on every Oracle VM Server (OVS) that is part of your pool!
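
To double-check the result numerically before moving on to the filesystem grow, blockdev can print the device size in bytes. The loop over the pool members is only a sketch and the hostnames are placeholders:

# confirm the multipath device now reports the expanded size (about 300 GiB here)
DEV=/dev/mapper/3600143801259961a0000800001170000
echo "$DEV: $(( $(blockdev --getsize64 "$DEV") / 1024 / 1024 / 1024 )) GiB"

# repeat the check on every Oracle VM Server of the pool (placeholder hostnames)
for H in ovm01 ovm02; do
    ssh root@"$H" "blockdev --getsize64 $DEV"
done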

4. Verify the current repository size and extend the ocfs2 filesystem

View the current repository size:
[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
262144000 235312128  26831872  90% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa

Extend the ocfs2 filesystem.
From the tunefs.ocfs2 man page:
-S, --volume-size
Grow the size of the OCFS2 file system. If blocks-count is not specified, tunefs.ocfs2 extends the volume to the current size of the device.

So:
[root@ovm01 ~]# tunefs.ocfs2 -S /dev/mapper/3600143801259961a0000800001170000
NB: run the tunefs.ocfs2 command only once, on one node of the ocfs2 cluster.
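
As a sanity check it can be useful to see which cluster nodes currently have the volume mounted (the grow itself only has to be run on one of them); ocfs2-tools provides mounted.ocfs2 for this:

[root@ovm01 ~]# mounted.ocfs2 -f /dev/mapper/3600143801259961a0000800001170000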
View the new repository size:
[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
314572800 235315200  79257600  75% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa
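
For convenience, the command-line portion of this procedure (steps 2 to 4) can be collected into a single script. This is only a sketch that mirrors the manual steps above: the path-refresh and map-resize parts must be run on every Oracle VM Server of the pool, while the final tunefs.ocfs2 must be run on one node only.

#!/bin/bash
# hedged sketch of steps 2-4: refresh the SCSI paths, resize the multipath map
# and (on ONE node only) grow the OCFS2 filesystem to the new device size
set -eu
WWID=${1:?usage: $0 <multipath-wwid>}
DEV=/dev/mapper/$WWID

# step 2: re-read the partition table on every underlying path
DM=$(multipath -ll "$DEV" | awk 'NR==1 {print $2}')
for SD in /sys/block/"$DM"/slaves/*; do
    blockdev --rereadpt /dev/"$(basename "$SD")"
done

# step 3: resize the multipath map
multipathd -k"resize map $WWID"

# step 4: grow the OCFS2 filesystem (run only on one node of the cluster)
tunefs.ocfs2 -S "$DEV"

df -k "$DEV"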
Refresh the repository in Oracle VM Manager

From Oracle VM Manager, refresh your repository so that the new size is acknowledged; here is an example on Oracle VM Manager 3.2.1:

(screenshot of the repository refresh in Oracle VM Manager 3.2.1)

This article is dedicated to the person who keeps me up at night, little Joseph, my son, born January 9, 2013.

Comments and corrections are welcome.

Simon COTER 

Comments:

worked like a charm
thanks!

Posted by guest on April 03, 2013 at 11:09 PM CEST #

Won't this work by extending the volume in the storage array and then simply refreshing the repository inside Oracle VM? I ask because I'm almost certain this is what I did before to extend our 1.5TB volume to 2TB a little while back. I did not do any command-line work either. I have to do this again and now I'm scared to do it since you wrote up all these other commands that seem necessary. Do you think what I did should work? If not, why not?

Posted by adam on July 16, 2013 at 04:16 AM CEST #

Hi Adam,

If your procedure works correctly (extend the LUN and then refresh the repository), you don't have to be scared to repeat it.
What I remember of this post from 6 months ago is that I needed these command-line steps to obtain the resize on a particular environment.
For sure, at the next opportunity I'll try your steps.
Let me know if you have the opportunity to repeat it.
Thank you.

Simon

Posted by guest on July 16, 2013 at 08:19 AM CEST #

Hi Simon, I followed this procedure to extend one of my repositories. It didn't display the new size correctly at step #3. I'm currently running OVM version 3.2.3.521. I made sure that I used the correct device to extend. I also read a knowledge base article saying that extending a repository can be done by just refreshing the repository in VM Manager, just like Adam said? Let me know if I'm missing something. Thanks!

Posted by RC on September 17, 2013 at 07:52 PM CEST #

Hi RC,

as you reported, extending the repository can be done by just refreshing the repository in the Manager.
I'm now going to update this article to state that the procedure was used on a 3.0.3 environment.
Let me know if you are able to refresh/resize your repository.
Thank you.

Simon

Posted by Simon on September 17, 2013 at 08:57 PM CEST #

Hi Simon,

No, I was not able to extend my volume using a repository refresh. It did not extend my repository using your procedure either: I got the same output at the command line, but the new size was not displayed after step #3 of the procedure.

# multipathd -k"resize map 3600143801259a4a20000600000240000"
# multipath -ll /dev/mapper/3600143801259a4a20000600000240000

Please advise if I'm missing something?

Thanks,
RC

Posted by RC on September 17, 2013 at 09:54 PM CEST #

Hi RC,

you aren't missing anything.
What is the output of this command:

# multipath -ll /dev/mapper/3600143801259a4a20000600000240000

What was the size of the LUN before? And what is it now?

Choose one of the paths of this multipath device and report the output of:

# fdisk -l /dev/<device_name>

Please also capture the output of:

# df -k

on your dom0 server.
Let me know.

Simon

Posted by Simon on September 17, 2013 at 10:07 PM CEST #

Hi Simon,

LUN size was extended from 200GB to 600GB.

Here are the outputs:

# multipath -ll /dev/mapper/3600143801259a4a20000600000240000
3600143801259a4a20000600000240000 dm-4 HP,HSV340
size=200G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 4:0:1:5 sdai 66:32 active ready running
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:2:5 sdaj 66:48 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 4:0:0:5 sdah 66:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 4:0:3:5 sdak 66:64 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:1:5 sdam 66:96 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:3:5 sdao 66:128 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:2:5 sdan 66:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 3:0:0:5 sdal 66:80 active ready running

# fdisk -l /dev/mapper/3600143801259a4a20000600000240000

Disk /dev/mapper/3600143801259a4a20000600000240000: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mapper/3600143801259a4a20000600000240000 doesn't contain a valid partition table

Posted by guest on September 17, 2013 at 10:21 PM CEST #

# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 3050092 894384 1998272 31% /
/dev/sda1 101086 28558 67309 30% /boot
tmpfs 871528 0 871528 0% /dev/shm
none 871528 136 871392 1% /var/lib/xenstored
/dev/mapper/3600143801259a4a20000400000e90000
20971520 281364 20690156 2% /poolfsmnt/0004fb00000500003ff8b0ed68c3ae64
/dev/mapper/3600143801259a4a200006000001d0000
629145600 28420096 600725504 5% /OVS/Repositories/0004fb000003000057d82704789fa113
/dev/mapper/3600143801259a4a20000400000f80000
209715200 119761920 89953280 58% /OVS/Repositories/0004fb0000030000b934990e3f83fa2b
/dev/mapper/3600143801259a4a20000400001250000
629145600 523082752 106062848 84% /OVS/Repositories/0004fb0000030000962a13d69eedb969
/dev/mapper/3600143801259a4a20000600000240000
209715200 4871168 204844032 3% /OVS/Repositories/0004fb00000300006cda847750fe59eb

Posted by RC on September 17, 2013 at 10:23 PM CEST #

Please send the output of these commands:

# fdisk -l /dev/sdai
# fdisk -l /dev/sdaj
# fdisk -l /dev/sdah
# fdisk -l /dev/sdak
# fdisk -l /dev/sdam
# fdisk -l /dev/sdao
# fdisk -l /dev/sdan
# fdisk -l /dev/sdal

Let me know.
I suspect they will all report 200 GB.

Simon

Posted by guest on September 17, 2013 at 10:29 PM CEST #

No, sdai to sdak reported 200 GB while sdal to sdao reported 600 GB.

# fdisk -l /dev/sdai

Disk /dev/sdai: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdai doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdaj

Disk /dev/sdaj: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdaj doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdah

Disk /dev/sdah: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdah doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdak

Disk /dev/sdak: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdak doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdam

Disk /dev/sdam: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdam doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdao

Disk /dev/sdao: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdao doesn't contain a valid partition table
[root@r5-h8 /]# fdisk -l /dev/sdan

Disk /dev/sdan: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdan doesn't contain a valid partition table

Disk /dev/sdal: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdal doesn't contain a valid partition table

Posted by RC on September 17, 2013 at 10:37 PM CEST #

Try to re-read the partition table on the 200 GB paths with:

# blockdev --rereadpt /dev/sdai
# blockdev --rereadpt /dev/sdaj
# blockdev --rereadpt /dev/sdah
# blockdev --rereadpt /dev/sdak

Then check that all paths report 600 GB.
If it works correctly, remember that you have to execute these steps on all the dom0s of your pool.
Let me know.

Simon

Posted by guest on September 17, 2013 at 10:46 PM CEST #

Simon,

The repo is now showing the correct size after running the tunefs.ocfs2 -S command. Thanks! Do you have any idea why the repo refresh in VM Manager did not work? Why did it work for Adam and not in my environment? Please advise.

Thanks,
RC

Posted by RC on September 17, 2013 at 11:08 PM CEST #

Hi RC,

everything depends on the configuration of the entire architecture: server, HBA, storage.
I remember that I had a similar problem on a 3.1.1 installation, but I don't encounter it anymore on 3.2.4.
If you aren't able to refresh and resize your repository from Oracle VM Manager, open a service request on https://support.oracle.com .
Glad to help you.
Regards,

Simon

Posted by Simon on September 17, 2013 at 11:16 PM CEST #

Hi Simon,

Yes, I already created a new SR for this issue. Thanks for your help.

RC

Posted by RC on September 18, 2013 at 01:38 AM CEST #

Hi Simon,

The repo refresh is actually working. I needed to perform it first on the Storage tab of Oracle VM Manager.

1. Go to the Storage tab -> select the storage -> select the disk and refresh.
2. Go to the Repositories tab -> select the repository and refresh.

So I had to refresh the disk first, before the repository.

Thanks,

RC

Posted by RC on October 05, 2013 at 12:45 AM CEST #

Hi,

glad to know that you sorted it out.
I'll update the article with your steps; thank you for your feedback.
Kind regards.

Simon

Posted by Simon on October 05, 2013 at 01:26 AM CEST #

Hi friends,
I had the same issue and refreshing the repository was not working.
tunefs.ocfs2 -S worked fine.

Thanks for your information

Posted by Raghuvir on February 14, 2014 at 05:37 PM CET #

About

Simon Coter is a Technical Expert Core Technology consultant at Oracle. He works on projects covering several Oracle products such as Oracle Database, eBusiness Suite, Oracle VM, Oracle Linux, Oracle Exadata and more.
