Exa-byte - Use OEDACLI to create a new ASM Diskgroup

November 23, 2021 | 6 minute read
Alex Blyth
Senior Principal Product Manager

In this Exa-byte, we're going to use OEDACLI to create a new ASM disk group. The really nice thing about using OEDACLI for this is that all the requisite steps - validating free space on the cell disks, creating the new grid disks, updating the ASM disk string, etc. - are done for us!
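
By the way, once OEDACLI is up (we'll start it shortly), its built-in help should show the full syntax and parameters accepted by ADD DISKGROUP on your release - handy if you want to check what's available before following along:

oedacli> help add diskgroup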

Let's quickly check the amount of free space on our cell disks using dcli:


# dcli -g cell_group -l root "cellcli -e list celldisk attributes name, size, freespace where disktype = 'HardDisk'"

exademo01celadm01: CD_00_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_01_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_02_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_03_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_04_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_05_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_06_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_07_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_08_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_09_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_10_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm01: CD_11_exademo01celadm01     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_00_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_01_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_02_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_03_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_04_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_05_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_06_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_07_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_08_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_09_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_10_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm02: CD_11_exademo01celadm02     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_00_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_01_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_02_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_03_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_04_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_05_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_06_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_07_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_08_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_09_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_10_exademo01celadm03     12.4737091064453125T     4.157257080078125T
exademo01celadm03: CD_11_exademo01celadm03     12.4737091064453125T     4.157257080078125T

So we can see our 36 disks (I'm playing with an X8M Quarter Rack with High Capacity Storage Servers, each of which has 14TB drives). Each cell disk has about 4.15TB of free space, which we're going to use to create a new disk group (and its grid disks) totaling 49TB usable after High Redundancy ASM mirroring.
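
As a quick sanity check, the 49TB usable figure lines up with the free space we just saw (numbers are rounded, so treat this as approximate):

36 cell disks x ~4.157TB free   = ~149.6TB raw
~149.6TB / 3 (High Redundancy)  = ~49.8TB usable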

Next, we're going to start OEDACLI and load our es.xml file


./oedacli -c es.xml
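
Side note - the XML can also be loaded from within the tool using LOAD FILE rather than the -c startup option (exact behaviour may vary by OEDACLI release):

oedacli> load file name=es.xml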

Next, let's get the GUID for our cluster (you could use clusternumber or clustername as you see fit). As it happens, the id we're after is on the very last line of the output below.

oedacli> list clusters
   version : "CloneInstall"
  clusterName : "exademoCluster"
  clusterOwner : "ddbeed48-3b32-93be-1750-14d9efe29052"
  clusterVersion : "21.3.0.0.0"
  clusterHome : "/u01/app/21.0.0.0/grid"
  inventoryLocation : "/u01/app/oraInventory"
  asmScopedSecurity : "false"
  clusterVips :
    clusterVip :
      vipName : "exademo01client01vm01-vip"
      domainName : "exacorp.com"
      vipIpAddress : "XX.XX.XX.XX"
      machines :
        machine :
          domainGroup :
          machine :
          id : "exademo01compute01_Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_vm01_id"
      id : "exademo01compute01_Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_vm01_id_vip"
      vipName : "exademo01client02vm01-vip"
      domainName : "exacorp.com"
      vipIpAddress : "XX.XX.XX.XX"
      machines :
        machine :
          domainGroup :
          machine :
          id : "exademo01compute02_Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_vm01_id"
      id : "exademo01compute02_Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_vm01_id_vip"
  customerName : "Oracle"
  application : "Mission Critial Application"
  scanIps :
    scanIp :
  clusterScans :
    clusterScan :
      id : "Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_id_scan_client"
  diskGroups :
    diskGroup :
      id : "f2f1ad76-881f-cefc-fb19-becd8978c523"
      id : "c0_datadg"
      id : "c0_otherdg"
      id : "c0_otherdg1"
      id : "c34006c4-d6aa-8b1d-79bb-38a3baac44e4"
  basedir : "/u01/app/oracle"
  language : "all_langs"
  patches :
    patch :
  id : "Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_id"

Next, we're going to add a new disk group called DATA2C1, which will be 49TB usable and configured with ASM High Redundancy (triple mirroring).

oedacli> add diskgroup DISKGROUPNAME=DATA2C1 DISKGROUPSIZE=49T ocrvote=false REDUNDANCY=HIGH TYPE=DATA where clusternumber="Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_id"
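
As mentioned above, the WHERE clause could also identify the cluster by name rather than by id. Assuming the clusterName we saw in list clusters, something along these lines should be equivalent:

oedacli> add diskgroup DISKGROUPNAME=DATA2C1 DISKGROUPSIZE=49T ocrvote=false REDUNDANCY=HIGH TYPE=DATA where clustername="exademoCluster"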

Let's save and merge the action in OEDACLI:

oedacli> save action
oedacli> merge actions
 processMerge
 processMergeActions
 Merging Action : add diskgroup DISKGROUPNAME=DATA2C1 DISKGROUPSIZE=49T ocrvote=false REDUNDANCY=HIGH TYPE=DATA where clusternumber="Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_id"
 Merging ADD DISKGROUP
 Action Validated and Merged OK
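
If you want to review what's queued up before pulling the trigger, listing the pending actions should show the merged ADD DISKGROUP waiting to be deployed (the action id will vary):

oedacli> list actions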

And DEPLOY!

oedacli> deploy actions
 Deploying Action ID : 11 add diskgroup DISKGROUPNAME=DATA2C1 DISKGROUPSIZE=49T ocrvote=false REDUNDANCY=HIGH TYPE=DATA where clusternumber="Cluster-c8809c218-5f07-768a-e589-82378e8f2cdc_id"
 Deploying ADD DISKGROUP
 Diskgroup DATA2C1 will be created on Storage Servers  [exademo01celadm01.exacorp.com, exademo01celadm02.exacorp.com, exademo01celadm03.exacorp.com]
 Validating free space....
 Creating Grid Disks for ASM Disk Group DATA2C1
 Creating ASM Disk Group DATA2C1
 Updating ASM Diskstring...
 Getting grid disks using utility in /u01/app/21.0.0.0/grid/bin
 Checking ASM Disk Group status...
 Completed creation of ASM Disk Group DATA2C1
 Done...
 Done

Let's quickly check the free space in each cell disk:

# dcli -g cell_group -l root "cellcli -e list celldisk attributes name, size, freespace where disktype = 'HardDisk'"

exademo01celadm01: CD_00_exademo01celadm01     12.4737091064453125T     76.03125G
exademo01celadm01: CD_01_exademo01celadm01     12.4737091064453125T     76.03125G
exademo01celadm01: CD_02_exademo01celadm01     12.4737091064453125T     76.03125G
exademo01celadm01: CD_03_exademo01celadm01     12.4737091064453125T     76.03125G
exademo01celadm01: CD_04_exademo01celadm01     12.4737091064453125T     76.03125G
... - truncated
exademo01celadm03: CD_07_exademo01celadm03     12.4737091064453125T     76.03125G
exademo01celadm03: CD_08_exademo01celadm03     12.4737091064453125T     76.03125G
exademo01celadm03: CD_09_exademo01celadm03     12.4737091064453125T     76.03125G
exademo01celadm03: CD_10_exademo01celadm03     12.4737091064453125T     76.03125G
exademo01celadm03: CD_11_exademo01celadm03     12.4737091064453125T     76.03125G
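
We could also take a look at the grid disks OEDACLI created on the cells. A cellcli query along these lines should list them (the DATA2C1 prefix matches the disk group name we chose):

# dcli -g cell_group -l root "cellcli -e list griddisk attributes name, size where name like 'DATA2C1.*'"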

And let's check ASM for our new disk group:

ASMCMD> lsdg

State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512             512   4096  4194304  154128384  154124772          8562688        48520694              0             N  DATA2C1/
MOUNTED  HIGH  N         512             512   4096  4194304  188338176  187179180         10463232        58905316              0             Y  DATAC1/
MOUNTED  HIGH  N         512             512   4096  4194304   94150656   94125252          5230592        29631553              0             N  RECOC1/
MOUNTED  HIGH  N         512             512   4096  4194304  314449920  314269260         17469440        98933273              0             N  SPARSC1/
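
If you want to drill down further, asmcmd can list the individual disks backing the new disk group - there should be 36 of them, one per grid disk (output not shown here):

ASMCMD> lsdsk -G DATA2C1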

There it is at the top of the list! Job done! All in one tool, and all with Exadata best practices applied automatically.

Alex Blyth

Senior Principal Product Manager

Alex Blyth is a Product Manager for Oracle Exadata with over 22 years of IT experience mainly focused on Oracle Database, Engineered Systems, manageability tools such as Enterprise Manager and most recently Cloud. Prior to joining the product management team, Alex was a member of the Australia/New Zealand Oracle Presales community and before that a customer of Oracle's at a Financial Services organisation.

