Creating an ACFS Filesystem on Exascale Volumes

January 29, 2025 | 9 minute read
Alex Blyth
Senior Principal Product Manager - Exadata

One of the exciting capabilities of Exadata Exascale is the ability to create block volumes on shared Exadata storage. This allows you to tap into the scalable, high-performance, highly available shared storage capacity of an Exascale storage pool to create POSIX-compliant filesystems and attach them to your Exadata database VMs.


While there is a lot to cover when talking about Exascale, we’re going to focus on Exascale Volumes: RDMA-enabled block volumes that Exadata stores and manages on shared Exadata storage. These volumes are presented to database VMs and bare metal servers over an optimized, purpose-built protocol called EDV. Exascale Volumes can also be presented over iSCSI – but that’s a topic for another day.

So, what are these volumes good for? I’m glad you asked.

Two main purposes: 1) VM image storage, and 2) additional filesystems for data staging, Data Pump export/import files, GoldenGate trail files, and more.

Focusing on the second use case: we all need to store additional “data” on our Exadata systems from time to time. Typically, we’ve used NFS or ACFS to satisfy this need. But with Exascale we’re not using ASM, so how can we use ACFS?

First, it’s worth noting that ACFS – the ASM Cluster File System – has been renamed the Advanced Cluster File System, to indicate that ASM is no longer the only volume manager ACFS filesystems can be stored on. Exascale Volumes can also be used for ACFS – and they’re really simple to use! Let’s see how.

In this example, I’m using a two-VM cluster with the latest Exadata System Software (25.1.1 at the time of writing), which has access to three storage servers with Exascale deployed.

On one of the database VMs, we’re going to start by checking which Exascale vaults are available for us to create a volume in. 

oracle@exapmdbvm01$ escli lsvault
Name
Vault8

Next, we create a volume using the mkvolume command, specifying the vault – Vault8 – from the previous step.

oracle@exapmdbvm01$ escli mkvolume 50G --vault vault8
Created volume with id 84:1756a959580240d5894263cffa7aeef4
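
If you want to confirm the volume before moving on, escli’s listing commands follow the same ls<object> pattern we used with lsvault. Assuming your software release includes the matching lsvolume command, it will show the new volume and its size:

oracle@exapmdbvm01$ escli lsvolume    # assumed command – verify with escli help in your release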

After noting the volume ID '84:1756a959580240d5894263cffa7aeef4' – we’ll need it in a few steps – we’re going to attach the volume to both database VMs in the cluster. To do this, we need to discover the GUID of the GI cluster running on these servers using the lsinitiator command.

oracle@exapmdbvm01$ escli lsinitiator
id                                   hostName    giClusterName giClusterId
2aacc24d-371f-b7e8-2aac-c24d371fb7e8 exapmdbvm01 exapmdbvm01   578c403c-6c14-6f16-ffea-ba5fffcd3806
f6970b8d-68ca-b0da-f697-0b8d68cab0da exapmdbvm02 exapmdbvm01   578c403c-6c14-6f16-ffea-ba5fffcd3806

Again, take note of the giClusterId '578c403c-6c14-6f16-ffea-ba5fffcd3806' for use in the next step. 

As an aside, we could use crsctl query css to get the GI cluster GUID, but using lsinitiator also shows us the VM/host GUIDs, which would allow us to attach the volume to specific VMs rather than all VMs in the cluster.
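
For illustration, a single-VM attachment would use the same mkvolumeattachment command with an initiator-scoped attribute instead of giClusterId. The attribute name (initiatorId) and attachment name (stagevol1) below are my assumptions, not confirmed syntax – check escli help mkvolumeattachment in your release. The initiator ID shown is exapmdbvm01’s from the output above:

oracle@exapmdbvm01$ escli mkvolumeattachment <volume-id> stagevol1 --attributes initiatorId=2aacc24d-371f-b7e8-2aac-c24d371fb7e8    # hypothetical sketch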

To attach the volume to all VMs in this cluster, we need to create a cluster-wide volume attachment – which we’ll call acfsvol2 – using both the volume ID and giClusterId we jotted down earlier.

oracle@exapmdbvm01$ escli mkvolumeattachment 84:1756a959580240d5894263cffa7aeef4 acfsvol2 --attributes giClusterId=578c403c-6c14-6f16-ffea-ba5fffcd3806
Created edv attachment with id 84:d2b5edf431964ed4a0d01cb49422888d
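
As with volumes, we can sanity-check the result. Assuming the matching lsvolumeattachment command exists in your release, it should list acfsvol2 against the giClusterId we supplied:

oracle@exapmdbvm01$ escli lsvolumeattachment    # assumed command, following the same ls<object> naming pattern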

And now, we create an ACFS filesystem! This is the really simple bit – the mkacfsfilesystem command not only initializes the filesystem, it also registers it with the GI cluster and starts it on all the VMs in the cluster. Nice and easy!

oracle@exapmdbvm01$ escli mkacfsfilesystem 84:1756a959580240d5894263cffa7aeef4 /acfs2
Creating ACFS file system with ID 2:e479635242b445a7853520d0b84484fb
Created ACFS file system with ID 2:e479635242b445a7853520d0b84484fb

And that’s it! Our ACFS filesystem has been created and started on both VMs in our cluster. But don’t take my word for it – let’s look at the VMs and prove it.

From the Linux command line, we can use df -h to see the filesystems available on our VMs. Note the last line in the output below.

oracle@exapmdbvm01$ df -h
Filesystem                                                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                                                    32G     0   32G   0% /dev
tmpfs                                                                                       32G  8.8M   32G   1% /run
tmpfs                                                                                       32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/VGExaDbDomU-LVDbSys1                                                            15G  3.9G   12G  26% /
/dev/mapper/VGExaDbDomU-LVDbTmp                                                            3.0G   73M  2.9G   3% /tmp
/dev/mapper/VGExaDbDomU-LVDbKdump                                                           20G  175M   20G   1% /crashfiles
/dev/sdb1                                                                                  412M  124M  289M  31% /boot
/dev/mapper/VGExaDbDomU-LVDbVar1                                                           2.0G  697M  1.3G  36% /var
/dev/mapper/VGExaDbDomU-LVDbVarLog                                                          18G  451M   18G   3% /var/log
/dev/mapper/VGExaDbDomU-LVDbVarLogAudit                                                    924M  153M  772M  17% /var/log/audit
/dev/mapper/VGExaDbDomU-LVDbHome                                                           4.0G   77M  4.0G   2% /home
tmpfs                                                                                       63G  513M   62G   1% /dev/shm
/dev/mapper/VGExaDbDisk.exapmdbvm01_u01_6d901444ceb648abadc97308a73c124e-LVDBDisk           18G  2.6G   16G  15% /u01
/dev/mapper/VGExaDbDisk.exapmdbvm01_dbh01_9b0ec48579c84773859fa402202745f7-LVDBDisk         48G  5.8G   43G  13% /u01/app/oracle/product/23.0.0.0/dbhome_1
/dev/mapper/VGExaDbDisk.exapmdbvm01_gih01_52e8bed1f4ee4fd4b10563c2c90aee7c-LVDBDisk         48G  3.5G   45G   8% /u01/app/23.0.0.0/grid
tmpfs                                                                                      6.3G     0  6.3G   0% /run/user/1001
oracle_clusterware                                                                         128M  5.3M  123M   5% /u01/app/oracle/crsdata/exapmdbvm01/shm
tmpfs                                                                                      6.3G     0  6.3G   0% /run/user/0
/dev/exc/acfsvol2                                                                           50G  651M   50G   2% /acfs2
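
As a quick smoke test, we can write a file on one VM and read it from the other. The mountpoint is created owned by root (the srvctl config below confirms this), so hand it to the oracle user first – the oinstall group here is an assumption based on a typical Exadata install:

[root@exapmdbvm01 ~]# chown oracle:oinstall /acfs2    # group name assumed
oracle@exapmdbvm01$ echo "hello from exapmdbvm01" > /acfs2/hello.txt
oracle@exapmdbvm02$ cat /acfs2/hello.txt
hello from exapmdbvm01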

We can also use srvctl for more information, and for confirmation that the filesystem has been registered with the Clusterware so it can be started and mounted automatically when the VMs restart.

$ srvctl status filesystem
ACFS file system /acfs2 is mounted on nodes exapmdbvm01, exapmdbvm02

And for the configuration details:

$ srvctl config filesystem
Volume device: /dev/exc/acfsvol2
Canonical volume device: /dev/exc/acfsvol2
Accelerator volume devices:
Mountpoint path: /acfs2
Mount point owner: root
Mount point group: root
Mount permissions: owner:root:rwx,pgrp:root:r-x,other::r-x
Mount users: oracle
Type: ACFS
Mount options:
Description:
ACFS file system is enabled
ACFS file system is individually enabled on nodes:
ACFS file system is individually disabled on nodes:
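
Because the filesystem is now a Clusterware resource, the standard srvctl filesystem verbs apply. For example, to take it offline on one node and bring it back (device and node names from this walkthrough):

$ srvctl stop filesystem -device /dev/exc/acfsvol2 -node exapmdbvm02
$ srvctl start filesystem -device /dev/exc/acfsvol2 -node exapmdbvm02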

Now that we have our ACFS filesystem, we can use it just as we would any other ACFS filesystem! I’m confident you’ll find this short guide useful, and if you’d like more information, check out these links on Exascale, Exascale Volumes, and ACFS on Exascale.

Alex Blyth is a Product Manager for Oracle Exadata with over 25 years of IT experience mainly focused on Oracle Database, Engineered Systems, manageability tools such as Enterprise Manager and most recently Cloud. Prior to joining the product management team, Alex was a member of the Australia/New Zealand Oracle Presales community and before that a customer of Oracle's at a Financial Services organisation.

