Friday, May 17, 2013

Splitting a FlexClone volume from its parent


If you want a FlexClone volume to have its own disk space, rather than sharing its parent's, you can split it from the parent using the vol clone split commands shown below.

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 31% (29%)   30% (27%)  Sep 24 22:46  clone_qa_vmdk_vol.1 (busy,vclone)

prod_filer_h2> snap delete vmdk_vol clone_qa_vmdk_vol.1
Snapshot clone_qa_vmdk_vol.1 is busy because of LUN clone, snapmirror, sync mirror, volume clone, snap restore, dump, CIFS share, volume copy, ndmp, WORM volume, SIS Clone

The delete fails because this snapshot is the backing copy for a FlexClone volume (hence the vclone flag in the snap list output above). It cannot be removed until the dependent clone, qa_vmdk_vol, is either split off or destroyed:

prod_filer_h2> vol status qa_vmdk_vol
         Volume State           Status            Options
        qa_vmdk_vol online          raid_dp, flex     nosnap=on, no_atime_update=on, maxdirsize=18350,
                                64-bit            guarantee=none
                Clone, backed by volume 'vmdk_vol', snapshot 'clone_qa_vmdk_vol.1'
                         Volume UUID: 2dd22882-06bb-11e2-9ef8-123478563412
                Containing aggregate: 'aggr0'
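
Before kicking off the split, it is worth confirming that the containing aggregate has room for the shared blocks that will be copied into the clone. In 7-Mode, vol clone split estimate reports roughly how much space the split will consume, and df -A shows free space in the aggregate (a quick pre-check; output omitted here since the figures vary per system):

prod_filer_h2> vol clone split estimate qa_vmdk_vol
prod_filer_h2> df -A aggr0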

prod_filer_h2> vol clone split start qa_vmdk_vol
Clone volume 'qa_vmdk_vol' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.

The clone-splitting operation begins. All existing Snapshot copies of the clone are deleted, and new Snapshot copies of the clone cannot be created while the split is in progress.
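
If the clone carries Snapshot copies of its own that you still need, check them before starting the split, since they are removed as part of it (same snap list command, pointed at the clone):

prod_filer_h2> snap list qa_vmdk_vol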

Note: If an online data migration operation is in progress, this command might fail; wait and retry it once the migration is complete.
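
Should the split need to be abandoned partway through (for instance, to free up aggregate space), 7-Mode can halt it with vol clone split stop; the volume then remains a clone, still backed by its parent, and the split can be restarted later:

prod_filer_h2> vol clone split stop qa_vmdk_vol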

prod_filer_h2> Fri May 17 01:23:06 EDT [prod_filer_h2:wafl.volume.clone.split.started:info]: Clone split was started for volume qa_vmdk_vol
Fri May 17 01:23:06 EDT [prod_filer_h2:wafl.scan.start:info]: Starting volume clone split on volume qa_vmdk_vol.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 2489 of 9863175 inodes processed (0%)
        12104546 blocks scanned. 6739818 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 3971 of 9863175 inodes processed (0%)
        19258602 blocks scanned. 13389935 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 4364 of 9863175 inodes processed (0%)
        23202794 blocks scanned. 15088433 blocks updated.
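
Rather than re-running the status command by hand, you can poll it from an admin host. A minimal sh sketch, assuming passwordless SSH to the filer is configured, keying off the "not a clone" message the filer prints once the split has finished:

#!/bin/sh
# Poll clone-split progress every 5 minutes until the filer reports
# that the volume is no longer a clone (message shown further below).
while :; do
    out=$(ssh prod_filer_h2 "vol clone split status qa_vmdk_vol" 2>&1)
    echo "$out"
    case "$out" in
        *"not a clone"*) break ;;
    esac
    sleep 300
done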

While the split is running, the backing snapshot in the parent volume is still flagged busy:

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 31% (29%)   30% (27%)  Sep 24 22:46  clone_qa_vmdk_vol.1 (busy,vclone)

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 11615 of 9863175 inodes processed (0%)
        32004621 blocks scanned. 23269747 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
vol clone split status: The volume is not a clone
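
At this point vol status should no longer print the "Clone, backed by volume 'vmdk_vol'..." line seen earlier, and the volume is consuming its own blocks (worth a quick check; output omitted):

prod_filer_h2> vol status qa_vmdk_vol
prod_filer_h2> df -h /vol/qa_vmdk_vol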

With the split complete, the snapshot is no longer held busy by the clone (the busy,vclone flag is gone) and can finally be deleted:

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 30% (30%)   28% (28%)  Sep 24 22:46  clone_qa_vmdk_vol.1

prod_filer_h2> snap delete vmdk_vol clone_qa_vmdk_vol.1

prod_filer_h2> Fri May 17 03:51:30 EDT [prod_filer_h2:wafl.snap.delete:info]: Snapshot copy clone_qa_vmdk_vol.1 on volume vmdk_vol NetApp was deleted by the Data ONTAP function snapcmd_delete. The unique ID for this Snapshot copy is (8, 1283708).
