Hitachi Vantara Knowledge

Expanding a UVM-based filesystem

With normal HDP pools, the server can detect how much disk space is available, and it never allocates new chunks to a filesystem if no HDP pool in the span has enough space. The server also performs pre-allocation writes: when a chunk is allocated, the server writes a non-zero block to every HDP page in the chunk, so that the free space on the HDP pool falls immediately and the server has an accurate view of how much space is left. If an HDP pool were to run out of space, write operations would fail and filesystems would unmount; it would be impossible to remount them until new space had been added to the HDP pool.
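The pre-allocation mechanism can be sketched as follows. This is an illustrative model only, not server code: `HdpPool`, `preallocate_chunk`, and the sizes are assumptions chosen for the sketch (42 MiB is the HDP page size on Enterprise platforms, per the discussion below; the 18 GiB chunk and 100 GiB pool are arbitrary examples).

```python
# Illustrative model of pre-allocation writes on a thin-provisioned HDP pool.
# All names and sizes are assumptions for the sketch, not server internals.

MIB = 1024**2
PAGE_SIZE = 42 * MIB          # example HDP page size (Enterprise platforms)
CHUNK_SIZE = 18 * 1024**3     # example chunk size: 18 GiB

class HdpPool:
    """Thin pool: a page consumes real disk space only once it is mapped."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.mapped_pages = set()

    def write(self, offset):
        # Writing any non-zero block maps the page containing that offset.
        self.mapped_pages.add(offset // PAGE_SIZE)

    def free_space(self):
        return self.capacity - len(self.mapped_pages) * PAGE_SIZE

def preallocate_chunk(pool, chunk_start):
    """Write one block into every page of the chunk, so the pool's free
    space falls immediately rather than when user data arrives later."""
    for off in range(chunk_start, chunk_start + CHUNK_SIZE, PAGE_SIZE):
        pool.write(off)

pool = HdpPool(capacity_bytes=100 * 1024**3)
preallocate_chunk(pool, chunk_start=0)
# free_space() now reflects the newly allocated chunk straight away,
# giving the server an accurate view of the remaining space.
```

Without the pre-allocation pass, `free_space()` would stay high until user data arrived, which is exactly the over-estimate that leads to a pool running out of space mid-write.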

With UVM, the server has no way to determine:

  • Whether the external LUs are thinly provisioned and, if so, how much free disk space is left.
  • The page size of the HDP pool.

The server is, therefore, unable to guard against running out of disk space as effectively as it can with a local HDP pool.

Note: It is the Administrator's job to ensure that no external HDP pool ever runs out of space.

For a span residing on UVM LUs, the behavior of the NAS server changes as follows:

  • Auto-expansion
    • Because the server cannot determine that any filesystem expansion is safe, auto-expansion is disabled. Manual expansion is still permitted, but it is the Administrator's responsibility to ensure that no external HDP pool ever runs out of space.
    • If a span resides on UVM storage, you can use Hitachi Storage Administrator Migrator functionality to migrate its LUs to a new, local HDP pool. While migration is in progress, the span is treated as still residing on UVM, even after some of its LUs have migrated to the new, local HDP pool. The server overlooks the fact that a stripeset is split between two pools (which would normally be a forbidden configuration), but filesystems do not auto-expand. As soon as all LUs have migrated, the server treats the span as residing on HDP: auto-expansion resumes, and the server performs its normal checks for free space before allocating chunks.
  • Pre-allocation writes
    • In order to help the Administrator form an accurate picture of the available free space, the server performs pre-allocation writes when a new chunk is allocated, just as it would on HDP. Because it cannot determine the external LUs' page size, it performs pre-allocation writes twice: once assuming 32 MiB pages (as used by HUS and AMS) and once assuming 42 MiB pages (as used by Enterprise storage platforms). However, if the external storage comes from a different vendor and uses a smaller page size, pre-allocation writes reduce the free space by less than the expected amount, and the rest of the reduction occurs later, when the filesystem writes to the newly allocated chunks for the first time. The Administrator must therefore take extra care when virtualizing non-Hitachi storage over UVM.
    • Performing two sets of pre-allocation writes does not take twice as long as performing one set, because HDP pages are mapped to real disk space only once. In rare cases where pre-allocation writes cause problems, they can be disabled for the system as a whole using the span-hdp-preallocation command. This makes filesystem expansion faster, but it also causes the server to over-estimate the available space on HDP and, therefore, can cause the Administrator to over-estimate the available space on UVM. Do not disable pre-allocation writes if any alternative solution exists.
  • Minimum SD rules
    • When any span is expanded, the server enforces minimum SD counts to help maintain performance. With UVM, the server enforces the same minimum as for plain storage. However, whenever possible, we recommend adhering to the stricter HDP rules (see the span-create man page). Expanding on to more SDs helps to ensure that adequate queue depth is available.
  • Reuse of HDP pages
    • The server maintains a vacated-chunks list, just as on HDP, so that deleting and recycling one filesystem and creating or expanding another reuses the same chunks, instead of selecting new ones. If the external LUs are thinly provisioned, they benefit from the reuse of HDP pages that are already mapped to real disk space.
    • The server cannot unmap HDP pages on the external LUs, so the span-unmap-vacated-chunks command does not run on a UVM-resident span.
  • DP-Vols
    • If a span resides on UVM internal LUs, you can expand it on to DP-Vols from a local HDP pool. You can also expand on to DDM LUs from any DDM pool.
    • You cannot expand a span on to UVM internal LUs if it resides entirely on HDP DP-Vols.
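The dual-pass pre-allocation described above, and why it costs less than two full passes, can be sketched numerically. This is a minimal model under assumed sizes (an 18 GiB example chunk; the `boundaries` and `pages_touched` helpers are hypothetical), not a description of server internals:

```python
# Sketch of dual-pass pre-allocation: the server writes assuming both Hitachi
# page sizes, because it cannot query the external array's real page size.
# Chunk size and helper names are assumptions for illustration.

MIB = 1024**2
CHUNK = 18 * 1024**3  # example chunk size: 18 GiB

def boundaries(page_size, chunk_size=CHUNK):
    """Offsets written in one pre-allocation pass for one assumed page size."""
    return set(range(0, chunk_size, page_size))

# Pass 1 assumes 32 MiB pages (HUS/AMS); pass 2 assumes 42 MiB (Enterprise).
writes = boundaries(32 * MIB) | boundaries(42 * MIB)

def pages_touched(true_page_size):
    """Pages actually mapped on the array, given its real page size."""
    return {off // true_page_size for off in writes}

# Whichever Hitachi page size the array really uses, every page of the chunk
# is mapped, so free space falls by the full chunk:
assert len(pages_touched(32 * MIB)) == CHUNK // (32 * MIB)
assert len(pages_touched(42 * MIB)) == -(-CHUNK // (42 * MIB))  # ceil division

# Because a page maps to real space only once, the two passes together cost
# fewer distinct writes than two independent passes would:
assert len(writes) < len(boundaries(32 * MIB)) + len(boundaries(42 * MIB))

# With a smaller, non-Hitachi page size (say 4 MiB), many pages are missed:
# free space falls by less than the chunk size until the filesystem writes.
assert len(pages_touched(4 * MIB)) < CHUNK // (4 * MIB)
```

The last assertion is the non-Hitachi caveat in miniature: the Administrator sees only a partial drop in free space at allocation time, with the remainder arriving unpredictably as the filesystem fills the chunk.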

