Controlling file system space usage

The server can monitor space allocation on a file system and trigger alerts when pre-set thresholds are reached; optionally, users can be prevented from creating more files once a threshold has been reached. Alternatively, the file system can be expanded either manually or automatically while online. The command fs-usage controls the monitoring. See the man pages for details.
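For example (a minimal sketch; the exact options for listing and for setting thresholds are described in the fs-usage man page), running the command with no arguments is expected to report current usage and the configured thresholds for each file system:

fs-usage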

Two activities consume system space:

  • Live file system. Refers to the space consumed when network users add files or increase the size of existing files.
  • Snapshots. Refers to consistent file system images at specific points in time. Snapshots are not full copies of the live file system, and snapshot sizes change depending on the live file system: as data in the live file system is changed or deleted, snapshots require more space, because they preserve the data as it was when the snapshot was taken.
Note: Deleting files from the live file system may increase the space taken up by snapshots, so that no disk space is actually reclaimed as a result of the delete operation. The only sure way to reclaim space taken up by snapshots is to delete the oldest snapshot.

The server tracks space taken up by:

  • The user data in the live file system
  • The file system metadata (the data the server uses to manage the user data files)
  • Snapshots
  • The entire file system

For each of these, both a warning threshold and a severe threshold can be configured. Although the appropriate values differ from system to system, the following settings work well in most cases:

                      Warning    Severe
Live file system      70%        90%
Snapshots             20%        25%
Entire file system    90%        95%

When the storage space occupied by a volume crosses the warning threshold, a warning event is recorded in the event log. When the Entire File System Warning threshold has been reached, the space bar used to indicate disk usage turns yellow.

When the space reaches the severe threshold, a severe event is recorded in the event log, generating corresponding alerts. If the Entire File System Severe threshold has been reached, the space bar used to indicate disk usage turns amber.

If file system auto-expansion is disabled, you can limit the growth of the live file system to prevent it from crossing the severe threshold, effectively reserving the remaining space for use by snapshots. To limit the live file system to the percentage of available space defined as the severe threshold, select the Do not allow the live file system to expand beyond its Severe limit check box on the File System Details page.

File system utilization recommendations

The recommendations take into consideration file systems of various sizes and uses, and are broken into the following components:

  • Type of file system
  • Recommended maximum file system utilization
  • Recommended file system thresholds

Archive file systems

Archive file systems are defined as file systems that maintain a data set for an extended period of time, with little or no change to the data during that lifetime. This access pattern allows the file system to be utilized at very high levels.

Recommendation: The file system should be maintained at a usage level no higher than 97%. The Entire File System usage thresholds are recommended to be set at the following levels:

Warning 90%
Severe 97%

High activity file systems

High activity or high churn file systems are defined as file systems that have a high rate of data being accessed, deleted, and created. Because of this workload type, sufficient free space is required to maintain a high level of write performance. The amount of free space needed varies with file system size, as reflected in the following recommendations.

  • File system size range < 1 TiB
    • Recommendation: The file system should be maintained at a usage level no higher than 80%. The Entire File System usage thresholds are recommended to be set at the following levels:
Warning User Definable*
Severe 80%

* User Definable: Choose a value that provides sufficient time to increase file system capacity.

  • File system size range 1 TiB to 10 TiB
    • Recommendation: The file system should be maintained at a usage level no higher than 85%. The Entire File System usage thresholds are recommended to be set at the following levels:
Warning 70%
Severe 85%
  • File system size range > 10 TiB
    • Recommendation: The file system should be maintained at a usage level no higher than 90%. The Entire File System usage thresholds are recommended to be set at the following levels:
Warning 80%
Severe 90%

Dynamic Superblocks (DSB)

The file system maintains a history of file system checkpoints known as Dynamic Superblocks (DSBs). If the end user requires fast reclamation of free space after data deletions, the DSB count can be reduced to 2 for file systems smaller than 10 TiB and to 16 for file systems larger than 10 TiB. The default number of DSBs is 128. You can specify the setting at format time or change it later by issuing the following command:

fs-set-dsb-count <file system> <dsb count>

Example:

To change the DSB count of "fs1" to two DSBs:

fs-set-dsb-count fs1 2

Note that changing the number of DSBs requires that the file system be unmounted.
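For example, a hedged end-to-end sequence for the change above, assuming the standard mount and unmount commands are used to take "fs1" offline and bring it back online:

unmount fs1
fs-set-dsb-count fs1 2
mount fs1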

Increasing the size of a file system

There are two methods to expand the amount of storage allocated to a file system:

  • Manual expansion

    Manually expanding a file system allows you to add storage capacity to a file system (or to a tier of a tiered file system) immediately. You specify the new size of the file system, and the storage is allocated immediately, up to either the maximum size specified for the file system or tier, or the maximum size supported by the storage pool in which the file system was created.

  • Automatic expansion

    File system auto-expansion allows a file system to grow by adding chunks of storage on an as-needed basis, as long as the confinement limit or the maximum file system size has not been reached. For tiered file systems, auto-expansion can be applied independently to one or to all tiers of the file system, allowing one tier to expand independently of another.

    When auto-expansion is enabled, and the file system (or tier) reaches approximately 80 percent of its allocated capacity, one or more additional chunks are allocated (refer to the Storage Subsystem Administration Guide for a discussion of chunks). The maximum size that a file system can attain can be specified, or the file system size can be allowed to grow to the maximum size supported by the storage pool in which the file system was created.

Note: Once storage is allocated to a file system, that storage becomes dedicated to that file system, meaning that once a file system is expanded, its size may not be reduced. Unused space in the file system cannot be reclaimed, allocated to another file system, or removed. To reclaim the storage space, the file system must be relocated to different storage or deleted.

Increasing the amount of storage allocated to a file system (manually or automatically) does not require that the file system be taken offline.

Thin provisioning file systems

Thin provisioning is a method of controlling how a file system's free space is calculated and reported. Administrators use thin provisioning to optimize the utilization of storage and to plan resource acquisition in a way that helps minimize expenses, while ensuring that there is enough storage for all the system needs.

Thin provisioning allows you to oversubscribe the storage connected to the storage server. As long as the available storage is not completely allocated to file systems, the oversubscription cannot be noticed by storage system users.

When thin provisioning is enabled and storage is oversubscribed, if a client attempts a write operation and there is insufficient storage space, the client will receive an insufficient space error, even though a query for the amount of free space will show that space is still available. When storage is oversubscribed, the storage server's administrator must ensure that this situation does not occur; the storage server does not prevent this situation from occurring. To resolve this situation, the storage server's administrator must either disable thin provisioning or add storage.

When thin provisioning is enabled, the storage server reports the amount of free space for a file system based on the file system's expansion limit (its maximum configured capacity), rather than on the amount of free space based on the amount of storage actually allocated to the file system. Because file systems can be allowed to automatically expand up to a specified limit (the expansion limit), additional storage is allocated to the file system as needed, instead of all the storage being allocated to the file system when it is created.

For example, a file system has an expansion limit of 20 TB, with 6 TB already used and 8 TB currently allocated. If thin provisioning is enabled, the server will report that the file system has 14 TB of free space, regardless of how much free space is actually available in the storage pool. For more information about storage pools, refer to the Storage Subsystem Administration Guide. If thin provisioning is disabled, the server will report that the file system has 2 TB of free space.

By default, thin provisioning is disabled for existing file systems and for newly created file systems. Enable and disable thin provisioning using the filesystem-thin command (currently there is no way to enable or disable thin provisioning using NAS Manager).
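For example, a hedged sketch of enabling thin provisioning for a file system named "fs1" (the on/off argument form shown here is an assumption; refer to the filesystem-thin man page for the exact syntax):

filesystem-thin fs1 on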

Thin provisioning works on a per file system basis, and does not affect the capacity reported by the span-list --filesystems and filesystem-list commands. Also, NAS Manager displays the actual file system size. As a result, the administrator can perform proper capacity planning.

When enabled, thin provisioning information is returned by the following CLI commands:

  • cifs-share list
  • df
  • filesystem-limits
  • filesystem-list -v
  • fs-stat
  • nfs-export list
  • query

For more information about CLI commands, refer to the Command Line Reference.

If thin provisioning is enabled and you disable file system auto-expansion for a storage pool, the free space reported for each of the file systems in that storage pool is the same as if thin provisioning were not enabled. This means that the free space reported becomes equal to the difference between the file system's current usage and the amount of space in all storage pool chunks currently allocated to that file system. If you re-enable file system auto-expansion for file systems in the storage pool, free space is again reported as the difference between the file system's current usage and its expansion limit, if an expansion limit has been specified.

When thin provisioning is enabled, and the aggregate of the expansion limits of all file systems exceeds the amount of storage connected to the server/cluster, warnings are issued to indicate that storage is oversubscribed. These warnings are issued because there is an insufficient amount of actual storage space for all file systems to grow to their expansion limits.

Managing file system expansion

File system growth management strategies can be summarized as follows:

  • Auto-expansion enabled, but not confined. The file system is created with a defined size limit, and a small amount of that space is actually allocated when the file system is created. The file system is then allowed to expand automatically (auto-expansion enabled) until the storage pool hosting the file system is full (auto-expansion is not confined), as long as the file system expansion will not cause the file system to exceed the maximum allowable number of chunks in a file system.
  • Auto-expansion enabled, and confined. The file system is created with a defined size limit, and a small amount of that space is actually allocated when the file system is created. The file system is then allowed to expand automatically (auto-expansion enabled) to the defined size limit (auto-expansion is confined), as long as there is available space in the storage pool and the file system expansion will not cause the file system to exceed the maximum allowable number of chunks in a file system.
  • Auto-expansion disabled. The file system is created with the full amount of the specified size, and is not allowed to expand automatically (auto-expansion disabled).
Note: The size of a file system cannot be reduced.
The following summarizes the expansion behavior for untiered and tiered file systems:

Untiered file system, auto-expansion enabled

  If auto-expansion is not confined, the size limit is ignored. The file system will be allowed to expand until the storage pool is full.

  If auto-expansion is confined, the size limit defines the maximum size to which the file system will be allowed to expand.

  When the file system is created, it is initially allocated a certain amount of space (the initial capacity), and the file system is allowed to expand automatically, up to its size limit. When the file system uses approximately 80% of its currently allocated space, it is expanded automatically, up to its size limit. This expansion occurs in increments specified by the guideline chunk size (which is calculated by the system).

  The file system can be manually expanded, increasing the file system size limit.

Untiered file system, auto-expansion disabled

  The size limit defines the amount of space that is immediately allocated to the file system.

  When the file system is created, it is allocated the total amount of space specified by the size limit.

  The file system can be manually expanded, increasing the file system size limit.

Tiered file system, auto-expansion enabled

  If auto-expansion is not confined, the size limit (if defined) is ignored. The tiers of the file system will be allowed to expand until the storage pool is full.

  If auto-expansion is confined, the size limit defines the maximum size to which a tier of the file system will be allowed to expand.

  When the file system is created, the user data tier is initially allocated a certain amount of space (the initial capacity), and the user data tier is allowed to expand automatically, up to its size limit. When the user data tier uses approximately 80% of its currently allocated space, it is expanded automatically, up to its size limit. This expansion occurs in increments specified by the guideline chunk size (which is calculated by the system).

  Either tier can be manually expanded, increasing the file system size limit.

Tiered file system, auto-expansion disabled

  The size limit defines the amount of space that is immediately allocated to the user data tier.

  When the file system is created, the user data tier is allocated the total amount of space specified by the size limit.

  Either tier can be manually expanded, increasing the file system size limit.

By default, file system auto-expansion is enabled. When auto-expansion is enabled, the file system expands, without interruption of service, if all of the following conditions exist:

  • Confined limit not reached (applies only to file systems with confined auto-expansion). The file system expansion would not exceed the confined auto-expansion limit.
  • Available space. Sufficient available free space and chunks remain in the storage pool.
  • Chunk limit. The file system expansion will not cause the file system to exceed the maximum allowable number of chunks in a file system.
  • Maximum supported file system size. The file system expansion will not cause the file system to exceed the maximum supported file system size.

Whether auto-expansion is enabled or disabled, you can limit the size of an untiered file system or of either tier of a tiered file system. If necessary, you can manually expand an untiered file system or a tier of a tiered file system.

Note: File system auto-expansion may be enabled or disabled for all file systems in a particular storage pool. When auto-expansion is enabled for a storage pool, file systems in that pool auto-expand by default; if auto-expansion has been disabled on an individual file system, you can re-enable it. When file system auto-expansion is disabled for a storage pool, you cannot enable it for an individual file system (you must expand the file system manually).

Enabling and disabling file system auto-expansion

The ability of file systems in a storage pool to automatically expand is enabled or disabled at the storage pool level; you cannot change the pool-level setting for a single file system without changing it for all file systems in the storage pool.

  • When file system auto-expansion is enabled for a storage pool, file systems in the storage pool may be allowed to auto-expand or they may be confined to a specified size limit. File system auto-expansion is enabled or disabled for each file system independently.
  • When file system auto-expansion is disabled for a storage pool, file systems in the storage pool are not allowed to auto-expand. You cannot change the setting for an individual file system in the storage pool to allow auto-expansion.

When file system auto-expansion is disabled (at the storage pool level, or for an individual file system) and a file system requires expansion, you must expand the file system manually.

Expanding a file system

Manual file system expansion is supported through NAS Manager and through the CLI.

Procedure

  1. Navigate to Home > Storage Management > File Systems.

  2. Select a file system and click details to display the File System Details page.

  3. Click expand to display the Expand File System page.

    The Expand File System page opens. For an untiered file system, you specify only the new capacity; the Allocate On Demand option is not available for UVM-backed spans. For a tiered file system, you also select the tier to expand.

    In some circumstances, such as when the storage pool resides on a UVM span or in HDP compressed storage, a specific stripeset must be selected for expanding the file system. If the server cannot select the stripeset, the Expand File System page shows a list of stripesets from which to select.

  4. To expand the file system manually, do one of the following:

    • For an untiered file system, specify the new file system capacity in the New Capacity field and use the list to select MiB, GiB, or TiB.
    • For a tiered file system, select the tier you want to expand, and specify the new file system capacity in the New Capacity field, then use the list to select MiB, GiB, or TiB.
      Note: You can expand one tier per expansion operation. To expand both tiers, you must perform a manual expansion twice.
  5. Click OK.

    Note: Because space is always allocated in multiples of the chunk size set when the storage pool containing the file system was created, the final size of the file system may be slightly larger than you request.

    Manual expansion of file systems is also supported through the command line interface. For detailed information on this process, run man filesystem-expand on the CLI.
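    As a purely illustrative, hypothetical invocation (the argument order and size syntax shown are placeholders; run man filesystem-expand for the exact form), expanding a file system named "fs1" might look like:

    filesystem-expand fs1 10TiB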

Moving a file system

Moving a file system (or several file systems) may be necessary to improve performance or balance loads, to move data to different storage resources, to support changes in network topology, or for other reasons.

There are two basic methods of moving a file system:

  • File System Relocation

    File system relocation changes the EVS (virtual server) that hosts the file system, but it does not move file system data. Moving the file system from one EVS to another changes the IP address used to access the file system, and also changes CIFS shares and NFS Exports for that file system. For information on how to relocate a file system using File System Relocation, refer to the Replication and Disaster Recovery Administration Guide.

    If the file system to be relocated is linked to from within a CNS, and clients access the CNS using a CIFS share or an NFS export, the relocation can be performed with no change to the configuration of network clients. In this case, clients will be able to access the file system through the same IP address and CIFS share/NFS export name after the relocation as they did before the relocation was initiated. For more information on CNS, refer to the Server and Cluster Administration Guide.

    Caution: Whether or not the file system resides in a CNS, relocating a file system will disrupt CIFS communication with the server. If Windows clients require access to the file system, the file system relocation should be scheduled for a time when CIFS access can be interrupted.
  • Transfer of primary access

    A transfer of primary access is a replication-based method of copying data from a portion of a file system (or an entire file system) and relocating the access points for that data (copying the data and metadata). A transfer of primary access causes very little down time, and the file system is live and servicing file read requests during most of the relocation process. For a short period during the relocation process, access is limited to read-only. For more information on relocating a file system using transfer of primary access, refer to the Replication and Disaster Recovery Administration Guide.

The method you use to relocate a file system depends, in part, on what you want to move, and what you want to accomplish by relocating the file system.

  • If you want to move the file system's access points, but not the actual data, using file system relocation is the most appropriate method.
  • If you want to move the file system's data and access points, using a transfer of primary access is the most appropriate method.

File system relocation

Before it can be shared or exported, a file system must be associated with a Virtual Server (EVS), thereby making it available to network clients. The association between a file system and an EVS is established when the file system is created. Over time, evolving patterns of use and/or requirements for storage resources may make it desirable to relocate a file system to a different EVS.

Note: Read caches cannot be relocated.

A file system hosted by an EVS on a cluster node may be relocated to:

  • An EVS on the same cluster node, or
  • An EVS on a different node in the same cluster,

but may not be relocated to:

  • An EVS on a stand-alone server, or
  • An EVS on a node of a different cluster.

A file system hosted by an EVS on a stand-alone server may be relocated to:

  • An EVS on the same server,

but may not be relocated to:

  • An EVS on a different server, or
  • An EVS on a node in a cluster.

Typically, File System Relocation is used to move a file system from an EVS on a cluster node to an EVS on a different cluster node in order to improve throughput by balancing the load between cluster nodes.

File system relocation performs the following operations:

  • Re-associates the file system with the selected EVS.
  • Transfers explicit CIFS shares of the file system to the new EVS.
  • Transfers explicit NFS exports of the file system to the new EVS.
  • Migrates FTP users to the new EVS.
  • Migrates snapshot rules associated with the file system to the new EVS.
  • Migrates the iSCSI LUs and targets.

File system relocation may require relocating more than just the specified file system. If the file system is a member of a data migration path, both the data migration source file system and the target file system will be relocated. It is possible for the target of a data migration path to be the target for more than one source file system. If a data migration target is relocated, all associated source file systems will be relocated as well.

If more than one file system must be relocated, a confirmation dialog will appear indicating the additional file systems that must be moved. Explicit confirmation must be acknowledged before the relocation will be performed.

File System Relocation will affect the way in which network clients access the file system in any of the following situations:

  • The file system is linked to from the CNS tree, but is shared or exported outside of the context of the CNS.
  • The cluster does not use a CNS.

In each of the above cases, access to the shares and exports will be changed. In order to access the shares and exports after the relocation, use an IP address of the new EVS to access the file service.

Relocating file systems that contain iSCSI Logical Units (LUs) will interrupt service to attached initiators, and manual reconfiguration of the IP addresses through which targets are accessed will be required once the relocation is complete. If relocating a file system with LUs is required, the following steps must be performed:

  • Disconnect any iSCSI Initiators with connections to LUs on the file system to be relocated.
  • Unmount the iSCSI LU.
  • Relocate the file system as normal. This procedure is described in detail in the Replication and Disaster Recovery Administration Guide.
  • Reconnect the new Targets with the iSCSI Initiators. Be aware that the Targets will be referenced by a new name corresponding to the new EVSs.
Note: All iSCSI LUs on a target must be associated with file systems hosted by the same EVS.

Using system lock on file systems

System Lock mode protects file systems during replication and transfer of primary access operations. Four important distinctions apply:

  • NDMP (Network Data Management Protocol) versus file service protocols. When System Lock is enabled for a file system:
    • NDMP has full access (including writes) during backups, replication, and transfer of primary access.
    • The file system remains in read-only mode to clients using the file service protocols (NFS, CIFS, FTP, and iSCSI).
  • System Lock versus read only:
    • When a file system is Syslocked, NDMP still has full access to that file system and can write to it.
    • When a file system is mounted as read-only, NDMP (like all other protocols) has read-only access to that file system, and cannot write to it. To ensure that a file system remains completely unchanged, you should mount it as read-only.
  • Replication versus transfer of primary access:
    • During replication operations, the destination file system is put into System Lock mode.
    • During transfer of primary access operations, both the source file system and the destination file system are put into System Lock mode.
  • Read Cache Exception. A read cache may not be put into System Lock mode.

Enabling and disabling system lock for a file system

  1. Navigate to Home > Storage Management > File Systems.

  2. Select a file system and click details to display the File System Details page.

  3. In the Syslock field, toggle the enable/disable button as appropriate.

    When the file system is in System Lock mode, the Status changes to Syslocked, the System Lock status becomes enabled, and the Enable button becomes Disable.

    When System Lock is enabled for a file system, NDMP has full access to the file system and can write to it during a backup or replication, but the file system remains in read-only mode to clients using the file service protocols (NFS, CIFS, FTP, and iSCSI).

    When viewing the details of a read cache, the System Lock’s enable/disable button is not available.

Recovering a file system

Following some system failures, a file system may require recovery before mounting. If required, such a recovery is performed automatically when you mount the file system. Performing recovery rolls the file system back to its last checkpoint and replays any data in NVRAM.

In extreme cases, when you mount a file system after a system failure, the automatic recovery procedures may not be sufficient to restore the file system to a mountable condition. In such a case, you must forcefully mount the file system, which discards the contents of NVRAM before mounting the file system.

Procedure

  1. Navigate to Home > Storage Management > File Systems.

  2. Select a file system and click details to display the File System Details page.

  3. If a file system displays Not Mounted in the Status column, click mount to try to mount the file system.

    • If necessary, the automatic recovery processes are invoked automatically, and the file system is mounted successfully.
    • If the automatic recovery fails, the file system will not mount, and the File Systems page will reappear, indicating that the file system was not mounted. Navigate to the File System Details page.
  4. For the file system that failed to mount, click details to display the File System Details page. In the Settings/Status area of the page, the file system label will be displayed, along with the reason the file system failed to mount (if known), and suggested methods to recover the file system, including the link for the Forcefully mount option.

  5. Depending on the configuration of your system, and the reason the file system failed to mount, you may have several recovery options:

    • If the server is part of a cluster, you may be able to migrate the assigned EVS to another cluster node, then try to mount the file system. This can become necessary when another node in the cluster has the current available data in NVRAM that is necessary to replay write transactions to the file system following the last checkpoint. An EVS should be migrated to the cluster node that mirrors the failed node's NVRAM (for more information on NVRAM mirroring, refer to the System Access Guide; for more details on migrating EVSs, refer to the Server and Cluster Administration Guide).
    • If the first recovery attempt fails, click the Forcefully mount link. This will execute a file system recovery without replaying the contents of NVRAM.
    Caution: Using the Forcefully mount option discards the contents of NVRAM, including data that may have already been acknowledged to the client. Discarding the NVRAM contents means that all write operations in NVRAM (those write operations not yet committed to disk) are lost, and the client will then have to resubmit the write requests. Use the Forcefully mount option only upon the recommendation of customer support.

Restoring a file system from a checkpoint

Following a storage subsystem failure, it may be necessary to recover file systems.

File system corruption due to an event (such as RAID controller crash, storage system component failure, or power loss) often affects objects that were being modified around the time of the event.

The file system is configured to keep up to 128 checkpoints. The maximum number of checkpoints supported is 1024. The number of checkpoints preserved is configurable when the file system is formatted, but, once set, the number of checkpoints cannot be changed.

When a checkpoint completes, rather than immediately freeing the storage used for the previous checkpoint, the file system maintains a number of old checkpoints. As each new checkpoint completes, the oldest checkpoint is overwritten. This means that there can be multiple checkpoints on disk, each of which is a complete and internally consistent point-in-time view of the file system. If necessary, the file system can be restored to any of these checkpoints.

In the case of file system corruption, if there are enough checkpoints on disk, it may be possible to roll back to a checkpoint that pre-dates the event that caused the corruption, restoring the file system from an uncorrupted checkpoint. This may be possible even if the event occurred up to a few minutes before the file system was taken offline.

To restore a file system to a previous checkpoint, use the fs-checkpoint-health and the fs-checkpoint-select commands. Refer to the Command Line Reference for more information about these commands.
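A hedged outline of the workflow, with a hypothetical file system name (see the Command Line Reference for the exact syntax): first verify which on-disk checkpoints are intact, then use fs-checkpoint-select to choose the checkpoint to roll back to.

fs-checkpoint-health fs1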

Note the following:

  • Restoring a file system using a checkpoint does not affect snapshots taken prior to the checkpoint being restored, but, like any other file system update, snapshots taken after that checkpoint are lost.
  • After restoring to a checkpoint, it is possible to restore again to an older checkpoint and, if the file system has not been modified, to restore again to a more recent checkpoint. For example, it is possible to mount the file system in read-only mode, check its status, and then decide whether to re-mount the file system in normal (read/write) mode or to restore to a different checkpoint.
Caution: Once you mount a restored file system in normal (read/write) mode, you cannot restore to a later checkpoint.

File system recovery from a snapshot

It is possible that, although corruption has occurred in the live file system, a good snapshot still exists. If so, it may be preferable to recover the file system from this snapshot, with some loss of data, rather than incur the downtime that might be required to fix the live file system. Recovering a file system from a snapshot rolls the file system back to the state that it was in when that snapshot was taken.

File system recovery from a snapshot is a licensed feature, which requires a valid FSRS license on the server/cluster.

Note: You can recover a file system from a snapshot only when at least the configured number of preserved file system checkpoints have been taken since that snapshot was taken. For example, if a file system is configured to preserve 128 checkpoints (the default), then you can recover the file system from a snapshot only after a minimum of 128 checkpoints have been taken after the snapshot. If fewer than the configured number of checkpoints have been taken since the snapshot, you can either recover from an earlier snapshot or recover the file system from a checkpoint.

The following file system rollback considerations apply:

  • File system rollback can be performed even if the live file system is corrupted.
  • All snapshots are lost after the rollback.
  • Even though the file system recovery happens very quickly, no new snapshots can be taken until all previous snapshots have been discarded. The time required before a new snapshot can be taken depends on the size of the file system, not on the number of files in the file system.
Note: Once you have recovered a file system from a snapshot and mounted it in read-write mode, you cannot undo the recovery or recover again to a different snapshot or checkpoint.

To roll back a file system from a snapshot, use the snapshot-recover-fs command. Refer to the Command Line Reference for more information about this command.
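As a hypothetical illustration only (the snapshot name and argument order are placeholders; see the man page for the exact syntax), rolling the file system "fs1" back to a snapshot named "snap1" might look like:

snapshot-recover-fs snap1 fs1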

An additional tool, the kill-snapshots command, is available to kill all current snapshots (refer to the Command Line Reference for more information about this command). However, snapshot-delete-all is the preferred tool for deleting all snapshots, because it does not require the file system to be unmounted, it does not leak space, and it does not affect checkpoint selection.

Automatic file system recovery

The fixfs utility is the main file system recovery tool, but it should only be used under the supervision of customer support personnel.

fixfs is capable of repairing a certain amount of non-critical metadata, for example performing orphan recovery. At all stages that have the potential to last longer than a few minutes, fixfs provides progress reporting, and the option to abort the fix. Note that progress reports are stage or operation based, for example Stage 3 of 7 complete. For some operations, fixfs will also provide an estimate of time until the completion of the operation.

The strategy used by fixfs to repair file systems can be summarized as:

  • fixfs or fs-checkpoint-health are the recovery tools to be used if a file system is experiencing corruption. The default fixfs behavior may be modified by various command line switches, but often the required switch is suggested by fixfs during or at the end of a previous run.
  • Where possible, fixfs will run with the file system in any state (there will be no need to perform file system recovery first, so that there's no need to worry about what happens if recovery cannot complete due to corruption). Where not possible (for example, if the file system is marked as "failed" or "requires expansion"), fixfs will not run. When fixfs does not run, it will give a clear indication of what needs to be done to get to the point where it can run.
  • By default, fixfs will only modify those objects which are lost or corrupted.
  • By default, fixfs will only bring the file system to the point where it can be mounted.
  • Snapshots are considered expendable and are deleted.

 
