Managing object stores, filesystem groups, and filesystems

The management of object stores, filesystem groups and filesystems is an integral part of the operation and performance of the Content Software for File system and overall data lifecycle management.

Managing object stores using the CLI

Using the CLI, you can perform the following actions:

Viewing object stores using the CLI

Command: weka fs tier obs

This command is used to view information on all the object stores configured to the Content Software for File system.

Note: In the GUI, only object-store buckets are shown. Adding an object-store bucket adds it to the only local or remote object store present. If more than one is present (such as while recovering from a remote snapshot), use the CLI.

Editing an object store using the CLI

Command: weka fs tier obs update

Use the following command line to edit an object store:

weka fs tier obs update <name> [--new-name new-name] [--site site] [--hostname=<hostname>] [--port=<port>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--max-concurrent-downloads=<max-concurrent-downloads>] [--max-concurrent-uploads=<max-concurrent-uploads>] [--max-concurrent-removals=<max-concurrent-removals>] [--enable-upload-tags=<enable-upload-tags>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the object store being edited | Must be a valid name | Yes |
new-name | String | New name for the object store | Must be a valid name | No |
site | String | local - for tiering+snapshots, remote - for snapshots only | local or remote | No |
hostname | String | Object store host identifier | Must be a valid name/IP | Yes |
port | String | Object store port | Must be a valid port | Yes |
auth-method | String | Authentication method | None | Yes |
region | String | Region name | | Yes |
access-key-id | String | Object store access key ID | | Yes |
secret-key | String | Object store secret key | | Yes |
protocol | String | Protocol type, to be used as a default for added buckets | HTTP, HTTPS or HTTPS_UNVERIFIED | No |
bandwidth | Number | Bandwidth limitation per core (Mbps) | | No |
download-bandwidth | Number | Object-store download bandwidth limitation per core (Mbps) | | No |
upload-bandwidth | Number | Object-store upload bandwidth limitation per core (Mbps) | | No |
max-concurrent-downloads | Number | Maximum number of downloads concurrently performed on this object store in a single IO node | 1-64 | No |
max-concurrent-uploads | Number | Maximum number of uploads concurrently performed on this object store in a single IO node | 1-64 | No |
max-concurrent-removals | Number | Maximum number of removals concurrently performed on this object store in a single IO node | 1-64 | No |
enable-upload-tags | String | Whether to enable object-tagging or not, to be used as a default for added buckets | true or false | No |
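For example, a hypothetical invocation that renames an object store and adjusts its per-core bandwidth limits (the object store name local-obs, the new name, and the values shown are placeholders) might look like:

weka fs tier obs update local-obs --new-name primary-obs --download-bandwidth 800 --upload-bandwidth 400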

Viewing object store buckets

Command:
weka fs tier s3

This command is used to view information on all the object-store buckets configured to the Content Software for File system.

Adding an object store bucket using the CLI

Command: weka fs tier s3 add

Use the following command line to add an object store bucket:

weka fs tier s3 add <name> [--site site] [--obs-name obs-name] [--hostname=<hostname>] [--port=<port>] [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the object-store bucket being created | Must be a valid name | Yes |
site | String | local - for tiering+snapshots, remote - for snapshots only | Must be the same as the site of the object store it is added to (obs-name) | No | local
obs-name | String | Name of the object store to add this object-store bucket to | Must be an existing object store | No | If there is only one object store of the type mentioned in site, it is chosen automatically
hostname | String | Object store host identifier | Must be a valid name/IP | Yes, if not specified at the object-store level | The hostname specified in obs-name, if present
port | String | Object store port | Must be a valid port | No | The port specified in obs-name if present, otherwise 80
bucket | String | Object store bucket name | Must be a valid name | Yes |
auth-method | String | Authentication method | None, AWSSignature2 or AWSSignature4 | Yes, if not specified at the object-store level | The auth-method specified in obs-name, if present
region | String | Region name | | Yes, if not specified at the object-store level | The region specified in obs-name, if present
access-key-id | String | Object store bucket access key ID | | Yes, if not specified at the object-store level (can be left empty when using an IAM role in AWS) | The access-key-id specified in obs-name, if present
secret-key | String | Object store bucket secret key | | Yes, if not specified at the object-store level (can be left empty when using an IAM role in AWS) | The secret-key specified in obs-name, if present
protocol | String | Protocol type to be used | HTTP, HTTPS or HTTPS_UNVERIFIED | No | The protocol specified in obs-name if present, otherwise HTTP
bandwidth | Number | Bucket bandwidth limitation per core (Mbps) | | No |
download-bandwidth | Number | Bucket download bandwidth limitation per core (Mbps) | | No |
upload-bandwidth | Number | Bucket upload bandwidth limitation per core (Mbps) | | No |
errors-timeout | Number | If the object-store link is down for longer than this timeout period, all IOs that need data return with an error | 1-15 minutes, e.g. 5m or 300s | No | 300
prefetch-mib | Number | How many MiB of data to prefetch when reading a whole MiB on the object store | | No | 0
enable-upload-tags | String | Whether to enable object-tagging or not | true or false | No | false
Note: When using the CLI, by default, a misconfigured object store is not created. To create an object store even when it is misconfigured, use the --skip-verification option.
Note: The max-concurrent settings are applied per Content Software for File compute process, and the minimum setting across all object stores is applied.
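For illustration, adding a bucket to an existing object store might look like the following, where local-obs, s3.example.com, my-tier-bucket, and the credentials are placeholder values:

weka fs tier s3 add my-bucket --obs-name local-obs --hostname s3.example.com --port 443 --protocol HTTPS --auth-method AWSSignature4 --region us-east-1 --access-key-id MY_ACCESS_KEY --secret-key MY_SECRET_KEY --bucket my-tier-bucket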


Editing an object store bucket using the CLI

Command: weka fs tier s3 update

Use the following command line to edit an object store bucket:

weka fs tier s3 update <name> [--new-name=<new-name>] [--new-obs-name new-obs-name] [--hostname=<hostname>] [--port=<port>] [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the object-store bucket being edited | Must be a valid name | Yes |
new-name | String | New name for the object-store bucket | Must be a valid name | No |
new-obs-name | String | Name of the object store to move this object-store bucket to | Must be an existing object store with the same site value | No |
hostname | String | Object store host identifier | Must be a valid name/IP | No |
port | String | Object store port | Must be a valid port | No |
bucket | String | Object store bucket name | Must be a valid name | No |
auth-method | String | Authentication method | None, AWSSignature2 or AWSSignature4 | No |
region | String | Region name | | No |
access-key-id | String | Object-store bucket access key ID | | No |
secret-key | String | Object-store bucket secret key | | No |
protocol | String | Protocol type to be used | HTTP, HTTPS or HTTPS_UNVERIFIED | No |
bandwidth | Number | Bandwidth limitation per core (Mbps) | | No |
download-bandwidth | Number | Bucket download bandwidth limitation per core (Mbps) | | No |
upload-bandwidth | Number | Bucket upload bandwidth limitation per core (Mbps) | | No |
errors-timeout | Number | If the object-store link is down for longer than this timeout period, all IOs that need data return with an error | 1-15 minutes, e.g. 5m or 300s | No |
prefetch-mib | Number | How many MiB of data to prefetch when reading a whole MiB on the object store | | No |
enable-upload-tags | String | Whether to enable object-tagging or not | true or false | No |
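As an illustration only, lowering the download bandwidth limit and the link error timeout for an existing bucket (my-bucket is a placeholder name) could be done with:

weka fs tier s3 update my-bucket --download-bandwidth 500 --errors-timeout 5m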

Deleting an object store bucket using the CLI

Command: weka fs tier s3 delete

Use the following command line to delete an object store bucket:

weka fs tier s3 delete <name>
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the object-store bucket being deleted | Must be a valid name | Yes |

Managing filesystem groups

Using the CLI, you can perform the following actions:

Viewing filesystem groups using the CLI

Command: weka fs group

Use this command to view information about the filesystem groups in the system.

Adding a filesystem group using the CLI

Command: weka fs group create

Use the following command to add a filesystem group:

weka fs group create <name> [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem group being created | Must be a valid name | Yes |
target-ssd-retention | Number | Target retention period (in seconds) before tiering to the object store | Must be a valid number | No | 86400 (24 hours)
start-demote | Number | Target tiering cue (in seconds) before tiering to the object store | Must be a valid number | No | 10
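For example, a sketch of creating a group with a two-day retention period and a 30-second tiering cue (the group name and values are illustrative) might be:

weka fs group create archive-workloads --target-ssd-retention 172800 --start-demote 30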

Editing a filesystem group using the CLI

Command: weka fs group update

Use the following command to edit a filesystem group:

weka fs group update <name> [--new-name=<new-name>] [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem group being edited | Must be a valid name | Yes |
new-name | String | New name for the filesystem group | Must be a valid name | Yes |
target-ssd-retention | Number | New target retention period (in seconds) before tiering to the object store | Must be a valid number | No |
start-demote | Number | New target tiering cue (in seconds) before tiering to the object store | Must be a valid number | No |

Deleting a filesystem group using the CLI

Command: weka fs group delete

Use the following command line to delete a filesystem group:

weka fs group delete <name>
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem group to delete | Must be a valid name | Yes |

Managing filesystems

Using the CLI, you can perform the following actions:

Viewing filesystems using the CLI

Command: weka fs

Use this command to view information on the filesystems in the Content Software for File system.


Adding a filesystem using the CLI

Command: weka fs create

Use the following command line to add a filesystem:

weka fs create <name> <group-name> <total-capacity> [--ssd-capacity <ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files <max-files>] [--encrypted] [--obs-name <obs-name>] [--auth-required <auth-required>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem being created | Must be a valid name | Yes |
group-name | String | Name of the filesystem group to which the new filesystem is to be connected | Must be a valid name | Yes |
total-capacity | Number | Total capacity of the new filesystem | Minimum of 1GiB | Yes |
ssd-capacity | Number | For tiered filesystems, this is the SSD capacity. If not specified, the filesystem is pinned to SSD | Minimum of 1GiB | No | SSD capacity is set to the total capacity
thin-provision-min-ssd | Number | For thin-provisioned filesystems, this is the minimum SSD capacity that is guaranteed to always be available to this filesystem | Minimum of 1GiB | No. Must be set when defining a thin-provisioned filesystem |
thin-provision-max-ssd | Number | For thin-provisioned filesystems, this is the maximum SSD capacity the filesystem can consume | Cannot exceed the total-capacity | No |
max-files | Number | Metadata allocation for this filesystem | Must be a valid number | No | Automatically calculated by the system based on the SSD capacity
encrypted | Boolean | Encryption of the filesystem | | No | No
obs-name | String | Object store name for tiering | Must be a valid name | Mandatory for tiered filesystems |
auth-required | String | Determines whether mounting the filesystem requires being authenticated to Content Software for File. For a filesystem hosting NFS exports or SMB shares, enabling authentication is not allowed | yes or no | No | no
Note: When creating an encrypted filesystem, a KMS must be defined.
Note:
  • To define an encrypted filesystem without a KMS, it is possible to use the --allow-no-kms parameter in the command. This can be useful when running POCs but should not be used in production, since the security chain is compromised when a KMS is not used.
  • If filesystem keys exist when adding a KMS, they are automatically re-encrypted by the KMS for any future use.
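For illustration, a tiered filesystem creation might look like the following; the filesystem name, group name, object-store name, and capacities are placeholders, and capacity values such as 10TiB are assumed to be accepted by the CLI:

weka fs create my_fs default 10TiB --ssd-capacity 1TiB --obs-name local-obs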

Add a filesystem when thin-provisioning is used

To create a new filesystem, the SSD space for the filesystem must be free and unprovisioned. When using thin-provisioned filesystems, that might not be the case. SSD space can be occupied by the thin-provisioned portion of other filesystems. Even if those are tiered, and data can be released (to the object store) or deleted, the SSD space can still fill up when data keeps being written or rehydrated from the object store.

To create a new filesystem in this case, use the weka fs reserve CLI command. Once enough space is cleared from the SSD (either by releasing data to the object store or by explicitly deleting data), it is possible to create the new filesystem using the reserved space.

Editing a filesystem using the CLI

Command: weka fs update

Use the following command line to edit an existing filesystem:

weka fs update <name> [--new-name=<new-name>] [--total-capacity=<total-capacity>] [--ssd-capacity=<ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files=<max-files>] [--auth-required=<auth-required>]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem being edited | Must be a valid name | Yes |
new-name | String | New name for the filesystem | Must be a valid name | Optional | Keep unchanged
total-capacity | Number | Total capacity of the edited filesystem | Must be a valid number | Optional | Keep unchanged
ssd-capacity | Number | SSD capacity of the edited filesystem | Minimum of 1GiB | Optional | Keep unchanged
thin-provision-min-ssd | Number | For thin-provisioned filesystems, this is the minimum SSD capacity that is guaranteed to always be available to this filesystem | Minimum of 1GiB | Optional |
thin-provision-max-ssd | Number | For thin-provisioned filesystems, this is the maximum SSD capacity the filesystem can consume | Cannot exceed the total-capacity | Optional |
max-files | Number | Metadata limit for the filesystem | Must be a valid number | Optional | Keep unchanged
auth-required | String | Determines whether mounting the filesystem requires being authenticated to Content Software for File. For a filesystem hosting NFS exports or SMB shares, enabling authentication is not allowed | yes or no | No | no
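For example, expanding the total capacity and metadata limit of an existing filesystem (the values are illustrative, and the capacity format is assumed) might be done with:

weka fs update my_fs --total-capacity 20TiB --max-files 20000000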

Deleting a filesystem using the CLI

Command: weka fs delete

Use the following command line to delete a filesystem:

weka fs delete <name> [--purge-from-obs]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
name | String | Name of the filesystem to be deleted | Must be a valid name | Yes |
purge-from-obs | Boolean | For a tiered filesystem, if set, all filesystem data is deleted from the object store bucket | | No | False
Note: Using purge-from-obs removes all data from the object store. This includes any backup data or snapshots created from this filesystem (if this filesystem has been downloaded from a snapshot of a different filesystem, the original snapshot data is left intact).
  • If any of the removed snapshots have been (or are) downloaded and used by a different filesystem, that filesystem will stop functioning correctly; data might be unavailable and errors might occur when accessing the data.

It is possible to either un-tier or migrate such a filesystem to a different object store bucket before deleting the snapshots it has downloaded.
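As an example, deleting a tiered filesystem together with its tiered data (subject to the warnings above) might look like:

weka fs delete my_fs --purge-from-obs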

Attach or detach object store buckets using the CLI

Using the CLI, you can:

Attaching an object store bucket to a filesystem using the CLI

Command: weka fs tier s3 attach

To attach an object store to a filesystem, use the following command:

weka fs tier s3 attach <fs-name> <obs-name> [--mode mode]
Parameters
Name | Type | Value | Limitations | Mandatory | Default
fs-name | String | Name of the filesystem to be attached to the object store | Must be a valid name | Yes |
obs-name | String | Name of the object store to be attached | Must be a valid name | Yes |
mode | String | local or remote | A local bucket can only be attached as local, and a remote bucket can only be attached as remote | No |
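For example, attaching a remote object store bucket to an existing filesystem for snapshot purposes (the names are placeholders) might look like:

weka fs tier s3 attach my_fs remote-bucket --mode remote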

Detaching an object store bucket from a filesystem using the CLI

Command: weka fs tier s3 detach

To detach an object store from a filesystem, use the following command:

weka fs tier s3 detach <fs-name> <obs-name>
Parameters:
Name | Type | Value | Limitations | Mandatory | Default
fs-name | String | Name of the filesystem to be detached from the object store | Must be a valid name | Yes |
obs-name | String | Name of the object store to be detached | Must be a valid name | Yes |
Note: To recover from a snapshot that has been uploaded while two local object stores are attached, use the --additional-obs parameter in the weka fs download command. The primary object store should be the one to which the locator has been uploaded.

Mounting filesystems

To use a filesystem through the Content Software for File filesystem driver, it must be mounted on one of the cluster hosts. This section describes how this is performed.

Overview

There are two methods available for mounting a filesystem in one of the cluster hosts:

  1. Using the traditional method: See below and also refer to Adding Clients (Bare Metal Installation) or Adding Clients (AWS Installation), where first a client is configured and joins a cluster, after which a mount command is executed.
  2. Using the stateless clients feature: See Mounting Filesystems Using the Stateless Clients Feature, which simplifies and improves the management of clients in the cluster and eliminates the adding clients process.

Mounting a filesystem using the traditional method

Note Using the mount command as explained below first requires the installation of the Content Software for File client, configuring the client, and joining it to a Content Software for File cluster.

To mount a filesystem on one of the cluster hosts, let’s assume the cluster has a filesystem called demo. To add this filesystem to a host, SSH into one of the hosts and run the mount command as the root user, as follows:

mkdir -p /mnt/weka/demo
mount -t wekafs demo /mnt/weka/demo

The general structure of a mount command for a Content Software for File filesystem is:

mount -t wekafs [-o option[,option]...]] <fs-name> <mount-point>

There are two options for mounting a filesystem on a cluster client: read cache and write cache. For more information on the differences between these modes, see read cache and write cache mount modes in the Hitachi Content Software for File User Guide.
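For example, assuming the demo filesystem from above, mounting it in read cache mode on the same mount point might look like:

mount -t wekafs -o readcache demo /mnt/weka/demo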

Mounting a filesystem using the stateless clients feature

The Stateless Clients feature defers the process of joining the cluster until the mount is performed, simplifying and improving the management of clients in the cluster. It removes tedious client management procedures, which is particularly beneficial in AWS installations where clients may join and leave at high frequency.

Furthermore, it unifies all security aspects in the mount command, eliminating the search for separate credentials at cluster join and mount.

To use the Stateless Clients feature, a Content Software for File agent must be installed. Once this is complete, mounts can be created and configured using the mount command and can be easily removed from the cluster using the unmount command.

Note: To allow only Content Software for File authenticated users to mount a filesystem, set the filesystem auth-required flag to yes. For more information about mount authentication for organization filesystems, see the Hitachi Content Software for File User Guide.

Assuming the Content Software for File cluster is using the backend IP of 1.2.3.4, running the following command as root on a client will install the agent:

curl http://1.2.3.4:14000/dist/v1/install | sh

On completion, the agent is installed on the client machine.

Run the mount command

Command:

mount -t wekafs

Use one of the following command lines to invoke the mount command (note, the delimiter between the server and filesystem can be either :/ or / ):

mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>
mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]:/<fs> <mount-point>
Parameters
Name | Type | Value | Limitations | Mandatory | Default
options | | See the additional mount options below | | |
backend | String | IP/hostname of a backend host | Must be a valid name | Yes |
fs | String | Filesystem name | Must be a valid name | Yes |
mount-point | String | Path to mount on the local machine | Must be a valid path name | Yes |

Mount command options

Each mount option can be passed by an individual -o flag to mount.

For all client types

Option | Value | Description | Default
readcache | None | Set mode to read cache | No
writecache | None | Set mode to write cache | Yes
dentry_max_age_positive | Number in milliseconds | After the defined time period, every cached metadata entry is refreshed from the system, allowing the host to take into account metadata changes performed by other hosts | 1000
dentry_max_age_negative | Number in milliseconds | Each time a file or directory lookup fails, an entry specifying that the file or directory does not exist is created in the local dentry cache. This entry is refreshed after the defined time, allowing the host to use files or directories created by other hosts | 0
ro | None | Mount filesystem as read-only | No
rw | None | Mount filesystem as read-write | Yes
inode_bits | 32, 64 or auto | Size of the inode in bits, which may be required for 32-bit applications | Auto
verbose | None | Write debug logs to the console | No
quiet | None | Do not show any logs on the console | No
acl | None | Can be defined per mount. Setting POSIX ACLs can change the effective group permissions (via the mask permissions). When ACLs are defined but the mount has no acl option, the effective group permissions are granted | No
obs_direct | None | See the Object-store Direct Mount section | No
noatime | None | Do not update inode access times | No
strictatime | None | Always update inode access times | No
relatime | None | Update inode access times only on modification or change, or if the inode has been accessed and relatime_threshold has passed | Yes
relatime_threshold | Number in seconds | How much time (in seconds) to wait since an inode has been accessed (not modified) before updating the access time. 0 means never update the access time on access only. This option is relevant only if relatime is on | 0 (infinite)
nosuid | None | Do not take suid/sgid bits into effect | No
nodev | None | Do not interpret character or block special devices | No
noexec | None | Do not allow direct execution of any binaries | No
file_create_mask | Numeric (octal) notation of POSIX permissions | Newly created file permissions are masked with the creation mask. For example, if a user creates a file with permissions=777 but the file_create_mask is 770, the file is created with 770 permissions. First the umask is taken into account, followed by the file_create_mask and then the force_file_mode | 0777
directory_create_mask | Numeric (octal) notation of POSIX permissions | Newly created directory permissions are masked with the creation mask. For example, if a user creates a directory with permissions=777 but the directory_create_mask is 770, the directory is created with 770 permissions. First the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode | 0777
force_file_mode | Numeric (octal) notation of POSIX permissions | Newly created file permissions are logically OR'ed with the mode. For example, if a user creates a file with permissions 770 but the force_file_mode is 775, the resulting file is created with mode 775. First the umask is taken into account, followed by the file_create_mask and then the force_file_mode | 0
force_directory_mode | Numeric (octal) notation of POSIX permissions | Newly created directory permissions are logically OR'ed with the mode. For example, if a user creates a directory with permissions 770 but the force_directory_mode is 775, the resulting directory is created with mode 775. First the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode | 0
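For example, a sketch that combines several of the options above to mount the demo filesystem read-only, without access-time updates, and with a more restrictive file creation mask (the values are illustrative) might be:

mount -t wekafs -o ro,noatime,file_create_mask=0750 demo /mnt/weka/demo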

Remount of general options

You can remount using the mount options marked as Remount Supported in the above table (mount -o remount).

When a mount option has been explicitly changed, you must set it again in the remount operation to ensure it retains its value. For example, if you mount with ro, a remount without it changes the mount option to the default rw. If you mount with rw, it is not required to re-specify the mount option because this is the default.
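For example, assuming the filesystem is already mounted at /mnt/weka/demo, switching it to read-only with a remount might look like:

mount -o remount,ro /mnt/weka/demo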

Additional mount options using the stateless clients feature

Option | Value | Description | Default | Remount Supported
memory_mb=<memory_mb> | Number | Amount of memory to be used by the client (for huge pages) | 1400 MiB | Yes
num_cores=<frontendcores> | Number | The number of frontend cores to allocate for the client. Either num_cores or core can be specified, but not both. If neither is specified, the client is configured with 1 core. If 0 is specified, you must use net=udp | 1 | No
core=<core> | Number | Specify explicit cores to be used by the Content Software for File client. Multiple cores can be specified. Core 0 is not allowed | | No
net=<netdev>[/<ip>/<bits>[/<gateway>]] | String | For more information, refer to the Advanced network configuration using mount options section | | No
bandwidth_mbps=<bandwidth_mbps> | Number | Maximum network bandwidth in Mb/s, which limits the traffic that the container can send | Auto-select | Yes
remove_after_secs=<secs> | Number | The number of seconds without connectivity after which the client is removed from the cluster. Minimum value: 60 seconds | 86,400 seconds (24 hours) | Yes
traces_capacity_mb=<size-in-mb> | Number | Traces capacity limit in MB. Minimum value: 512 MB | | No
reserve_1g_hugepages | None | Controls whether the page allocation algorithm reserves only 2 MB huge pages or also 1 GB ones | Yes | Yes
readahead_kb=<readahead> | Number in KB | Controls the readahead per mount (a higher readahead is better for sequential reads of large files) | 32768 | Yes
auth_token_path | String | Path to the mount authentication token (per mount) | ~/.weka/auth-token.json |
dedicated_mode | full or none | Determines whether DPDK networking dedicates a core (full) or not (none). none can only be set when the NIC driver supports it; see the DPDK Without Core Dedication section. This option is relevant when using DPDK networking (net=udp is not set) | full |
qos_preferred_throughput_mbps | Number | Preferred requests rate for QoS in megabytes per second | No limit. The cluster admin can set this default; see mount option defaults | Yes
qos_max_throughput_mbps | Number | Maximum requests rate for QoS in megabytes per second. This option allows bursting above the specified limit but aims to keep this limit on average | No limit. The cluster admin can set this default; see mount option defaults | Yes
qos_max_ops | Number | Maximum number of IO operations a client can perform per second. Set a limit on a client or clients to prevent starvation of the rest of the clients | No limit. Do not set this option for mounting from a backend | Yes
connect_timeout_secs | Number | The timeout, in seconds, for establishing a connection to a single host | 10 | Yes
response_timeout_secs | Number | The timeout, in seconds, for waiting for a response from a single host | 60 | Yes
join_timeout_secs | Number | The timeout, in seconds, for the client container to join the Content Software for File cluster | 360 | Yes
Note: These parameters, if not stated otherwise, are only effective on the first mount command for each client.
Note: By default, the command selects the optimal core allocation for Content Software for File. If necessary, multiple core parameters can be used to allocate specific cores to the WekaFS client. For example:
mount -t wekafs -o core=2 -o core=4 -o net=ib0 backend-host-0/my_fs /mnt/weka

On-Premise Installations

mount -t wekafs -o num_cores=1 -o net=ib0 backend-host-0/my_fs /mnt/weka

Running this command on a host installed with the Content Software for File agent downloads the appropriate version from the host backend-host-0 and creates a container that allocates a single core and the named network interface ib0. It then joins the cluster that backend-host-0 is part of and mounts the filesystem my_fs on /mnt/weka.

mount -t wekafs -o num_cores=0 -o net=udp backend-host-0/my_fs /mnt/weka

Running this command will use UDP mode (usually selected when the use of DPDK is not available).

For stateless clients, the first mount command installs the weka client software and joins the cluster. Any subsequent mount command can either use the same syntax or just the traditional/per-mount parameters as defined in Mounting Filesystems, since it is not necessary to join the cluster again.

It is now possible to access Content Software for File filesystems via the mount point, for example with the cd /mnt/weka/ command.

After the execution of an umount command that unmounts the last Weka filesystem, the client is disconnected from the cluster and is uninstalled by the agent. Consequently, executing a new mount command requires specifying the cluster, cores, and networking parameters again.

Note: Memory allocation for a client is predefined. Contact your Hitachi representative when it is necessary to change the amount of memory allocated to a client.

Remount of stateless clients options

Mount options marked as Remount Supported in the above table can be remounted (using mount -o remount). When a mount option is not set in the remount operation, it will retain its current value. To set a mount option back to its default value, use the default modifier (e.g., memory_mb=default).
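For example, assuming a stateless client already mounted at /mnt/weka, increasing the client memory (an illustrative value) and later restoring the default might look like:

mount -o remount,memory_mb=2000 /mnt/weka
mount -o remount,memory_mb=default /mnt/weka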

Set mount option default values

The defaults of the mount options qos_max_throughput_mbps and qos_preferred_throughput_mbps have no limit.

The cluster admin can set these default values to meet the organization's requirements, reset to the initial default values (no limit), or show the existing values.

The mount option defaults are only relevant for new mounts performed and do not influence the existing ones.

Commands:
weka cluster mount-defaults set
weka cluster mount-defaults reset
weka cluster mount-defaults show

To set the mount option default values, run the following command:

weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput] [--qos-preferred-throughput qos-preferred-throughput]
Parameters
Option | Value | Description
qos-max-throughput | Number | Sets the default value for the qos_max_throughput_mbps mount option, which is the maximum requests rate for QoS in megabytes per second
qos-preferred-throughput | Number | Sets the default value for the qos_preferred_throughput_mbps mount option, which is the preferred requests rate for QoS in megabytes per second
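For example, setting cluster-wide QoS defaults of 2000 MB/s maximum and 1000 MB/s preferred throughput (illustrative values), and then displaying them, might look like:

weka cluster mount-defaults set --qos-max-throughput 2000 --qos-preferred-throughput 1000
weka cluster mount-defaults show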

Advanced network configuration using mount options

When using a stateless client, it is possible to alter and control many different networking options, such as:

  • Virtual functions.
  • IPs.
  • Gateway (in case the client is on a different subnet).
  • Physical network devices (for performance and HA).
  • UDP mode.

Use the -o net=<netdev> mount option with the various modifiers, as described below.

<netdev> is either the name, MAC address, or PCI address of the physical network device (can be a bond device) to allocate for the client.

Note: When using wekafs mounts, both clients and backends should use the same type of networking technology (either IB or Ethernet).

IP, subnet, gateway, and virtual functions

For higher performance, the usage of multiple Frontends may be required. When using a NIC other than Mellanox or Intel E810, or when mounting a DPDK client on a VM, it is required to use SR-IOV to expose a VF of the physical device to the client. Once exposed, it can be configured via the mount command.

When you want to determine the VFs IP addresses, or when the client resides in a different subnet and routing is needed in the data network, use:

net=<netdev>/[ip]/[bits]/[gateway]

The ip, bits, and gateway parameters are optional. If they are not provided, the Content Software for File system tries to deduce them in IB environments, or otherwise allocates them from the default data network. If both approaches fail, the mount command fails.

For example, the following command allocates two cores and a single physical network device (intel0). It configures two VFs for the device and assigns each of them to one of the frontend nodes. The first node receives the IP address 192.168.1.100, and the second uses 192.168.1.101. Both IPs have a 24-bit network mask and a default gateway of 192.168.1.254.
mount -t wekafs -o num_cores=2 -o net=intel0/192.168.1.100+192.168.1.101/24/192.168.1.254 backend1/my_fs /mnt/weka

Multiple physical network devices for performance and HA

For performance or high availability, it is possible to use more than one physical network device.

Using multiple physical network devices for better performance

It is easy to saturate the bandwidth of a single network interface when using WekaFS. For higher throughput, it is possible to leverage multiple network interface cards (NICs). The -o net notation shown in the next example can be used to pass the names of specific NICs to the WekaFS host driver.

For example, the following command will allocate two cores and two physical network devices for increased throughput:
mount -t wekafs -o num_cores=2 -o net=mlnx0,net=mlnx1 backend1/my_fs /mnt/weka
Using multiple physical network devices for HA configuration

Multiple NICs can also be configured to achieve redundancy (refer to the Content Software for File Installation Guide, HA networking configuration section, for more information), in addition to higher throughput, for a complete, highly available solution. To do that, use more than one physical device and specify the client management IPs using the command-line option:

-o mgmt_ip=<ip>+<ip2>
For example, the following command will use two network devices for HA networking and allocate both devices to four Frontend processes on the client. The modifier ha is used here, which stands for using the device on all processes.
mount -t wekafs -o num_cores=4 -o net:ha=mlnx0,net:ha=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka
Advanced mounting options for multiple physical network devices

With multiple Frontend processes (as expressed by -o num_cores), it is possible to control what processes use what NICs. This can be accomplished through the use of special command line modifiers called slots. In WekaFS, slot is synonymous with a process number. Typically, the first WekaFS Frontend process will occupy slot 1, then the second slot 2 and so on.

Examples of slot notation include s1, s2, s2+1, s1-2, slots1+3, slot1, slots1-4, where - specifies a range of devices, while + specifies a list. For example, s1-4 implies slots 1, 2, 3 and 4, while s1+4 specifies slots 1 and 4 only. For example, in the following command, mlnx0 is bound to the second Frontend process while mlnx1 to the first one for improved performance.
mount -t wekafs -o num_cores=2 -o net:s2=mlnx0,net:s1=mlnx1 backend1/my_fs /mnt/weka

For example, in the following HA mounting command, two cores (two Frontend processes) and two physical network devices (mlnx0, mlnx1) are allocated. By explicitly specifying s2+1, s1-2 modifiers for network devices, both devices will be used by both Frontend processes. Notation s2+1 stands for the first and second processes, while s1-2 stands for the range of 1 to 2, and are effectively the same.

mount -t wekafs -o num_cores=2 -o net:s2+1=mlnx0,net:s1-2=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka

UDP mode

In cases where the Data Plane Development Kit (DPDK) cannot be used, it is possible to use WekaFS in User Datagram Protocol (UDP) mode through the kernel. Use net=udp in the mount command to set the UDP networking mode, for example:

mount -t wekafs -o num_cores=0 -o net=udp backend-host-0/my_fs /mnt/weka
Note: A client in UDP mode cannot be configured in HA mode. However, the client can still work with a highly available cluster.
Note: Providing multiple IPs in <mgmt-ip> in UDP mode utilizes their network interfaces for more bandwidth (which can be useful in RDMA environments), rather than using only one NIC.

Mounting filesystems using fstab

Note: This option works when using stateless clients and with an OS that supports systemd (for example, RHEL/CentOS 7.2 and up, Ubuntu 16.04 and up, Amazon Linux 2 LTS).

Edit /etc/fstab file to include the filesystem mount entry:

  • A comma-separated list of backend hosts, with the filesystem name
  • The mount point
  • Filesystem type - wekafs
  • Mount options:
    • Configure systemd to wait for the weka-agent service to come up, and set the filesystem as a network filesystem, for example:
      x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev
    • Any additional wekafs supported mount option
      # create a mount point
      mkdir -p /mnt/weka/my_fs
      
      # edit fstab file
      vi /etc/fstab
      
      # fstab with weka options (example, change with your desired settings)
      backend-0,backend-1,backend-3/my_fs /mnt/weka/my_fs wekafs num_cores=1,net=eth1,x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev 0 0
      
      

Reboot the machine for the systemd unit to be created and marked correctly.

The filesystem should now be mounted at boot time.

Note: Do not configure this entry for a filesystem that is currently mounted without unmounting it first, as systemd needs to mark the filesystem as a network filesystem (this occurs as part of the reboot). Trying to reboot a host that has a mounted WekaFS filesystem while setting its fstab configuration might fail to unmount the filesystem and leave the system hung.

Mounting filesystems using autofs

It is possible to mount a Content Software for File filesystem using the autofs command.

Procedure

  1. Install autofs on the host using one of the following commands according to your deployment:

    • On RedHat or Centos:
      yum install -y autofs
    • On Debian or Ubuntu:
      apt-get install -y autofs
  2. To create the autofs configuration files for Content Software for File filesystems, do one of the following depending on the client type:

    • For a stateless client, run the following commands (specify the backend names as parameters):
      echo "/mnt/weka   /etc/auto.wekafs -fstype=wekafs,num_cores=1,net=<netdevice>" > /etc/auto.master.d/wekafs.autofs
      echo "*   <backend-1>,<backend-2>/&" > /etc/auto.wekafs
      
    • For a stateful client (traditional), run the following commands:
      echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs" > /etc/auto.master.d/wekafs.autofs
      echo "* &" > /etc/auto.wekafs
  3. Restart the autofs service:

    service autofs restart
  4. The configuration is distribution-dependent. Verify that the service is configured to start automatically after restarting the host. Run the following command:

systemctl is-enabled autofs

If the output is enabled, the service is configured to start automatically.

In Amazon Linux, you can verify that the autofs service is configured to start automatically by running the command chkconfig. If the output is on for the current runlevel (you can check with the runlevel command), autofs is enabled upon restart.
# chkconfig | grep autofs
autofs         0:off 1:off 2:off 3:on 4:on 5:on 6:off

Once you complete this procedure, it is possible to access Content Software for File filesystems using the command cd /mnt/weka/<fs-name>.