Managing object stores, filesystem groups, and filesystems
The management of object stores, filesystem groups and filesystems is an integral part of the operation and performance of the Content Software for File system and overall data lifecycle management.
Managing object stores using the CLI
Using the CLI, you can perform the following actions:
- View an object store
- Edit an object store
- View an object store bucket
- Add an object store bucket
- Edit an object store bucket
- Delete an object store bucket
Viewing object stores using the CLI
This command is used to view information on all the object stores configured in the Content Software for File system.
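A minimal example, assuming the bare form of the tiering command lists the configured object stores:
weka fs tier obs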
Note: The GUI can only be used when there is a single local or remote object store present. If more than one is present (such as during the time of recovering from a remote snapshot), the CLI should be used.
Editing an object store using the CLI
Use the following command line to edit an object store:
weka fs tier obs update <name> [--new-name new-name] [--site site] [--hostname=<hostname>] [--port=<port>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--max-concurrent-downloads=<max-concurrent-downloads>] [--max-concurrent-uploads=<max-concurrent-uploads>] [--max-concurrent-removals=<max-concurrent-removals>] [--enable-upload-tags=<enable-upload-tags>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the object store being edited | Must be a valid name | Yes | |
new-name | String | New name for the object store | Must be a valid name | No | |
site | String | local - for tiering+snapshots, remote - for snapshots only | local or remote | No | |
hostname | String | Object store host identifier | Must be a valid name/IP | Yes | |
port | String | Object store port | Must be a valid name | Yes | |
auth-method | String | Authentication method | None | Yes | |
region | String | Region name | Yes | ||
access-key-id | String | Object store access key ID | Yes | ||
secret-key | String | Object store secret key | Yes | ||
protocol | String | Protocol type, to be used as a default for added buckets | HTTP, HTTPS or HTTPS_UNVERIFIED | No | |
bandwidth | Number | Bandwidth limitation per core (Mbps) | No | ||
download-bandwidth | Number | Object-store download bandwidth limitation per core (Mbps) | No | ||
upload-bandwidth | Number | Object-store upload bandwidth limitation per core (Mbps) | No | ||
max-concurrent-downloads | Number | Maximum number of downloads concurrently performed on this object store in a single IO node | 1-64 | No | |
max-concurrent-uploads | Number | Maximum number of uploads concurrently performed on this object store in a single IO node | 1-64 | No | |
max-concurrent-removals | Number | Maximum number of removals concurrently performed on this object store in a single IO node | 1-64 | No | |
enable-upload-tags | String | Whether to enable object-tagging or not, to be used as a default for added buckets | true or false | No |
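For example, assuming an object store named local-obs already exists (the name is illustrative), its upload bandwidth limit per core could be updated with:
weka fs tier obs update local-obs --upload-bandwidth=800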
Viewing object store buckets
weka fs tier s3
This command is used to view information on all the object-store buckets configured in the Content Software for File system.
Adding an object store bucket using the CLI
Use the following command line to add an object store bucket:
weka fs tier s3 add <name> [--site site] [--obs-name obs-name] [--hostname=<hostname>] [--port=<port>] [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the object-store bucket being created | Must be a valid name | Yes | |
site | String | local - for tiering+snapshots, remote - for snapshots only | Must be the same as the site of the object store it is added to (obs-name) | No | local |
obs-name | String | Name of the object-store to add this object-store bucket to | Must be an existing object-store | No | If there is only one object store of the type mentioned in site, it is chosen automatically |
hostname | String | Object store host identifier | Must be a valid name/IP | Yes, if not specified in the object-store level | The hostname specified in obs-name if present |
port | String | Object store port | Must be a valid name | No | The port specified in obs-name if present, otherwise 80 |
bucket | String | Object store bucket name | Must be a valid name | Yes | |
auth-method | String | Authentication method | None, AWSSignature2 or AWSSignature4 | Yes, if not specified in the object-store level | The auth-method specified in obs-name if present |
region | String | Region name | Yes, if not specified in the object-store level | The region specified in obs-name if present | |
access-key-id | String | Object store bucket access key ID | Yes, if not specified in the object-store level (can be left empty when using IAM role in AWS) | The access-key-id specified in obs-name if present | |
secret-key | String | Object store bucket secret key | Yes, if not specified in the object-store level (can be left empty when using IAM role in AWS) | The secret-key specified in obs-name if present | |
protocol | String | Protocol type to be used | HTTP, HTTPS or HTTPS_UNVERIFIED | No | The protocol specified in obs-name if present, otherwise HTTP |
bandwidth | Number | Bucket bandwidth limitation per core (Mbps) | No | ||
download-bandwidth | Number | Bucket download bandwidth limitation per core (Mbps) | No | ||
upload-bandwidth | Number | Bucket upload bandwidth limitation per core (Mbps) | No | ||
errors-timeout | Number | If the object-store link is down for longer than this timeout period, all IOs that need data return with an error | 1-15 minutes, e.g: 5m or 300s | No | 300 |
prefetch-mib | Number | How many MiB of data to prefetch when reading a whole MiB on the object store | No | 0 | |
enable-upload-tags | String | Whether to enable object-tagging or not | true or false | No | false
To skip the validation of the object store bucket connection parameters, use the --skip-verification option.
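For example, the following sketch adds a bucket entry named my-bucket under an existing object store my-obs; the names and credentials are illustrative:
weka fs tier s3 add my-bucket --obs-name my-obs --bucket data-bucket --access-key-id EXAMPLEKEY --secret-key EXAMPLESECRET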
Editing an object store bucket using the CLI
Use the following command line to edit an object store bucket:
weka fs tier s3 update <name> [--new-name=<new-name>] [--new-obs-name new-obs-name] [--hostname=<hostname>] [--port=<port>] [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the object-store bucket being edited | Must be a valid name | Yes | |
new-name | String | New name for the object-store bucket | Must be a valid name | No | |
new-obs-name | String | New name of the object-store to add this object-store bucket to | Must be an existing object-store, with the same site value. | No | |
hostname | String | Object store host identifier | Must be a valid name/IP | No | |
port | String | Object store port | Must be a valid name | No | |
bucket | String | Object store bucket name | Must be a valid name | No | |
auth-method | String | Authentication method | None, AWSSignature2 or AWSSignature4 | No | |
region | String | Region name | No | ||
access-key-id | String | Object-store bucket access key ID | No | ||
secret-key | String | Object-store bucket secret key | No | ||
protocol | String | Protocol type to be used | HTTP, HTTPS or HTTPS_UNVERIFIED | No | |
bandwidth | Number | Bandwidth limitation per core (Mbps) | No | ||
download-bandwidth | Number | Bucket download bandwidth limitation per core (Mbps) | No | ||
upload-bandwidth | Number | Bucket upload bandwidth limitation per core (Mbps) | No | ||
errors-timeout | Number | If the object-store link is down for longer than this timeout period, all IOs that need data return with an error | 1-15 minutes, e.g: 5m or 300s | No | |
prefetch-mib | Number | How many MiB of data to prefetch when reading a whole MiB on the object store | | No | |
enable-upload-tags | String | Whether to enable object-tagging or not | true or false | No
Deleting an object store bucket using the CLI
Use the following command line to delete an object store bucket:
weka fs tier s3 delete <name>
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the object-store bucket being deleted | Must be a valid name | Yes | |
Managing filesystem groups
Using the CLI, you can perform the following actions:
- View filesystem groups
- Add a filesystem group
- Edit a filesystem group
- Delete a filesystem group
Viewing filesystem groups using the CLI
Use this command to view information about the filesystem groups in the system.
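A minimal example, assuming the bare form of the command lists the filesystem groups:
weka fs group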
Adding a filesystem group using the CLI
Use the following command to add a filesystem group:
weka fs group create <name> [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem group being created | Must be a valid name | Yes | |
target-ssd-retention | Number | Target retention period (in seconds) before tiering to the object store | Must be a valid number | No | 86400 (24 hours) |
start-demote | Number | Target tiering cue (in seconds) before tiering to the object store | Must be a valid number | No | 10 |
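For example, a group intended for rarely accessed data (the name and values are illustrative) could be created with:
weka fs group create archive-group --target-ssd-retention=604800 --start-demote=60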
Editing a filesystem group using the CLI
Use the following command to edit a filesystem group:
weka fs group update <name> [--new-name=<new-name>] [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem group being edited | Must be a valid name | Yes | |
new-name | String | New name for the filesystem group | Must be a valid name | No | |
target-ssd-retention | Number | New target retention period (in seconds) before tiering to the object store | Must be a valid number | No | |
start-demote | Number | New target tiering cue (in seconds) before tiering to the object store | Must be a valid number | No |
Deleting a filesystem group using the CLI
Use the following command line to delete a filesystem group:
weka fs group delete <name>
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem group to delete | Must be a valid name | Yes | |
Managing filesystems
Using the CLI, you can perform the following actions:
- View filesystems
- Add a filesystem
- Add a filesystem when thin-provisioning is used
- Edit a filesystem
- Delete a filesystem
Viewing filesystems using the CLI
Use this command to view information on the filesystems in the Content Software for File system.
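A minimal example, assuming the bare form of the command lists the filesystems:
weka fs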
Adding a filesystem using the CLI
Use the following command line to add a filesystem:
weka fs create <name> <group-name> <total-capacity> [--ssd-capacity <ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files <max-files>] [--encrypted] [--obs-name <obs-name>] [--auth-required <auth-required>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem being created | Must be a valid name | Yes | |
group-name | String | Name of the filesystem group to which the new filesystem is to be connected | Must be a valid name | Yes | |
total-capacity | Number | Total capacity of the new filesystem | Minimum of 1GiB | Yes | |
ssd-capacity | Number | For tiered filesystems, this is the SSD capacity. If not specified, the filesystem is pinned to SSD | Minimum of 1GiB | No | SSD capacity will be set to total capacity |
thin-provision-min-ssd | Number | For thin-provisioned filesystems, this is the minimum SSD capacity that is ensured to be always available to this filesystem | Minimum of 1GiB | No. Must be set when defining a thin-provisioned filesystem. | |
thin-provision-max-ssd | Number | For thin-provisioned filesystems, this is the maximum SSD capacity the filesystem can consume | Cannot exceed the total-capacity | |
max-files | Number | Metadata allocation for this filesystem | Must be a valid number | No | Automatically calculated by the system based on the SSD capacity |
encrypted | Boolean | Encryption of filesystem | No | No | |
obs-name | String | Object store name for tiering | Must be a valid name | Mandatory for tiered filesystems | |
auth-required | String | Determines whether mounting the filesystem requires the client to be authenticated to Content Software for File | yes or no. For a filesystem hosting NFS exports or SMB shares, enabling authentication is not allowed. | No | no
- To define an encrypted filesystem without a KMS, it is possible to use the --allow-no-kms parameter in the command. This can be useful when running POCs but should not be used in production, since the security chain is compromised when a KMS is not used.
- If filesystem keys exist when adding a KMS, they are automatically re-encrypted by the KMS for any future use.
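For example, the following sketch creates a tiered filesystem with 1 TiB of SSD capacity out of 10 TiB total; the filesystem, group, and object store names and sizes are illustrative:
weka fs create fs01 my-group 10TiB --ssd-capacity 1TiB --obs-name my-obs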
Add a filesystem when thin-provisioning is used
To create a new filesystem, the SSD space for the filesystem must be free and unprovisioned. When using thin-provisioned filesystems, that might not be the case: SSD space can be occupied by the thin-provisioned portion of other filesystems. Even if those filesystems are tiered and data can be released (to the object store) or deleted, the SSD space can still fill up while data keeps being written or rehydrated from the object store.
To create a new filesystem in this case, use the weka fs reserve CLI command. Once enough space is cleared from the SSD (either by releasing to object-store or explicit deletion of data), it is possible to create the new filesystem using the reserved space.
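A sketch of this workflow, assuming weka fs reserve accepts the SSD capacity to reserve as its argument (the exact syntax may differ in your version); names and sizes are illustrative:
weka fs reserve 2TiB
# after enough SSD space is released, create the new filesystem from the reserved space
weka fs create fs02 my-group 5TiB --ssd-capacity 2TiB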
Editing a filesystem using the CLI
Use the following command line to edit an existing filesystem:
weka fs update <name> [--new-name=<new-name>] [--total-capacity=<total-capacity>] [--ssd-capacity=<ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files=<max-files>] [--auth-required=<auth-required>]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem being edited | Must be a valid name | Yes | |
new-name | String | New name for the filesystem | Must be a valid name | Optional | Keep unchanged |
total-capacity | Number | Total capacity of the edited filesystem | Must be a valid number | Optional | Keep unchanged |
ssd-capacity | Number | SSD capacity of the edited filesystem | Minimum of 1GiB | Optional | Keep unchanged |
thin-provision-min-ssd | Number | For thin-provisioned filesystems, this is the minimum SSD capacity that is ensured to be always available to this filesystem | Minimum of 1GiB | Optional | |
thin-provision-max-ssd | Number | For thin-provisioned filesystems, this is the maximum SSD capacity the filesystem can consume | Cannot exceed the total-capacity | Optional | |
max-files | Number | Metadata limit for the filesystem | Must be a valid number | Optional | Keep unchanged |
auth-required | String | Determines whether mounting the filesystem requires the client to be authenticated to Content Software for File | yes or no. For a filesystem hosting NFS exports or SMB shares, enabling authentication is not allowed. | No | no
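For example, to enlarge a filesystem's total capacity (the name and size are illustrative):
weka fs update fs01 --total-capacity=20TiB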
Deleting a filesystem using the CLI
Use the following command line to delete a filesystem:
weka fs delete <name> [--purge-from-obs]
Name | Type | Value | Limitations | Mandatory | Default |
name | String | Name of the filesystem to be deleted | Must be a valid name | Yes | |
purge-from-obs | Boolean | For a tiered filesystem, if set, all filesystem data is deleted from the object store bucket. | No | False |
- If any of the removed snapshots have been (or are) downloaded and used by a different filesystem, that filesystem will stop functioning correctly; data might be unavailable and errors might occur when accessing the data.
It is possible to either un-tier or migrate such a filesystem to a different object store bucket before deleting the snapshots it has downloaded.
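For example, deleting a tiered filesystem together with the data it tiered to the object store (the name is illustrative):
weka fs delete old-fs --purge-from-obs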
Attach or detach object store buckets using the CLI
Using the CLI, you can:
- Attach an object store bucket to a filesystem
- Detach an object store bucket from a filesystem
Attaching an object store bucket to a filesystem using the CLI
To attach an object store bucket to a filesystem, use the following command:
weka fs tier s3 attach <fs-name> <obs-name> [--mode mode]
Name | Type | Value | Limitations | Mandatory | Default |
fs-name | String | Name of the filesystem to be attached to the object store | Must be a valid name | Yes | |
obs-name | String | Name of the object store to be attached | Must be a valid name | Yes | |
mode | String | local or remote | A local bucket can only be attached as local and a remote bucket can only be attached as remote | No |
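For example, attaching a local bucket to a filesystem (the names are illustrative):
weka fs tier s3 attach fs01 my-bucket --mode local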
Detaching an object store bucket from a filesystem using the CLI
To detach an object store from a filesystem, use the following command:
weka fs tier s3 detach <fs-name> <obs-name>
Name | Type | Value | Limitations | Mandatory | Default |
fs-name | String | Name of the filesystem to be detached from the object store | Must be a valid name | Yes | |
obs-name | String | Name of the object store to be detached | Must be a valid name | Yes |
Mounting filesystems
To use a filesystem through the Content Software for File filesystem driver, it has to be mounted on one of the cluster hosts. This section describes how this is performed.
Overview
There are two methods available for mounting a filesystem in one of the cluster hosts:
- Using the traditional method: See below and also refer to Adding Clients (Bare Metal Installation) or Adding Clients (AWS Installation), where first a client is configured and joins a cluster, after which a mount command is executed.
- Using the stateless clients feature: See Mounting Filesystems Using the Stateless Clients Feature, which simplifies and improves the management of clients in the cluster and eliminates the adding clients process.
Mounting a filesystem using the traditional method
To mount a filesystem on one of the cluster hosts, let's assume the cluster has a filesystem called demo. To add this filesystem to a host, SSH into one of the hosts and run the mount command as the root user, as follows:
mkdir -p /mnt/weka/demo
mount -t wekafs demo /mnt/weka/demo
The general structure of a mount command for a Content Software for File filesystem is:
mount -t wekafs [-o option[,option]...]] <fs-name> <mount-point>
There are two options for mounting a filesystem on a cluster client: read cache and write cache. For more information on the differences between these modes, see read cache and write cache mount modes in the Hitachi Content Software for File User Guide.
Mounting a filesystem using the stateless clients feature
The Stateless Clients feature defers the process of joining the cluster until the mount is performed, simplifying and improving the management of clients in the cluster. It removes tedious client management procedures, which is particularly beneficial in AWS installations where clients may join and leave at high frequency.
Furthermore, it unifies all security aspects in the mount command, eliminating the search for separate credentials at cluster join and mount.
To use the Stateless Clients feature, a Content Software for File agent must be installed. Once this is complete, mounts can be created and configured using the mount command and can be easily removed from the cluster using the unmount command.
To require authentication when mounting a filesystem, set its auth-required flag to yes. For more information about mount authentication for organization filesystems, see the Hitachi Content Software for File User Guide.
Assuming the Content Software for File cluster is using the backend IP of 1.2.3.4, running the following command as root on a client will install the agent:
curl http://1.2.3.4:14000/dist/v1/install | sh
On completion, the agent is installed on the client machine.
Command:
mount -t wekafs
Use one of the following command lines to invoke the mount command (note: the delimiter between the server and the filesystem can be either :/ or /):
mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>
mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]:/<fs> <mount-point>
Name | Type | Value | Limitations | Mandatory | Default |
Options | See additional mount options below | ||||
backend | String | IP/hostname of a backend host | Must be a valid name | Yes | |
fs | String | Filesystem name | Must be a valid name | Yes | |
mount-point | String | Path to mount on the local machine | Must be a valid path-name | Yes |
Mount command options
Each mount option can be passed by an individual -o flag to mount.
For all client types
Option | Value | Description | Default |
readcache | None | Set mode to read cache | No |
writecache | None | Set mode to write cache | Yes |
dentry_max_age_positive | Number in milliseconds | After the defined time period, every metadata cached entry is refreshed from the system, allowing the host to take into account metadata changes performed by other hosts. | 1000 |
dentry_max_age_negative | Number in milliseconds | Each time a file or directory lookup fails, an entry specifying that the file or directory does not exist is created in the local dentry cache. This entry is refreshed after the defined time, allowing the host to use files or directories created by other hosts. | 0 |
ro | None | Mount filesystem as read-only | No |
rw | None | Mount filesystem as read-write | Yes |
inode_bits | 32, 64 or auto | Size of the inode in bits, which may be required for 32-bit applications. | Auto |
verbose | None | Write debug logs to the console | No |
quiet | None | Don't show any logs to console | No |
acl | None | Can be defined per mount. Setting POSIX ACLs can change the effective group permissions (via the mask permissions). When ACLs are defined but the mount has no acl option, the effective group permissions are granted. | No
obs_direct | None | See Object-store Direct Mount section | No |
noatime | None | Do not update inode access times | No |
strictatime | None | Always update inode access times | No |
relatime | None | Update inode access times only on modification or change, or if inode has been accessed and relatime_threshold has passed. | Yes |
relatime_threshold | Number in seconds | How much time (in seconds) to wait since an inode has been accessed (not modified) before updating the access time. 0 means to never update the access time on access only. This option is relevant only if relatime is in effect. | 0 (infinite)
nosuid | None | Do not take suid /sgid bits into effect. | No |
nodev | None | Do not interpret character or block special devices. | No |
noexec | None | Do not allow direct execution of any binaries. | No |
file_create_mask | Numeric (octal) notation of POSIX permissions | Newly created file permissions are masked with the creation mask. For example, if a user creates a file with permissions=777 but the file_create_mask is 770, the file will be created with 770 permissions. First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode . | 0777 |
directory_create_mask | Numeric (octal) notation of POSIX permissions | Newly created directory permissions are masked with the creation mask. For example, if a user creates a directory with permissions=777 but the directory_create_mask is 770, the directory will be created with 770 permissions. First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode . | 0777 |
force_file_mode | Numeric (octal) notation of POSIX permissions | Newly created file permissions are logically OR'ed with the mode. For example, if a user creates a file with permissions 770 but the force_file_mode is 775, the resulting file will be created with mode 775. First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode . | 0 |
force_directory_mode | Numeric (octal) notation of POSIX permissions | Newly created directory permissions are logically OR'ed with the mode. For example, if a user creates a directory with permissions 770 but the force_directory_mode is 775, the resulting directory will be created with mode 775. First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode . | 0 |
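For example, a traditional mount combining several of the options above might look as follows; the filesystem name and mount point are illustrative:
mount -t wekafs -o readcache,acl,noatime my_fs /mnt/weka/my_fs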
Remount of general options
You can remount using the mount options marked as Remount Supported in the above table (mount -o remount).
When a mount option has been explicitly changed, you must set it again in the remount operation to ensure it retains its value. For example, if you mount with ro, a remount without it changes the mount option to the default rw. If you mount with rw, it is not required to re-specify the mount option because this is the default.
Additional mount options using the stateless clients feature
Option | Value | Description | Default | Remount Supported |
memory_mb=<memory_mb> | Number | Amount of memory to be used by the client (for huge pages). | 1400 MiB | Yes |
num_cores=<frontendcores> | Number | The number of frontend cores to allocate for the client. Either <num_cores> or <core> can be specified, but not both. If none are specified, the client will be configured with 1 core. If 0 is specified then you must use net=udp. | 1 | No |
core=<core> | Number | Specify explicit cores to be used by the Content Software for File FS client. Multiple cores can be specified. Core 0 is not allowed. | No | |
net=<netdev> [/<ip>/<bits> [/<gateway>]] | String | For more info refer to Advanced network configuration using mount options section. | No | |
bandwidth_mbps=<bandwidth_mbps> | Number | Maximum network bandwidth in Mb/s, which limits the traffic that the container can send. | Auto-select | Yes |
remove_after_secs=<secs> | Number | The number of seconds without connectivity after which the client will be removed from the cluster. Minimum value: 60 seconds. | 86,400 seconds (24 hours) | Yes |
traces_capacity_mb= <size-in-mb> | Number | Traces capacity limit in MB. Minimum value: 512 MB. | No | |
reserve_1g_hugepages | None | Controls whether the page allocation algorithm reserves only 2MB huge pages or also 1GB huge pages. | Yes | Yes
readahead_kb= <readahead> | Number in KB | Controls the readahead per mount (higher readahead better for sequential reads of large files). | 32768 | Yes |
auth_token_path | String | Path to the mount authentication token (per mount). | ~/.weka/auth-token.json |
dedicated_mode | full or none | Determines whether DPDK networking dedicates a core (full) or not (none). none can only be set when the NIC driver supports it. See the DPDK Without Core Dedication section. This option is relevant when using DPDK networking (net=udp is not set). | full |
qos_preferred_throughput_mbps | Number | Preferred requests rate for QoS in megabytes per second. | No limit. The cluster admin can set this default. See mount option defaults. | Yes |
qos_max_throughput_mbps | Number | Maximum requests rate for QoS in megabytes per second. This option allows bursting above the specified limit but aims to keep this limit on average. | No limit. The cluster admin can set this default. See mount option defaults. | Yes |
qos_max_ops | Number | Maximum number of IO operations a client can perform per second. Set a limit to a client or clients to prevent starvation from the rest of the clients. | No limit. Do not set this option for mounting from a backend. | Yes |
connect_timeout_secs | Number | The timeout in seconds for establishing a connection to a single host. | 10 | Yes |
response_timeout_secs | Number | The timeout in seconds for waiting for the response from a single host. | 60 | Yes |
join_timeout_secs | Number | The timeout, in seconds, for the client container to join the Content Software for File cluster. | 360 | Yes |
The core parameters can be used to allocate specific cores to the WekaFS client. For example:
mount -t wekafs -o core=2 -o core=4 -o net=ib0 backend-host-0/my_fs /mnt/weka
On-Premise Installations
mount -t wekafs -o num_cores=1 -o net=ib0 backend-host-0/my_fs /mnt/weka
Running this command on a host installed with the Content Software for File agent will download the appropriate version from the host backend-host-0 and create a container that allocates a single core and a named network interface (ib0). It will then join the cluster that backend-host-0 is part of and mount the filesystem my_fs on /mnt/weka.
mount -t wekafs -o num_cores=0 -o net=udp backend-host-0/my_fs /mnt/weka
Running this command will use UDP mode (usually selected when the use of DPDK is not available).
For stateless clients, the first mount command installs the Weka client software and joins the cluster. Any subsequent mount command can either use the same syntax or just the traditional/per-mount parameters as defined in Mounting Filesystems, since it is not necessary to join the cluster again.
It is now possible to access Content Software for File filesystems via the mount point, for example by running the cd /mnt/weka/ command.
After the execution of an umount command, which unmounts the last Weka filesystem, the client is disconnected from the cluster and will be uninstalled by the agent. Consequently, executing a new mount command requires specifying the cluster, cores, and networking parameters again.
Remount of stateless clients options
Mount options marked as Remount Supported in the above table can be remounted (using mount -o remount). When a mount option is not set in the remount operation, it retains its current value. To set a mount option back to its default value, use the default modifier (for example, memory_mb=default).
Set mount option default values
The defaults of the mount options qos_max_throughput_mbps and qos_preferred_throughput_mbps have no limit.
The cluster admin can set these default values to meet the organization's requirements, reset to the initial default values (no limit), or show the existing values.
The mount option defaults are only relevant for new mounts performed and do not influence the existing ones.
weka cluster mount-defaults set
weka cluster mount-defaults reset
weka cluster mount-defaults show
To set the mount option default values, run the following command:
weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput] [--qos-preferred-throughput qos-preferred-throughput]
Option | Value | Description |
qos_max_throughput | Number | Sets the default value for the qos_max_throughput_mbps option, which is the max requests rate for QoS in megabytes per second |
qos_preferred_throughput | Number | Sets the default value for the qos_preferred_throughput_mbps option, which is the preferred requests rate for QoS in megabytes per second. |
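For example, setting illustrative default limits for new mounts:
weka cluster mount-defaults set --qos-max-throughput 2000 --qos-preferred-throughput 1000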
Advanced network configuration using mount options
When using a stateless client, it is possible to alter and control many different networking options, such as:
- Virtual functions.
- IPs.
- Gateway (in case the client is on a different subnet).
- Physical network devices (for performance and HA).
- UDP mode.
Use the -o net=<netdev> mount option with the various modifiers as described below. <netdev> is either the name, MAC address, or PCI address of the physical network device (can be a bond device) to allocate for the client.
IP, subnet, gateway, and virtual functions
For higher performance, the usage of multiple Frontends may be required. When using a NIC other than Mellanox or Intel E810, or when mounting a DPDK client on a VM, it is required to use SR-IOV to expose a VF of the physical device to the client. Once exposed, it can be configured via the mount command.
When you want to set the VFs' IP addresses, or when the client resides in a different subnet and routing is needed in the data network, use:
net=<netdev>/[ip]/[bits]/[gateway]
The ip, bits, and gateway parameters are optional. If they are not provided, the Content Software for File system tries to deduce them in IB environments, or otherwise allocates them from the default data network. If both approaches fail, the mount command fails.
mount -t wekafs -o num_cores=2 -o net=intel0/192.168.1.100+192.168.1.101/24/192.168.1.254 backend1/my_fs /mnt/weka
Multiple physical network devices for performance and HA
For performance or high availability, it is possible to use more than one physical network device.
Using multiple physical network devices for better performance
It's easy to saturate the bandwidth of a single network interface when using WekaFS. For higher throughput, it is possible to leverage multiple network interface cards (NICs). The -o net notation shown in the next example can be used to pass the names of specific NICs to the WekaFS host driver.
mount -t wekafs -o num_cores=2 -o net=mlnx0,net=mlnx1 backend1/my_fs /mnt/weka
Using multiple physical network devices for HA configuration
Multiple NICs can also be configured to achieve redundancy (refer to Content Software for File Installation Guide, HA networking configuration section for more information) in addition to higher throughput, for a complete, highly available solution. For that, use more than one physical device and specify the client management IPs using the command-line option:
-o mgmt_ip=<ip>+<ip2>
The ha modifier is used here, which stands for using the device on all processes.
mount -t wekafs -o num_cores=4 -o net:ha=mlnx0,net:ha=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka
Advanced mounting options for multiple physical network devices
With multiple Frontend processes (as expressed by -o num_cores), it is possible to control which processes use which NICs. This can be accomplished through the use of special command-line modifiers called slots. In WekaFS, a slot is synonymous with a process number. Typically, the first WekaFS Frontend process will occupy slot 1, then the second slot 2, and so on.
Slot modifiers take forms such as s1, s2, s2+1, s1-2, slots1+3, slot1, and slots1-4, where - specifies a range of slots and + specifies a list. For example, s1-4 implies slots 1, 2, 3 and 4, while s1+4 specifies slots 1 and 4 only. In the following command, mlnx0 is bound to the second Frontend process and mlnx1 to the first one for improved performance:
mount -t wekafs -o num_cores=2 -o net:s2=mlnx0,net:s1=mlnx1 backend1/my_fs /mnt/weka
For example, in the following HA mounting command, two cores (two Frontend processes) and two physical network devices (mlnx0, mlnx1) are allocated. By explicitly specifying the s2+1 and s1-2 modifiers for the network devices, both devices will be used by both Frontend processes. The notation s2+1 stands for the first and second processes, while s1-2 stands for the range of 1 to 2; they are effectively the same.
mount -t wekafs -o num_cores=2 -o net:s2+1=mlnx0,net:s1-2=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka
UDP mode
In cases where the Data Plane Development Kit (DPDK) cannot be used, it is possible to use WekaFS in User Datagram Protocol (UDP) mode through the kernel. Use net=udp in the mount command to set the UDP networking mode, for example:
mount -t wekafs -o num_cores=0 -o net=udp backend-host-0/my_fs /mnt/weka
Mounting filesystems using fstab
Mounting filesystems using fstab is supported with operating systems that use systemd (for example, RHEL/CentOS 7.2 and up, Ubuntu 16.04 and up, Amazon Linux 2 LTS).
Edit the /etc/fstab file to include the filesystem mount entry:
- A comma-separated list of backend hosts, with the filesystem name
- The mount point
- Filesystem type - wekafs
- Mount options:
  - Configure systemd to wait for the weka-agent service to come up, and set the filesystem as a network filesystem, for example: x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev
  - Any additional wekafs supported mount option
# create a mount point
mkdir -p /mnt/weka/my_fs
# edit fstab file
vi /etc/fstab
# fstab with weka options (example, change with your desired settings)
backend-0,backend-1,backend-3/my_fs /mnt/weka/my_fs wekafs num_cores=1,net=eth1,x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev 0 0
Reboot the machine for the systemd unit to be created and marked correctly. The filesystem should now be mounted at boot time.
Note: systemd needs to mark the filesystem as a network filesystem (this occurs as part of the reboot). Trying to reboot a host that has a mounted WekaFS filesystem while setting its fstab configuration might fail to unmount the filesystem and leave the system hanging.
Mounting filesystems using autofs
It is possible to mount a Content Software for File filesystem using the autofs command.
Procedure
Install autofs on the host using one of the following commands according to your deployment:
- On RedHat or CentOS:
yum install -y autofs
- On Debian or Ubuntu:
apt-get install -y autofs
To create the autofs configuration files for Content Software for File filesystems, do one of the following depending on the client type:
- For a stateless client, run the following commands (specify the backend names as parameters):
echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs,num_cores=1,net=<netdevice>" > /etc/auto.master.d/wekafs.autofs
echo "* <backend-1>,<backend-2>/&" > /etc/auto.wekafs
- For a stateful client (traditional), run the following commands:
echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs" > /etc/auto.master.d/wekafs.autofs
echo "* &" > /etc/auto.wekafs
Restart the autofs service:
service autofs restart
The configuration is distribution-dependent. Verify that the service is configured to start automatically after restarting the host by running the following command:
systemctl is-enabled autofs
If the output is enabled, the service is configured to start automatically.
Alternatively, you can check whether the autofs service is configured to start automatically by running the chkconfig command. If the output is on for the current runlevel (you can check with the runlevel command), autofs is enabled upon restart.
# chkconfig | grep autofs
autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
Once you complete this procedure, it is possible to access Content Software for File filesystems using the command cd /mnt/weka/<fs-name>.