
Replacing a server

Capturing information from the existing node

  1. Use the following table to record information about the node being replaced.

    Information of the node to be replaced:
    • Node number
    • Software version
    • ETH0 node IP address
    • ETH0 subnet mask
    • ETH1 IP address (if applicable)
    • WWN, port 1
    • WWN, port 2
    • WWN, port 3
    • WWN, port 4
  2. How is the current node connected to the storage?

    • Direct Connected
    • SAN Connected
  3. Is the storage using Host Group Security?

    • No
    • Yes

Obtaining backups, diagnostics, firmware levels, and license keys

On the old server:

Procedure

  1. If the server is online, using NAS Manager, navigate to Home > Server Settings > Configuration Backup & Restore.

  2. Click backup, then select a location to save the backup file.


    Ensure that you save the backup file to a safe location off platform so that you can access it after the storage system is taken offline.

  3. For a node in a cluster, back up the Node Registry.

  4. For a node in a cluster, migrate the EVSs to an alternate node.

  5. Navigate to Home > Status & Monitoring > Download Diagnostics.

  6. Click download to retrieve the diagnostic test results.

    Note: Unless the SMU is actively managing Brocade Fibre Channel switches, uncheck the Fibre Channel Switches box.
  7. Navigate to Home > Server Settings > Firmware Package Management to verify the existing server (SU) firmware release level.

    The firmware version on the new server must match that of the failed server; otherwise, the server cannot properly restore from the backup file. See the Release Notes for release-specific requirements.
  8. Navigate to Home > Server Settings > License Keys and check the license keys to ensure that you have the correct set of new license keys.

  9. Record the following information:

    • IP addresses for Ethernet ports 0 and 1
    • Gateway
    • Domain name
    • Host name

Shutting down a server that you are replacing

On the server that you are replacing:

Procedure

  1. From the server console, enter the shutdown --powerdown command.

  2. Wait until the console displays Information: Server has shut down, and the rear panel LEDs turn off. The PSU and server fans continue to run until you remove the power cables from the PSU module. See the appropriate system component section for more information.

    Note: This specific powerdown command prepares the system for both shipping and potential long-term, post-replacement storage.
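
    For reference only, the console interaction for steps 1 and 2 amounts to the following. The FSS-HNAS-1 prompt is borrowed from the example output later in this procedure, and intermediate console output is omitted; your host name will differ:

    FSS-HNAS-1:$ shutdown --powerdown
    Information: Server has shut down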
  3. Unplug the power cords from the power supplies.

  4. For a node in a cluster, once the node is shut down, go to Home > Server Settings > Cluster Configuration and delete the entry for the node that you are replacing.

  5. Use the rear panel figure and table for your server model to identify and label the cabling placement on the existing server.

  6. If cables are not labeled, label them before removing them from the server.

  7. Remove all cables from the server, and remove the server from the rack.

  8. Remove the rail mounts from the old server, and install them on the new server.

  9. Remove the power supply from the old server, and install it in the new server.

  10. Remove the bezel from the old server, and install it on the new server.

  11. Insert the new server into the rack, and connect the power cords to the power supplies.

    Note: Do not make any other cable connections at this time.

Configuring the replacement servers

Before you begin

Obtain the necessary IP addresses to be used for the replacement server. Servers shipped from the factory have not yet had the nas-preconfig script run on them, so a replacement server will not have any IP addresses pre-configured for your use.

IP addresses are required for the following:

  • Eth1 (cluster IP) – 192.0.2.200/24
  • Eth1 (testhost private IP) – 192.0.2.2/24
  • Eth0 (testhost external IP, which might vary) – 192.168.4.120/24

For a single NAS server, when you run the nas-preconfig script, it reconfigures the server to the previous settings. This allows the SMU to recognize the server as the same server and to manage it.

The reconfigured settings are:

  • IP addresses for Ethernet ports 0 and 1
  • Gateway
  • Domain name
  • Host name

On the replacement server:

Procedure

  1. Log in to the server.

  2. Run the nas-preconfig script.

    The IP addresses are assigned at this step.
  3. Reboot if the script instructs you to.

  4. Log in to the SMU using one of the IP addresses.

  5. Use a keyboard, video, and mouse (KVM) device or a serial cable to connect to the serial port. If you connect to the serial port, configure your terminal emulator with the following settings (an example connection command follows the list):

    • 115,200 b/s
    • 8 data bits
    • 1 stop bit
    • No parity
    • No flow control
    • VT100 emulation
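
    As an illustration only (not taken from the product documentation), if the client machine is Linux or macOS and the USB-to-serial adapter enumerates as /dev/ttyUSB0 (an assumed device name), a GNU screen session matching the settings above might be started as follows:

    # Assumptions: GNU screen is installed and the adapter is /dev/ttyUSB0.
    # 115200 = line speed, cs8 = 8 data bits, -parenb = no parity,
    # -cstopb = 1 stop bit, -ixon/-ixoff = no software flow control.
    screen /dev/ttyUSB0 115200,cs8,-parenb,-cstopb,-ixon,-ixoff
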
  6. Log in as root, and enter the ssc localhost command to access the command prompt.

  7. If the server is SAN-attached and/or host group security is in use, update the storage configuration to reflect the WWN changes, as described in Capturing information from the existing node.

  8. Add the new node as a managed server on the SMU.

  9. Enter evs list to see the IP configuration for the server.

  10. Using a supported browser, launch NAS Manager using one of the IP addresses acquired from the evs list output.

  11. Click Yes to proceed past the security alert, and log in as admin.

  12. Verify and, if necessary, convert the new server to the model profile required.

    This step requires a separate process, training, and license keys. Contact Hitachi Vantara if the incorrect model arrives for replacement.
  13. Navigate to Home > Server Settings > Firmware Package Management to verify and, if necessary, upgrade the new server to the latest SU release.

  14. When replacing a server in a cluster only, navigate to Home > Server Settings > Cluster Wizard, and promote the node to the cluster.

    1. Enter the cluster name, cluster node IP address, and subnet, and select a quorum device. Note that the node reboots several times during this process.

    2. When prompted, add the second node to the cluster.

    3. Enter the physical node IP address, log in as supervisor, and click finish. Wait for the system to reboot.

    4. Enter smu-uninstall to uninstall the embedded SMU.

  15. For all servers, navigate to Home > Server Settings > Configuration Backup & Restore, select the required backup file, and then click restore to restore the system from that backup file.

  16. When replacing a server in a cluster only, reconfigure the server to the previous settings:

    • IP addresses for Ethernet ports 0 and 1
    • Gateway
    • Domain name
    • Host name

    The SMU should recognize the node as the same and allow it to be managed.

  17. Navigate to Home > Server Settings > License Keys to load the license keys.

  18. Reboot the server.

  19. Reconnect the data cables to the server.

Finalizing and verifying the replacement server configuration

The maximum Fibre Channel (FC) link speed on the NAS Platform Series 5000 is 16 Gbps.

On the replacement server:

Procedure

  1. Navigate to Home > Server Settings > License Keys to load the license keys.

  2. Remove the previous license keys that were restored from the backup file, then add the new keys.

  3. Use fc-link-speed to verify and, if necessary, configure the FC port speed as required. For example (a sample session follows these sub-steps):

    1. Enter fc-link-speed to display the current settings.

    2. Enter fc-link-speed -i port_number -s speed for each port.

    3. Enter fc-link-speed to verify the settings.
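
    A sample session might look like the following. The port number 1 and the speed value 16 are illustrative only (16 Gbps is the Series 5000 maximum noted above); substitute your own port numbers and required speed, and confirm the exact argument format for your release:

    FSS-HNAS-1:$ fc-link-speed
    FSS-HNAS-1:$ fc-link-speed -i 1 -s 16
    FSS-HNAS-1:$ fc-link-speed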

  4. Modify zoning and switches with the new WWPN, if you are using WWN-based zoning.

    If you are using port-based zoning, no modifications are necessary for the switch configurations.
  5. Open Device Manager - Storage Navigator and reconfigure the LUN mapping and the host group dedicated to the server on the storage system, using the new WWPNs. Perform this step for every affected server port.

  6. If the server does not recognize the system drives, enter fc-link-reset to reset the fibre paths.

  7. Enter the sdpath command to display the path to the devices (system drives) and the server port and storage port that are used.

  8. Enter the sd-list command to verify that the system drive statuses are OK and that access is allowed.

  9. Use the CLI to verify that the new node has access to the system drives by running sd-list on the node that you have just replaced.

    For example, enter pn x sd-list, where x is the node number in the cluster. Sample output:
    FSS-HNAS-1:$ sd-list
    Device  Status  Alw  GiByte  Mirror   In span        Span Cap
    -----   ------  ---  ------  ------   -------        --------
    0       OK      Yes   1607   Pri      FSS_Pool_1     3214 
    1       OK      Yes   1607   Pri      FSS_Pool_1     3214 
    4       OK      Yes    390   Pri      FSS_AMS200     1560
    5       OK      Yes    390   Pri      FSS_AMS200     1560
    6       OK      Yes    390   Pri      FSS_AMS200     1560
    7       OK      Yes    390   Pri      FSS_AMS200     1560
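
    If you run the check from another cluster node, the pn prefix shown above directs the command to the replaced node; the node number 2 below is purely illustrative:

    FSS-HNAS-1:$ pn 2 sd-list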
           
  10. Enter span-list to verify the storage pools (spans) are accessible.

    Note: In this instance, cluster is synonymous with the stand-alone server.
  11. Enter the span-list-cluster-uuids span_label command to display the cluster serial number (UUID) to which the storage pool belongs.

    The UUID is written into the storage pool configuration on disk (COD). The COD is a data structure stored on every SD that describes how the SDs are combined into stripesets and storage pools.
  12. Enter the span-assign-to-cluster span_label command to assign all the spans to the new server.
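
    Putting steps 10 through 12 together, a sample session might look like the following. The FSS_Pool_1 label is taken from the example sd-list output above; your span labels will differ:

    FSS-HNAS-1:$ span-list
    FSS-HNAS-1:$ span-list-cluster-uuids FSS_Pool_1
    FSS-HNAS-1:$ span-assign-to-cluster FSS_Pool_1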

  13. If EVS mapping or balancing is required, select the EVS to migrate, assign it to the preferred node, then click migrate.

  14. To set the preferred node for any remaining EVSs, navigate to Home > Server Settings > EVS Management > EVS Details.

  15. Select the node from the Preferred Cluster Node list, then click apply.

  16. Reconfigure any required tape backup application security.

  17. Navigate to Home > Status & Monitoring > Event Logs, and click Clear Event Logs.

  18. Navigate to Home > Status & Monitoring > System Monitor and verify the server status:

    • If the server is operating normally and is not displaying any alarm conditions, run a backup to capture the revised configuration, then download another diagnostic and provide it to support. Permanent license keys for the replacement server are normally provided within seven days.
    • If the server is not operating normally for any reason, contact Customer Support for assistance.
  19. Navigate to Home > Server Settings > Cluster Configuration to verify the cluster configuration status. Verify that the cluster is shown as Online and Robust and has the correct number of nodes.

  20. Confirm that all final settings, IP addresses, customer contact information, service restarts, client access, and customer expectations are in place. Confirm that features such as replication and data migration are working and that all file systems and storage pools are online.