| What Information Do I Need to Gather to Allow GSC to Diagnose a Hung HNAS System |
Content | Question: What information do I need to gather to allow GSC to diagnose a hung HNAS system?
Environment: Hitachi Network Attached Storage (HNAS)
- 3100/3200
- 3080/3090
- 4000 series
- 5000 series
Answer: A standard set of HNAS diagnostics taken after the event is usually insufficient to diagnose the cause of a system "hang." When a system appears to be hung, gather the following information:
- Capture the output of the bt active HNAS CLI command a few times (say, 3 to 5 invocations) with a short pause between each one; see the example session after this list. The output is quite large, so it is best captured by enabling session logging in your SSH client. Note also that the PIR will contain two invocations of bt active: one in old-command-output.txt and one in command-output.txt.
- Gather a performance-info-report (PIR) for the impacted file system; see the sketch after this list. If several file systems are impacted, or it isn't clear which to use, try to identify the busiest one using the fs-perf-stats HNAS CLI command. A PIR focused on a file system in the busiest span on the node is significantly more useful than a "whole node" PIR, which contains only summary statistics for the entire node.
- Reset the system using the reset button on the impacted HNAS node. This causes the system to dump crash diagnostics and then reset. If the system is remote and it isn't possible to press the reset button, you can request a dev password from GSC so that you can run the thread-breakpoint command (on the appropriate node), which triggers the same behavior.
- Once the system has come back up, download a full set of diagnostics for the cluster. This set of diagnostics will contain the crash diagnostics generated in the previous step.
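For reference, a minimal capture session for the bt active step might look like the following. The prompt and pause length shown are illustrative only; the key point is to run the command several times in the same SSH session with session logging enabled.
    hnas-node-1:$ bt active
    ... (large backtrace output, captured by the SSH client's session log) ...
    (wait roughly 10-30 seconds)
    hnas-node-1:$ bt active
    ... (repeat until 3-5 captures have been taken) ...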
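Similarly, a sketch of the performance-info-report step, assuming you are at the CLI of the impacted node. The exact options for focusing the report on a particular file system or span vary by firmware release, so check man performance-info-report on the node rather than relying on the lines below.
    hnas-node-1:$ fs-perf-stats
    ... (review the per-file-system statistics to identify the busiest file system) ...
    hnas-node-1:$ performance-info-report
    ... (run with the options appropriate to your release to focus on that file system) ...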
|
CXone Metadata | Tags: Diagnosis,hang,hung,Stuck,Wedged,Deadlock,Livelock,Unresponsive,Q&A PageID: 7570 |
|