Performance Tuning - Rules of Thumb
Environment
- VSP G1x00 and VSP F1x00
- VSP Fx00 and Gx00
- Adaptable Modular Storage 2000 Series
- Hitachi Unified Storage (HUS)
Rules Of Thumb (ROT) are guidelines that can be used to perform initial subsystem tuning. These ROTs are intended to identify component and resource thresholds for successful monitoring of Hitachi Storage Subsystems. The figures represent a sensible level for optimal usage, but consider that some subsystems may be very performance sensitive and need much lower thresholds, while others are routinely run to the limits of their resources, where higher thresholds are expected. Also consider the time of day and day of week, as levels may be much higher outside normal local working hours and days, during batch and backup windows. Remember that performance is relative: adjust the thresholds based on local experience, local expectations and historical usage. These figures may be exceeded during short spikes while performance remains acceptable, depending on the application, how critical it is, and so on.

| Resource | Midrange (DF) Threshold | Enterprise (RAID) Threshold | Enterprise (HM) Threshold |
|---|---|---|---|
| MP Busy | 70% | 70% | 80% (1) |
| Cache Write Pending | 12% (9) | 40% (2) | 40% (2) |
| SAS/SATA Drive Busy | 50% (9) | 60% (3) | 60% (3) |
| Flash Drive Busy (4) | 90% | 90% | 90% |
| Cache Path Utilization (5, 8) | N/A | 50% | 50% |
| IO/sec (6) | Pattern & history | Pattern & history | Pattern & history |
| Port Transfer (7) | 75% of bandwidth | 75% of bandwidth | 75% of bandwidth |
| IO Response Time (10) | Pattern & history | Pattern & history | Pattern & history |

1. The HM series processors emulate several ASICs, so processing levels are generally higher due to asynchronous jobs, which do not impact response times.
2. Under 30%, lazy destage occurs whenever the system gets a chance. Above 30%, and at every 10% thereafter, the destage processing priority increases. Brief high spikes are not a concern unless they exceed 65%.
3. These figures are for RAID groups in pools; if they are used for basic LDEVs, the threshold is 50% busy.
4. Calculated from the queuing level on the drives; good performance can be seen at 100% busy (but not always).
5. This is bidirectional, so 50% could be 100% in the rx or tx direction.
6. There is no threshold. Look for patterns matching high response times or problems.
7. The threshold depends on the speed of the port; e.g. for an 8 Gb/s port, use up to 75% of 800 MB/sec = 600 MB/sec.
8. Some figures represent usage within the board, e.g. R800 CM, in which case up to 90% is normal.
9. This threshold assumes the dirty data settings DDO/DDSO are at 5/5. If Cache Write Pending is ≥ 12%, Physical Disk Busy% must be < 50% or the array will apply inflow control to host IO.
10. Response times vary widely depending on resources and usage; use patterns and historical data to judge. Response time should be judged in combination with an understanding of the IO, transfer rates and workload profile (read/write ratio, block size, random/sequential ratio and hit rates).
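The thresholds in the table can be wired into a simple monitoring check. The sketch below is a hypothetical illustration, not a real Hitachi API: the metric names, the `check` helper and the sample data are all assumptions. It uses the Enterprise (RAID) column and the port-transfer rule from note 7 (75% of an assumed 800 MB/sec 8 Gb/s port).

```python
# Hypothetical sketch: comparing collected metrics against the ROT
# thresholds above. Metric names and values are illustrative only.

# Enterprise (RAID) thresholds from the table (percent busy/utilization).
THRESHOLDS = {
    "mp_busy_pct": 70,               # MP Busy
    "cache_write_pending_pct": 40,   # Cache Write Pending (note 2)
    "sas_drive_busy_pct": 60,        # SAS/SATA drives in pools (note 3)
    "flash_drive_busy_pct": 90,      # Flash Drive Busy (note 4)
    "cache_path_util_pct": 50,       # Cache Path Utilization (notes 5, 8)
}

# Note 7: threshold is 75% of port bandwidth; assume an 8 Gb/s FC port.
PORT_BANDWIDTH_MBPS = 800
PORT_TRANSFER_THRESHOLD = 0.75 * PORT_BANDWIDTH_MBPS  # 600 MB/sec

def check(sample: dict) -> list:
    """Return a list of (metric, value, threshold) breaches."""
    breaches = [(name, sample[name], limit)
                for name, limit in THRESHOLDS.items()
                if sample.get(name, 0) > limit]
    if sample.get("port_transfer_mbps", 0) > PORT_TRANSFER_THRESHOLD:
        breaches.append(("port_transfer_mbps",
                         sample["port_transfer_mbps"],
                         PORT_TRANSFER_THRESHOLD))
    return breaches

# Example sample: MP Busy and port transfer both exceed their thresholds.
sample = {"mp_busy_pct": 82, "cache_write_pending_pct": 35,
          "port_transfer_mbps": 650}
print(check(sample))
```

As the notes stress, a single breach during a short spike is not necessarily a problem; in practice such a check would be combined with pattern and historical analysis before raising an alert.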
Additional Notes
See these additional articles for further details:
CXone Metadata
Tags: Performance, article:qanda, MPB, Threshold, CWP, health, MPU, Busy, ROT, Rules of thumb, MP
Page ID: 11943