Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition
Solution Type: Technical Instruction
Sure Solution 1014170.1: VTL - What NBU parameters are there for performance tuning with VTL?
Previously Published As: 220520
Applies to:
Sun StorageTek VTL Plus Storage Appliance - Version 1.0 (Build 1323) to 2.0 (Build 1656) [Release 1.0 to 2.0]
Sun StorageTek VTL Storage Appliance - Version 4.0 (Build 1221) and later [Release 4.0 and later]
Sun StorageTek VTL Prime System - Version 1.0 (Build 1813) to 1.1 (Build 2076) [Release 1.0 to 1.0]
Sun StorageTek VTL Value System - Version 1.0 (Build 1323) and later [Release 1.0 and later]
All Platforms
***Checked for relevance on 05-08-2011*** (dd-mm-yyyy)

Goal
What NBU parameters are there for performance tuning with VTL?

Solution
For most configurations, the default NetBackup buffer settings are correct and there is no need to adjust them for performance. Furthermore, several factors outside of NetBackup affect performance and should be reviewed first. These external factors include host bus adapter (HBA) cards, SCSI cards, network interface card (NIC) settings, client disk I/O speed, network latency, and tape drive I/O. Review all of these to determine their respective impact on backup and restore speeds before attempting to tune NetBackup.

NUMBER_DATA_BUFFERS: the number of buffers NetBackup uses to stage data before sending it to the tape drives. The default value is 16.
SIZE_DATA_BUFFERS: the size, in bytes, of each buffer set up on the media server; the total buffer memory is SIZE_DATA_BUFFERS multiplied by the NUMBER_DATA_BUFFERS value. The default value is 65536.
NET_BUFFER_SZ: the network receive buffer on the media server, which receives data from the client. The default value is 256K.
Buffer_size: the size of each data packet sent from the client to the media server. The default value is 32K.
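Because the first two parameters multiply together, it is worth doing the arithmetic before changing them. A minimal sketch of the calculation; the tuned values (32 buffers of 256 KB) are illustrative assumptions, not a recommendation from this document:

```sh
# Buffer memory per tape drive = NUMBER_DATA_BUFFERS x SIZE_DATA_BUFFERS.
expr 16 \* 65536      # defaults:        1048576 bytes (1 MB)
expr 32 \* 262144     # example tuning:  8388608 bytes (8 MB), illustrative only
```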
The bptm log on the media server and the bpbkar log on the client each report how often the process had to wait for a buffer; comparing the two counts identifies the bottleneck, as the three examples below show (a sketch for extracting these counters follows the examples).

Example 1:
bptm: 13:16:06.546 [2776.2620] <2> mpx_read_data: waited for empty buffer 0 times, delayed 0 times
bpbkar: 1:16:6.875 PM: [2532.1304] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 0 times for empty buffer, delayed 0 times
Conclusion: Neither the bptm nor the bpbkar log shows any waiting. In this case you could increase the buffers on each side to pass larger amounts of data and possibly see greater performance.
Example 2:
bptm: 12:32:25.937 [2520.2176] <2> write_data: waited for full buffer 19285 times, delayed 18012 times
bpbkar: 12:32:44.875 PM: [1372.135] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 612 times for empty buffer, delayed 651 times
Conclusion: The bptm process is waiting to receive data from the client many thousands of times more than the client is waiting on the bptm process. The bottleneck is on the client; increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS will not improve performance. The key is to find out why the client is slow to pass data to the media server: investigate disk read performance and network throughput.
Example 3:
bptm: 13:31:42.343 [1800.2108] <2> fill_buffer: [2016] socket is closed, waited for empty buffer 18834 times, delayed 33954 times, read 3326864 bytes
bpbkar: 1:30:25.301 PM: [1242.916] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 27420 times for empty buffer, delayed 26525 times
Conclusion: The quantity of waits listed in bpbkar indicates the problem is on the media server, and the bptm log shows it waiting a significant number of times for empty buffers. The client has to wait to send data until there is a place to put it, which indicates the data is not passing to the tape drives fast enough. Increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS will not help; the key is to determine whether the bottleneck is the tape drive write speed or the HBA/SCSI transfer speed.
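The wait summaries can be pulled out of the logs directly. A minimal sketch, assuming the default legacy log directories (/usr/openv/netbackup/logs/bptm on the media server, /usr/openv/netbackup/logs/bpbkar on the client) exist and verbose logging is enabled for both processes:

```sh
# On the media server: bptm buffer-wait summaries.
grep -h "waited for" /usr/openv/netbackup/logs/bptm/log.*

# On the client: bpbkar buffer-wait summaries.
grep -h "waited .* times for empty buffer" /usr/openv/netbackup/logs/bpbkar/log.*
```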
To change the NUMBER_DATA_BUFFERS value, create the file /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS on the media server, containing only the desired number of buffers. To change the SIZE_DATA_BUFFERS value, create the file /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS, containing only the desired buffer size in bytes.
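A minimal sketch of the change on a Unix media server; the values shown are illustrative only, and NET_BUFFER_SZ is set through a touch file in a different directory:

```sh
# Run on the media server. Illustrative values, not recommendations; verify
# HBA/SCSI/tape limits before raising SIZE_DATA_BUFFERS above 65536 (see the
# caution below).
echo "32"     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
echo "262144" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS

# Network receive buffer on the media server (a separate touch file).
echo "262144" > /usr/openv/netbackup/NET_BUFFER_SZ
```

The new values apply to jobs started after the change; the bptm log records the buffer size and count in use, which confirms the files were read.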
Remember that SIZE_DATA_BUFFERS is the size of each buffer set up on the media server, the number of which is defined by NUMBER_DATA_BUFFERS. Exercise caution when changing the value from the default setting, as some SCSI cards, HBA cards, and tape drives cannot transfer buffers larger than 65536 bytes. After changing this value, it is important to test both backups and restores, as sometimes data can be written at the modified size but cannot be read back at that size. Review the specifications of the HBA card, SCSI card, and tape drive to confirm the value does not exceed their limits.

Related Documents:
183702: NET_BUFFER_SZ, SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS - how they work and how to configure them

Attachments
This solution has no attachment