Sun Microsystems, Inc.  Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition

Asset ID: 1-71-1014170.1
Update Date: 2011-08-11
Keywords:

Solution Type: Technical Instruction

Solution 1014170.1: VTL - What NBU parameters are there for performance tuning with VTL?


Related Items
  • Sun StorageTek VTL Storage Appliance
  • Sun StorageTek VTL Prime System
  • Sun StorageTek VTL Plus Storage Appliance
  • Sun StorageTek VTL Value System
Related Categories
  • PLA-Support>Sun Systems>TAPE>Virtual Tape>SN-TP: VTL
  • .Old GCS Categories>Sun Microsystems>Storage - Tape>Tape Virtualization

Previously Published As
220520


Applies to:

Sun StorageTek VTL Plus Storage Appliance - Version: 1.0 - Build 1323 to 2.0 - Build 1656 - Release: 1.0 to 2.0
Sun StorageTek VTL Storage Appliance - Version: 4.0 - Build 1221 and later    [Release: 4.0 and later]
Sun StorageTek VTL Prime System - Version: 1.0 - Build 1813 to 1.1 - Build 2076   [Release: 1.0 to 1.0]
Sun StorageTek VTL Value System - Version: 1.0 - Build 1323 and later    [Release: 1.0 and later]
All Platforms
***Checked for relevance on 05-08-2011*** (dd-mm-yyyy)

Goal

What NBU parameters are there for performance tuning with VTL?

Solution

 
For most configurations, the default NetBackup buffer settings are appropriate and do not need to be adjusted for performance.

Furthermore, there are factors outside of NetBackup which affect performance and should be reviewed. Some of these external factors include Host Bus Adapter (HBA) cards, SCSI cards, network interface card (NIC) settings, client disk I/O speed, network latency, and tape drive I/O. All of these should be reviewed to determine their respective impact on backup and restore speeds before any attempt is made to tune NetBackup.

On a Windows server, four different buffer settings can be modified to enhance backup performance. Those settings are:

 NUMBER_DATA_BUFFERS: The number of buffers used by NetBackup to buffer data prior to sending it to the tape drives. The default value is 16.

 SIZE_DATA_BUFFERS: The size, in bytes, of each individual buffer; the total shared memory used is this value multiplied by NUMBER_DATA_BUFFERS (a worked example follows this list). The default value is 65536.

 NET_BUFFER_SZ: The network receive buffer on the media server. It receives data from the client. The default value is 256K.

 Buffer_size: The size of each data package sent from the client to the media server. The default value is 32K.
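
As a rough illustration of how these values interact, the shared memory the media server sets aside per drive is simply NUMBER_DATA_BUFFERS multiplied by SIZE_DATA_BUFFERS. The short Python sketch below only performs that arithmetic with the defaults quoted above; the larger figures in the final comment are hypothetical tuning values, not recommendations.

# Shared memory used for tape buffering is NUMBER_DATA_BUFFERS * SIZE_DATA_BUFFERS.
# The values below are the NetBackup defaults quoted in this article.
NUMBER_DATA_BUFFERS = 16
SIZE_DATA_BUFFERS = 65536          # bytes (64 KB)

total_bytes = NUMBER_DATA_BUFFERS * SIZE_DATA_BUFFERS
print(f"{total_bytes} bytes ({total_bytes // 1024} KB) of shared memory per drive")
# Defaults: 1048576 bytes (1024 KB, i.e. 1 MB) per drive.
# A hypothetical change to 32 buffers of 262144 bytes each would use 8 MB per drive.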


Overview:
When a backup is initiated, the client packages data in chunks of the size specified by Buffer_size and transfers them to the media server, which buffers the incoming data in NET_BUFFER_SZ. When NET_BUFFER_SZ is full, the data is moved into the pool of shared memory defined by the combination of NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS. As soon as at least one of the data buffers is full, and assuming the drive is ready to write, its contents are written to the tape drive.
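
The same flow can be illustrated as a small counting exercise. The Python sketch below pushes a pretend 1 MB of client data through the default buffer sizes listed above; it is only an illustration of the chain, not NetBackup code.

# Counting exercise for the buffer chain: client Buffer_size packets fill the
# media server's NET_BUFFER_SZ, which in turn fills SIZE_DATA_BUFFERS-sized
# shared buffers that are written to the tape drive. Default sizes assumed.
BUFFER_SIZE = 32 * 1024            # Buffer_size on the client
NET_BUFFER_SZ = 256 * 1024         # network receive buffer on the media server
SIZE_DATA_BUFFERS = 64 * 1024      # one shared data buffer

client_data = 1 * 1024 * 1024      # pretend 1 MB of backup data

packets = client_data // BUFFER_SIZE           # sends from client to media server
net_flushes = client_data // NET_BUFFER_SZ     # times NET_BUFFER_SZ fills and drains
tape_writes = client_data // SIZE_DATA_BUFFERS # data buffers handed to the drive

print(f"{packets} client packets -> {net_flushes} network buffer flushes -> "
      f"{tape_writes} tape writes")
# With the defaults: 32 client packets -> 4 network buffer flushes -> 16 tape writes.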


To troubleshoot performance issues related to buffer settings, enable and review the bptm log on the media server and the bpbkar log on the client. On the media server, go to the \Veritas\NetBackup\Logs directory and create a bptm folder. On the client, go to the \Veritas\NetBackup\Logs directory and create a bpbkar folder. The logs are written the next time a backup runs.

Examine the bptm and bpbkar logs for references to waits and buffers and compare one side with the other. Individually, each side's number of waits means little; only when it is compared with the opposite side can you deduce where a potential bottleneck is (a small helper for pulling these counters out of the logs follows the three examples below):

Example 1:
--------------------

bptm:

13:16:06.546 [2776.2620] <2> mpx_read_data: waited for empty buffer 0 times, delayed 0 times

bpbkar:

1:16:6.875 PM: [2532.1304] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 0 times for empty buffer, delayed 0 times

Conclusion: Neither the bptm nor the bpbkar process is waiting. This means you could increase the buffers on each side to pass larger amounts of data and possibly see better performance.


Example 2:
--------------------

bptm:

12:32:25.937 [2520.2176] <2> write_data: waited for full buffer 19285 times, delayed 18012 times

bpbkar:

12:32:44.875 PM: [1372.135] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 612 times for empty buffer, delayed 651 times

Conclusion: The bptm process is waiting to receive data from the client many thousands of times more often than the client is waiting on the bptm process. The bottleneck here is on the client. Increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS will not improve performance. Finding out why the client is slow to pass data to the media server is the key. Investigate disk read performance and network throughput performance.


Example 3:
--------------------

bptm:

13:31:42.343 [1800.2108] <2> fill_buffer: [2016] socket is closed, waited for empty buffer 18834 times, delayed 33954 times, read 3326864 bytes

bpbkar:

1:30:25.301 PM: [1242.916] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 27420 times for empty buffer, delayed 26525 times

Conclusion: The quantity of waits listed in bpbkar indicates the problem is on the media server. In the bptm log we see it waiting a significant number of times for empty buffers, and the client is waiting to send data until there is somewhere to put it. This indicates the data is not being passed to the tape drives fast enough. Increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS will not help. The key here is to determine whether the performance bottleneck is the tape drive write speed or the HBA/SCSI transfer speed.
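
The side-by-side comparison shown in the three examples above can be pulled out of the logs mechanically. The Python sketch below is a rough helper that scans a bptm log and a bpbkar log for the "waited ... times, delayed ... times" counters; the log file paths and names are placeholders and should be pointed at your own log directories.

# Rough helper: extract (waited, delayed) counters from NetBackup logs so the
# bptm and bpbkar sides can be compared directly. Paths below are placeholders.
import re
from pathlib import Path

WAIT_RE = re.compile(r"waited (?:for [\w ]+? )?(\d+) times.*?delayed (\d+) times")

def wait_counts(log_path):
    """Return a list of (waited, delayed) pairs found in one log file."""
    text = Path(log_path).read_text(errors="ignore")
    return [(int(w), int(d)) for w, d in WAIT_RE.findall(text)]

print("bptm  :", wait_counts(r"C:\Program Files\Veritas\NetBackup\Logs\bptm\mmddyy.log"))
print("bpbkar:", wait_counts(r"C:\Program Files\Veritas\NetBackup\Logs\bpbkar\mmddyy.log"))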


NUMBER_DATA_BUFFERS

To change NUMBER_DATA_BUFFERS, create a file named NUMBER_DATA_BUFFERS in the \Veritas\NetBackup\db\config directory on the media server. The file should contain only the desired number of buffers; the new value is read when the next backup job starts (see document 183702 under Related Documents for details).
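
A minimal sketch of that change, assuming the touch-file convention described in document 183702 under Related Documents (a plain text file containing only the desired integer); the install path and the value 32 are examples, not recommendations:

# Minimal sketch: create the NUMBER_DATA_BUFFERS touch file on the media server.
# Path and value are assumptions for illustration; verify against document 183702.
from pathlib import Path

config_dir = Path(r"C:\Program Files\Veritas\NetBackup\db\config")
(config_dir / "NUMBER_DATA_BUFFERS").write_text("32\n")   # example value only
print("NUMBER_DATA_BUFFERS touch file written:", config_dir / "NUMBER_DATA_BUFFERS")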


SIZE_DATA_BUFFERS

Remember, this is the size of each buffer set up on the media server; the number of buffers is defined by NUMBER_DATA_BUFFERS. Exercise caution when changing the value from the default, as some SCSI cards, HBA cards, and tape drives cannot transfer buffers larger than 65536 bytes. After changing this value, it is important to test both backups and restores, as data can sometimes be written at the modified size but cannot be read back at that size. Review the specifications of the HBA card, SCSI card, and tape drive to confirm the configured value does not exceed their limits.
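
Along the same lines, a rough sanity check before raising SIZE_DATA_BUFFERS: keep the value a multiple of 1024 and below the smallest maximum transfer size supported by the HBA, SCSI card, and tape drive in the path. HARDWARE_MAX in the sketch below is a placeholder; take the real figure from the vendor specifications.

# Rough sanity check before writing a new SIZE_DATA_BUFFERS value.
HARDWARE_MAX = 262144      # placeholder: smallest max transfer size in the path
proposed = 262144          # candidate SIZE_DATA_BUFFERS value (example only)

if proposed % 1024:
    print("Use a multiple of 1024")
elif proposed > HARDWARE_MAX:
    print("Exceeds what the HBA/SCSI card/tape drive can transfer")
else:
    print("Looks acceptable; test both a backup and a restore before rolling it out")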


Related Documents:

183702: NET_BUFFER_SZ, SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS - how they work and how to configure them
http://www.symantec.com/business/support/index?page=content&id=TECH1724


281842: Veritas NetBackup (tm) Enterprise Server / Server 6.0 Backup Planning and Performance Tuning Guide for UNIX, Windows, and Linux
http://www.symantec.com/business/support/index?page=content&id=TECH46281


Attachments
This solution has no attachment
  Copyright © 2012 Sun Microsystems, Inc.  All rights reserved.