Veritas-bu

Re: [Veritas-bu] Performance tuning with remote NDMP.

2009-06-17 18:07:59
From: Jeff Cleverley <jeff.cleverley AT avagotech DOT com>
To: william.d.brown AT gsk DOT com
Date: Wed, 17 Jun 2009 16:03:51 -0600


william.d.brown AT gsk DOT com wrote:
Well NDMP logging is done differently, so you may want to search for the 
technotes for that - it will likely give more information.  However, I've 
heard that it can produce an enormous amount of logging.
  
I know.  I've already filled up the file system once :-)
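(For anyone finding this in the archives: the legacy debug logs are enabled
simply by creating the per-process directories. bptm and bpbrm are the standard
directory names on a media server; whether your release adds an NDMP-specific
one is version-dependent, so check the technotes.)

```
# Creating a legacy log directory enables logging for that process;
# remove the directory, or prune it regularly, to keep the file
# system from filling up again.
mkdir -p /usr/openv/netbackup/logs/bptm
mkdir -p /usr/openv/netbackup/logs/bpbrm
```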
I've not tried remote NDMP any time recently, so I can't claim real-world 
experience.  I have read the NetApp documents, and you'll find them on the 
filer if installed.  In particular there is information about the 
parameters that can be set by adding SET commands to the backup 
selection list.  Really there is very little, but I do recall it saying 
that for NDMP you could not change the block size from 64KB even though 
the setting exists.  Now I'm not sure how that applies to remote NDMP, but 
I expect you'll see it in the logs.  I suspect that somewhere in the 
path, even though you are using the tape mover in your Media Server 
and the disk mover in the NetApp, that 64KB will be in force.

That is going to limit the bandwidth and slow you down.
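A selection list with SET directives looks something like the sketch below.
HIST and UPDATE are documented NDMP environment variables, but which ones a
given filer honours is vendor-dependent, and the volume paths here are
placeholders:

```
SET HIST=y
SET UPDATE=y
/vol/vol1
/vol/vol2
```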
  
I'll dig up the NDMP manual and give it another look.  From what I saw, it seemed to treat remote NDMP the same as any other client backup, with a few differences.
You are also, I'd guess, putting the backup data and the catalog 
information over the same link.  AFAIR the catalog information is sent at 
each change of path to the 'NetBackup for NDMP' server, in this case your 
Media Server.  That has to grab the information and send it on to the 
Master Server each time so the catalog gets updated.  I understand that 
every time it does that there is another DNS lookup; as you turn up the 
logging you will see a lot of activity.  This is one reason why it is 
recommended not to put the NDMP agent on the Master.
  
The master server is the media server, so I wouldn't think this would hurt too much.
You might look at the network settings on the RHEL server to make sure 
you've done all the TCP tuning of receive buffers and the like.  You 
might be able to turn on jumbo frames on the filer and the RHEL server, if 
your network allows.  I'd say 110 MB/s was pretty good; I wish I got that 
anywhere.
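The usual knobs on the RHEL side live in /etc/sysctl.conf; the values below
are purely illustrative, not recommendations, so test against your own
network (and run sysctl -p after editing):

```
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 262144 8388608
net.ipv4.tcp_wmem = 4096 262144 8388608
```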
  
Unfortunately jumbo frames are not an option at this point.  I have looked 
into doing a direct connection between the NetApp port and the master 
server's 10G ports, which would allow jumbo packets, but that presents 
other logistical issues that I can't really work around right now.  The 
110 MB/s isn't bad until I put it in the perspective of having to back up 
70+ TB of data, with a number of 6 TB file systems mixed in with a bunch 
of smaller ones.  With no multiplexing of NDMP streams, I'm going to need 
a lot of drives to get things to run in a timely manner.
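To put numbers on that, a back-of-envelope estimate (assuming 1 TB =
1024*1024 MB and a sustained 110 MB/s on a single stream):

```shell
# ~7.7 days for 70 TB through one 110 MB/s pipe -- hence the need
# for many drives once multiplexing is off the table.
awk 'BEGIN { tb = 70; rate = 110
             secs = tb * 1024 * 1024 / rate
             printf "%.1f days\n", secs / 86400 }'
```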

Thanks,

Jeff
William D L Brown


veritas-bu-bounces AT mailman.eng.auburn DOT edu wrote on 17/06/2009 20:58:29:

  
Greetings,

I found some posts from April about people trying to get NDMP 
performance out of a NetApp who seem to stall out around 120 MB/s. 
I didn't find any posts that detailed why.  I know that a trunked 
connection of 1-gig ports will still only use one 1-gig port for a 
point-to-point connection, and not the entire aggregated bandwidth. 
I didn't know if this was the issue or not.

Our setup is a 6030 running Data ONTAP 7.2.5.1 and a RHEL4 Linux server 
running 6.5.3.  Both the NetApp and the master server have 10G 
network cards on a private network.  When I do a dump to null on the 
filer I can average over 200 MB/s.  When I do a dd to a tape drive 
on the master I can get over 120 MB/s to a Gen 3 LTO.  When I do 
backups of a single file system I can never exceed 70 MB/s.  Running 
multiple backup jobs has maxed out at 110 MB/s.  I have verified 
through the network switch traffic that the data is going between 
the 10G ports.
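For reference, a raw write test of that sort is roughly the following; the
device node, block size, and count are assumptions, so substitute your own:

```
# ~4 GB of zeros straight to the drive, 256 KB per write
dd if=/dev/zero of=/dev/nst0 bs=262144 count=16384
```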

I've gone through the performance tuning guide and played with the 
buffer settings for NET_BUFFER_SZ, SIZE_DATA_BUFFERS_NDMP, 
NUMBER_DATA_BUFFERS, and SIZE_DATA_BUFFERS.  I can make performance 
worse, but can never get better than what is listed above.  One thing 
I've noticed is that even though I've made changes to the VERBOSE 
statement in the bp.conf file and restarted NetBackup, I never get 
the wait and delay messages in the bptm logs.  The master server 
properties have all been set to maximum logging.  Obviously there is no 
bpbkar log file on the filer to look at.  Am I missing something to 
get that turned on?
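For anyone searching later: those settings are plain touch files under the
install tree.  A typical illustrative set is below; the sizes are examples,
not recommendations, they must be values the tape drive accepts, and note
that NET_BUFFER_SZ sits one level up from the others:

```
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP
echo 16     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
```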

Thanks,

Jeff
-- 

Jeff Cleverley
Unix Systems Administrator
4380 Ziegler Road
Fort Collins, Colorado 80525
970-288-4611
jeff.cleverley AT avagotech DOT com
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
    

