Networker

Subject: Re: [Networker] Best way to evaluate drive speed?
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 12 Jul 2006 20:49:37 -0400
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Thanks.

I also see a utility, /usr/sbin/tapeexercise, but I'm not sure where that
comes from. Is it provided by Legato or by Linux? Has anyone used it?

I assume tape_perf_test is Legato's version of HPTapePerf, which some
other folks suggested and which can be downloaded from HP.

George

Peter Viertel wrote:
> In the 7.3 networker package I found a new tool.
> /usr/sbin/tape_perf_test  that will answer most of your questions....  I
> don't think you need to install all of 7.3 - just get the binary onto
> the right server....
> 
> 
> 
> -----Original Message-----
> From: Legato NetWorker discussion [mailto:NETWORKER AT listserv.temple DOT 
> edu]
> On Behalf Of Itzik Meirson
> Sent: Wednesday, 12 July 2006 2:02 PM
> To: NETWORKER AT listserv.temple DOT edu
> Subject: Re: [Networker] Best way to evaluate drive speed?
> 
> The best procedure for running a speed test on the drives is to use the
> "bigasm" directive described in the "Performance Tuning Guide".
> This does a backup from memory, thus testing all components from
> "client to device".
> If you use this procedure on the backup server you get the maximum
> throughput the backup server is able to push to your drives.
> You should of course play with "client parallelism" and "target
> sessions" to distribute the backup to single/multiple drives with
> varying multiplexing.
> If you run this for your storage node client, you will get the maximum
> throughput for the storage node drives.
> If you run this for a "general" client in your data zone, you will also
> be testing the client's ability to push the data through the network to
> your designated drives.
> Itzik
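The bigasm procedure described above can be sketched roughly as follows. The directive syntax, size, and file name here are assumptions from memory, so check them against the Performance Tuning Guide before relying on them. Put a line like

```
bigasm -S1GB : testfile
```

in a .nsr directive file applied to the path being backed up, so that save generates the data in memory instead of reading it from disk, then run a full of that path while varying client parallelism and the devices' target sessions between runs.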
> 
> -----Original Message-----
> From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT 
> EDU]
> On Behalf Of George Sinclair
> Sent: Wednesday, July 12, 2006 03:16
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: [Networker] Best way to evaluate drive speed?
> 
> I think I'm asking too many questions here, but it's related so ...
> 
> How do we best determine if we're getting reasonable write speed on our
> tape drives?
> 
> We have 4 SDLT 600 drives in an LVD Quantum library attached to a Linux
> storage node. We're using a SCSI interface with dual channel host
> adapter cards (LSI 22320 Ultra320), which supply 160 MB/sec per channel.
> Drives 1-2 are daisy chained to channel A. Drives 3-4 are daisy chained
> to channel B. Perhaps each drive should be on its own channel, but given
> the speed of the channel it seemed OK to have two drives share one. I
> was not going to put more than 2 drives per channel, however. But the
> burst transfer speed is listed as 160 MB/sec max, so with two drives
> that would be 320, so maybe they should each be on separate channels.
> Anyway, the product information indicates that these drives have a
> speed of 35 MB/sec native, 70 MB/sec with compression. I assume this is
> a best case scenario, and I doubt we'll be able to match those numbers
> in practice, and it might make a difference whether the data is local
> to the snode or coming over the network, but how do we determine if
> we're getting good results?
> 
> This brings up 3 questions:
> 1. What are the vendors doing to claim these numbers? How are they
> writing the data during the tests so as to optimize the speed to claim
> the best possible results?
> 
> 2. What is a good way to determine if the drives in your library are
> functioning at their proper speeds?
> 
> 3. Does sending more save sets to a tape device really increase the
> speed, and if so, why? I think I've seen this behavior before, wherein
> the speed increases (up to a point) when more sessions are running, but
> maybe that was coincidental. I mean, why would one large save set (one
> large enough that a full takes a while, so there's no shoe-shining
> effect and the drive can just stream along) not do the same as multiple
> sessions?
> 
> I was thinking to first run some non-Legato tests by tarring a 2 GB
> directory on the storage node directly to one of the devices, timing
> the operation and then doing the math, followed by a tar extract to
> ensure it worked correctly. I don't think I can run multiple concurrent
> tar sessions to the same device, though, like multiplexing in
> NetWorker. Will this still yield a good idea of the drive's speed? I'm
> not sure this would fill the buffer fast enough or generate as good a
> burst speed as running multiple save sets in NetWorker, but that gets
> back to question 2 above.
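The tar-based timing test described above can be sketched as a small script. The device path /dev/nst0, the test directory, and the elapsed time used in the arithmetic are all assumptions for illustration:

```shell
#!/bin/sh
# Rough sketch of the single-stream tar test. On the storage node you
# would run something like (device path is an assumption):
#   mt -f /dev/nst0 rewind
#   time tar -cf /dev/nst0 /export/testdir   # note the elapsed time
#   mt -f /dev/nst0 rewind
#   tar -tvf /dev/nst0                       # verify the archive reads back
# Then convert the data size and elapsed time into throughput:
SIZE_MB=2048     # 2 GB test directory
ELAPSED_S=75     # hypothetical wall-clock seconds reported by time(1)
awk -v mb="$SIZE_MB" -v s="$ELAPSED_S" \
    'BEGIN { printf "%.1f MB/sec\n", mb / s }'
```

Comparing that figure against the drive's 35 MB/sec native rating then shows how far a single stream falls short.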
> 
> If sending more sessions to the device does increase the speed, then
> when using NetWorker to send multiple sessions to the device, so as to
> better fill the buffer and increase drive speed, how can I best capture
> the results? Is it enough to simply create a group, place the snode in
> the group, specify 'All' for the client's save sets, launch a full,
> then look at the start and completion times for the group and the total
> amount backed up, and do the math to get an average drive speed? There
> are 5 file systems on the snode, and 4 are very small (under 300 MB);
> the exception is /usr, which is about 6 GB, so we all know which one
> will still be cranking on a full long after the others are done.
> Will this be a fair test? Maybe it would be better to create, say, 4
> separate named paths of 1 GB each and list those as the save sets?
> Again, this gets back to question 2 above.
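The "do the math" step for the group test can be sketched as below. The start/finish times and the total backed up are hypothetical numbers, not measurements:

```shell
#!/bin/sh
# Sketch of turning a group's start/finish times and total amount backed
# up into an average drive speed. All values here are hypothetical.
START="21:00:00"
END="21:45:00"
TOTAL_MB=7168    # ~7 GB total across the snode's five file systems
awk -v s="$START" -v e="$END" -v mb="$TOTAL_MB" '
BEGIN {
    split(s, a, ":"); split(e, b, ":")
    secs = (b[1]-a[1])*3600 + (b[2]-a[2])*60 + (b[3]-a[3])
    printf "%d sec elapsed, %.1f MB/sec average\n", secs, mb / secs
}'
```

Note this yields an average across all sessions; once the small file systems finish, the tail end of the run measures only the single /usr stream.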
> 
> Thanks.
> 
> George
>  
> 
> To sign off this list, send email to listserv AT listserv.temple DOT edu and
> type "signoff networker" in the body of the email. Please write to
> networker-request AT listserv.temple DOT edu if you have any problems with this
> list. You can access the archives at
> http://listserv.temple.edu/archives/networker.html or via RSS at
> http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
> 
> 

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEtZieKOIon/nxC8YRAqA/AKCHjKALlqhH7xzzTp/MGYv0Zl9cIACgmiW/
PdLEqSy9hEDUFY+rIEv6gic=
=oQpT
-----END PGP SIGNATURE-----
