Veritas-bu

[Veritas-bu] performance?

2006-02-07 11:17:12
Subject: [Veritas-bu] performance?
From: pkeating AT bank-banque-canada DOT ca (Paul Keating)
Date: Tue, 7 Feb 2006 11:17:12 -0500

I'm running a Sun Fire V880.

4x 1.2 GHz UltraSPARC III+ processors
8 GB RAM

6 internal 72 GB disks.

1st pair of disks: mirrored OS (/, /usr, /opt, etc.)
2nd pair of disks: mirrored /opt/openv (replicated to a standby system using
Veritas Volume Replicator)
3rd pair of disks: one slice mirrored for VVR's SRL logs; the remainder of
the two disks concatenated into a 96 GB filesystem used as a DSSU for a
couple of small clients on 10 Mb/s half-duplex encrypted links.

Pair of FC200 HBAs in 66 MHz PCI slots.
Pair of GigaSwift GigE cards in 33 MHz PCI slots (there are only two 66 MHz
slots on the machine).

The HBAs each feed 7 tape drives and 1 robot, using Inline Tape Copy. The
robots are essentially mirrored: one on site, one at the end of a DWDM link.
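To put the plumbing in perspective, here's the rough back-of-envelope budget
I've been working from, as a small Python sketch. All the ceilings are
theoretical; the 64-bit slot width and 2 Gb/s FC speed are my assumptions,
and the one hard fact from our setup is that Inline Tape Copy writes every
byte to two drives:

# Rough I/O budget for the backup path. Ceilings are theoretical;
# real-world throughput will be lower. Bus widths and FC speed are
# assumptions, not verified specs.
GIGE_MBS = 125            # 1 Gb/s, per GigaSwift card
PCI_33MHZ_MBS = 266       # 64-bit/33 MHz PCI bus (assumed width)
PCI_66MHZ_MBS = 533       # 64-bit/66 MHz PCI bus (assumed width)
FC_MBS = 200              # per FC200 HBA, assuming 2 Gb/s FC

client_in = 65            # observed aggregate client data, MB/s
tape_out = client_in * 2  # ITC writes each stream to two drives

print(f"in : {client_in} MB/s over 2x GigE (ceiling {2 * GIGE_MBS} MB/s)")
print(f"out: {tape_out} MB/s over 2x FC (ceiling {2 * FC_MBS} MB/s)")
print(f"bus ceilings: {PCI_33MHZ_MBS} MB/s per 33 MHz bus, "
      f"{PCI_66MHZ_MBS} MB/s per 66 MHz bus")
print(f"total PCI traffic: ~{client_in + tape_out} MB/s, plus memory copies")

None of those theoretical ceilings is obviously exhausted at 65 MB/s of
client data, which is part of what puzzles me.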
I previously had only 3 drives at each site, and added an additional 4 at
each site two weeks ago.

With the 3 drives per site, I was getting approximately 60 MB/s of total
data to the three drives, so an average of 20 MB/s per drive.
The additional drives were added for resiliency. Management wanted two STUs
(storage units), one for dev and one for prod servers, so now we have an STU
with 4 drives available for prod and 3 drives available for dev. That way,
if a client or two stuck at half duplex hangs up a drive all night, the
remainder of the jobs can still finish.
The problem is that, after adding the new tape drives, we don't get any more
total throughput; it seems stuck at about 60-65 MB/s, but now spread among
twice the tape drives. This means that, since more machines are backing up
concurrently (allowed because of the increased number of drives), each
machine is backing up slower. In effect, each machine is taking twice as
long to back up, which is causing some major issues.
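The arithmetic of the plateau, assuming the aggregate really is capped
around 62 MB/s (the midpoint of what we see):

aggregate = 62.0  # MB/s, observed plateau

for drives in (3, 7):
    print(f"{drives} drives busy: ~{aggregate / drives:.0f} MB/s per drive")
# 3 drives: ~21 MB/s each; 7 drives: ~9 MB/s each. Each job runs
# at less than half the old rate, so backups take roughly twice
# as long, which matches what we're seeing.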
Anyone know of any particular configs or issues that may be affecting us
here? Any benchmarks on the processing required to manage this many Inline
Tape Copy jobs? We didn't see a performance hit in the lab, but we never had
a system this big in the lab to really load it.
I've been doing constant iostat and netstat monitoring: no waits, queues,
errors, or collisions anywhere, except at one point for about 15 minutes
during the full-backup window on the weekend, when a few waits accumulated
on the disks that /opt/openv resides on. But the performance during that
period was the same as during the remaining 24+ hours, where there were no
waits.
Any new ideas would be welcome.
Thanks,
Paul

