Subject: Re: [Networker] Parallelism???
From: Robert Maiello <robert.maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 27 Jun 2005 10:19:54 -0400
Also, did you look into the rx-dma-weight and tx-dma-weight parameters
for /dev/ce? One is supposed to be able to give receives more weight
with those. I can't say I noticed a difference when setting them, though.
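
For anyone wanting to experiment, a sketch of one way to try them.
Whether these names are exposed through ndd depends on the ce driver
revision, so list the tunables with "ndd /dev/ce \?" first; the weight
values below are placeholders, not recommendations:

  # Select the ce instance to tune, then bias DMA scheduling toward
  # receives (placeholder values; verify supported names and ranges
  # with "ndd /dev/ce \?" on your driver revision).
  ndd -set /dev/ce instance 0
  ndd -set /dev/ce rx-dma-weight 3
  ndd -set /dev/ce tx-dma-weight 1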

Robert Maiello
Pioneer Data Systems


On Sat, 25 Jun 2005 11:08:18 +0200, Oscar Olsson <spam1 AT QBRANCH DOT SE> wrote:

>On Fri, 24 Jun 2005, Robert Maiello wrote:
>
>RM> Yes, it seems the Cassini interface can do 900+ Mbit/s, but Sun
>RM> recommends 3 to 4 UltraSPARC IIIs per NIC. We can't use jumbo frames
>RM> either.
>
>OK, do you know of any recommendations from Sun for the bge
>(Broadcom) interfaces? It would be interesting to know how they compare
>in regard to CPU usage for the same type of network I/O.
>
>RM> I hear Solaris 9, and of course Solaris 10, make some strides in
>RM> network performance. Still, I'd love to hear if anyone is maxing out
>RM> 2 or more NICs, and what Sun hardware they use to do it. Do the
>RM> UltraSPARC IVs help in any way here, i.e. 2 ce worker threads per
>RM> CPU? Of course, PCI bus bandwidth becomes an issue as well...
>
>We're running Solaris 9 with 4 UltraSPARC IIIi CPUs (1 MB cache, 1281
>MHz) on a Sun V440. PCI bandwidth shouldn't be a problem, since the
>NICs are on two different buses. There is just no way we can max out
>both NICs on this hardware, at least not without using jumbo frames,
>and I doubt those would add more than 10-20% extra performance; we'd
>need to double it. :)
>
>Anyway, yesterday I decided to see if I couldn't tune kernel and driver
>parameters to increase performance. I used a few documents I found on
>Google and compared them (they tend to conflict somewhat), and then I
>came up with a few settings that seem better than the previous
>defaults. I'm pretty sure they're not optimal, but they gave about 10%
>higher backup throughput on our NetWorker server. This is clearly
>visible on the MRTG graph, which is drawn from load data on the
>port-channel interface on the switch (i.e. ce0 and ce1 aggregated,
>since we're running Sun Trunking 1.3).
>
>I was thinking of applying "set ce:ce_taskq_disable = 1", but some
>documents suggested that this might be a bad idea. Does anyone know
>whether it should be enabled or disabled, and under what circumstances?
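
For reference, that line goes in /etc/system and only takes effect
after a reboot. A minimal sketch; the comment reflects how the tunable
is commonly described, not official Sun documentation:

  # /etc/system: disable the ce driver's task queues, so received
  # packets are processed in the interrupt thread rather than handed
  # off to kernel worker threads. Requires a reboot to take effect.
  set ce:ce_taskq_disable = 1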
>
>This is what I applied (no warranty that this will work for you. Who
>knows, it might even mess up your system instead? :) ):
>
>---
>
>bash-2.05# pwd
>/etc/rc2.d
>bash-2.05# more S94local
>#!/sbin/sh
># L
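
The rest of the script was cut off in the archive. Purely to illustrate
the kind of settings such a boot script typically applied on Solaris 9
(these are not Oscar's actual values), it might have looked something
like:

  #!/sbin/sh
  # Illustrative Solaris 9 TCP tuning at boot; values are examples only.
  ndd -set /dev/tcp tcp_max_buf 4194304      # ceiling for socket buffer sizes
  ndd -set /dev/tcp tcp_cwnd_max 2097152     # larger max congestion window
  ndd -set /dev/tcp tcp_xmit_hiwat 400000    # default send buffer size
  ndd -set /dev/tcp tcp_recv_hiwat 400000    # default receive buffer size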

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listserv.temple DOT edu or visit the list's Web site at
http://listserv.temple.edu/archives/networker.html where you can
also view and post messages to the list. Questions regarding this list
should be sent to stan AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
