Subject: Re: opinions on disk partitioning
From: David Longo <David.Longo AT HEALTH-FIRST DOT ORG>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 21 Aug 2003 11:52:37 -0400
I have a FAStT200 HA with (20) 73 GB disks and an AIX TSM server.

In brief, when I initially set up, I had (3) 5-member RAID arrays
configured for TSM disk pools and was backing up 400 GB per day.
This got slower as the amount of data increased.  I changed to (5)
3-member RAID arrays and performance was much better.  I am now
backing up 700 - 800 GB in about a 15-hour window from 200 clients
and doing migration and backup stgpool.

I would definitely not use a single LUN for the TSM disk pool.  Also, at
the TSM level, you should have multiple TSM volumes.  For a 500 GB disk
pool, I would have at least 10 TSM volumes, unless you just have a few
big clients.

The idea is not only more spindles but more "arrays" for improved
performance.
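
For illustration only (the pool name, volume sizes, and the /tsm/arrayN
mount points below are made up, with each mount point sitting on a
different RAID array), spreading a 500 GB pool over ten volumes on five
arrays could look something like this from dsmadmc:

   define stgpool diskpool disk highmig=90 lowmig=70
   define volume diskpool /tsm/array1/dp_vol01.dsm formatsize=51200
   define volume diskpool /tsm/array1/dp_vol02.dsm formatsize=51200
   define volume diskpool /tsm/array2/dp_vol03.dsm formatsize=51200
   define volume diskpool /tsm/array2/dp_vol04.dsm formatsize=51200
     (...and so on through /tsm/array5/dp_vol10.dsm)

formatsize is in MB, so 51200 is roughly 50 GB per volume; if your server
level doesn't accept formatsize on define volume, pre-format the files
with dsmfmt first.  With the volumes spread this way, concurrent client
sessions and migration drive I/O across all five arrays instead of one.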


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH      321.434.5536
Pager  321.634.8230
Fax:    321.434.5509
david.longo AT health-first DOT org


>>> leonard AT UKY DOT EDU 08/21/03 08:53AM >>>
I would like to get opinions on disk configurations for a new platform.

I am installing my TSM server on an AIX platform, with FAStT700 disk.

The FAStT700 has (28) 73 GB, 15K disks, of which 23 are available for this
application.

I am considering the following 2 setups, but please feel free to make other
suggestions!

1)  12 drives for the TSM DB, RAID 10.  I only need 100 GB, plus room to
    grow, perhaps double, so this wastes a lot of space, but I get the
    needed spindles.
    The remaining 11 drives would be RAID 5, exported as a single LUN,
    which would have 4 logical volumes for:

             570 GB disk pool
             200 GB disk pool
               20 GB disk pool
               13 GB TSM log

2)  8 drives for the TSM DB, RAID 5.  I only need 100 GB, plus room to
    grow, perhaps double, so this uses less space, but fewer spindles and
    no mirror.
    The remaining 15 drives would be RAID 5, exported as a single LUN,
    which would have 4 logical volumes for:

             500 GB disk pool
             500 GB disk pool
               82 GB disk pool
               13 GB TSM log

#2 gives me more spindles for the disk pools, by using only RAID 5 for the
database partitions.
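
Rough arithmetic behind the two choices (assuming a 12-drive RAID 10
yields 6 drives of usable capacity and an n-drive RAID 5 yields n-1,
before formatting overhead):

   Option 1 DB array:  12 drives RAID 10 ->  6 x 73 GB ~ 438 GB usable
   Option 2 DB array:   8 drives RAID 5  ->  7 x 73 GB ~ 511 GB usable

so either DB array is comfortably over the ~200 GB ceiling, and option 2
gets there with 4 fewer drives, leaving 15 for the pools instead of 11.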

Another concern is having the disk pools compete with each other on the
same disks.  Would it be better to have fewer spindles per disk pool, but
keep the disk pools separate from each other, or to spread all the disk
pools over the same larger number of spindles?

Also, I have had a lot of conflicting information regarding TSM doing
mirrors of the DB and LOG versus letting the FAStT hardware do RAID
protection.  It seems the hardware implementation would be faster, and
just as safe, as letting TSM do mirrors...not to mention allowing me to
spread things out a bit more.
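
For reference, TSM-level mirroring of the DB and log is defined volume by
volume; a minimal sketch (the paths are made up, and the volumes would
have to be formatted beforehand with dsmfmt, or with formatsize= on
server levels that support it) would be:

   define dbvolume  /tsm/db/db01.dsm
   define dbcopy    /tsm/db/db01.dsm   /tsm/dbmir/db01.dsm
   define logvolume /tsm/log/log01.dsm
   define logcopy   /tsm/log/log01.dsm /tsm/logmir/log01.dsm

The copies should sit on different arrays (or at least different LUNs)
than the primaries, otherwise the mirroring adds nothing beyond what the
hardware RAID already provides.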

Any opinions on my options?

Thanks!

leonard

