Subject: Re: [ADSM-L] strange tape handling
From: Roger Deschner <rogerd AT UIC DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 20 Sep 2013 12:21:55 -0500
I see this strange behavior as well. All the tapes are READWRITE and
FILLING. It's the same on TSM V5 and V6.

Sometimes this arises when you have multiple migration processes. A
rough guess is that you will end up with (migprocesses *
collocationgroups) FILLING tapes. For example, 4 migration processes
working through 10 collocation groups could leave 40 FILLING tapes.
That could be a lot of FILLING tapes, and in real life there are often
even more than (migprocesses * collocationgroups) of them.

I think this can also come about due to tape thrashing as migration
works its way through collocation groups. If you have two tape drives,
and the FILLING tape for a collocation group is mounted and in use by
one migration process, then a second migration process that wants to
migrate files for another node in that same collocation group cannot
use that tape, so it allocates a new scratch tape instead. One change
I'd suggest to TSM development would be to migrate all nodes in a
collocation group together. This would improve both tape utilization
and performance.
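
In the meantime, one workaround (at the cost of migration throughput)
is to run a single migration process, so two processes can never
compete for the same collocation group's FILLING tape. The pool name
here is just an example:

   update stgpool DISKPOOL migprocess=1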

Even that does not explain it all. I still see scratch tapes allocated
when there are FILLING READWRITE tapes available for the same
collocation group. It is still strange.
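
You can spot those volumes from the administrative command line with a
query along these lines (substitute your own tape pool name for
TAPEPOOL):

   select volume_name, access, status, pct_utilized from volumes where stgpool_name='TAPEPOOL' and status='FILLING' and access='READWRITE'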

If the number of scratch tapes is getting low, I MOVE DATA off of the
1% full volumes. There is so little data on a 1% full tape that you
can typically afford to move it back into a disk stgpool, where it
will be re-migrated.
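
For example (the volume and pool names are placeholders):

   move data VOL001 stgpool=DISKPOOL

Once the tape is empty it goes back to scratch (after the pool's
REUSEDELAY, if you have one set), provided it was acquired as a
scratch volume in the first place.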

I wish it didn't work this way, but I don't worry about it too much.
The 99% empty FILLING tapes will be used as soon as they are really
needed. I start to worry when the situation forces the server to
ignore collocation boundaries, which can happen when the number of
scratch tapes reaches 0. Pay attention to setting MAXSCRATCH to the
actual number of tapes in your tape library, and to setting ESTCAP in
the devclass to how much data you can actually put on a tape after
compression. Then your %full statistics will accurately reflect the
total amount of unused space left in the stgpool, on FILLING tapes as
well as SCRATCH tapes. Our real-life compression ratio runs about
1.4:1.
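
Something along these lines (the pool name, devclass name, and numbers
are examples; at a 1.4:1 ratio, a nominally 700 GB TS1120 cartridge
works out to roughly 1000 GB):

   update stgpool TAPEPOOL maxscratch=500
   update devclass 3592CLASS estcapacity=1000G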

This annoying but not critical problem is even worse with FILE
stgpools, which is another reason we use scratch FILE volumes rather
than preallocated ones. A 1% full scratch FILE volume only takes up 1%
of the space, whereas a 1% full preallocated volume (like a tape)
takes up 100% of the space.
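
A minimal sketch of the scratch-volume approach (the directory, names,
and sizes are made up):

   define devclass FILECLASS devtype=file maxcapacity=50G mountlimit=20 directory=/tsm/filepool
   define stgpool FILEPOOL FILECLASS maxscratch=200

With MAXSCRATCH set, the server creates FILE volumes on demand and a
volume only occupies as much disk as it actually holds; a volume
preallocated with DEFINE VOLUME ... FORMATSIZE=nnn claims its full
size up front.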

Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu
======I have not lost my mind -- it is backed up on tape somewhere.=====


On Thu, 19 Sep 2013, Lee, Gary wrote:

>A few were readonly.  However, that did not explain all of them by a long way.
>Others are filling.
>
>-----Original Message-----
>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
>Of Prather, Wanda
>Sent: Thursday, September 19, 2013 4:05 PM
>To: ADSM-L AT VM.MARIST DOT EDU
>Subject: Re: [ADSM-L] strange tape handling
>
>What is the access setting on the tape, is it READWRITE? And is status FULL or 
>FILLING?
>
>-----Original Message-----
>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
>Of Lee, Gary
>Sent: Thursday, September 19, 2013 2:05 PM
>To: ADSM-L AT VM.MARIST DOT EDU
>Subject: [ADSM-L] strange tape handling
>
>Tsm server 6.2.4 on RHEL 6
>
>Tape storage pool on ts1120 tapes, collocation set to group.
>
>Node libprint does not belong to any group.  Its data is now on two volumes.
>First volume has estimated capacity 1 TB.
>It is 1.4% full.
>
>This morning, tsm was migrating its data from disk pool to tape.  It pulled 
>another scratch to store systemstate data even though the original volume was 
>only 1.4% full as stated above.
>
>What is up with this?  This is causing my scratch tapes to be eaten up to no 
>purpose.
>
