[Veritas-bu] How to handle "infinite" retention while maintaining a reasonable catalog size?
From: maddenca AT myrealbox DOT com (Chris Madden)
Date: Wed, 18 Sep 2002 17:29:05 +0200
All,

We have a business requirement to keep our monthly full backups for an "infinite" period. While I can't minimize the additional media purchases needed to support this requirement, I am looking for a way to minimize the catalog growth that will accompany it. I have more experience with Legato NetWorker, and in that environment there is the ability to purge the "client file index" for a backup image while retaining the media database entry. This lets the bulk of the data (the list of files) be blown away but still leaves the backup image entry itself, so we still know which saveset is on which tape(s).

Is there similar functionality in NetBackup? I think not, but perhaps there are other tricks or techniques that would let me accomplish the same thing? I had thought about archiving the older client indexes (perhaps after 1 yr); then, come client restore time, I would first need to restore that client's indexes for that point in time onto the backup server itself before trying to restore anything for the client (and of course I'd keep the indexes for the backup server itself online for an infinite period). I don't quite know how this strategy would play out, though, come time to upgrade to 4.5 with its binary DB structure....
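Roughly, what I have in mind for the archiving step looks something like the sketch below. This assumes the default image-catalog location on 3.x (/usr/openv/netbackup/db/images) and GNU find/tar/xargs; the function name archive_aged_indexes and its defaults are purely illustrative, not NetBackup features.

```shell
# Sketch only: sweep aged client-index files out of the image catalog
# into a tarball.  Paths and thresholds are assumptions, not NetBackup
# settings; test against a copy of the catalog first.
archive_aged_indexes() {
    dir=${1:-/usr/openv/netbackup/db/images}   # assumed image catalog root
    days=${2:-365}                             # age threshold in days
    tarfile=$3                                 # where the archive goes

    list=$(mktemp) || return 1
    # Catalog files untouched for more than $days days.
    find "$dir" -type f -mtime +"$days" > "$list"
    if [ -s "$list" ]; then
        # Originals are removed only if the tar succeeds, so a failed
        # archive never loses catalog data.
        tar -cf "$tarfile" --files-from="$list" && xargs rm -f < "$list"
    fi
    rm -f "$list"
}
```

Come restore time, the matching tarball would have to be extracted back under the images directory before starting the client restore.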
Any comments from those who have solved this problem, or who would solve it, are appreciated.
Regards,
-Chris
P.S. As an aside, today we're running at about 900 GB compressed (after 2 weeks) on our catalogs, and we will soon pass the 1 TB mark. That will also require a 2nd filesystem, due to the Veritas LVM limit of 1 TB on a single volume, and we'll then have to symlink some beefy client indexes onto the 2nd filesystem. Yuck!