Chris Robertson wrote:
> Ralf Gross wrote:
> > Hi,
> >
> > I'm faced with growing storage demands in my department. In the
> > near future we will need several hundred TB, mostly large files. At
> > the moment we already have 80 TB of data, which gets backed up to tape.
> >
> > Providing the primary storage is not the big problem. My biggest
> > concern is backing up the data. One option would be a NetApp system
> > with snapshots. On the other hand, that is a very expensive solution
> > for data that is written once and then only read. In short: it should
> > be a cheap solution, but the data still has to be backed up. And it
> > would be nice if we could abandon tape backups...
> >
> > My idea is to use some big RAID 6 arrays for the primary data and
> > create LUNs in slices of at most 10 TB with XFS filesystems.
> >
> > BackupPC would be ideal for backup because of its pooling feature
> > (we already use BackupPC for a smaller amount of data).
> >
> > Does anyone have experience with BackupPC and a pool size of >50 TB?
> > I'm not sure how well this will work. I see that BackupPC needs 45 h
> > to back up 3.2 TB of data right now, mostly small files.
> >
> > I don't like very large filesystems, but I don't see how this will
> > scale, either with multiple BackupPC servers and smaller filesystems
> > (more than one server will be needed anyway, but I don't want to run
> > 20 or more servers...) or (if possible) with multiple BackupPC
> > instances on the same server, each with its own pool filesystem.
> >
> > So, anyone using backuppc in such an environment?
> >
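A quick back-of-the-envelope on the throughput implied above (figures
taken from the post; decimal units assumed, 1 TB = 10^6 MB):

```shell
# Rough sustained rate implied by 3.2 TB in 45 hours
awk 'BEGIN { tb = 3.2; hours = 45; printf "%.1f MB/s\n", tb * 1e6 / (hours * 3600) }'
# prints: 19.8 MB/s
```

At roughly 20 MB/s, a single pass over a 50 TB pool would take on the
order of a month, assuming throughput stayed flat (with mostly small
files it likely wouldn't), which puts the scaling question in perspective.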
>
> In one way, and compared to some, my backup set is pretty small (the
> pool is 791.45 GB). In another dimension, I think it is one of the
> larger ones, comprising 20,874,602 files. The breadth of my pool leads
> to...
>
> -bash-3.2$ df -i /data/
> Filesystem       Inodes      IUsed      IFree IUse% Mounted on
> /dev/drbd0   1932728448   47240613 1885487835    3% /data
>
> ...nearly 50 million inodes used (so somewhere close to 30 million hard
> links). XFS holds up surprisingly well to this abuse*, but the strain
> shows. Traversing the whole pool takes three days. Attempting to grow
> my tail (the number of backups I keep) causes serious performance
> degradation as I approach 55 million inodes.
>
> Just an anecdote to be aware of.
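The inode-to-hard-link ratio above comes from how BackupPC stores data:
each unique file lives once in the pool, and every backup that contains
it is just another hard link to the same inode. A minimal sketch of the
mechanism (paths are made up for illustration, not a real pool layout;
`stat -c` is the GNU form):

```shell
# Illustrative only: mimic BackupPC-style pooling with hard links
pool=$(mktemp -d)
printf 'some file contents\n' > "$pool/pooled"   # the pool copy
ln "$pool/pooled" "$pool/backup1"                # reference from backup #1
ln "$pool/pooled" "$pool/backup2"                # reference from backup #2
stat -c %h "$pool/pooled"                        # link count: prints 3
ls -li "$pool"                                   # all three names share one inode
rm -rf "$pool"
```

This is why ~47 million inodes can represent far more file instances
across backups, and why a full pool traversal has to touch every one of
those links.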
I think I'll have to look for a different solution; I just can't
imagine a pool of more than 10 TB.
> * I have recently taken my DRBD mirror off-line and copied the BackupPC
> directory structure to both XFS-without-DRBD and an EXT4 file system
> for testing. Performance of the XFS file system was not much different
> with or without DRBD (a fat fiber link helps there). The first
> traversal of the pool on the EXT4 partition is about 66% through after
> about 96 hours.
nice ;)
Ralf
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/