Veritas-bu

[Veritas-bu] Sharing a throughput script for Netbackup

From: mlin AT fxcm DOT com (FXCM - Mark Lin)
Date: Wed, 19 Nov 2003 13:18:51 -0500
Hi list,
    After failing to find a script on the net that reports total daily 
backups, I wrote one myself.  Just thought maybe some of you will find it useful 
too.  Perhaps Advanced Reporter already has this function, but with no budget in 
the department and little knowledge of Perl, a cheesy script is born.  If you 
have questions or problems running it, I'll be glad to help out however I can.


p.s.  Please read the top part of the script to see what needs to be changed 
for your environment.

Mark


Seems like I can't attach .pl files.  Here's the copy and paste:



#!/usr/bin/perl

# bp_report.pl
# Created by Mark Lin  11/12/2003

# Parse /usr/openv/netbackup/bin/admincmd/bpimagelist output.
# Outputs I would like to have:
# 1. Each host's backup size + speed
# 2. Total backup files + size
# 3. Growth rate (half done; only works with differential incremental backups)
# 4. Tapes used during the specified period. (not done)

# Parameters:
# -d : Starting date for bpimagelist
# -e : Ending date for bpimagelist
# -q : Day interval back from today
# -c : Single client
# -v : Verbose ( Messier output, but good stuff if you want to see how much each client is backing up each day, plus the media server's total throughput with each client's proportion ).
# -h : Help

use Getopt::Std;
use Time::localtime;

# parameter declarations

# Server that has the backup catalogues
# Modify this for your own server
$server = 'abc.foo.bar';

# bpimagelist command location
$bpimagelist = '/usr/openv/netbackup/bin/admincmd/bpimagelist';

# Get script name
(my $routine = $0) =~ s#^.*(\\|\/)##; #routine name, minus path


# By default 7 days back without -q;
$default_days = 7;

# default time;
$start_time = '00:00:00';
$end_time= '00:00:00';

# Get the options; if set wrongly, output the help screen
getopts('d:e:q:c:vh', \%opts);

if (defined($opts{'q'}) && exists($opts{'d'})) {
   print "q is $opts{'q'}\n";
}
if ( ($opts{'d'} ne '' || exists($opts{'d'}) ) && ($opts{'e'} eq '' || !exists($opts{'e'}) )
  || ($opts{'e'} ne '' || exists($opts{'e'}) ) && ($opts{'d'} eq '' || !exists($opts{'d'}) )
  || ( defined($opts{'q'}) && exists($opts{'d'}) )
  || ( defined($opts{'q'}) && exists($opts{'e'}) )
  || defined($opts{'h'}) ) {
  &usage;
}


sub usage() {

  print <<EOF;

usage: $routine -[deqcvh] 
Generate a daily usage report from netbackup volume image.


Run it without parameters to retrieve all clients' info for the past seven days. 
***Remember to change the 'server' parameter in the script.

Parameter Explanations: 
-d & -e :
If you want to specify a date boundary, you have to specify both of them.  -d is 
the starting date and -e is the ending date.
They are similar to NetBackup's bpimagelist parameters, except they only take the 
date and not the time.
Ex: bp_report -d 11/01/2003 -e 11/08/2003
Generates a report from 11/01/2003 to 11/08/2003.

-q :
Specify how many days back from today.  Can NOT be combined with -d & -e.
Ex: bp_report -q 7
Generates a report for the past week.

-c :
Specify a single client for the report.
Ex: bp_report -c abc.foo.com
Generates a report for the host abc.foo.com for the past seven days.

-v :
Show verbose output

-h :
Show this help screen

EOF

  exit(1);
}


# Setting up parameters

# Single client option (default to empty so the command line below
# interpolates cleanly when -c is not given)
$single_client = '';
if ( defined($opts{'c'}) ) {
  $single_client = "-client $opts{'c'}";
}

# If no dates are specified, use a $default_days interval from today
if ( defined($opts{'d'}) && defined($opts{'e'}) ) {
  $start_date = $opts{'d'};
  $end_date = $opts{'e'};
}
else {
  if ( ! defined($opts{'q'}) ) {
    $days = $default_days;
  } 
  else {
    $days = $opts{'q'};
  }
  # Set up the default dates when -d & -e are not specified.
  $tm = localtime;
  $month = $tm->mon + 1;
  $end_date = $month . "/" . $tm->mday . "/" . ($tm->year + 1900);

  # A day is 86400 = 60 * 60 * 24;
  $othertime = time - (86400 * $days);
  $tm = localtime($othertime);
  $month = $tm->mon + 1;
  $start_date = $month . "/" . $tm->mday ."/". ($tm->year+1900);
}

$command = "$bpimagelist -L -M $server -d $start_date $start_time -e $end_date $end_time $single_client";

if ( defined($opts{'v'}) ) {
  print "$command\n";
}

@BPIMAGE = `$command`;

if ($?) {
  print "bpimagelist command returned an error\n";
  print "Check the script for necessary parameter modifications\n";
  exit(1);
}

# Data Hashes
%tmpHolder = ();
%dataHolder = ();

############################
# -- Program Starts HERE -- #
############################

$i = 0; # Counter for line

# Add an extra "\n" so the last token can be parsed
push(@BPIMAGE,"\n");

# Start the parsing.
foreach (@BPIMAGE) {
  # the first line is always an empty line, so we ditch it.
  if ($i == 0) {
    $i++;
    next;
  }
  # look at the bpimagelist output if you don't know how this regex works
  if ( $_ =~ m/^(.+):\s+(.+)$/ ) {
    $tt = $2;
    $tmpHolder{$1} = $tt;

#    print "$1 is $tt\n";

  }

  # if it's an empty line, start to proceed.
  if ( $_ =~ m/^\n$/ ) {
    # At this point, one backup record is done.  We will do our necessary parsing.
    # Get the date so we can differentiate daily traffic.
    ($day,$month,$date,$year,$time,$serial) = split(/\s/,$tmpHolder{'Backup Time'});
    $backup_date = $month . "-" . $date . "-" . $year;
    
    # Get elapsed time in seconds
    ($total_elapsed,$garbage1) = split(/\s/,$tmpHolder{'Elapsed Time'});


    # Distinguish Cumulative and Full backup and separate them by date
    if ( $tmpHolder{'Schedule Type'} eq 'CINC (4)' ) {
      $dataHolder{$tmpHolder{'Client'}}{$backup_date}{'cumulative_size_total'} += $tmpHolder{'Kilobytes'};
      $dataHolder{$tmpHolder{'Client'}}{$backup_date}{'cumulative_time_total'} += $total_elapsed;

      # Get a total for each media server and record which host
      $server_total{$tmpHolder{' Host'}}{$backup_date}{'cumulative_size_total'} += $tmpHolder{'Kilobytes'};
      $server_total{$tmpHolder{' Host'}}{$backup_date}{'cumulative_hosts'}{$tmpHolder{'Client'}} += $tmpHolder{'Kilobytes'};
    }

    if ( $tmpHolder{'Schedule Type'} eq 'FULL (0)' ) {
      $dataHolder{$tmpHolder{'Client'}}{$backup_date}{'full_size_total'} += $tmpHolder{'Kilobytes'};
      $dataHolder{$tmpHolder{'Client'}}{$backup_date}{'full_time_total'} += $total_elapsed;
      $server_total{$tmpHolder{' Host'}}{$backup_date}{'full_size_total'} += $tmpHolder{'Kilobytes'};
      $server_total{$tmpHolder{' Host'}}{$backup_date}{'full_hosts'}{$tmpHolder{'Client'}} += $tmpHolder{'Kilobytes'};
    }

    %tmpHolder = ();
    $proceed = 0;
  }
}

# Now we have a hash named %dataHolder that holds the data we just parsed.
# Now it's time to output it.
# We are going to do some massive outputting, you ready?

# This hash is used to calculate rate of growth
%rates = ();
# Output buffer
$meout = '';

# Each Host
for my $key (sort keys %dataHolder)  {

  $meout .= "\nHost: $key\n";

  # $key2 = date
  for my $key2 (sort keys %{$dataHolder{$key}}) {
    $meout .= "[$key2]\n";

    # Each tag and its value
    for my $key3 ( sort keys %{$dataHolder{$key}{$key2}} ) {

      $value3 = $dataHolder{$key}{$key2}{$key3};
      # outputs backup size
      if ($key3 eq 'full_size_total' || $key3 eq 'cumulative_size_total') {
        $va3 = sprintf "%.2f", ($value3 / 1024);
        $meout .= "$key3 => $va3 Megabytes\n";

        # Calculate growth rate cumulative
        if (! exists( $rates{$key}{$key3} ) ) {
          $rates{$key}{$key3} = $value3;
          $growth_text = "No Growth Rate(Lack of previous data).\n";
        }
        else {
          #print "( ($value3 - $rates{$key}{$key3}) / $rates{$key}{$key3} )\n";
          if ( $rates{$key}{$key3} != 0 ) {
            $growth = ( ($value3 - $rates{$key}{$key3}) / $rates{$key}{$key3} ) * 100;
            $growth = sprintf "%.2f", $growth;
            $growth_text = "$key3 Growth Rate: $growth percent\n";
          }
          else {
            $growth_text = "$key3 is 0\n";
          }
          
          $rates{$key}{$key3} = $value3;

        }
      }

      # Outputs Time and average backup speed
      if ($key3 eq 'full_time_total' || $key3 eq 'cumulative_time_total') {
        $meout .= "$key3 => $value3 Seconds\n";
        if ( $key3 eq 'full_time_total' ) {
           $k = 'full_size_total';
        }
        if ( $key3 eq 'cumulative_time_total' ) {
           $k = 'cumulative_size_total';
        }

        # Speed of the backup
        $speed = sprintf "%.2f", ($rates{$key}{$k} / $value3);
        $speed2 = sprintf "%.2f", ($rates{$key}{$k} / $value3 * 60 / 1024);
        
        # Uncomment this if you run differential incremental backups.  It does not work right for cumulative backups.
        #print $growth_text;

        $meout .= "Speed => $speed Kilobytes/Second, $speed2 Megabytes/Minute\n";
        
      }
    } # Each field
  $meout .= "\n";
  } # Each Date
  $meout .= "-------------------------------------------------------\n";
  $corpbackup1 = 0;
  $mdcbackup1 = 0;
}


# Host Information
if ( defined($opts{'v'}) || defined($opts{'c'}) ) {
  print $meout;
}

# if the single client option is not defined, then print the total server backup.
if (! defined($opts{'c'}) ) {

  # Print out the total storage each day.
  # There are only two types of backup that can happen for us every day:
  # a cumulative backup or a full backup.  Clients do their full backups
  # according to their own schedules.  That's why you see full backups every day.
  print "########################\n";
  print "# Media Server Summary #\n";
  print "########################\n";
  
  for my $key (sort keys %server_total)  {

    # Server Name
    print "*** $key ***\n";
    # Each Date
    for my $key2 (sort keys %{$server_total{$key}}) {
      print "[$key2]\n";
      $size = sprintf "%.2f", ($server_total{$key}{$key2}{'cumulative_size_total'} / 1024);
      for my $host (sort keys %{$server_total{$key}{$key2}{'cumulative_hosts'}}) {
        $value = $server_total{$key}{$key2}{'cumulative_hosts'}{$host} / 1024;
        $value = sprintf "%.2f", $value;
        $hosts .= $host . " - " . $value . " Megs\n";
      }
      print "Cumulative Total: $size Megabytes. \n"; 
      if ( defined($opts{'v'}) ) {
        print "Hosts: \n$hosts\n";
      }
      $hosts = '';

      $size = sprintf "%.2f", ($server_total{$key}{$key2}{'full_size_total'} / 1024);
      for my $host (sort keys %{$server_total{$key}{$key2}{'full_hosts'}}) {
        $value = $server_total{$key}{$key2}{'full_hosts'}{$host} / 1024;
        $value = sprintf "%.2f", $value;
        $hosts .= $host . " - " . $value . " Megs\n";
      }
      print "Full Total: $size Megabytes\n";
      if ( defined($opts{'v'}) ) {
        print "Hosts: \n$hosts\n";
      }
      $hosts = '';
      print "\n";
    }
    print "-------------------------------------------------------\n";
    print "\n";
  }
  
}
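
For anyone curious how the parsing works without a NetBackup server handy, here is a tiny standalone sketch of the same key/value regex and throughput math, run against a hand-made record.  The sample lines only approximate real bpimagelist -L output -- in particular, the 'Backup Time' fields are ordered to match the ($day,$month,$date,$year,...) split in the script, which may not be exactly what a live server prints -- so treat it as an illustration, not a format reference.

```perl
#!/usr/bin/perl
# Standalone sketch of the parsing and throughput math used in the
# script above.  The sample record below is hand-made, not captured
# from a real bpimagelist -L run.
use strict;
use warnings;

my @record = (
    "Client:            abc.foo.com\n",
    "Backup Time:       Wed Nov 12 2003 21:00:00 (1068688800)\n",
    "Elapsed Time:      600 second(s)\n",
    "Kilobytes:         204800\n",
    "Schedule Type:     FULL (0)\n",
);

# Same greedy key/value regex as the script: the key is everything up
# to the last colon that is followed by whitespace, so values that
# contain times like 21:00:00 still parse correctly.
my %tmpHolder;
for my $line (@record) {
    if ( $line =~ m/^(.+):\s+(.+)$/ ) {
        $tmpHolder{$1} = $2;
    }
}

# Build the per-day hash key the same way the script does.
my ($day, $month, $date, $year) = split( /\s/, $tmpHolder{'Backup Time'} );
my $backup_date = "$month-$date-$year";

# Throughput: Kilobytes / seconds, then the same figure in MB/minute.
my ($elapsed) = split( /\s/, $tmpHolder{'Elapsed Time'} );
my $kb_per_sec = $tmpHolder{'Kilobytes'} / $elapsed;
my $mb_per_min = $kb_per_sec * 60 / 1024;

printf "%s: %s KB on %s at %.2f KB/s (%.2f MB/min)\n",
    $tmpHolder{'Client'}, $tmpHolder{'Kilobytes'},
    $backup_date, $kb_per_sec, $mb_per_min;
```

Running it prints one summary line for the sample record; swap in real -L output lines to see how your own fields land in the hash.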



