Re(2): backup large Pg databases with large objects

From: "pgsql-admin" <pgsql-admin(at)fc(dot)emc(dot)com(dot)ph>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re(2): backup large Pg databases with large objects
Date: 2000-10-06 13:01:11
Message-ID: fc.000f567200724aae000f567200724aae.724d7d@fc.emc.com.ph
Lists: pgsql-admin

Thank you for your reply. Pretty nice script, actually.
Thanks also for pg_dumplo-0.0.5 by zakkr(at)zf(dot)jcu(dot)cz.

I like experimenting with Perl scripts, and I ran into problems using
pg_dumplo-0.0.5 for exporting. So I decided to write my own Perl script
that exports all large objects of all Pg databases. The export ideas were
borrowed from pg_dumplo-0.0.5. It is still in the debugging stage, but it
works. I would be glad to receive some comments.

Sherwin

Ah, here it is:
#!/usr/bin/perl -w
use strict;
use Pg;

my $space    = '/fs/db/pgsql/backups/tmp';
my $progname = 'export_lobj.pl';

my $conn_all = Pg::connectdb("dbname=template1");
die $conn_all->errorMessage unless PGRES_CONNECTION_OK eq $conn_all->status;

# find all databases
my $sql = "SELECT datname FROM pg_database ORDER BY datname";
my $res_all = $conn_all->exec($sql);
die $conn_all->errorMessage unless PGRES_TUPLES_OK eq $res_all->resultStatus;

my $counter = 0;
while (my ($database) = $res_all->fetchrow) {
    my $conn_db = Pg::connectdb("dbname=$database");
    die $conn_db->errorMessage unless PGRES_CONNECTION_OK eq $conn_db->status;

    # find any candidate tables with columns of type oid
    $sql = qq/SELECT c.relname, a.attname
              FROM pg_class c, pg_attribute a, pg_type t
              WHERE a.attnum > 0
                AND a.attrelid = c.oid
                AND a.atttypid = t.oid
                AND t.typname = 'oid'
                AND c.relname NOT LIKE 'pg_%'/;

    my $res_db = $conn_db->exec($sql);
    die $conn_db->errorMessage unless PGRES_TUPLES_OK eq $res_db->resultStatus;

    my $path;
    local (*F);
    while (my ($table, $fld) = $res_db->fetchrow) {

        # keep only the oids that actually refer to large objects
        $sql = sprintf("SELECT x.%s FROM %s x WHERE EXISTS " .
                       "(SELECT c.oid FROM pg_class c " .
                       "WHERE c.relkind = 'l' AND x.%s = c.oid)",
                       $fld, $table, $fld);

        # equivalent join formulation:
        # $sql = sprintf("SELECT x.%s FROM %s x, pg_class c " .
        #                "WHERE x.%s = c.oid AND c.relkind = 'l'",
        #                $fld, $table, $fld);

        my $res_tbl = $conn_db->exec($sql);
        die $conn_db->errorMessage unless PGRES_TUPLES_OK eq $res_tbl->resultStatus;

        my $tuples;
        next unless (($tuples = $res_tbl->ntuples) > 0);
        $counter += $tuples;

        $path = sprintf("%s/%s", $space, $database);

        if (-e $path) {

            # the database directory already exists: append to its index file
            open(F, ">>$path/lo_dump.index")
                || die "\n$0: cannot open $path/lo_dump.index: $!\n";
        } else {

            # create a directory for the database
            mkdir($path, 0755) || die "\nCan't create $path: $!\n";

            # open the index file for writing, overwriting any existing file
            open(F, ">$path/lo_dump.index")
                || die "\n$0: cannot open $path/lo_dump.index: $!\n";

            # temporarily turn off warnings;
            # host() may be undef for a local connection
            local $^W = 0;

            print F "#\n# This is the PostgreSQL large object dump index\n#\n";
            printf F "#\tDate: %s\n", scalar(localtime);
            printf F "#\tHost: %s\n", $conn_db->host;
            printf F "#\tDatabase: %s\n", $conn_db->db;
            printf F "#\tUser: %s\n", $conn_db->user;
            printf F "#\n# oid\ttable\tattribute\tinfile\n#\n";
        }

        # create a directory for the table
        # (it may already exist if the table has more than one oid column)
        $path = sprintf("%s/%s", $path, $table);
        mkdir($path, 0755) || die "\nCan't create $path: $!\n" unless -d $path;

        # create a directory for the field
        $path = sprintf("%s/%s", $path, $fld);
        mkdir($path, 0755) || die "\nCan't create $path: $!\n" unless -d $path;

        printf "dump %s.%s (%d large obj)\n", $table, $fld, $tuples;

        while (my ($lobjOid) = $res_tbl->fetchrow) {

            $path = sprintf("%s/%s/%s/%s/%s",
                            $space, $database, $table, $fld, $lobjOid);

            # lo_export must run inside a transaction
            my $res_lobj = $conn_db->exec("BEGIN");
            die $conn_db->errorMessage unless PGRES_COMMAND_OK eq $res_lobj->resultStatus;

            # export the large object and record it in the index
            if (1 == $conn_db->lo_export($lobjOid, $path)) {
                printf F "%s\t%s\t%s\t%s/%s/%s/%s\n",
                    $lobjOid, $table, $fld, $database, $table, $fld, $lobjOid;
            } else {
                printf STDERR "%s: %s\n", $conn_db->errorMessage, $progname;
            }

            $res_lobj = $conn_db->exec("END");
            die $conn_db->errorMessage unless PGRES_COMMAND_OK eq $res_lobj->resultStatus;
        }
        close(F);
    }
    undef $conn_db;
}

printf "\nExported %d large objects.\n\n", $counter;
undef $conn_all;
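
For the reverse direction, here is a rough, untested sketch of how the
lo_dump.index file could drive a re-import. The script name import_lobj.pl
and its command-line handling are placeholders of mine, not part of the
export script above; it assumes the same $space layout and that the target
database already has its tables restored (e.g. from a pg_dump reload).
Each file is re-imported with lo_import and the referencing column is
updated from the old oid to the new one:

#!/usr/bin/perl -w
# import_lobj.pl -- hypothetical companion to export_lobj.pl (untested sketch)
use strict;
use Pg;

my $space    = '/fs/db/pgsql/backups/tmp';
my $database = shift || die "usage: import_lobj.pl dbname\n";

my $conn = Pg::connectdb("dbname=$database");
die $conn->errorMessage unless PGRES_CONNECTION_OK eq $conn->status;

open(F, "$space/$database/lo_dump.index")
    || die "cannot open $space/$database/lo_dump.index: $!\n";

while (<F>) {
    next if /^\s*#/ || /^\s*$/;    # skip header comments and blank lines
    chomp;
    my ($oldOid, $table, $fld, $infile) = split /\t/;

    # lo_import must run inside a transaction
    my $res = $conn->exec("BEGIN");
    die $conn->errorMessage unless PGRES_COMMAND_OK eq $res->resultStatus;

    # import the file and get the oid of the new large object
    my $newOid = $conn->lo_import("$space/$infile");
    die $conn->errorMessage unless $newOid;

    # point the referencing column at the new large object
    $res = $conn->exec(
        sprintf("UPDATE %s SET %s = %s WHERE %s = %s",
                $table, $fld, $newOid, $fld, $oldOid));
    die $conn->errorMessage unless PGRES_COMMAND_OK eq $res->resultStatus;

    $res = $conn->exec("END");
    die $conn->errorMessage unless PGRES_COMMAND_OK eq $res->resultStatus;
}
close(F);

Restoring a database would then be something like "./import_lobj.pl mydb",
run after the schema and data have been reloaded.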

camm(at)enhanced(dot)com writes:
>
>Greetings! Here is my poor man's replication/backup script for a DB
>with large objects:
>
>=============================================================================
>pgbackup
>=============================================================================
>#!/bin/bash
>
>H=db
>D=maguire
>BDIR=/home/camm/pgbackup
>
>set -e
>
>rm -fr $BDIR/*
>psql -h $H -d $D -t -q -c "select lo_export(image,textcat('$BDIR/',image::text)) from check_images" >/dev/null
>j=$(ls -l $BDIR/* | wc -l)
>pg_dump -c -o -h $H $D | gzip -9 >$BDIR/$D.dump.gz
>destroydb -h db1 $D
>createdb -h db1 $D
>zcat $BDIR/$D.dump.gz | psql -h db1 $D >/dev/null 2>&1 || true
>k=$(psql -t -q -h db1 $D -c "select image_import('$BDIR')")
>#echo $j $k
>if [ $j -ne $k ] ; then
> echo Image copy error: $j $k
> exit 1
>fi
>exit 0
>
>=============================================================================
>pgplsql function image_import
>=============================================================================
>drop function image_import(text);
>create function image_import(text) returns int4 as '
>declare
>  tci check_images%rowtype;
>  ttext alias for $1;
>  tint int4;
>begin
>  update check_images set image = 0 from check_images t1
>    where t1.number = 0 and check_images.number != 0
>    and t1.image = check_images.image;
>  tint := 0;
>  for tci in select * from check_images where image != 0 loop
>    update check_images
>      set image = lo_import(textcat(textcat(ttext,''/''),image::text))
>      where image = tci.image;
>    raise notice ''% %'', tint, tci.image;
>    tint := tint + 1;
>  end loop;
>  update check_images set image = t1.image from check_images t1
>    where t1.number = 0 and check_images.image = 0;
>  return tint;
>end;
>' language 'plpgsql';
>=============================================================================
>
>"pgsql-admin" <pgsql-admin(at)fc(dot)emc(dot)com(dot)ph> writes:
>
>> Has anyone tried to backup all Pg databases with large objects?
>> Pls share. Thanks.
>>
>> Sherwin
>>
>>
>>
>
>--
>Camm Maguire camm(at)enhanced(dot)com
>==========================================================================
>"The earth is but one country, and mankind its citizens." -- Baha'u'llah
