Backup database with entries > 8192 KBytes

From: "G(dot) Anthony Reina" <reina(at)nsi(dot)edu>
To: "pgsql-hackers(at)postgreSQL(dot)org" <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Backup database with entries > 8192 KBytes
Date: 1999-08-03 18:41:21
Message-ID: 37A737D0.C185EF38@nsi.edu
Lists: pgsql-hackers

I know that I can't insert a tuple longer than 8192 bytes (8 KB) into
Postgres. We need to store data in a variable-length float array whose
total length can exceed that limit. To get around the limit, we simply
insert a zeroed array (which takes up fewer characters) and then update
the array in chunks, specifying where in the array to put the data.

e.g. INSERT INTO tablename VALUES ('{0,0,0,0,0,0, .... }');
     to pad the array with zeros (this, of course, has to be less
     than 8192 bytes)

then

UPDATE tablename SET array1[1:100] = '{123.9, 12345.987, 123454555.87, .... }';
etc.

This works fine.
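
For reference, here is the whole workaround as one self-contained sketch
(the table and column names below are hypothetical, chosen only for
illustration; the real table just needs a float8[] column):

    -- Hypothetical table holding a variable-length float array.
    CREATE TABLE trial_data (
        trial_id  int,
        array1    float8[]
    );

    -- Step 1: insert a row padded with zeros; the literal stays well
    -- under the tuple-size limit because each element is only "0".
    INSERT INTO trial_data VALUES (1, '{0,0,0,0,0,0,0,0,0,0}');

    -- Step 2: fill in the real values a slice at a time, so no single
    -- statement has to carry the whole array.
    UPDATE trial_data SET array1[1:5] = '{123.9, 12345.987, 123454555.87, 1.5, 2.5}'
        WHERE trial_id = 1;
    UPDATE trial_data SET array1[6:10] = '{3.5, 4.5, 5.5, 6.5, 7.5}'
        WHERE trial_id = 1;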

Okay, long intro for a short question. When we do a pg_dump and then
restore the database, should the COPY contained in the pg_dumped file
be able to handle these long arrays?
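
(One way to check this ourselves would be something along the lines
below; the database names are made up, and it assumes pg_dump's -t
option to dump a single table:)

    # Dump the one table, restore it into a scratch database, and
    # then look at a slice of the array to see that it survived COPY.
    pg_dump -t tablename mydb > tablename.dump
    createdb scratchdb
    psql scratchdb < tablename.dump
    psql scratchdb -c "SELECT array1[1:100] FROM tablename;"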

-Tony Reina
