From: Patrick Coulombe <pcoulombe(at)mediacces(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: 3 options
Date: 2001-03-29 08:13:29
Message-ID: 005d01c0b828$230b0020$04d1ca18@11h11
Lists: pgsql-sql
Hi,
Using: PostgreSQL 6.5.3 on i686-pc-linux-gnu
Platform: Linux (Red Hat 6.0)
3 errors:
1)
pg_dump medias > medias.pgdump280301
pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'dossier' did not execute correctly.
After we read all the table contents from the backend, PQendcopy() failed.
Explanation from backend: 'pqWait() -- connection not open
'.
The query was: 'COPY "dossier" TO stdout;
'.
2)
medias=> vacuum;
NOTICE: Rel dossier: Uninitialized page 28 - fixing
NOTICE: Rel dossier: Uninitialized page 29 - fixing
NOTICE: BlowawayRelationBuffers(dossier, 28): block 28 is dirty (private 0,
last 0, global 0)
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible. Terminating.
3)
on my website (using PHP):
Warning: PostgreSQL query failed: ERROR: Tuple is too big: size 9968 in
/xxxxxxx/enregistrer3.php on line 45
---
1 & 2 seem to be OK, because right now I can do a pg_dump without the error,
but I searched the mailing list for my question (3) and this is what I came
up with:
---
I have 3 options - which one would you recommend?
- change the default block size in include/config.h (recompile - hmm...)
- use the large object interface (what is that? - see the sketch below)
- upgrade to 7.1 (afraid of losing my data - I'm not a Linux guru)
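From what I gathered in the docs, the large object option would look something
like this - just a rough sketch, with a hypothetical table dossier_doc, and
assuming lo_import()/lo_export() exist as backend functions in 6.5 (please
correct me if that is wrong). The idea is that the row only stores an oid
pointing to a large object, so the row itself stays small:

-- store only an oid reference in the row; the big text lives outside the row as a large object
medias=> CREATE TABLE dossier_doc (id int, doc oid);
-- lo_import() runs on the backend, so the path must be readable by the postgres server
medias=> INSERT INTO dossier_doc VALUES (1, lo_import('/tmp/big_document.txt'));
-- lo_export() writes the large object back out to a file on the server
medias=> SELECT lo_export(doc, '/tmp/big_document_out.txt') FROM dossier_doc WHERE id = 1;

Note the file paths are on the server, not the client - I think PHP also has
large object functions (pg_loopen() and friends), but I have not tried them.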
---
thank you
patrick, montreal, canada