Re: Problems with Large Objects using Postgres 7.2.1

From: "Chris White" <cjwhite(at)cisco(dot)com>
To: <pgsql-jdbc(at)postgresql(dot)org>, <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Problems with Large Objects using Postgres 7.2.1
Date: 2003-04-09 16:25:35
Message-ID: 010c01c2feb4$a6ebe100$ff926b80@amer.cisco.com
Lists: pgsql-admin pgsql-jdbc

Never saw any responses to this. Anybody have any ideas?
-----Original Message-----
From: pgsql-admin-owner(at)postgresql(dot)org
[mailto:pgsql-admin-owner(at)postgresql(dot)org]On Behalf Of Chris White
Sent: Thursday, April 03, 2003 8:36 AM
To: pgsql-jdbc(at)postgresql(dot)org; pgsql-admin(at)postgresql(dot)org
Subject: [ADMIN] Problems with Large Objects using Postgres 7.2.1

I have a Java application that writes large objects to the database through
the JDBC interface. The application reads binary data from an input stream,
writes it to a large object in 8K chunks, and keeps a running count of the
data length. At the end of the input stream it closes the large object,
commits it, and then updates the associated tables with the large object id
and length and commits that information as well. The application has multiple
threads (a maximum of 8) writing these large objects simultaneously, each
using its own connection. Whenever the system has a problem, a monitor
application detects the need for a system shutdown and shuts down Postgres
using a smart shutdown.
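
For reference, the write path looks roughly like the sketch below. This is a
simplified illustration, not the actual application code: the table name
"lo_metadata", its columns, and the row id are placeholders, and the large
object calls (createLO/open) follow newer versions of the PostgreSQL JDBC
driver, so exact method signatures may differ on the 7.2-era driver.

import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectWriter {

    // Streams one input into a new large object in 8K chunks, commits the
    // large object, then records its oid and computed length in a metadata
    // table and commits that as well (mirroring the two-step commit described
    // above). "lo_metadata" and the row id are placeholders.
    static void storeBlob(Connection conn, InputStream in) throws Exception {
        conn.setAutoCommit(false);   // large object access requires a transaction

        LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();
        long oid = lom.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
        // (older drivers: int oid = lom.create(...))

        long length = 0;
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        try {
            byte[] buf = new byte[8192];          // 8K chunks
            int n;
            while ((n = in.read(buf)) != -1) {
                lo.write(buf, 0, n);
                length += n;
            }
        } finally {
            lo.close();                           // close before committing
        }
        conn.commit();                            // commit the large object itself

        // Update the associated table with the oid and length, then commit
        // that information separately.
        PreparedStatement ps = conn.prepareStatement(
            "UPDATE lo_metadata SET lo_oid = ?, lo_length = ? WHERE id = ?");
        try {
            ps.setLong(1, oid);
            ps.setLong(2, length);
            ps.setInt(3, 1);                      // placeholder row id
            ps.executeUpdate();
        } finally {
            ps.close();
        }
        conn.commit();
    }
}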

What I am seeing is that when all 8 threads are running and the system is
shut down, large objects committed in transactions near the time of the
shutdown are corrupt when the database is restarted. I know the large objects
were committed, because the associated table entries that point to them are
present after the restart with valid information about the large object
length and oid. However, when I access the large objects I get back only a 2K
chunk, even though the table entry says the object should be 320K.

Anybody have any ideas what the problem is? Are there any known issues with
the recovery of large objects?

Chris White
