From: Dave Cramer <pg(at)fastcrypt(dot)com>
To: Eileen <hey_here(at)yahoo(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: JDBC 5 million function insert returning Single Transaction Lock Access Exclusive Problem
Date: 2012-08-31 13:50:08
Message-ID: CADK3HHLNh8AK-yNQ9XGFwMcTPqGrMO72-4TqdUcq7WDbZuUhJQ@mail.gmail.com
Lists: pgsql-performance
Dave Cramer
dave.cramer(at)credativ(dot)ca
http://www.credativ.ca
On Thu, Aug 30, 2012 at 2:34 AM, Eileen <hey_here(at)yahoo(dot)com> wrote:
> Hi,
>
> I have written some Java code which builds a postgresql function. That
> function calls approximately 6 INSERT statements with a RETURNING clause. I
> recreate and re-run the function about 900,000 times. I use JDBC to execute
> these functions on postgresql 8.3 on Windows. When I tried running this on
> a single Connection of Postgresql, it failed (some kind of memory error).
> So I split the JDBC connections up into chunks of 5000. I reran and
> everything was fine. It took about 1 hour to execute all the updates.
>
> Since it took so long to perform the update, I wanted to prevent other users
> from querying the data during that time. So I read about the LOCK command.
> It seemed like I should LOCK all the tables in the database with an ACCESS
> EXCLUSIVE mode. That would prevent anyone from getting data while the
> database was making its updates.
Do you understand how MVCC works? Do you really need to lock out users?
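[For context on the MVCC point: under PostgreSQL's multi-version concurrency control, readers querying during a bulk load see the last committed snapshot and are never blocked by uncommitted inserts, so taking ACCESS EXCLUSIVE locks on every table is usually unnecessary. A minimal sketch; the table name `accounts` is hypothetical:]

```sql
-- Session 1: bulk load inside one transaction; no explicit table locks.
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);  -- hypothetical table
-- ... many more inserts ...
COMMIT;

-- Session 2, running concurrently: sees only previously committed data
-- and is not blocked by Session 1's in-flight transaction.
SELECT count(*) FROM accounts;
```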
>
> Since a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I
> also removed the code which chunked up the inserts. I had read that a
> single transaction ought to have better performance than committing after
> each insert, but that was clearly not what ended up happening in my case.
We would need more information as to what you are doing.
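[If the underlying goal is just fast bulk loading, the usual advice on this list is multi-row INSERT or COPY rather than millions of single-row function calls. A hedged sketch; table and file names are hypothetical:]

```sql
-- One multi-row statement is much cheaper than many single-row statements:
INSERT INTO accounts (id, balance) VALUES
    (1, 100),
    (2, 200),
    (3, 300);

-- For very large loads, COPY is faster still (path is illustrative only):
COPY accounts (id, balance) FROM '/path/to/data.csv' WITH CSV;
```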
>
> In my case, a few problems occurred. Number 1, the process ran at least 8
> hours and never finished. It did not finish because the hard drive was
> filled up. After running a manual vacuum (VACUUM FULL), no space was freed
> up. I think this has cost me 20 GB of space. Is there any way to free this
> space up? I even dropped the database to no avail.
>
> Secondly, why did this process take over 8 hours to run? While reading the
> performance mailing list, it seems like recommendations are to run lots of
> INSERTS in a single commit. Is 5 million too many? Is redefining a
> function over and over inside a transaction a problem? Does the RETURNING
> clause present a problem during a single transaction?
VACUUM FULL on 8.3 is not a good idea.
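[Expanding on that: on 8.3, VACUUM FULL compacts tuples in place and tends to bloat indexes. Plain VACUUM marks dead space reusable, and rewriting the table with CLUSTER is the usual way to actually return disk space. A sketch; the table and index names are hypothetical:]

```sql
-- Plain VACUUM makes dead row space reusable (it does not shrink the
-- file, but it stops further growth):
VACUUM ANALYZE accounts;

-- To actually return space to the OS on 8.3, a table rewrite via
-- CLUSTER (or a dump/reload) is generally preferred over VACUUM FULL:
CLUSTER accounts USING accounts_pkey;
```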
>
> If anyone has any suggestions for me, I would really appreciate it.
>
Can you explain at a high level what you are trying to do?
> Tina
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Albe Laurenz | 2012-08-31 14:18:21 | Re: JDBC 5 million function insert returning Single Transaction Lock Access Exclusive Problem |
| Previous Message | Heikki Linnakangas | 2012-08-31 13:36:16 | Re: exponential performance decrease in ISD transaction |