From: u15074 <u15074(at)hs-harz(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Question regarding performance (large objects involved)
Date: 2003-06-26 06:40:49
Message-ID: 1056609649.3efa957189b24@webmail.hs-harz.de
Lists: pgsql-general
I have a small test program (using libpq) that inserts a lot of data into the
database. Each operation inserts a small large object (about 5k) into the database
and one row into a table that references the large object's oid.
I repeat this 100,000 times, each insert in its own transaction (begin
-> insert large object, insert row -> commit ...). I also measure the time taken
for every 100 inserts.
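For reference, a minimal sketch of the kind of loop I mean (assuming libpq's large-object API; the connection string, the table name "testdata" and the column "lobj_oid" are placeholders, not my actual schema):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        char buf[5 * 1024];
        memset(buf, 'x', sizeof(buf));      /* ~5k of dummy data */

        for (int i = 0; i < 100000; i++)
        {
            PQclear(PQexec(conn, "BEGIN"));

            /* create and fill one small large object */
            Oid loid = lo_creat(conn, INV_READ | INV_WRITE);
            int fd = lo_open(conn, loid, INV_WRITE);
            lo_write(conn, fd, buf, sizeof(buf));
            lo_close(conn, fd);

            /* insert one row referencing the large object's oid */
            char sql[128];
            snprintf(sql, sizeof(sql),
                     "INSERT INTO testdata (lobj_oid) VALUES (%u)", loid);
            PQclear(PQexec(conn, sql));

            PQclear(PQexec(conn, "COMMIT"));

            if (i % 100 == 0)
                printf("inserted %d\n", i);   /* timing measured here */
        }

        PQfinish(conn);
        return 0;
    }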
The performance is fine and stays constant over the whole run. But I see the
following effect: from time to time a short interruption occurs (my test program
stands still for a moment) and then it continues.
Does anyone have an idea what might cause these pauses? Are they due to caching
mechanisms of the database?
Another question concerns reading the written data. When the test finished,
I used psql to check the data. To do this, I ran some
queries searching for certain large objects in pg_largeobject (... where loid =
XX). These queries took a very long time (about 5 seconds or more). After running
vacuum on the database, the queries became fast. Can anyone explain this? Is the
index on pg_largeobject built by calling vacuum?
Thanks, Andreas.