From: Doug McNaught <doug(at)wireboard(dot)com>
To: Eric Lee Green <eric(at)badtux(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Performance issues
Date: 2002-03-19 02:11:42
Message-ID: m3elihi9wh.fsf@varsoon.denali.to
Lists: pgsql-general
Eric Lee Green <eric(at)badtux(dot)org> writes:
> PostGreSQL 7.2, Red Hat Linux 6.2:
>
> I am having a very strange performance issue. I am using postgresql-ruby, but
> I don't think this is a postgresql-ruby issue. If I issue 1,000 inserts in a
> row as part of a transaction, I can get 900 inserts per second. If I mix the
> inserts with a select inbetween on the same row, I can insert at only 7
> inserts per second.
>
> I.e.:
[snippage]
Hmm, my guess is that SELECT queries that return a null result set
are what's slowing things down--they have to scan the whole table. Is
there any reason you have to do this?
You might try timing 1000 null-returning SELECTs against a populated
table and see how long they take, just to see if my hypothesis is
correct.
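Doug's timing experiment can be sketched like this, using Python's sqlite3 for portability (no PostgreSQL server needed); the table name 'events' and the 'source' column are hypothetical stand-ins for the original schema, but the effect is the same: a SELECT that matches no rows must scan the whole table unless the column is indexed.

```python
# Sketch: time 1000 empty-result SELECTs against a populated table,
# first without and then with an index on the queried column.
# sqlite3 is used here only as a stand-in for PostgreSQL.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (source TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    ((f"src-{i}", "data") for i in range(50_000)),
)
conn.commit()

def time_null_selects(n=1000):
    """Time n SELECTs that each return an empty result set."""
    start = time.perf_counter()
    for i in range(n):
        conn.execute(
            "SELECT * FROM events WHERE source = ?", (f"missing-{i}",)
        ).fetchall()
    return time.perf_counter() - start

unindexed = time_null_selects()  # full table scan for every query
conn.execute("CREATE INDEX idx_source ON events(source)")
indexed = time_null_selects()    # cheap index probe for every query
print(f"unindexed: {unindexed:.3f}s  indexed: {indexed:.3f}s")
```

With the index in place, each null-returning SELECT is a quick index probe instead of a scan of all 50,000 rows, which is the difference Doug's hypothesis predicts.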
The INSERT-without-SELECT case goes fast because PG just appends to the
table without having to scan it. If you can skip the SELECTs--or, even
better, use COPY rather than INSERT for bulk loading--it'll go fast.
What usage patterns is this app going to have? If "record not there"
is the common case, try putting a UNIQUE INDEX on 'source' and just
catch INSERT errors when they happen.
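That last suggestion can be sketched as follows, again with sqlite3 so it runs standalone; with a PostgreSQL driver you would catch that driver's unique-violation error instead of sqlite3.IntegrityError, and the function name insert_if_new is just an illustration.

```python
# Sketch: replace SELECT-then-INSERT with a plain INSERT against a
# UNIQUE INDEX, catching the error when the row already exists.
# sqlite3 stands in for PostgreSQL here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (source TEXT, payload TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_source ON events(source)")

def insert_if_new(source, payload):
    """Insert a row; return False if 'source' already exists.

    The unique index does the existence check as part of the insert,
    so no separate SELECT (and no table scan) is needed.
    """
    try:
        conn.execute("INSERT INTO events VALUES (?, ?)", (source, payload))
        return True
    except sqlite3.IntegrityError:
        return False

print(insert_if_new("src-1", "first"))   # True: new row inserted
print(insert_if_new("src-1", "again"))   # False: duplicate rejected
```

One caveat worth knowing: in PostgreSQL an error inside a transaction aborts the rest of that transaction, so this pattern fits per-row commits (or, in versions later than the 7.2 discussed here, savepoints).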
-Doug
--
Doug McNaught Wireboard Industries http://www.wireboard.com/
Custom software development, systems and network consulting.
Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...
Next message: Greg Sabino Mullane, 2002-03-19 04:09:37, "Re: doing math with date function"
Previous message: Eric Lee Green, 2002-03-19 02:07:48, "Re: Performance issues"