From: | Rod Taylor <pg(at)rbt(dot)ca> |
---|---|
To: | David Teran <david(dot)teran(at)cluster9(dot)com> |
Cc: | PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: speeding up a select with C function? |
Date: | 2004-03-07 15:02:55 |
Message-ID: | 1078671774.57624.52.camel@jester |
Lists: | pgsql-performance |
> I have run vacuum analyze before executing the statements. I wonder now
> if there is any chance to speed this up.
Is this an active table for writes? You may want to take a look at
CLUSTER. In some circumstances, it can take an order of magnitude off
the query time by allowing fewer pages to be retrieved from disk.
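A minimal sketch of that, assuming a hypothetical table "orders" with an index "orders_customer_idx" (both names are placeholders, not from the original message):

```sql
-- CLUSTER rewrites the table in index order, so an index scan that
-- returns many neighbouring keys touches far fewer heap pages.
CLUSTER orders_customer_idx ON orders;

-- Refresh planner statistics after the rewrite.
ANALYZE orders;
```

Note that CLUSTER is a one-time physical reorder: later inserts and updates are not kept in index order, so on a write-heavy table it has to be re-run periodically to keep the benefit.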
Other than that, if you're willing to degrade the performance of queries
that don't hit this table in order to speed this one up, you can pin the
index and table into memory (a cron job running a SELECT periodically to
ensure they stay cached).
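One way to sketch that cron job, again with placeholder names ("mydb" database, "orders" table):

```shell
# Hypothetical crontab entry: every 5 minutes, read every row of the
# table so its pages (and the index pages used by the plan) stay warm
# in the OS page cache and PostgreSQL buffer cache.
*/5 * * * *  psql -d mydb -c "SELECT count(*) FROM orders" > /dev/null 2>&1
```

This is a blunt instrument: the periodic scan evicts other tables' pages from the cache, which is exactly the trade-off described above.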
Shrink the actual data size (drop the OID column, use a smallint instead
of an integer, etc.).
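For instance, assuming a hypothetical table "orders" with an integer "status" column that only holds small values:

```sql
-- Drop the hidden per-row OID column (4 bytes per row).
ALTER TABLE orders SET WITHOUT OIDS;

-- Narrow an int4 column to int2.  In-place type changes are 8.0+ syntax;
-- on the 7.x releases current when this was written, the equivalent is
-- adding a new smallint column, copying the data, and dropping the old one.
ALTER TABLE orders ALTER COLUMN status TYPE smallint;
```

Narrower rows mean more rows per page, so the same query reads fewer pages from disk.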
One final option is to alter PostgreSQL to do a
pseudo-sequential scan on the table when reading indexes, rather than
pulling data from the table in the random order in which it is found in
the index. This is a rather complex project, but doable.
http://momjian.postgresql.org/cgi-bin/pgtodo?performance