Re: Re: Postgres slowdown on large table joins

From: Dave Edmondson <david(at)jlc(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Re: Postgres slowdown on large table joins
Date: 2001-02-19 20:24:51
Message-ID: 20010219152451.A61259@verdi.jlc.net
Lists: pgsql-general

> > yes. I ran VACUUM ANALYZE after creating the indices. (Actually, I VACUUM
> > the database twice a day.) The data table literally has 145972 rows, and
> > 145971 will match conf_id 4...
>
> Hm. In that case the seqscan on data looks pretty reasonable ... not
> sure if you can improve on this much, except by restructuring the tables.
> How many rows does the query actually produce, anyway? It might be that
> most of the time is going into sorting and delivering the result rows.

All I'm really trying to get is the latest row with a conf_id of 4... I'm
not sure if there's an easier way to do this, but it seems a bit ridiculous
to read in almost 146000 rows to return 1. :(
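(For what it's worth, the usual way to avoid the full scan for a "latest row" lookup is an ORDER BY ... LIMIT 1 backed by a matching index. This is only a sketch, assuming the data table has some ordering column such as a timestamp, here called "stamp", which may not match the actual schema:)

```sql
-- Hypothetical composite index so the planner can walk straight to
-- the newest row for a given conf_id instead of seqscanning:
CREATE INDEX data_conf_stamp_idx ON data (conf_id, stamp);

-- Fetch only the latest matching row; with the index above, Postgres
-- can read a single index entry rather than ~146000 heap rows:
SELECT *
  FROM data
 WHERE conf_id = 4
 ORDER BY conf_id DESC, stamp DESC
 LIMIT 1;
```

(Spelling the ORDER BY with both index columns helps older planners recognize that the index already delivers rows in the desired order.)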

--
David Edmondson <david(at)jlc(dot)net>
GMU/FA d-(--) s+: a18>? C++++$ UB++++$ P+>+++++ L- E--- W++ N- o K-> w-- O?
M-(--) V? PS+ PE+ Y? PGP t 5 X R+ tv-->! b DI+++ D+ G(--) e>* h!>+ r++ y+>++
ICQ: 79043921 AIM: AbsintheXL #music,#hellven on irc.esper.net
