From: | "Josh Berkus" <josh(at)agliodbs(dot)com> |
---|---|
To: | "Chad Thompson" <chad(at)weblinkservices(dot)com>, <josh(at)agliodbs(dot)com>, "pgsql-novice" <pgsql-novice(at)postgresql(dot)org> |
Subject: | Re: Simple but slow |
Date: | 2002-08-23 16:18:02 |
Message-ID: | web-1621968@davinci.ethosmedia.com |
Lists: pgsql-novice
Chad,
> Thanks for your reply, Josh; as usual I learn from you whenever you
> write.
You're quite welcome!
> I've been having a hard time understanding what EXPLAIN is telling me.
> I was able to get the query down to 19 secs w/o the DISTINCT. I think
> I'll move the DISTINCT to one of my faster queries.
DISTINCT on large result sets can be quite brutal. Here's why your
query was slow with DISTINCT:
1. First the query has to sort by the DISTINCT field.
2. Then it has to "roll up" all the non-distinct entries.
3. Then it has to re-sort by your output sort.
This isn't much of a problem on small tables, but with 2 million
records that's 3 table scans of the whole table, which either requires
a lot of patience or a server with 2 GB of RAM and a really fast RAID
array.
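You can see this in the plan by running EXPLAIN on a stripped-down
version of the query. This is just a sketch; "orders" and "customer_id"
below are placeholder names, not your actual schema:

  -- placeholder table and column names; substitute your own
  EXPLAIN
  SELECT DISTINCT customer_id
  FROM orders
  ORDER BY customer_id;

On a big table you'll typically see a plan shaped like Unique -> Sort
-> Seq Scan (the exact form depends on your version and settings).
Drop the DISTINCT and the Unique pass disappears; if an index already
supplies the ordering, the Sort can go away too.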
> If it's not too much trouble, I'd like you to look at another. This
> is really being a beast.
I think somebody already posted a solution for this.
> Thanks for your help.
> I have also enjoyed your "The Joy of Index". I look forward to the
> next issue.
You're welcome again. According to Tom and Bruno, I need to post some
corrections ... look for them early next week.
-Josh Berkus
"Standing on the shoulders of giants."