From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John A Meinel <john(at)arbash-meinel(dot)com>
Cc: Joel Fradkin <jfradkin(at)wazagua(dot)com>, Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Joel's Performance Issues WAS : Opteron vs Xeon
Date: 2005-04-21 04:35:32
Message-ID: 11216.1114058132@sss.pgh.pa.us
Lists: pgsql-performance
John A Meinel <john(at)arbash-meinel(dot)com> writes:
> Joel Fradkin wrote:
>> Postgres was on the second run
>> Total query runtime: 17109 ms.
>> Data retrieval runtime: 72188 ms.
>> 331640 rows retrieved.
> How were you measuring "data retrieval time"?
I suspect he's using pgadmin. We've seen reports before suggesting that
pgadmin can be amazingly slow, eg here
http://archives.postgresql.org/pgsql-performance/2004-10/msg00427.php
where the *actual* data retrieval time as shown by EXPLAIN ANALYZE
was under three seconds, but pgadmin claimed the query runtime was 22
sec and data retrieval runtime was 72 sec.
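For anyone who wants to separate the two measurements themselves: EXPLAIN ANALYZE times the query entirely inside the backend and discards the rows, while psql's \timing measures the whole round trip, including transferring and printing every row. A minimal sketch, with mytable standing in as a placeholder for the real table:

    -- Server-side cost only: executes the plan and throws the rows
    -- away, so the reported "actual time" excludes network transfer
    -- and any client-side rendering.
    EXPLAIN ANALYZE SELECT * FROM mytable;

    -- Client-side wall clock in psql: \timing includes fetching and
    -- printing all rows, which is closer to what pgadmin reports as
    -- "data retrieval runtime".
    \timing
    SELECT * FROM mytable;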
I wouldn't be too surprised if that time was being spent formatting
the data into a table for display inside pgadmin. It is a GUI after
all, not a tool for pushing vast volumes of data around.
It'd be interesting to check the runtimes for the same query with
LIMIT 30000, ie, see if a tenth as much data takes a tenth as much
processing time or not. The backend code should be pretty darn
linear in this regard, but maybe pgadmin isn't.
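A quick way to run that comparison from psql, again with mytable as a placeholder and assuming roughly 330k rows as in Joel's result:

    -- If the backend is linear, the LIMIT query should take roughly
    -- a tenth of the full query's time. If pgadmin's retrieval time
    -- for the full result set doesn't shrink proportionally, the
    -- overhead is in the client, not the server.
    \timing
    SELECT * FROM mytable;
    SELECT * FROM mytable LIMIT 30000;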
regards, tom lane