From: Mischa Sandberg <mischa(dot)sandberg(at)telus(dot)net>
To: Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>
Cc: Joel Fradkin <jfradkin(at)wazagua(dot)com>, 'Andreas Pflug' <pgadmin(at)pse-consulting(dot)de>, 'Dave Page' <dpage(at)vale-housing(dot)co(dot)uk>, pgsql-performance(at)postgresql(dot)org, ac(at)wazagua(dot)com, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, wheyliger(at)wazagua(dot)com, 'Steve Hatt' <shatt(at)wazagua(dot)com>
Subject: Re: Joel's Performance Issues WAS : Opteron vs Xeon
Date: 2005-04-22 20:53:50
Message-ID: 1114203230.4269645e44160@webmail.telus.net
Lists: pgsql-performance
Quoting Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>:
> One further question is: is this really a meaningful test? I mean, in
> production are you going to query 300000 rows regularly? And is the
> system always going to be used by only one user? I guess the question
> is if this big select is representative of the load you expect in
> production.
While there may be some far-out queries that nobody would try,
you might be surprised at what becomes the norm for queries
as soon as the engine feasibly supports them. SQL is used for
warehousing and OLAP apps, as a data queue, and as the co-ordinator
or bridge for (non-SQL) replication apps. In all of these,
you see large updates, large result sets, and volatile tables
("large" to me means over 20% of a table and over 1M rows).
To answer your specific question: yes, every 30 mins,
in a data redistribution app that runs a 1M-row query
and writes ~1000 individual update files covering overlapping sets of rows.
It's the kind of operation SQL doesn't do well,
so you have to rely on one big query to get the data out.
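
For what it's worth, the shape of that job is roughly the sketch below
(Python with psycopg2, purely as an illustration, not the actual app;
the connection string, query, and destinations_for() routing are all
placeholders). The point is the one big query streamed through a
server-side cursor, with each row appended to every destination file
it belongs to, which is why the files overlap.

    import psycopg2

    def destinations_for(row):
        # Placeholder: return the (possibly several) destinations a row
        # belongs to. The real redistribution rule goes here.
        return row[0:1]

    conn = psycopg2.connect("dbname=mydb user=me")
    # A named cursor is server-side, so the ~1M rows are streamed in
    # batches instead of being materialized in client memory at once.
    cur = conn.cursor(name="redistribute")
    cur.itersize = 10000
    cur.execute("SELECT dest_key, id, payload FROM source_table")

    files = {}
    for row in cur:
        for dest in destinations_for(row):
            f = files.get(dest)
            if f is None:
                f = files[dest] = open("update_%s.dat" % dest, "w")
            f.write("%s\t%s\n" % (row[1], row[2]))

    for f in files.values():
        f.close()
    cur.close()
    conn.close()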
My 2c
--
"Dreams come true, not free." -- S.Sondheim, ITW