From: | Bruno Wolff III <bruno(at)wolff(dot)to> |
---|---|
To: | Vasilis Ventirozos <vendi(at)cosmoline(dot)com> |
Cc: | Donald Fraser <demolish(at)cwgsy(dot)net>, pgsql-admin(at)postgresql(dot)org |
Subject: | Re: Are 50 million rows a problem for postgres ? |
Date: | 2003-09-08 13:21:08 |
Message-ID: | 20030908132108.GD14906@wolff.to |
Lists: | pgsql-admin |
On Mon, Sep 08, 2003 at 13:26:05 +0300,
Vasilis Ventirozos <vendi(at)cosmoline(dot)com> wrote:
> This is a simple statement that i run
>
> core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
> QUERY PLAN
> ---------------------------------------------------------------------------------------
> Aggregate (cost=2057275.91..2130712.22 rows=979151 width=4)
> -> Group (cost=2057275.91..2106233.45 rows=9791508 width=4)
> -> Sort (cost=2057275.91..2081754.68 rows=9791508 width=4)
> Sort Key: spcode
> -> Seq Scan on callticket (cost=0.00..424310.08 rows=9791508 width=4)
> (5 rows)
In addition to making the changes to the config file as suggested in other
responses, you may also want to do some testing with the 7.4 beta.
Hash aggregates will most likely speed this query up a lot (assuming there
aren't millions of unique spcodes). The production release of 7.4 will
probably happen in about a month.
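If you want to get a feel for it ahead of the release, something along these
lines should show whether 7.4 switches to the hash plan (the sort_mem value
below is only an illustration, not a recommendation for your hardware):

-- give the backend enough memory to build the hash table in one pass
SET sort_mem = 65536;
EXPLAIN select spcode,count(*) from callticket group by spcode;
-- on 7.4 you should see a HashAggregate node directly over the Seq Scan
-- instead of the Sort/Group/Aggregate steps in the plan above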