From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Hengky Liwandouw <hengkyliwandouw(at)gmail(dot)com>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Query runs slow
Date: 2013-11-25 19:24:40
Message-ID: 1385407480.41987.YahooMailNeo@web162904.mail.bf1.yahoo.com
Lists: pgsql-general
Hengky Liwandouw <hengkyliwandouw(at)gmail(dot)com> wrote:
> On Nov 24, 2013, at 11:21 PM, Kevin Grittner wrote:
>> Hengky Lie <hengkyliwandouw(at)gmail(dot)com> wrote:
>>
>>> this query takes long time to process. It takes around 48
>>> seconds to calculate about 690 thousand record.
>>
>>> Is there any way to make calculation faster ?
>>
>> Quite possibly -- that's about 70 microseconds per row, and even
>> fairly complex queries can often do better than that.
> After reading the link you gave me, changing shared_buffers to
> 25% (512MB) of available RAM and effective_cache_size to 1500MB
> (about 75% of available RAM) makes the query run very fast.
> Postgres only needs 1.8 seconds to display the result.
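For reference, the settings quoted above would look like this in postgresql.conf (a sketch matching the percentages in the thread, assuming a machine with roughly 2 GB of RAM; changing shared_buffers requires a server restart):

```
# postgresql.conf -- sketch matching the values quoted above (~2 GB RAM box)
shared_buffers = 512MB          # ~25% of RAM; memory actually allocated for caching
effective_cache_size = 1500MB   # ~75% of RAM; a planner hint only, no memory allocated
```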
That's about 2.6 microseconds per row. Given the complexity of
the query, it might be hard to improve on that. A simple
tablescan that returns all rows generally takes 1 to 2
microseconds per row on the hardware I typically use.
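The per-row figures in this exchange are simple arithmetic over the numbers reported in the thread (48 s before tuning, 1.8 s after, 690 thousand rows); a quick sketch of the calculation:

```python
# Back-of-envelope per-row latency from the numbers reported in the thread.
def per_row_us(total_seconds: float, rows: int) -> float:
    """Average time spent per row, in microseconds."""
    return total_seconds * 1_000_000 / rows

rows = 690_000
before = per_row_us(48.0, rows)  # before tuning
after = per_row_us(1.8, rows)    # after tuning shared_buffers/effective_cache_size

print(f"before: {before:.1f} us/row")  # ~69.6 us/row
print(f"after:  {after:.1f} us/row")   # ~2.6 us/row
```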
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company