From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Howard Cole" <howardnews(at)selestial(dot)com>
Cc: "Teodor Sigaev" <teodor(at)sigaev(dot)ru>, "PgSql General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to reduce impact of a query.
Date: 2008-11-17 16:52:09
Message-ID: dcc563d10811170852m7296bd76pc44d0a62ce933537@mail.gmail.com
Lists: pgsql-general
On Mon, Nov 17, 2008 at 9:36 AM, Howard Cole <howardnews(at)selestial(dot)com> wrote:
> Scott Marlowe wrote:
>>
>> The problem is most likely you're I/O bound. If one query is hitting
>> a table it can pull in data (sequentially) at 40 to 80 megabytes per
>> second. Since most of your queries are small, they don't run into
>> each other a lot, so to speak. As soon as your big reporting query
>> hits it's likely hitting the drives much longer and getting in the way
>> of all the other queries.
>
> Thanks for the input, Scott. You are correct - I am IO bound, but only for
> the query described. 99% of the time, my IO runs at 3% or less, even during
> peak times; only this one query, which happens approximately 10 times a day,
> grinds the system to a halt.
>
> I am considering running this query in the background every couple of
> minutes or so to force the tables/indexes into cache. Once the tables are
> cached, the query runs very quickly and there is no significant IO
> impact. This is a bodge, but hopefully it should work.
It's probably simpler to just do:

select count(*) from sometable;

to force it into cache. Buy some more memory and it'll likely stay in
memory without such tricks.
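If you do go the periodic-warming route, a minimal sketch of automating it
would be a cron entry that runs the count through psql. The database name
"mydb" and table name "sometable" here are placeholders, and this assumes
psql can connect without a password prompt (e.g. via a ~/.pgpass file):

```shell
# Hypothetical crontab entry: every 5 minutes, sequentially scan the table
# the reporting query depends on, so its pages stay in the OS cache and
# shared_buffers. Output is discarded; "mydb"/"sometable" are placeholders.
*/5 * * * * psql -d mydb -c 'SELECT count(*) FROM sometable;' >/dev/null 2>&1
```

Note that this only helps if the table actually fits in cache; if it
doesn't, the warming scan itself just adds more I/O load.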
Best of luck on this.