From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | "Jaime Casanova" <jcasanov(at)systemguards(dot)com(dot)ec> |
Cc: | "Robert Haas" <robertmhaas(at)gmail(dot)com>, "psql performance list" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: measure database contention |
Date: | 2008-12-17 18:57:33 |
Message-ID: | dcc563d10812171057s58200049ydb7b8a2bfd6c3e03@mail.gmail.com |
Lists: pgsql-performance
On Wed, Dec 17, 2008 at 11:19 AM, Jaime Casanova
<jcasanov(at)systemguards(dot)com(dot)ec> wrote:
> On Wed, Dec 17, 2008 at 11:56 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> is the exact query... I think it will be removed later today because
>>> it's a bad query anyway... but my fear is that something like this
>>> happens even with good ones...
>>>
>>> maybe checkpoints could be the problem?
>>> I have 8.3.5 and configured checkpoint_timeout to 15 minutes,
>>> checkpoint_segments to 6 and checkpoint_completion_target to 0.5
>>
>> Well, it might help if you could provide the query, and the EXPLAIN output.
>>
>
> ok... remember I said it's a bad query ;)
> actually, it seems there's a suitable index for that query (I guess
> it's being used because of the ORDER BY)
>
> mic=# explain analyze
> mic-# SELECT * FROM tgen_persona ORDER BY empresa_id, persona_id ASC;
> QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------------------------
> Index Scan using pk_tgen_persona on tgen_persona (cost=0.00..8534.09
> rows=86547 width=884) (actual time=0.096..129.980 rows=86596 loops=1)
> Total runtime: 175.952 ms
> (2 rows)
>
> as you see, explain analyze says it executes in 175.952 ms, but
> because of the network transfer of the data, running it from pgAdmin
> on another machine takes 17s... and from time to time pgFouine shows
> it taking up to 345.11 sec
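The 17s from pgAdmin is plausibly just transfer time: at rows=86596 and
width=884 that's roughly 75 MB going over the wire. The 345 sec spikes
are the interesting part. On the checkpoint question upthread, here's a
rough sketch of how I'd rule checkpoints in or out on 8.3: sample the
bgwriter stats before and after a slow period and diff the counters.

-- checkpoint counters accumulated since the stats were last reset (8.3+)
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
  FROM pg_stat_bgwriter;

-- or log every checkpoint with a timestamp (postgresql.conf, 8.3+):
-- log_checkpoints = on

If the 345 sec samples line up with checkpoint activity, raising
checkpoint_segments from 6 would be the usual first knob to try.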
I know it's a bad query, but did you try clustering on that index?
Then a seq scan followed by a sort would likely be cheaper and faster;
85k rows aren't that many, really.
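A minimal sketch, using the index name from the plan above (note that
CLUSTER takes an exclusive lock and rewrites the table, so do it in a
quiet window):

-- reorder the heap to match the primary key index, then refresh stats
CLUSTER tgen_persona USING pk_tgen_persona;
ANALYZE tgen_persona;

CLUSTER is a one-time reordering; new and updated rows won't keep that
order, so it has to be re-run periodically if the table churns.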