Re: extremly bad select performance on huge table

From: Björn Wittich <Bjoern_Wittich(at)gmx(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: extremly bad select performance on huge table
Date: 2014-10-24 05:16:48
Message-ID: 5449E0C0.60904@gmx.de
Lists: pgsql-performance

Hi,

With a cursor the behaviour is the same (see the sketch below). So I would
like to ask a more general question:
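
For reference, the cursor variant looks roughly like this (table, column
and cursor names are placeholders, not my real schema):

  BEGIN;
  DECLARE huge_cur NO SCROLL CURSOR FOR
      SELECT a.*, b.val
      FROM big_table a
      JOIN other_table b ON b.id = a.id;
  FETCH 1000 FROM huge_cur;  -- the first FETCH still blocks for ~10 min
  -- ... keep fetching until no rows come back ...
  CLOSE huge_cur;
  COMMIT;

Even here the first FETCH only returns once the server has done the
expensive part of the join.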

My client needs to receive data from a huge join. The time the client
waits before it can fetch the first row is very long: retrieval only
starts after about 10 minutes, and since the client itself is I/O bound
it cannot make up for the elapsed time afterwards.

My workaround was to build a queue of small joins: where the huge join
delivers 10 million rows, I now run 10,000 joins delivering 1,000 rows
each, as sketched below.
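
In rough SQL the queue looks like this (placeholder names again, assuming
an indexed integer key that both tables share):

  -- one of the ~10000 small joins; :lo and :hi are bound per batch,
  -- each batch covering about 1000 key values
  SELECT a.*, b.val
  FROM big_table a
  JOIN other_table b ON b.id = a.id
  WHERE a.id BETWEEN :lo AND :hi;

Each small join can use the index on the key, so the first rows arrive
almost immediately and the client can consume one batch while the next
one runs.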
So the general question is: is there a better solution than my crude
workaround?

Thank you

> Hi Kevin,
>
>
> this is what I need (I think). Hopefully a cursor can operate on a
> join. Will read docu now.
>
> Thanks!
>
>
> Björn
>
> On 22.10.2014 16:53, Kevin Grittner wrote:
>> Björn Wittich <Bjoern_Wittich(at)gmx(dot)de> wrote:
>>
>>> I do not want the db server to prepare the whole query result at
>>> once, my intention is that the asynchronous retrieval starts as
>>> fast as possible.
>> Then you probably should be using a cursor.
>>
>> --
>> Kevin Grittner
>> EDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
