From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: select count() out of memory
Date: 2007-10-26 11:03:47
Message-ID: 20071026110347.GC27400@frubble.xen.chris-lamb.co.uk
Lists: pgsql-general
On Fri, Oct 26, 2007 at 08:26:13AM +0200, Thomas Finneid wrote:
> Scott Marlowe wrote:
> >It may well be that one big table and partial indexes would do what
> >you want. Did you explore partial indexes against one big table?
> >That can be quite handy.
>
> Hmm, interesting, I suppose it could work. Thanks for the suggestion,
> I'll keep it in mind.
That's still going to have to do a table scan over the whole dataset (a
couple of terabytes?) before building the index, isn't it? That doesn't
sound like something you'd want to do too often.
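For reference, a partial index in Postgres is just an ordinary index
with a WHERE clause restricting which rows it covers; a minimal sketch,
with table and column names invented purely for illustration:

```sql
-- Hypothetical table "readings": the WHERE clause limits the index to
-- one slice of the data, so it stays small and queries whose predicate
-- implies the index's predicate can use it.
CREATE INDEX readings_recent_idx
    ON readings (sensor_id)
    WHERE recorded_at >= DATE '2007-10-01';
```

Note that creating it still has to scan the whole table once to find
the qualifying rows, which is the cost being questioned above.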
Are there any thoughts of deferring index updates so that many rows
can be merged into the index in one pass, rather than doing many
individual index operations? It sounds as though this is what Thomas is
really after, and it would also remove the need to drop indexes while
doing a bulk insert of data. I apologise if this has been discussed
before!
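The drop-and-rebuild workaround mentioned above looks roughly like
this; a sketch, with the index name, table, and file path all invented
for illustration:

```sql
-- Avoid per-row index maintenance during a bulk load by removing the
-- index first and rebuilding it once afterwards.
DROP INDEX readings_sensor_idx;

COPY readings FROM '/path/to/data.csv' WITH CSV;  -- bulk insert, no index updates

-- One bulk index build over all rows, instead of one update per row.
CREATE INDEX readings_sensor_idx ON readings (sensor_id);
```

The single CREATE INDEX at the end is usually much cheaper than the
accumulated per-row updates, which is why deferred index maintenance
would remove the need for this dance.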
Sam