Re: Table partition for very large table

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Yudie Gunawan <yudiepg(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Table partition for very large table
Date: 2005-03-28 17:52:55
Message-ID: 1112032374.12450.28.camel@state.g2switchworks.com
Lists: pgsql-general

On Mon, 2005-03-28 at 11:32, Yudie Gunawan wrote:
> I have table with more than 4 millions records and when I do select
> query it gives me "out of memory" error.
> Does postgres has feature like table partition to handle table with
> very large records.
> Just wondering what do you guys do to deal with very large table?

Is this a straight "select * from table" or is there more being done to
the data?

If it's a straight select, you are likely running out of memory on the
client side: libpq pulls the entire result set into client memory before
your application sees the first row. Look at using a cursor to fetch the
result in pieces instead.
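
A minimal sketch of the cursor approach (the table name `mytable` here
stands in for your own table):

```sql
-- Cursors only exist inside a transaction block.
BEGIN;

-- Declare a cursor over the full query; no rows are sent
-- to the client yet.
DECLARE big_cur CURSOR FOR SELECT * FROM mytable;

-- Pull the rows down in manageable chunks.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;   -- repeat until FETCH returns no rows

CLOSE big_cur;
COMMIT;
```

Each FETCH only holds one chunk in client memory at a time, so the total
size of the result set no longer matters.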
