From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Michael Monnerie <michael(dot)monnerie(at)is(dot)it-management(dot)at>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: 8.3.5 broken after power fail
Date: 2009-02-21 09:43:04
Message-ID: dcc563d10902210143m5d05fa01ge4b3122693490269@mail.gmail.com
Lists: pgsql-admin
On Sat, Feb 21, 2009 at 1:23 AM, Michael Monnerie
<michael(dot)monnerie(at)is(dot)it-management(dot)at> wrote:
> Also a question: Because I must read all data, the psql client runs out
> of memory, trying to cache all the 10GB from that table. I circumvented
> this with selecting only parts of the table all the time. Is there a
> smart way to do such a select without caching the results in memory? Is
> that what temporary tables and "select into" are made for? I just want
> to know the recommended way for doing huge queries.
You can dump individual tables with pg_dump -t table1 -t table2. That
should work without running out of memory. And yeah, temp tables and
select into are a good way to get your data ready to be pg_dumped.
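The approach above can be sketched as follows. This is a minimal illustration, not from the thread itself: the database name `mydb`, the table names, and the `WHERE` clause are placeholder assumptions. One caveat worth noting: pg_dump connects in its own session, so the staging table must be a regular table created with SELECT INTO, not a TEMPORARY one, or pg_dump will not see it.

```shell
# Dump only the named tables; pg_dump streams rows to the output file,
# so it does not try to hold the whole 10GB result in client memory
# the way a plain SELECT in psql does.
# ("mydb", "table1", "table2" are placeholder names.)
pg_dump -t table1 -t table2 mydb > tables.sql

# Alternatively, stage a subset first with SELECT INTO (a regular,
# non-temporary table, so pg_dump's separate session can reach it),
# then dump just the staging table.
psql mydb -c "SELECT * INTO staging FROM table1 WHERE id < 100000;"
pg_dump -t staging mydb > staging.sql
psql mydb -c "DROP TABLE staging;"   # clean up when done
```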