Re: Large Tables(>1 Gb)

From: Andrew Snow <als(at)fl(dot)net(dot)au>
To: pgsql-general(at)hub(dot)org
Subject: Re: Large Tables(>1 Gb)
Date: 2000-06-30 03:41:03
Message-ID: Pine.BSF.4.21.0006301338010.75384-100000@giskard.fl.net.au
Lists: pgsql-general

On Thu, 29 Jun 2000 Fred_Zellinger(at)seagate(dot)com wrote:

> I look around at some backend configuration parameters to see if I can get
> Postgres to do some neat memory stuff (but later realize that it was the
> front-end and not the backend that was eating up memory... I tried pg_dump
> on the database/table, and stuff started spooling right away).

> Rather than trying to fix the problem, I decided to subvert it by breaking
> my table into a bunch of little tables, each one less than my RAM size, so
> that I would never dig into SWAP space on a select *... (all of you who are
> laughing at me, you can just quit reading right now).

*stops laughing* ;-)

> Anyway, just wanted to see if all my assumptions are correct, or if anyone
> has a better explanation for my observation, and/or some solutions.

If you want to SELECT 1GB of data into RAM, you ought to have over 1GB of
RAM, don't you think?

What exactly is the problem you're trying to fix?
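If the goal is just to scan through the whole table without the front end
buffering every row in RAM, one option is to read through a cursor and FETCH
in batches rather than doing a plain SELECT *. Here is a minimal libpq sketch;
the connection string, table name "bigtable", cursor name and batch size are
made up for illustration:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Hypothetical connection string, for illustration only. */
        PGconn *conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Cursors only live inside a transaction block. */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE bigcur CURSOR FOR SELECT * FROM bigtable"));

        /* Fetch in modest batches so the client never holds the whole table. */
        for (;;) {
            PGresult *res = PQexec(conn, "FETCH 1000 FROM bigcur");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "fetch failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            int rows = PQntuples(res);
            if (rows == 0) {          /* no more rows */
                PQclear(res);
                break;
            }
            for (int i = 0; i < rows; i++)
                printf("%s\n", PQgetvalue(res, i, 0));  /* first column only */
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE bigcur"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

That keeps the client's memory use roughly bounded by the batch size rather
than the table size, without having to split the table up.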

- Andrew
