Re: Large Tables(>1 Gb)

From: "Mitch Vincent" <mitch(at)venux(dot)net>
To: "Jeffery Collins" <collins(at)onyx-technologies(dot)com>, <Fred_Zellinger(at)seagate(dot)com>
Cc: <pgsql-general(at)hub(dot)org>
Subject: Re: Large Tables(>1 Gb)
Date: 2000-06-30 14:19:52
Message-ID: 00a401bfe29e$42746e20$0300000a@doot.org
Lists: pgsql-general

You could also use LIMIT and OFFSET. That's what I do (though my database
isn't up to a gigabyte yet).
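
For example, a rough sketch of that kind of paging (the table name mytable and
the id column used for ordering are just placeholders, not anything from the
original post):

SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 0;    -- first page
SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 100;  -- second page
SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 200;  -- and so on, until a
                                                         -- page comes back short

The ORDER BY is what keeps the pages from overlapping or skipping rows between
queries.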

Maybe using a CURSOR is better, I'm not sure...

-Mitch

----- Original Message -----
From: Jeffery Collins <collins(at)onyx-technologies(dot)com>
To: <Fred_Zellinger(at)seagate(dot)com>
Cc: <pgsql-general(at)hub(dot)org>
Sent: Friday, June 30, 2000 8:47 AM
Subject: Re: [GENERAL] Large Tables(>1 Gb)

> Fred_Zellinger(at)seagate(dot)com wrote:
>
> > (I have a 500MHz PentIII, with 256 Mb RAM, UW SCSI, running Linux Kernel
> > 2.2.9, with libc-2.1.2
> > I am running Postgres 7.0 which I compiled myself.)
> >
> > So, I created a database, a table, and started dumping data into it. Then
> > I added an index on the table. Life was good.
> >
> > After a few weeks, my table eclipsed approximately 1Gb, and when I looked
> > at it in my PG_DATA/database directory, I noticed that there were two
> > files: MYTABLE and MYTABLE.1. I was curious why this happened, but I
> > figured that Postgres must break up tables over 1Gb into multiple
> > files. (right?)
> >
> > Then, while running psql, I did a "select * from MYTABLE;" Well, psql just
> > sits there while the hard drive light blinks like crazy, pulling the table
> > up into memory. I have 256Mb of RAM, so this takes awhile. When I start
> > up "top" and watch my process table, the postgres backend is sucking up the
> > CPU time pulling the data and the psql frontend is sucking up the memory
> > accepting the results.
> >
> > Fred
>
> Okay, I didn't laugh the entire time...
>
> I suggest you take a look at cursors. I have the same thing. There are times
> when I will need to select my entire >2Gig table, but instead of doing:
>
> SELECT * FROM table ;
>
> I do
>
> DECLARE tmp CURSOR FOR SELECT * FROM table ;
>
> do {
>     FETCH FORWARD 100 FROM tmp ;
> } while there are rows left.
>
> This only pulls 100 (or whatever number you specify) into memory at a time.
>
> Jeff
>
>
>
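
A minimal end-to-end sketch of Jeff's cursor approach as you'd type it in psql
(the table name mytable is a placeholder; in a real program you'd issue the
FETCH in a loop and stop when it returns no rows):

BEGIN;                                         -- DECLARE only works inside a transaction block
DECLARE tmp CURSOR FOR SELECT * FROM mytable;
FETCH FORWARD 100 FROM tmp;                    -- process these rows
FETCH FORWARD 100 FROM tmp;                    -- then the next batch, and so on
CLOSE tmp;
COMMIT;

From application code (libpq, Perl, etc.) you'd just keep issuing the FETCH and
check how many rows come back; fewer than 100 means you've reached the end of
the table.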
