From: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
To: PgSQL Novice ML <pgsql-novice(at)postgresql(dot)org>
Subject: Re: wat is the max number of rows that can be returned
Date: 2002-12-04 17:35:05
Message-ID: 1039023304.8776.21.camel@haggis
Lists: pgsql-novice
On Wed, 2002-12-04 at 10:55, Tom Lane wrote:
> Joel Burton <joel(at)joelburton(dot)com> writes:
> > And you're wondering what will happen if you write a query that returns
> > more than 2 billion rows (2000 million)?
>
> The first thing that will happen is that you'll run out of memory in the
> client process to store all those rows in your PGresult.
>
> You can handle ridiculously-large query results by opening a cursor
> and fetching reasonable-sized chunks with FETCH; it might take awhile
> but in theory you could process an indefinitely large result that way.
>
> I don't think the int-sized result of PQntuples is going to be an issue
> in the foreseeable future --- platforms that could hold >2G tuples are
> no doubt going to use a wider size of int.
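For anyone new to the cursor approach Tom describes, a minimal SQL sketch follows; the cursor and table names are purely illustrative, and the fetch size is whatever your client can comfortably buffer:

```sql
BEGIN;                                        -- cursors must live inside a transaction
DECLARE big_cur CURSOR FOR
    SELECT * FROM huge_table;                 -- hypothetical large table

FETCH 1000 FROM big_cur;                      -- process this chunk in the client,
FETCH 1000 FROM big_cur;                      -- then repeat until FETCH returns
                                              -- fewer rows than requested
CLOSE big_cur;
COMMIT;
```

Each FETCH materializes only that chunk in the client's PGresult, so memory use stays bounded no matter how large the full result is.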
Besides, at 100ns per insert, it would take 33,000 years to populate
the table...
Also, those 9.2x10^18 records at 8 bytes each would need 7.3x10^19
bytes. And that's not even considering that each record will be
considerably larger than just the 8 bytes.
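A quick back-of-the-envelope check of those figures (assuming 2^63 rows, the largest signed 64-bit count) lands in the tens of thousands of years and roughly 7.4x10^19 bytes, the same order of magnitude as the numbers above:

```python
# Rough scale check, assuming 2**63 rows (signed 64-bit row-count limit).
rows = 2**63                            # ~9.2e18 records
seconds = rows * 100e-9                 # at 100 ns per insert
years = seconds / (365.25 * 24 * 3600)  # convert seconds to years
data_bytes = rows * 8                   # 8 bytes per record, bare minimum
print(f"{years:.3g} years, {data_bytes:.3g} bytes")
```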
--
+------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron(dot)l(dot)johnson(at)cox(dot)net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "they love our milk and honey, but preach about another |
| way of living" |
| Merle Haggard, "The Fighting Side Of Me" |
+------------------------------------------------------------+