From: | "Michael Labhard" <ince(at)pacifier(dot)com> |
---|---|
To: | pgsql-cygwin(at)postgresql(dot)org |
Subject: | PostgreSQL unsuitable for large datasets? |
Date: | 2002-08-31 18:48:19 |
Message-ID: | akr31g$7mo$1@main.gmane.org |
Lists: pgsql-cygwin
We have a PostgreSQL (7.2.1) database on Cygwin with one table that
contains some 2 x 10**6 records. The database is about 100 MB total. This
does not seem so large, but while attempting an insert into an empty
table using a select that joins to the large table across all of its rows,
the error

Memory exhausted in AllocSetAlloc(36)

occurs. Reviewing posts on this subject as well as the PostgreSQL FAQ did
not reveal any specific solution. The output of ulimit -a:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2046
cpu time (seconds, -t) unlimited
max user processes (-u) 63
virtual memory (kbytes, -v) 2097152
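
For reference, the failing statement has roughly this shape (the real
table and column names are replaced with invented ones here):

INSERT INTO new_table (id, val)
SELECT b.id, o.val
FROM big_table b JOIN other_table o ON o.id = b.id;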
Virtual memory would appear to be adequate. Increasing the SORT_MEM
parameter in postgresql.conf did not help. Finally I had to partition the
large table into 10 parts and process each part individually.
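
The split was done roughly along these lines (names invented again, and
assuming a numeric key on which the large table can be broken into ranges
of about 200,000 rows each):

INSERT INTO new_table (id, val)
SELECT b.id, o.val
FROM big_table b JOIN other_table o ON o.id = b.id
WHERE b.id >= 1 AND b.id <= 200000;
-- repeated with the next id range, 10 ranges in all

Done in these smaller pieces, the inserts completed without hitting the
error.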
It seems this problem would need a ready and definitive solution before
PostgreSQL could be used for large, serious databases. How large are the
databases with which others are using PostgreSQL? How do others avoid this
error with large datasets?
-- Michael