From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Manos Karpathiotakis <mk(at)di(dot)uoa(dot)gr>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Maximum number of tables
Date: 2011-03-15 17:15:40
Message-ID: 631.1300209340@sss.pgh.pa.us
Lists: pgsql-general
Manos Karpathiotakis <mk(at)di(dot)uoa(dot)gr> writes:
> Hi all,
> I am using postgres 9.0 under CentOS 5.5 (Dual Xeon Quad Core @ 2,44 GHz,
> 64GB RAM, 2TB RAID 5). In my case, postgres is used as a backend for the RDF
> store Sesame.
> I am trying to store a dataset that consists of approximately 900.000.000
> insertions (organized in either 10 tables, or in an arbitrary number of
> tables exceeding 2000). Loading the data in 10 tables takes about two days,
> while in the 2nd case, an org.postgresql.util.PSQLException: ERROR: out of
> shared memory error is thrown.
I'm guessing that you're seeing this through some client-side code that
helpfully suppresses the HINT about raising max_locks_per_transaction
:-(
If your application tries to touch all 2000 tables in one transaction,
you will need to raise that parameter to avoid running out of locktable
space for the AccessShareLocks it needs on all those tables.
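For illustration only (not part of the original reply), a minimal sketch of what raising that setting looks like in postgresql.conf, assuming the default of 64; the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so the value should be sized to the number of relations (tables plus their indexes) one transaction may touch, and a server restart is needed for the change to take effect:

    # postgresql.conf -- the value below is illustrative, not a recommendation
    # from this thread; size it to the relations touched per transaction
    max_locks_per_transaction = 256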
However ... generally speaking, I'd guess that whatever you did to
refactor 10 tables into 2000 was a bad idea --- one table with an extra
key column is almost always a better design than N basically-identical
tables. The latter will be a *lot* harder to use, and probably won't
fix your performance problem anyway. You should attack the performance
problem in a different way. Have you read
http://www.postgresql.org/docs/9.0/static/populate.html ?
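As a rough sketch of that kind of refactoring (table and column names here are hypothetical, not taken from the Sesame schema): the ~2000 look-alike tables collapse into one table with an extra key column, and the bulk load goes through COPY rather than row-by-row INSERTs, along the lines of the populate chapter:

    -- Hypothetical schema: one table with a discriminator column instead of
    -- N basically-identical tables
    CREATE TABLE triples (
        graph_id    integer NOT NULL,  -- the "extra key column" replacing per-table separation
        subject     text    NOT NULL,
        predicate   text    NOT NULL,
        object      text    NOT NULL
    );

    -- Load in bulk with COPY, then build indexes and analyze afterwards
    COPY triples (graph_id, subject, predicate, object)
        FROM '/path/to/data.csv' WITH (FORMAT csv);  -- illustrative path
    CREATE INDEX triples_graph_subject_idx ON triples (graph_id, subject);
    ANALYZE triples;

Creating the index after the load, rather than before, is one of the techniques that page describes for speeding up large initial loads.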
regards, tom lane