Re: PostgreSQL limitations question

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Bartosz Dmytrak <bdmytrak(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL limitations question
Date: 2012-07-12 01:07:37
Message-ID: 4FFE2359.1020901@ringerc.id.au
Lists: pgsql-general

On 07/12/2012 05:01 AM, Bartosz Dmytrak wrote:
> 1. Create Table:
> CREATE TABLE test.limits("RowValue" text) WITH (OIDS=FALSE,
> FILLFACTOR=100);
>
> 2. Fill table (I used pgScript available in pgAdmin);
I suspect that's a pretty slow way to try to fill your DB up. You're
doing individual INSERTs, possibly each in its own transaction (I'm not
sure; I don't use PgAdmin), so it's not going to be fast.

Try COPYing rows in using psql. I'd do it in batches via a shell-script
loop myself. Alternatively, you could use the COPY support of the
database drivers in Perl or Python to do it.
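Something like this (just a sketch - it assumes your data is already
split into files named rows_0001.txt, rows_0002.txt, ... and that the
database is called "testdb"; adjust names as needed):

  for f in rows_*.txt; do
      psql testdb -c "\copy test.limits (\"RowValue\") FROM '$f'"
  done

Each \copy loads a whole file's worth of rows in a single statement, so
it's far faster than row-at-a-time INSERTs.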

> 3. do Vacuum full to be sure free space is removed
> VACUUM FULL test.limits;
Which version of Pg are you running? If it's older than 9.0, you're
possibly better off using "CLUSTER" instead of "VACUUM FULL".
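For example (a sketch only - CLUSTER needs an index to order the table
by, so you'd have to create one first; the index name here is made up,
and the USING syntax needs 8.4 or later):

  CREATE INDEX limits_rowvalue_idx ON test.limits ("RowValue");
  CLUSTER test.limits USING limits_rowvalue_idx;

CLUSTER rewrites the table (and its indexes) from scratch, which on
pre-9.0 releases is usually faster than the old VACUUM FULL and doesn't
bloat the indexes.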

> 4. I checked table size:
> SELECT * FROM pg_size_pretty(pg_relation_size('test.limits'::regclass));
> and I realized table size is 32 kB.

Use pg_total_relation_size to include TOAST tables too.
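For example:

  SELECT pg_size_pretty(pg_total_relation_size('test.limits'::regclass));

That counts the main table plus its TOAST table and any indexes.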

--
Craig Ringer
