From: Jörg Schulz <jschulz(at)sgbs(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: update problem?
Date: 2002-07-10 10:13:39
Message-ID: 3D2C08D3.6080801@sgbs.de
Lists: pgsql-general, pgsql-novice
Hi,
I'm working with Juergen on the same problem.
Consider the following table:
> create table test (a char);
Now fill the table with some data from test.txt
(100,000 rows, one character per row)
> \copy test from test.txt
Now update the table
> update test set a='x';
This takes nearly 3 seconds. That would be ok for us.
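(The times here are rough wall-clock values; to reproduce them, something
like this should do in psql, assuming your psql version already has the
\timing meta-command:)
> \timing
> update test set a='x';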
But after adding a column and filling it with some data
> alter table test add b text;
> update test set b='foobar...foobar...foobar... ...';
now the update
> update test set a='y';
takes 8 seconds. When I add further columns or increase
the amount of data a column holds, it gets even worse.
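(My guess, and please correct me if I'm wrong: every UPDATE writes a
complete new version of each row, so the wider the rows get, the more
data has to be written and logged, and the old versions stay around until
the table is vacuumed. The growth can be watched roughly like this, since
relpages/reltuples are only refreshed by a vacuum or analyze:)
> vacuum test;
> select relpages, reltuples from pg_class where relname = 'test';
> update test set a='z';
> vacuum test;
> select relpages, reltuples from pg_class where relname = 'test';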
Before the column was added, explain said:
> explain update test set a='x';
> Seq Scan on test2 (cost=0.00..20.00 rows=1000 width=6)
(btw: why rows=1000? There are 100,000 rows in the table.)
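(My assumption: the table was never analyzed, so the planner falls back
to a default estimate. After something like
> vacuum analyze test;
> explain update test set a='x';
the estimate should reflect the real row count.)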
And after it was added:
> explain update test set a='y';
> Seq Scan on test2 (cost=0.00..20.00 rows=1000 width=38)
And for our "real world" table (plain table, no foreign keys) it says:
> explain update karte_archiv set ar_k_rechmod='2';
> Seq Scan on karte_archiv (cost=0.00..1609.99 rows=45399 width=359)
This update takes over 30 seconds!
It uses 7 or 8 logfiles in the pg_xlog directory, 16MB each.
But does this explain why it takes *so* long?
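(A rough back-of-envelope, taking the explain width as the average row
size: 45399 rows * 359 bytes is about 15.5 MB of new row versions, plus
the old versions, page overhead and the WAL records themselves, so a few
16MB segments don't seem impossible. Whether that alone accounts for the
30 seconds I can't tell.)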
Jörg