From: Antonios Christofides <anthony(at)itia(dot)ntua(dot)gr>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date: 2005-02-16 09:04:00
Message-ID: 20050216090400.GA3131@itia.ntua.gr

Tom Lane wrote:
> Antonios Christofides <anthony(at)itia(dot)ntua(dot)gr> writes:
> > Why 25 seconds for appending an element?
>
> Would you give us a specific test case, rather than a vague description
> of what you're doing?

OK, sorry, here it is (on another machine, so the timings differ from
my earlier report: PostgreSQL 8.0.1 on a Pentium IV 1.6 GHz with 512 MB
RAM, Debian woody, kernel 2.4.18):

CREATE TABLE test (id integer NOT NULL PRIMARY KEY, records text[]);

INSERT INTO test(id, records) VALUES (1,
'{"1993-09-30 13:20,182,",
"1993-09-30 13:30,208,",
"1993-09-30 13:51,203,",
[snipping around 2 million elements]
"2057-02-13 02:31,155,",
"2099-12-08 10:39,198,"}');

[Took 60 seconds]
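(By the way, anyone wanting to reproduce this without pasting a
two-million-line literal could build an array of the same size with
something like the following - dummy element values, which shouldn't
matter since only the size of the datum is relevant; generate_series
is available in 8.0:

-- Same element size as the real records, repeated 2000006 times
INSERT INTO test (id, records)
    SELECT 2, ARRAY(SELECT '1993-09-30 13:20,182,'::text
                    FROM generate_series(1, 2000006));
)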

SELECT array_dims(records) FROM test;
array_dims
-------------
[1:2000006]
(1 row)

UPDATE test SET records[2000007] = 'hello, world!';

[11 seconds]

UPDATE test SET records[1000000] = 'hello, world!';

[15 seconds (but the difference may be due to system load - I don't
have a completely idle machine available right now)]

I expected both UPDATE commands above to be practically instant.
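My guess is that the whole array is toasted as a single datum, so
assigning to any element forces the entire value to be read and
rewritten. For comparison, the conventional one-row-per-record layout
makes an append a plain INSERT into one small row. A rough sketch,
assuming the timestamp/value/flags split implied by the element format
(table and column names are just for illustration):

CREATE TABLE test_rows (
    id    integer   NOT NULL,   -- timeseries id
    stamp timestamp NOT NULL,   -- record timestamp
    value numeric,              -- measured value
    flags text,                 -- trailing flags field
    PRIMARY KEY (id, stamp)
);

-- Appending a record touches one small row instead of
-- rewriting a multi-megabyte array datum.
INSERT INTO test_rows (id, stamp, value, flags)
    VALUES (1, '2099-12-08 10:40', 200, '');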
