From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Antonios Christofides <anthony(at)itia(dot)ntua(dot)gr>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date: 2005-02-15 14:56:22
Message-ID: 12079.1108479382@sss.pgh.pa.us
Lists: pgsql-general
Antonios Christofides <anthony(at)itia(dot)ntua(dot)gr> writes:
> Why 25 seconds for appending an element?
Would you give us a specific test case, rather than a vague description
of what you're doing?
> (2) I also tried using a large (80M) text instead (i.e. instead of
> storing an array of lines, I store a huge plain text file). What
> surprised me is that I can get the 'tail' of the file (using
> substring) in only around one second, although it is transparently
> compressed (to 17M). It doesn't decompress the entire string, does
> it? Does it store it somehow chunked?
http://www.postgresql.org/docs/8.0/static/storage-toast.html
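In short, per that page: values wider than a couple of kilobytes are sliced into chunks and stored out of line in a TOAST table, so `substring()` can fetch only the chunks covering the requested range rather than the whole value. The default EXTENDED strategy also compresses the value, which can interfere with that optimization; the docs suggest switching a column to EXTERNAL storage when fast substring access matters. A hedged sketch (the table and column names here are made up for illustration, not from the original message):

```sql
-- Hypothetical table holding one big text value per timeseries.
CREATE TABLE ts_blob (id integer PRIMARY KEY, data text);

-- Store the column out of line but uncompressed, so substring()
-- reads only the TOAST chunks it needs instead of decompressing
-- the entire value.
ALTER TABLE ts_blob ALTER COLUMN data SET STORAGE EXTERNAL;

-- Fetching the "tail" then touches only the last few chunks.
SELECT substring(data FROM char_length(data) - 4095)
  FROM ts_blob
 WHERE id = 1;
```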
> What I'm trying to do is find a good way to store timeseries. A
> timeseries is essentially a series of (date, value) pairs, and more
> specifically it is an array of records, each record consisting of
> three items: date TIMESTAMP, value DOUBLE PRECISION, flags TEXT.
In practically every case, the answer is to use a table with rows
of that form. SQL just isn't designed to make it easy to do something
else.
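Tom's suggestion can be sketched as plain DDL; the table and column names below are assumptions for illustration, using the record layout from the question:

```sql
-- One row per observation, rather than one huge array or text blob.
CREATE TABLE timeseries (
    ts_id  integer NOT NULL,           -- which series the row belongs to
    date   timestamp NOT NULL,
    value  double precision,
    flags  text,
    PRIMARY KEY (ts_id, date)
);

-- Appending a point is a cheap single-row insert,
-- not a rewrite of an 80 MB value:
INSERT INTO timeseries VALUES (1, current_timestamp, 42.0, NULL);

-- The "tail" of a series becomes an ordered index scan:
SELECT date, value, flags
  FROM timeseries
 WHERE ts_id = 1
 ORDER BY date DESC
 LIMIT 100;
```

With the primary-key index on `(ts_id, date)`, both the insert and the tail query stay fast regardless of how long the series grows.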
regards, tom lane