From: Antonios Christofides <anthony(at)itia(dot)ntua(dot)gr>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date: 2005-02-16 09:24:49
Message-ID: 20050216092449.GA3165@itia.ntua.gr

Shridhar Daithankar wrote:
> Perhaps you could attempt to store a fixed small number of records per row,
> say 4-6? Or maybe a smaller fixed-size array? That should make the row
> overhead less intrusive...

Thanks; I didn't like your idea, but it helped me come up with another
one:

(timeseries_id integer, top text, middle text, bottom text);

The entire timeseries is the concatenation of 'top' (a few records),
'middle' (millions of records), and 'bottom' (a few records). To get
the last record, or to append a record, you only read/write 'bottom',
which is very fast. Whenever the entire timeseries is written (a less
frequent operation), the division into these three parts will be
redone, thus keeping 'bottom' small.
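The scheme can be sketched in Python rather than SQL (the class name, the
one-record-per-list-element representation, and the head size of 5 are my
own illustrative assumptions, not part of the proposal):

```python
# Sketch of the three-chunk timeseries idea: appends and last-record
# reads only touch the small "bottom" chunk; a full rewrite (the rare
# operation) redoes the division so "bottom" becomes small again.

class ThreeChunkSeries:
    def __init__(self, records, head=5):
        # Full write: a few records go into "top", the bulk into
        # "middle", and "bottom" starts empty so appends stay cheap.
        self.top = records[:head]
        self.middle = records[head:]
        self.bottom = []

    def append(self, record):
        # Cheap: only the small "bottom" chunk is modified.
        self.bottom.append(record)

    def last(self):
        # Cheap: the most recent record sits at the end of "bottom"
        # (or of "middle"/"top" if nothing has been appended yet).
        for chunk in (self.bottom, self.middle, self.top):
            if chunk:
                return chunk[-1]
        raise IndexError("empty timeseries")

    def all_records(self):
        # The entire timeseries is the concatenation of the parts.
        return self.top + self.middle + self.bottom

    def rewrite(self):
        # The infrequent full write: redistribute everything so that
        # "bottom" is small (here: empty) again.
        self.__init__(self.all_records())
```

In the actual table, `append` would be an `UPDATE` of the `bottom` column
only, so PostgreSQL never has to read or rewrite the multi-megabyte
`middle` value on the hot path.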

