Re: huge price database question..

From: Jim Green <student(dot)northwestern(at)gmail(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: David Kerr <dmk(at)mr-paradox(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: huge price database question..
Date: 2012-03-21 02:35:34
Message-ID: CACAe89xTza6HUAmKapmB-DrAqE1jfx0020t86+6DmRmV16Hftw@mail.gmail.com
Lists: pgsql-general

On 20 March 2012 22:25, Andy Colson <andy(at)squeakycode(dot)net> wrote:
> I think the decision among:
>
> 1) one big table
> 2) one big partitioned table
> 3) many little tables
>
> would probably depend on how you want to read the data.  Writing would be
> very similar.
>
> I tried to read through the thread but didn't see how you're going to read it.
>
> I have apache logs in a database.  Single table, about 18 million rows.  I
> have an index on hittime (it's a timestamp), and I can pull a few hundred
> records based on a time, very fast.  On the other hand, a count(*) on the
> entire table takes a while.  If you are going to hit lots and lots of
> records, I think the multi-table approach (which includes partitioning)
> would be faster.  If you can pull out records based on an index, and be
> very selective, then one big table works fine.
> On the Perl side, use COPY.  I have code in Perl that uses it (and reads
> from .gz as well), and it's very fast.  I can post some if you'd like.
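
(Andy's actual code is not included in this message. For reference, here
is a minimal sketch of the COPY-from-Perl approach he describes, assuming
DBD::Pg's COPY interface (pg_putcopydata/pg_putcopyend) and the core
IO::Uncompress::Gunzip module; the table, column, and file names are
invented for illustration:)

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use IO::Uncompress::Gunzip qw($GunzipError);

# AutoCommit off so the whole load runs in one transaction.
my $dbh = DBI->connect('dbi:Pg:dbname=prices', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

# Stream the compressed file without unpacking it to disk first.
my $fh = IO::Uncompress::Gunzip->new('ticks.csv.gz')
    or die "gunzip failed: $GunzipError";

# COPY is much faster than row-by-row INSERTs for bulk loading.
$dbh->do('COPY ticks (symbol, ts, price, volume) FROM STDIN WITH (FORMAT csv)');
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);
}
$dbh->pg_putcopyend();

$dbh->commit;
$dbh->disconnect;

(Loading inside a single transaction avoids per-row commit overhead,
which is part of why COPY outperforms individual INSERTs here.)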

My queries would mostly be selects for one symbol on one particular day,
or for a few hours within a particular day; occasionally I would select
multiple symbols over some timestamp range. Your code sample would be
appreciated, thanks!
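
(Concretely, against a hypothetical ticks table with an index on
(symbol, ts), those reads would look something like the following; the
symbols and dates are only placeholders:)

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=prices', 'user', 'password',
                       { RaiseError => 1 });

# One symbol for one day (or a few hours of it): a range scan
# that the (symbol, ts) index can satisfy selectively.
my $day = $dbh->selectall_arrayref(
    'SELECT ts, price, volume FROM ticks
      WHERE symbol = ? AND ts >= ? AND ts < ?',
    undef, 'IBM', '2012-03-20 09:30', '2012-03-20 16:00');

# Several symbols over a timestamp range.
my $multi = $dbh->selectall_arrayref(
    'SELECT symbol, ts, price FROM ticks
      WHERE symbol IN (?, ?) AND ts >= ? AND ts < ?',
    undef, 'IBM', 'MSFT', '2012-03-20', '2012-03-21');

(With an index on (symbol, ts), both queries stay selective index scans
rather than full-table scans, which matches Andy's point about one big
table working fine for this access pattern.)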

Jim.

