Re: Using Postgres to store high volume streams of sensor readings

From: "Diego Schulz" <dschulz(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Using Postgres to store high volume streams of sensor readings
Date: 2008-11-21 20:26:02
Message-ID: 47dcfe400811211226u625e96e7hd9694ef73d5aa612@mail.gmail.com
Lists: pgsql-general

On Fri, Nov 21, 2008 at 9:50 AM, Ciprian Dorin Craciun <
ciprian(dot)craciun(at)gmail(dot)com> wrote:

>
> Currently I'm benchmarking the following storage solutions for this:
> * Hypertable (http://www.hypertable.org/) -- which has a good insert
> rate (about 250k inserts / s), but a slow read rate (about 150k reads /
> s); (the aggregates are computed manually, as Hypertable supports no
> queries other than scans (in fact min and max are easy,
> being the first / last key in the ordered set, but avg must be done
> by sequential scan);)
> * BerkeleyDB -- quite OK insert rate (about 50k inserts / s), but a
> fabulous read rate (about 2M reads / s); (the same issue with
> aggregates;)
> * Postgres -- which behaves quite poorly (see below)...
> * MySQL -- next to be tested;
>

I think it'll also be interesting to see how SQLite 3 performs in this
scenario. Any plans?
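For what it's worth, a minimal sketch of what such an SQLite 3 benchmark might look like, assuming a hypothetical (sensor_id, ts, value) schema and Python's stdlib sqlite3 bindings (the schema, table name, and row counts are my own assumptions, not from the thread):

```python
import sqlite3
import time

# Hypothetical schema: one reading per (sensor_id, timestamp) pair.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous = OFF")  # trade durability for bulk-load speed
conn.execute(
    "CREATE TABLE readings ("
    " sensor_id INTEGER,"
    " ts INTEGER,"
    " value REAL,"
    " PRIMARY KEY (sensor_id, ts))"
)

N = 100_000
rows = ((i % 100, i, float(i)) for i in range(N))  # 100 fake sensors

start = time.time()
with conn:  # one transaction around the whole batch: critical for SQLite insert rate
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
elapsed = time.time() - start
print(f"{N} inserts in {elapsed:.2f}s ({N / elapsed:,.0f} inserts/s)")

# The aggregates discussed upthread (min, max, avg) are plain SQL here,
# rather than manual scans as with Hypertable / BerkeleyDB:
lo, hi, avg = conn.execute(
    "SELECT MIN(value), MAX(value), AVG(value) FROM readings WHERE sensor_id = 0"
).fetchone()
```

The single wrapping transaction matters most: per-row autocommit would force an fsync per insert and collapse the rate.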

regards

diego
