| From: | "Ciprian Dorin Craciun" <ciprian(dot)craciun(at)gmail(dot)com> | 
|---|---|
| To: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> | 
| Cc: | "Shane Ambler" <pgsql(at)sheeky(dot)biz>, "Diego Schulz" <dschulz(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org | 
| Subject: | Re: Using Postgres to store high volume streams of sensor readings | 
| Date: | 2008-11-22 21:54:32 | 
| Message-ID: | 8e04b5820811221354j4a19b6ddk9b9ba60e3a6bb2a4@mail.gmail.com | 
| Lists: | pgsql-general | 
On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
> <ciprian(dot)craciun(at)gmail(dot)com> wrote:
>>
>>    Hello all!
> SNIP
>>    So I would conclude that relational stores will not make it for
>> this use case...
>
> I was wondering if you guys are having to do all individual inserts or if
> you can batch some number together into a transaction.  Being able to
> put > 1 into a single transaction is a huge win for pgsql.
    I'm aware of the performance difference between single inserts and
batching x inserts into one operation / transaction. That is why in the
case of Postgres I am using COPY <table> FROM STDIN with 5k-row
batches... (I've also tried 10k, 15k, 25k, 50k, 500k, and 1m inserts /
batch with no improvement...)
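
    For reference, a minimal sketch of what the batched COPY FROM STDIN
approach looks like from Python with psycopg2; the table and column
names are placeholders, not the actual schema:

```python
# Sketch of batched COPY ... FROM STDIN with psycopg2.
# "sensor_readings" and its columns are hypothetical placeholders.
import io
import psycopg2

BATCH_SIZE = 5000  # rows per COPY batch, as in the 5k batches above

def copy_batch(conn, rows):
    """Send one batch of (client, sensor, ts, value) tuples to the
    server in a single COPY ... FROM STDIN operation."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(v) for v in row) + "\n")
    buf.seek(0)
    with conn.cursor() as cur:
        cur.copy_from(buf, "sensor_readings",
                      columns=("client", "sensor", "ts", "value"))
    conn.commit()
```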
Ciprian Craciun.