From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Michal Szymanski <dyrex(at)poczta(dot)onet(dot)pl>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Using Postgres to store high volume streams of sensor readings
Date: 2008-11-22 22:32:32
Message-ID: 20081122223232.GE3813@alvh.no-ip.org
Lists: pgsql-general
> On 21 Nov, 13:50, ciprian(dot)crac(dot)(dot)(dot)(at)gmail(dot)com ("Ciprian Dorin Craciun")
> wrote:
> > What have I observed / tried:
> > * I've tested without the primary key and the index, and the
> > results were the best for inserts (600k inserts / s), but the
> > reads were extremely slow (due to the lack of indexing);
> > * with only the index (or only the primary key) the insert rate is
> > good at the start (for the first 2 million readings), but then drops to
> > about 200 inserts / s;
I didn't read the thread so I don't know if this was suggested already:
bulk index creation is a lot faster than retail index inserts. Maybe
one thing you could try is to do the inserts into an unindexed table,
and keep a separate table that you periodically truncate, refill with
the contents of the insert table, and then index. Two main problems:
1. querying during the truncate/refill/reindex window (you can solve
that by building a second table that you "rename into place"); 2. the
query table is almost always somewhat out of date.
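
A rough sketch of what I mean, with made-up table and column names
(adjust to your schema; untested):

    -- Landing table: no indexes, so bulk inserts stay fast.
    CREATE TABLE readings_in (
        sensor_id  integer      NOT NULL,
        read_at    timestamptz  NOT NULL,
        value      numeric      NOT NULL
    );

    -- Periodic refresh: build a fresh copy, index it in bulk, then
    -- swap it in under the name the queries use.  Doing it all in
    -- one transaction means readers only ever see the old table or
    -- the new one, never a half-filled one.
    BEGIN;
    CREATE TABLE readings_next (LIKE readings_in);
    INSERT INTO readings_next SELECT * FROM readings_in;
    DROP TABLE IF EXISTS readings_q;    -- drops its index as well
    ALTER TABLE readings_next RENAME TO readings_q;
    CREATE INDEX readings_q_idx ON readings_q (sensor_id, read_at);
    COMMIT;

Queries would go against readings_q; anything inserted into
readings_in after the last refresh won't show up until the next one,
which is problem 2 above.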
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.