From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: femski <hypertree(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres batch write very slow - what to do
Date: 2007-03-16 02:15:39
Message-ID: 29684.1174011339@sss.pgh.pa.us
Lists: pgsql-performance

femski <hypertree(at)yahoo(dot)com> writes:
> If 17k records/sec is right around what's expected, then I must say I am
> a little disappointed in the "most advanced open source database".
Well, the software is certainly capable of much more than that;
for instance, on a not-too-new Dell x86_64 machine:
regression=# \timing
Timing is on.
regression=# create table t1(f1 int);
CREATE TABLE
Time: 3.614 ms
regression=# insert into t1 select * from generate_series(1,1000000);
INSERT 0 1000000
Time: 3433.483 ms
which works out to something a little shy of 300K rows/sec. Of course
the main difference from what I think you're trying to do is the lack of
any per-row round trips to the client code. But you need to look into
where the bottleneck is, not just assume it's insoluble.
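For comparison, here is a minimal sketch of how the per-row round trip can be avoided from the same session (assuming PostgreSQL 8.2 or later for the multi-row VALUES form; rows.txt is a hypothetical client-side data file):

regression=# -- multi-row VALUES: many rows per statement, one round trip
regression=# insert into t1 values (1),(2),(3),(4);
INSERT 0 4
regression=# -- \copy streams a whole client-side file in a single command
regression=# \copy t1 from 'rows.txt'

Either form keeps the data flowing without a client/server round trip per row, which is usually where an INSERT-per-row loop spends most of its time.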
regards, tom lane