From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Christopher Browne <cbbrowne(at)libertyrms(dot)info>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Concern
Date: 2003-10-24 20:58:11
Message-ID: 1067029090.5995.28.camel@fuji.krosing.net
Lists: pgsql-performance
Christopher Browne wrote on Fri, 24.10.2003 at 22:10:
> That might be something of an improvement, but it oughtn't be
> cripplingly different to use a text field rather than an integer.
I suspect his slowness comes from not running ANALYZE when it would be
time to start using indexes for the FK checks: if you run ANALYZE on an
empty table and then do 10000 inserts, all of those inserts will run
their FK checks via seqscan, since a seqscan really is the fastest way
to check an empty table ;)
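To make that concrete, a minimal sketch (the table names, column names,
and psycopg2 API here are my own illustration, not his actual setup):
seed the referenced table, ANALYZE it so the planner sees it is
non-empty, and only then do the bulk load so the FK checks can use the
index.

```python
# Hypothetical sketch: ANALYZE once the referenced table has data,
# so later FK lookups are planned as index scans, not seqscans.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# 1. seed the referenced (parent) table
for i in range(1, 1001):
    cur.execute("INSERT INTO parent (id) VALUES (%s)", (i,))
conn.commit()

# 2. refresh planner statistics *before* the bulk load
cur.execute("ANALYZE parent")
conn.commit()

# 3. the 10000 child inserts now get index-based FK checks
for i in range(10000):
    cur.execute("INSERT INTO child (parent_id) VALUES (%s)",
                (i % 1000 + 1,))
conn.commit()
```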
> What's crippling is submitting 100,000 queries in 100,000
> transactions. Cut THAT down to size and you'll see performance return
> to being reasonable.
Even this should not be too crippling.
I once did some testing of insert performance and got about 9000
inserts/sec on a 4-CPU Xeon with 2 GB RAM and RAID-5 (likely with
battery-backed cache).
That 9000 dropped to ~250 when I added a primary key index (to a
60,000,000-row table, so the PK index fit only partly in memory), all
of this with separate transactions, but with many clients running
concurrently. (BTW, the clients were not Java/JDBC but Python/psycopg.)
With just one client you are usually stuck at one transaction per disk
revolution, at least without a battery-backed write cache.
Even at 250/sec, inserting 10000 rows should take only 40 seconds.
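And to illustrate the batching point from above, a sketch under the
same assumed names: wrapping the whole load in one transaction pays the
WAL flush at commit once, instead of once per row, so a single client
is no longer bound by disk rotation per insert.

```python
# Hypothetical sketch: one commit for the whole batch, so the
# per-transaction disk flush is paid once, not 10000 times.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# psycopg2 opens a transaction implicitly on the first execute();
# the WAL is only fsynced at commit(), so the single-client case
# is no longer limited to one insert per disk revolution.
for i in range(10000):
    cur.execute("INSERT INTO child (parent_id) VALUES (%s)",
                (i % 1000 + 1,))

conn.commit()  # single flush covers all 10000 rows
```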
--------------
Hannu