From: | Shridhar Daithankar <shridhar(at)frodo(dot)hserus(dot)net> |
---|---|
To: | Josué Maldonado <josue(at)lamundial(dot)hn> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Insert speed question |
Date: | 2004-06-02 07:16:56 |
Message-ID: | 200406021246.56772.shridhar@frodo.hserus.net |
Lists: | pgsql-general |
On Tuesday 01 June 2004 21:42, Josué Maldonado wrote:
> Thanks for your responses,
>
> I did the vacuum, but I cannot run the insert again at this moment. Even
> though that server is not in production, so all the resources should be
> dedicated to Postgres, I think I still have some performance issues
I am not sure I understand. You could not insert? Why? Was there any problem
with the database? Can you use typical Linux tools such as vmstat/top to locate
the bottleneck?
> Did some changes to postgresql.conf according the tuning guide:
> tcpip_socket = true
> max_connections = 28
> shared_buffers = 32768 # min max_connections*2 or 16, 8KB each
> max_fsm_relations = 500 # min 10, fsm is free space map, ~40 bytes
> max_fsm_pages = 80000 # min 1000, fsm is free space map, ~6
> max_locks_per_transaction = 64 # min 10
> sort_mem = 16384 # min 64, size in KB
> vacuum_mem = 419430 # min 1024, size in KB
> checkpoint_segments = 10
> effective_cache_size = 819200 # typically 8KB each
OK, I would say the parameters are still slightly oversized, but there is no
perfect set of parameters. You may still have to tune them according to your
usual workload.
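To make "oversized" concrete, here is a hypothetical, more conservative sizing sketch for a machine with around 1 GB of RAM (I don't know your actual hardware, so treat the numbers as illustrative, not prescriptive):

```
# Hypothetical postgresql.conf sizing for ~1 GB RAM -- adjust to your box.
# shared_buffers = 32768 means 32768 * 8 KB = 256 MB, which is on the high
# side for this era of PostgreSQL; a smaller value often performs better.
shared_buffers = 8192          # 64 MB (8 KB pages)
sort_mem = 8192                # 8 MB per sort; note this is per-sort,
                               # per-connection, so 28 connections can add up
vacuum_mem = 65536             # 64 MB; 419430 (~410 MB) is far more than needed
effective_cache_size = 65536   # ~512 MB of expected OS cache, in 8 KB pages
                               # (819200 would claim ~6.4 GB of cache)
```

The general principle: shared_buffers and sort_mem are memory PostgreSQL actually allocates, while effective_cache_size is only a hint about how much OS file cache the planner may assume, so it should reflect real free memory.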
> A simple query on the 4.8 million row table:
>
> dbmund=# explain analyze select * from pkardex where pkd_procode='8959';
>                                  QUERY PLAN
> ----------------------------------------------------------------------------
>  Index Scan using ix_pkardex_procode on pkardex (cost=0.00..3865.52 rows=991
>  width=287) (actual time=10.879..100.914 rows=18 loops=1)
>    Index Cond: (pkd_procode = '8959'::bpchar)
>  Total runtime: 101.057 ms
> (3 rows)
>
>
> A simple query on the 1.2 million row table:
>
> explain analyze select * from pmdoc where pdc_docto='744144';
>                                  QUERY PLAN
> ----------------------------------------------------------------------------
>  Index Scan using ix_pmdoc_docto on pmdoc (cost=0.00..5.20 rows=2 width=206)
>  (actual time=0.081..0.085 rows=1 loops=1)
>    Index Cond: (pdc_docto = '744144'::bpchar)
>  Total runtime: 0.140 ms
> (3 rows)
I wouldn't say these timings indicate performance issues. 100 ms is pretty
fast, and 0.140 ms even more so.
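One thing worth noting in the first plan: the planner estimated 991 rows but the scan actually found 18. If that estimate/actual gap shows up elsewhere and leads to bad plans, refreshing the statistics may help. A sketch, using the table and column names from your output:

```sql
-- Refresh planner statistics; the gap between rows=991 (estimated)
-- and rows=18 (actual) suggests the stats may be stale.
ANALYZE pkardex;

-- Optionally collect finer-grained stats on the filtered column first:
ALTER TABLE pkardex ALTER COLUMN pkd_procode SET STATISTICS 100;
ANALYZE pkardex;
```

Here the estimate is merely pessimistic (the index scan was chosen anyway), but better statistics make plan choices more reliable as the table grows.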
Note that there is always some latency involved; no matter how much you tune,
it cannot drop below a certain floor. On my last machine (P-III/1 GHz with an
IDE disk) I observed it to be 200 ms no matter what I did. But that machine
could still handle 70 concurrent connections with a worst-case latency of
210 ms. (This was long ago, so the number means little; it is just an
illustration.)
The numbers will differ on your setup, but the trend should be roughly the
same.
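If you want to measure that latency floor and concurrency behaviour on your own hardware rather than guess, pgbench (shipped in contrib/) gives a quick baseline. A sketch; the database name "dbmund" is taken from your session, so substitute your own:

```
# One-time: create and populate the pgbench test tables.
pgbench -i dbmund

# 10 concurrent clients, 100 transactions each; the report includes
# transactions per second, from which per-transaction latency follows.
pgbench -c 10 -t 100 dbmund
```

Run it a few times at increasing -c values and watch where latency starts climbing; that tells you far more about your real headroom than any single EXPLAIN ANALYZE.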
> I would appreciate any comment or suggestion: is a hardware upgrade needed,
> or does this seem "normal" for PostgreSQL performance?
I would ask the question the other way round: what level of performance are you
looking for under your current workload, and by how much does the actual
performance fall short of that expectation?
IMO it is essential to set a target for performance tuning; otherwise it
becomes an endless loop with diminishing returns.
HTH
Shridhar