From: edipoelder(at)ig(dot)com(dot)br
To: pgsql-sql(at)postgresql(dot)org
Subject: Memory and performance
Date: 2001-04-04 17:15:05
Message-ID: 200104041715.f34HF8858323@postgresql.org
Lists: pgsql-sql
Hi all,
I have noticed that PostgreSQL doesn't handle memory well. I created the
tables/procedure (in the attached file) and ran it as "select bench(10,
5000)". This performs 50000 record inserts (5 x 10000). (I ran it on a P200
with 64MB of RAM, under Linux, with PostgreSQL 7.0.2. On a more powerful
machine, you can try other values.)
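Since the attachment isn't reproduced here, the benchmark might look roughly like the following sketch. All table, column, and function names are assumptions; only the call signature bench(groups, rows_per_group) and the timing-per-group output are taken from the message.

```sql
-- Hypothetical reconstruction of the benchmark (names assumed, not from
-- the actual teste.zip attachment).
CREATE TABLE bench_data (id int4, val text);
CREATE TABLE bench_result (id serial, objname text, benchtime interval);

CREATE FUNCTION bench(int4, int4) RETURNS int4 AS '
DECLARE
    groups    ALIAS FOR $1;
    per_group ALIAS FOR $2;
    g  int4;
    i  int4;
    t0 timestamp;
BEGIN
    FOR g IN 1..groups LOOP
        t0 := timeofday()::timestamp;           -- start of this batch
        FOR i IN 1..per_group LOOP
            INSERT INTO bench_data VALUES (i, ''row '' || i);
        END LOOP;
        -- record how long this batch of inserts took
        INSERT INTO bench_result (objname, benchtime)
            VALUES (''group '' || g, timeofday()::timestamp - t0);
    END LOOP;
    RETURN groups * per_group;
END;
' LANGUAGE 'plpgsql';
```

With a harness of this shape, each row in bench_result times one batch of inserts, which would produce a table like the one below.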
I got the following times:
id | objname | benchtime
----+---------+-----------
1 | group 1 | 00:00:32
2 | group 2 | 00:00:47
3 | group 3 | 00:01:13
4 | group 4 | 00:01:41
5 | group 5 | 00:02:08
(5 rows)
Note that the system becomes slower as memory use increases, even though
the system still has free memory to allocate (and yes, 64MB is enough for this
test). I haven't looked at the source code (yet), but I think the data
structure used to keep the changed records is some kind of linked list, and to
insert a new item you have to walk to the end of that list. Can it be optimized?
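One way to probe this hypothesis (a suggested diagnostic, not something from the original message): a single "select bench(...)" call runs entirely inside one transaction, so any per-transaction bookkeeping grows across all five groups. Running each batch as its own top-level statement separates the two possible causes.

```sql
-- Each SELECT is its own transaction. If later batches are as fast as the
-- first, the slowdown is tied to state accumulated within one long
-- transaction; if they still get slower, it scales with total table size.
SELECT bench(1, 10000);  -- batch 1
SELECT bench(1, 10000);  -- batch 2
SELECT bench(1, 10000);  -- batch 3
SELECT * FROM bench_result ORDER BY id;  -- compare the recorded times
```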
In the system I'm developing, I have about 25000 (persons) x 8 (exams)
x 15 (answers per exam) = 3000000 records to process, and it is VERY SLOW.
thanks,
Edipo Elder
[edipoelder(at)ig(dot)com(dot)br]
Attachment: teste.zip (application/octet-stream, 620 bytes)