From:       Richard Huxton <dev(at)archonet(dot)com>
To:         edipoelder(at)ig(dot)com(dot)br
Cc:         pgsql-sql(at)postgresql(dot)org
Subject:    Re: Memory and performance
Date:       2001-04-04 20:34:45
Message-ID: 3ACB8562.FC449374@archonet.com
Lists:      pgsql-sql
edipoelder(at)ig(dot)com(dot)br wrote:
>
> Hi all,
>
> I have noticed that PostgreSQL does not handle memory well. I have
> created the tables/procedure (in the attached file) and run it as "select bench(10,
> 5000)". This produces 50000 record inserts (5 x 10000). (Well, I ran it
> on a P200 + 64MB of RAM, under Linux, with PostgreSQL 7.0.2. On a more powerful
> machine, you can try other values.)
That's 50,000 inserts in one transaction - have you tried 50
transactions of 1000 inserts?
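For example, something along these lines (just a sketch; "bench_data"
is a hypothetical table standing in for the one in your attached
file), run from psql rather than inside one big transaction:

    -- one batch of 1,000 inserts per transaction
    BEGIN;
    INSERT INTO bench_data VALUES (1, 'value 1');
    -- ... 998 more inserts ...
    INSERT INTO bench_data VALUES (1000, 'value 1000');
    COMMIT;
    -- then repeat for the next batch of 1,000 rows

Keeping each transaction small keeps the per-transaction state the
backend has to track bounded, instead of letting it grow across all
50,000 rows.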
> I get the following times as a result:
> 5 | group 5 | 00:02:08
>
> Note that, as memory use increases, the system becomes slow, even though the
> system has free memory to allocate (yes, 64MB is enough for this test). I
> haven't looked at the source code (yet), but I think that the data structure used
> to keep the changed records is a kind of linked list, and to insert a new
> item, you have to walk to the end of this list. Can it be optimized?
I don't fancy your chances before 7.1 ;-)
> In the system that I'm developing, I have about 25000 (persons) x 8 (exams)
> x 15 (answers per exam) = 3000000 records to process, and it is VERY SLOW.
If you need to import large quantities of data, look at the COPY
command; it tends to be faster than row-by-row inserts.
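Something like this (a sketch; the "answers" table and the file path
are made up for illustration, and the file would be tab-delimited):

    -- server-side COPY; the file must be readable by the backend:
    COPY answers FROM '/tmp/answers.dat' USING DELIMITERS '\t';

    -- or client-side, from psql:
    \copy answers from '/tmp/answers.dat'

That loads the whole file in a single command instead of issuing one
INSERT per row.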
- Richard Huxton