From: Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>
To: Konrad Garus <konrad(dot)garus(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: shared_buffers advice
Date: 2010-05-28 08:12:52
Message-ID: AANLkTimwxZ-OiNtLEzKJO_MCuUPW6u-HjW4lW9r0en7D@mail.gmail.com
Lists: pgsql-performance
2010/5/28 Konrad Garus <konrad(dot)garus(at)gmail(dot)com>:
> 2010/5/27 Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>:
>
>> Exactly. And the time to browse depends on the number of blocks already
>> in core memory.
>> I am interested in test results and benchmarks if you are going to run some :)
>
> I am still thinking whether I want to do it on this prod machine.
> Maybe on something less critical first (but still with a good amount
> of memory mapped by page buffers).
>
> What system have you tested it on? Has it ever run on a few-gig system? :-)
I have tested it on databases up to 300GB, for stats purposes.
The snapshot/restore was done on databases of around 40-50GB, but with
only 16GB of RAM.
I really think some improvements are possible before using it in
production, even though it should work well as it is.
At least something to remove orphan snapshot files (in case of
DROP TABLE or TRUNCATE), and probably improving the quality of the
code around the prefetch (better handling of
effective_io_concurrency... the prefetch is linear, but block requests
are grouped); see the sketch below.
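For what it's worth, here is a minimal C sketch of the kind of grouping I
mean (my own illustration, not the actual pgfincore code): consecutive
block numbers are collapsed into runs, and each run becomes a single
posix_fadvise(POSIX_FADV_WILLNEED) request, so the kernel read-ahead sees
fewer, larger requests. The prefetch_blocks() helper and the hard-coded
BLCKSZ are assumptions made for the example.

#define _XOPEN_SOURCE 600       /* for posix_fadvise() */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLCKSZ 8192             /* PostgreSQL block size (assumed default) */

/* Prefetch the blocks listed in blkno[] (sorted ascending) from fd,
 * issuing one advisory read-ahead request per run of consecutive blocks. */
static void
prefetch_blocks(int fd, const unsigned int *blkno, size_t nblocks)
{
    size_t i = 0;

    while (i < nblocks)
    {
        size_t j = i;

        /* extend the run while block numbers stay consecutive */
        while (j + 1 < nblocks && blkno[j + 1] == blkno[j] + 1)
            j++;

        /* one posix_fadvise() call covers the whole run */
        if (posix_fadvise(fd,
                          (off_t) blkno[i] * BLCKSZ,
                          (off_t) (j - i + 1) * BLCKSZ,
                          POSIX_FADV_WILLNEED) != 0)
            perror("posix_fadvise");

        i = j + 1;
    }
}

int
main(int argc, char **argv)
{
    /* example: two runs (0-3, 10-11) and one isolated block (40) */
    unsigned int blocks[] = {0, 1, 2, 3, 10, 11, 40};
    int fd;

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <relation-file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    prefetch_blocks(fd, blocks, sizeof(blocks) / sizeof(blocks[0]));
    close(fd);
    return 0;
}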
If you are able to test/benchmark on a pre-production env, do it :)
--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support