From: Stefan Keller <sfkeller(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Wales Wang <wormwang(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, Stephen Frost <sfrost(at)snowman(dot)net>
Subject: Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?
Date: 2012-02-26 10:56:44
Message-ID: CAFcOn28k5=B=EY3WOHF0HYOkcAZqzpS+FvTAXzgfj85PM08Msw@mail.gmail.com
Lists: pgsql-performance
Hi Jeff and Wales,
On 2012/2/26, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>> The problem is that the initial queries are too slow - and there is no
>> second chance. I do have to trash the buffer every night. There is
>> enough main memory to hold all table contents.
>
> Just that table, or the entire database?
The entire database. It consists of only about 5 tables which are
similar but have different geometry types, plus a relations table (as
OpenStreetMap calls it).
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>
> Instead, warm the OS cache. Then data will get transferred into the
> postgres shared_buffers pool from the OS cache very quickly.
>
> tar -c $PGDATA/base/ |wc -c
Ok. So by "OS cache" you mean the files which, to me, are THE database itself?
To me, a cache is a second storage layer with "controlled redundancy",
kept for performance reasons.
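Just to make the suggestion concrete, I read it as something like the
following (a rough sketch; it assumes $PGDATA points at the cluster's
data directory):

# Read every data file once so the kernel pulls the blocks into the
# OS page cache; the output itself is thrown away.
tar -c $PGDATA/base/ | wc -c

# Per-file variant doing the same thing. Either way this only warms
# the OS cache, not PostgreSQL's shared_buffers.
find $PGDATA/base -type f -exec cat {} + > /dev/null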
>> 2. Are there any hints on how to tell Postgres to read in all table
>> contents into memory?
>
> I don't think so, at least not in core. I've wondered if it would
> make sense to suppress ring-buffer strategy when there are buffers on
> the free-list. That way a sequential scan would populate
> shared_buffers after a restart. But it wouldn't help you get the
> indexes into cache.
So, are there any developments going on in PostgreSQL along the lines
Stephen suggested in the earlier thread?
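In the meantime, a crude client-side warm-up is probably all I can do:
a sequential scan of each table at least pulls its heap pages through
the OS cache. A sketch (the database and table names below are just
the usual osm2pgsql ones, purely as an illustration):

# count(*) forces a full sequential scan of each table.
for t in planet_osm_point planet_osm_line planet_osm_polygon \
         planet_osm_roads planet_osm_rels; do
  psql -d osm -c "SELECT count(*) FROM $t;"
done

As noted above, that still does nothing for the indexes.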
On 2012/2/26, Wales Wang <wormwang(at)yahoo(dot)com> wrote:
> You can try PostgreSQL 9.x master/slave replication, then try running the slave
> on a persistent RAM filesystem (tmpfs),
> so you access all your data from the slave PostgreSQL that runs on tmpfs.
Nice idea.
I have a single scaled-up server, and up to now I have hesitated to
allocate, say, 48 gigabytes (out of 72) to such a RAM filesystem (tmpfs).
Still, wouldn't it be more flexible if I could dynamically instruct
PostgreSQL to behave like an in-memory database?
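For the record, the tmpfs-replica setup I'd sketch would be roughly the
following (the size, paths and host name are only assumptions for
illustration):

# RAM-backed filesystem for the replica's data directory.
sudo mount -t tmpfs -o size=48G tmpfs /mnt/pg_tmpfs

# Base backup of the primary onto the RAM filesystem.
pg_basebackup -D /mnt/pg_tmpfs/data -h primary.example.org -U replication -P

# Plus a recovery.conf with standby_mode = 'on' and primary_conninfo,
# then start the standby against the RAM-backed data directory.
pg_ctl -D /mnt/pg_tmpfs/data start

Of course the tmpfs contents are gone after a reboot, so the standby
would have to be rebuilt from the primary each time.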
Yours, Stefan