From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Stefan Keller <sfkeller(at)gmail(dot)com>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?
Date: 2012-03-02 00:35:23
Message-ID: CAMkU=1xfHkC6JuSMxYqXuK9gXr9b-_LxX-_0-zvywVMbrWwTjQ@mail.gmail.com
Lists: pgsql-performance
On Wed, Feb 29, 2012 at 7:28 AM, Stefan Keller <sfkeller(at)gmail(dot)com> wrote:
> 2012/2/29 Stefan Keller <sfkeller(at)gmail(dot)com>:
>> 2012/2/29 Jeff Janes <jeff(dot)janes(at)gmail(dot)com>:
>>>> It's quite possible the vacuum full is thrashing your disk cache due
>>>> to maintenance_work_mem. You can overcome this issue with the tar
>>>> trick, which is more easily performed as:
>>>>
>>>> tar cf /dev/null $PG_DATA/base
>>>
>>> But on many implementations, that will not work. tar detects the
>>> output is going to the bit bucket, and so doesn't bother to actually
>>> read the data.
>>
>> Right.
>> But what about the commands cp $PG_DATA/base /dev/null or cat
>> $PG_DATA/base > /dev/null ?
>> They seem to do something.
For me they both give errors, because neither of them works on a
directory, only on ordinary files.
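
If the goal is just to pull the file contents into the OS page cache, something along these lines should work (a sketch, assuming $PG_DATA is set to your data directory; find descends into the directory and cat reads each ordinary file):

```shell
# Read every ordinary file under the data directory's base/ subtree,
# discarding the bytes; the side effect is that the kernel caches them.
find "$PG_DATA/base" -type f -exec cat {} + > /dev/null
```

Unlike cp or cat applied to the directory itself, this never hands a directory to cat, so it avoids the errors above.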
>
> ...or let's try /dev/zero instead of /dev/null:
> tar cf /dev/zero $PG_DATA/base
That does seem to work.
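
Presumably that works because GNU tar only special-cases /dev/null as the archive target, not /dev/zero, so it actually reads the files. Another way to defeat that shortcut (a sketch; piping to stdout forces tar to produce, and therefore read, the archive data):

```shell
# Writing the archive to a pipe prevents tar from detecting a
# bit-bucket target, so every member file really gets read.
tar cf - "$PG_DATA/base" | cat > /dev/null
```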
So, does it solve your problem?
Cheers,
Jeff