From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: regarding CLUSTER and HUGE work_mem / maintenance_work_mem
Date: 2012-01-27 18:05:06
Message-ID: 4F22E752.5000506@enterprisedb.com
Lists: pgsql-performance
On 27.01.2012 19:43, Jon Nelson wrote:
> Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
> Furthermore, let's say I have a machine with sufficient memory for me
> to set the work_mem and maintenance_work_mem to 20GB (just for this
> session).
> When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
> way of strace) performing an index scan which amounts to large
> quantities of random I/O.
> In my case, that means it takes a very, very long time. PostgreSQL is
> largely at defaults, except for a 2GB shared_buffers and a few
> unrelated changes. The system itself has 32GB of physical RAM and has
> plenty free.
> Why didn't PostgreSQL just read the table into memory (and the
> interesting index) as a sequential scan, sort, and then write it out?
> It seems like there would be more than enough memory for that. The
> sequential I/O rate on this machine is 50-100x the random I/O rate.
>
> I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux 6.1.
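(For reference, a minimal sketch of the kind of session described above; the
table and index names are hypothetical, and setting work_mem above 2GB assumes
a 64-bit build:)

    -- per-session memory settings, then cluster on the chosen index
    SET work_mem = '20GB';
    SET maintenance_work_mem = '20GB';
    CLUSTER mytable USING mytable_pkey;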
Support for doing a seqscan+sort in CLUSTER was introduced in
version 9.1. Before that, CLUSTER always did an index scan. See the release
notes: http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416
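(On 8.4, the alternative documented on the CLUSTER reference page is to rewrite
the table with an ORDER BY, which gives you the seqscan+sort path manually. A
rough sketch with hypothetical table/column names; note that indexes,
constraints, privileges and other ancillary properties are not carried over and
have to be recreated on the new table:)

    BEGIN;
    CREATE TABLE mytable_sorted AS
        SELECT * FROM mytable ORDER BY created_at;
    -- recreate indexes, constraints, grants, etc. on mytable_sorted here
    DROP TABLE mytable;
    ALTER TABLE mytable_sorted RENAME TO mytable;
    COMMIT;
    ANALYZE mytable;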
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com