From: | "Lawrence, Ramon" <ramon(dot)lawrence(at)ubc(dot)ca> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | <pgsql-hackers(at)postgresql(dot)org>, <pandasuit(at)gmail(dot)com> |
Subject: | Re: The testing of multi-batch hash joins with skewed data sets patch |
Date: | 2009-02-11 04:04:16 |
Message-ID: | 6EEA43D22289484890D119821101B1DF2C1950@exchange20.mercury.ad.ubc.ca |
Lists: pgsql-hackers
> -----Original Message-----
> From: pgsql-hackers-owner(at)postgresql(dot)org [mailto:pgsql-hackers-
> owner(at)postgresql(dot)org] On Behalf Of Tom Lane
> But really there are two different performance regimes here, one where
> the hash data is large enough to spill to disk and one where it isn't.
> Reducing work_mem will cause data to spill into kernel disk cache, but
> if the total problem fits in RAM then very possibly that data won't
> ever really go to disk. So I suspect such a test case will act more
> like the small-data case than the big-data case. You probably
> actually need more data than RAM to be sure you're testing the
> big-data case.
Is there a way to limit the kernel disk cache? (We are running SUSE
Linux.)
We have been testing hybrid hash join performance and have found that it
varies considerably less than expected, even with dramatic changes in
work_mem and in the number of I/Os that appear to be performed. The kind
of comparison we run is sketched below.
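As a minimal sketch (the fact/dim tables and their columns are
illustrative names, not our actual test schema): we run the same join at
two work_mem settings and compare run times; current PostgreSQL releases
also report Buckets, Batches, and Memory Usage for the Hash node under
EXPLAIN ANALYZE, which makes the spill behavior visible.

    -- Illustrative only: fact/dim and their columns are hypothetical.
    -- Compare run time and the Hash node's reported Batches at the two settings.
    SET work_mem = '256MB';          -- large enough for a single-batch join
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM fact f JOIN dim d ON f.dim_id = d.id;

    SET work_mem = '1MB';            -- force a multi-batch (spilling) join
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM fact f JOIN dim d ON f.dim_id = d.id;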
--
Ramon Lawrence