From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Fehrle <brianf(at)consistentstate(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: two table join just not fast enough.
Date: 2011-11-02 23:53:02
Message-ID: 29294.1320277982@sss.pgh.pa.us
Lists: pgsql-performance
Brian Fehrle <brianf(at)consistentstate(dot)com> writes:
> I've got a query that I need to squeeze as much speed out of as I can.
Hmm ... are you really sure this is being run with work_mem = 50MB?
The hash join is getting "batched", which means the executor thinks it's
working under a memory constraint significantly less than the size of
the filtered inner relation, which should be no more than a couple
megabytes according to this.
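One way to verify what the executor is actually working with is to check the session's effective setting and re-run the plan with it raised; a minimal sketch (the query itself is elided, and the setting names are the standard PostgreSQL ones):

```sql
-- Show the value in effect for THIS session; it may differ from
-- postgresql.conf if a role- or database-level setting overrides it.
SHOW work_mem;

-- Raise it for the session only, then re-examine the plan.
SET work_mem = '50MB';

-- In the ANALYZE output, "Batches: 1" on the Hash node means the
-- filtered inner relation now fits in memory; "Batches: 2" or more
-- means the executor is still spilling to disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the original join query
```

A per-session `SET` is the safest way to test this, since it leaves the server-wide default untouched.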
I'm not sure how much that will save, since the hashjoin seems to be
reasonably speedy anyway, but there's not much other fat to trim here.
One minor suggestion is to think whether you really need string
comparisons here or could convert that to use of an enum type.
String compares ain't cheap, especially not in non-C locales.
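If the column in question only ever holds a small, fixed set of values, converting it to an enum replaces locale-aware `strcoll()` comparisons with cheap internal integer comparisons. A hypothetical sketch (table, column, and value names are invented for illustration):

```sql
-- Define an enum covering the known values (hypothetical).
CREATE TYPE order_status AS ENUM ('pending', 'active', 'closed');

-- Convert the existing text column in place; the USING clause
-- casts each stored value into the new type.
ALTER TABLE orders
    ALTER COLUMN status TYPE order_status
    USING status::order_status;

-- Equality tests and sorts on the column now compare the enum's
-- fixed internal ordering rather than collating strings.
SELECT * FROM orders WHERE status = 'active';
```

The trade-off is that adding a new value later requires `ALTER TYPE ... ADD VALUE`, so this fits best when the value set is stable.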
regards, tom lane
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Gavin Flower | 2011-11-03 00:17:49 | Re: Guide to PG's capabilities for inlining, predicate hoisting, flattening, etc? |
| Previous Message | CS DBA | 2011-11-02 21:53:28 | Re: Poor performance on a simple join |