Re: question on hash joins

From: Tom Lane <tgl@sss.pgh.pa.us>
To: "Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert.m.hartranft@nasa.gov>
Cc: "pgsql-admin@postgresql.org" <pgsql-admin@postgresql.org>
Subject: Re: question on hash joins
Date: 2017-10-19 14:14:09
Message-ID: 30523.1508422449@sss.pgh.pa.us
Lists: pgsql-admin

"Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert(dot)m(dot)hartranft(at)nasa(dot)gov> writes:
> Sorry if I am being dense, but I still have a question…
> Is it possible for me to estimate the size of the hash and a value for
> the temp_file_limit setting using information in the explain plan?

Well, it'd be (row_overhead + data_width) * number_of_rows.

Poking around in the source code, it looks like the row_overhead in
a tuplestore temp file is 10 bytes (can be more if you have nulls in
the data). Your example seemed to be storing one bigint column,
so data_width is 8 bytes. data_width can be a fairly squishy thing
to estimate if the data being passed through the join involves variable-
width columns, but the planner's number is usually an OK place to start.
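
As a concrete sketch (the row count here is an assumed figure, purely
for illustration): if the hashed input were 50 million bigint rows,

    (10 + 8) bytes * 50,000,000 rows = 900,000,000 bytes, or about 860 MB

temp_file_limit is measured in kilobytes, so you'd want a setting
comfortably above 880000 for that spill to complete. Bear in mind the
limit applies to a session's total temp-file usage at any one time, so
other spills in the same query count against it too.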

> For example, one possibility is that the hash contains the entire tuple for each
> matching row.

No, it's just the columns that need to be used in or passed through the
join. If you want to be clear about this you can use EXPLAIN VERBOSE
and check what columns are emitted by the plan node just below the hash.
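
For instance (the table and column names here are hypothetical), a
join on a single bigint key might produce a plan like this, with the
cost figures trimmed out:

    EXPLAIN VERBOSE
    SELECT big.id FROM big JOIN small USING (id);

     Hash Join
       Output: big.id
       Hash Cond: (big.id = small.id)
       ->  Seq Scan on public.big
             Output: big.id
       ->  Hash
             Output: small.id
             ->  Seq Scan on public.small
                   Output: small.id

The "Output:" line on the scan node beneath the Hash shows what
actually goes into the hash table -- in this sketch, just the 8-byte
join key, not the whole tuple.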

regards, tom lane
