Re: What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Moshe Jacobson <moshe(at)neadwerx(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?
Date: 2012-10-02 13:18:24
Message-ID: CAHyXU0x4F0=11ZgvLGUDRKhF3R8w3W9UE7D_BapK=0Aw_p3pmg@mail.gmail.com
Lists: pgsql-general

On Mon, Oct 1, 2012 at 3:58 PM, Moshe Jacobson <moshe(at)neadwerx(dot)com> wrote:
> On Mon, Oct 1, 2012 at 12:22 PM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>>
>> >> *) Functions without exception blocks are faster than those with.
>> >> *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
>> >
>> > I don't think that can be assumed by your premise above. Essentially we
>> > are
>> > comparing the price of starting an exception block against checking the
>> > catalog for a table.
>>
>> A vanilla create table has to scan the catalogs also.
>
>
> Yes but that is irrelevant to the discussion. I am comparing the speed of
> repeated table existence checks with the speed of repeated exception blocks
> that access said table.

Both approaches have to do a catalog scan (even with an exception block
established, the server still has to scan the catalog internally in
order to raise the appropriate error). The exception block has the
additional overhead of a subtransaction.
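The two patterns being compared can be sketched roughly like this (table and function names are illustrative, not from the thread; the first version pays the subtransaction cost on every call):

```sql
-- Pattern 1: trap the error in an exception block.
-- Entering the block starts a subtransaction each time.
CREATE OR REPLACE FUNCTION ensure_scratch_via_exception() RETURNS void AS $$
BEGIN
  BEGIN
    CREATE TEMP TABLE scratch (id int);  -- errors if it already exists
  EXCEPTION WHEN duplicate_table THEN
    NULL;  -- table was already there; swallow the error
  END;
END;
$$ LANGUAGE plpgsql;

-- Pattern 2: let the server do the existence check (9.1+),
-- with no subtransaction involved.
CREATE OR REPLACE FUNCTION ensure_scratch_if_not_exists() RETURNS void AS $$
BEGIN
  CREATE TEMP TABLE IF NOT EXISTS scratch (id int);
END;
$$ LANGUAGE plpgsql;
```

Both versions end up scanning the catalog to see whether the table exists; only the first adds the subtransaction on top.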

>> > We already use connection pooling with pgbouncer, but upon disconnect,
>> > it
>> > issues a DISCARD ALL statement [...]
>>
>> Especially if you're using pgbouncer transaction mode, using temporary
>> tables is probably not a good idea.
>
> We are using it in session mode, so none of that is relevant to my
> situation.

OK (but I don't like solutions that prevent stateless connection
pooling). In just about all cases where scalability was a concern and
I used session-scoped objects, I ended up regretting it somewhere down
the line. A more stateless approach has a lot of advantages besides
supporting more aggressive connection pooling -- for example, you can
restart the server and all your connected clients won't lose local
temporary data. Anyway, enough lecturing -- I'm sure you've got
things pretty well figured out :-).

>> >> *) You can rig permanent tables around pg_backend_pid(). [...]
>>
>> >
>> > We currently do use permanent tables using pg_backend_pid(). It's
>> > because of
>> > the connection pooling specifically that we are having problems with
>> > stale
>> > data. I have been unable to find a way to automatically clear that data
>> > upon
>> > start or end of a session, or at least check if it's been set in this
>> > session or not.
>>
>> IMO the right way to do it is to generate a unique application token
>> [...] when your application session logs in. That token should be passed
>> into
>> *all* your session specific backend functions [...]
>
>
> No, this will not work because the backend functions are trigger functions,
> so they cannot be passed this data.

Sure they can... there are a number of ways to do it (although they all
require either A. an extra round trip to establish the ID for later
statements in the transaction to read, or B. a wrapping function that
handles the work on the server side). That said, if you're super
duper sure you'll never use transaction-mode pooling, temp tables are
OK to use unless your sessions are quite short (in which case all the
plan invalidation flying around will start to hurt).
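One such approach (a sketch, not necessarily what was meant above): stash the application's token in a custom configuration variable with set_config(), and have the trigger function read it back with current_setting(). The variable name "myapp.session_token" and the trigger are hypothetical; on servers before 9.2, the "myapp" class would also need to be listed in custom_variable_classes.

```sql
-- At session start, the application registers its token once
-- (the extra round trip mentioned above):
SELECT set_config('myapp.session_token', 'a1b2c3d4', false);

-- A trigger function can then read the token without having it
-- passed in as an argument:
CREATE OR REPLACE FUNCTION stamp_session_token() RETURNS trigger AS $$
BEGIN
  NEW.session_token := current_setting('myapp.session_token');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Unlike a temp table, a setting established this way costs nothing to look up in the trigger and carries no catalog or plan-invalidation baggage.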

merlin
