Re: Benchmarking partitioning triggers and rules

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Tim Uckun <timuckun(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Benchmarking partitioning triggers and rules
Date: 2015-04-08 00:25:41
Message-ID: 55247585.8000302@BlueTreble.com
Lists: pgsql-general

On 3/12/15 8:15 AM, Tomas Vondra wrote:
> On 12.3.2015 04:57, Tim Uckun wrote:
>> I am using postgres 9.4, the default install with "brew install
>> postgres", no tuning at all. BTW, if I use the postgres.app application,
>> the benchmarks run twice as slow!
>
> I have no idea what brew or postgres.app is, but I strongly recommend
> that you do some tuning.
>
> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
>
>>
>> Why do you think there is such dramatic difference between
>>
>> EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' SELECT
>> ($1).*' USING NEW ;
>>
>> and
>>
>> EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' VALUES(
>> ($1).*)' USING NEW ;
>>
>> One is thirty percent faster than the other. Also, is there an even
>> better way that I don't know about?
>
> Because processing dynamic SQL commands (i.e. EXECUTE '...') is simply
> more expensive, as it has to do more work on every execution. There
> are reasons for that, but you may think of it as regular queries vs.
> prepared statements.
>
> Prepared statements are parsed and planned once; a regular query needs
> to be parsed and planned over and over again.
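
For reference, here is a minimal sketch of the static-SQL alternative
(the "measurements" parent table, its monthly children and the
"logdate" column are hypothetical names used only for illustration):
plpgsql plans each static INSERT once and caches it, so nothing is
re-parsed per row.

-- Hypothetical monthly partitioning of measurements(logdate).
-- Static INSERTs are planned once and cached by plpgsql, unlike
-- EXECUTE '...', which is re-parsed and re-planned for every row.
CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2015-03-01' AND NEW.logdate < DATE '2015-04-01' THEN
        INSERT INTO measurements_y2015m03 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2015-04-01' AND NEW.logdate < DATE '2015-05-01' THEN
        INSERT INTO measurements_y2015m04 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurements
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

The trade-off is that the IF chain has to be edited whenever partitions
are added or dropped, whereas the EXECUTE variants above build the
target table name dynamically.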

BTW, if you're that concerned about performance, you could probably do a
lot better than a plpgsql trigger by writing one in C. There's an
enormous amount of code involved just in parsing and starting a plpgsql
trigger, and then it has to re-parse the dynamic SQL for every single
row, whereas a C trigger could avoid almost all of that.
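
A rough sketch of how such a C trigger would be hooked up (the function
name, library path and the C body itself are hypothetical and omitted;
the C code would pick the target child and insert via SPI using a
cached plan):

-- Hypothetical: trigger function compiled in C, shipped as a shared library.
CREATE FUNCTION partition_insert_trigger()
RETURNS trigger
AS '$libdir/partition_insert_trigger'
LANGUAGE C;

CREATE TRIGGER insert_partition
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE partition_insert_trigger();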

Rules are likely to be even faster (at least until you get to a fairly
large number of partitions), but as Tomas mentioned, they're very tricky
to use. The critical thing to remember with them is that they essentially
rewrite the original query itself.
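
For comparison, a rule-based version for the same hypothetical
measurements tables might look like the following; note that the rule
rewrites the INSERT statement at query-rewrite time rather than firing
once per row:

-- Hypothetical: redirect INSERTs on the parent to the matching child.
CREATE RULE measurements_insert_y2015m03 AS
    ON INSERT TO measurements
    WHERE (logdate >= DATE '2015-03-01' AND logdate < DATE '2015-04-01')
    DO INSTEAD
    INSERT INTO measurements_y2015m03 VALUES (NEW.*);

One such rule is needed per child table, and COPY into the parent
bypasses rules entirely, which is one of the gotchas alluded to above.
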
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
