Re: auto-filling a field on insert

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: semi-ambivalent <thefronny(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: auto-filling a field on insert
Date: 2009-10-25 13:20:38
Message-ID: 4AE450A6.4050401@postnewspapers.com.au
Lists: pgsql-general

Scott Marlowe wrote:
> On Fri, Oct 23, 2009 at 9:32 PM, semi-ambivalent <thefronny(at)gmail(dot)com> wrote:
>> Thanks everyone for the pointers. I like the idea of getting rid of
>> the concatenated field even though it reduced the worst case query
>> time to one tenth of what I had been seeing. But for now I'm going to
>> keep it there because I'm ignorant about triggers so this will be a
>> good opportunity to learn about them before I drop the column for
>> something more efficient, assuming there is.
>
> The multi column index should give you equivalent speed.

In fact, it may well give you significantly superior speed, particularly
if at least one of the columns always has a value specified for it in a
query. Put that column first in the index column list and you'll only
have to search for the other two values in the subset of the index that
matches the first value. This could be a *lot* faster than a full index
scan on your concatenated field.
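As a sketch of what that looks like, assuming a hypothetical table `t` whose three columns `a`, `b`, and `c` were the ones being concatenated (substitute your real names), with `a` being the column that is always constrained:

```sql
-- Hypothetical table and column names. Put the always-queried
-- column first so lookups can descend the index on it directly.
CREATE INDEX t_a_b_c_idx ON t (a, b, c);

-- This query first narrows on "a" via the index, then matches
-- "b" and "c" within that subset -- no full index scan needed:
SELECT * FROM t WHERE a = 'x' AND b = 'y' AND c = 'z';
```

Run the query under EXPLAIN ANALYZE to confirm the planner actually uses the index.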

A trigger isn't necessary or desirable as a solution to this problem -
you'd only need a trigger if you really want to have a "real"
concatenated column. A multi-column index is almost certainly the best
choice in this situation. A functional index can be used in situations
where a multi-column index doesn't apply, or where you need something
more complex, but here a multi-column index looks just ideal.
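For completeness, here is a sketch of both alternatives, again against the hypothetical table `t` with columns `a`, `b`, `c` (and `abc` for the stored concatenation). The trigger is only worth having if you insist on keeping a physical concatenated column; the expression index needs no extra column at all, but queries must repeat the same expression to match it:

```sql
-- Trigger to keep a physical concatenated column up to date
-- (names are hypothetical; adapt to your schema):
CREATE OR REPLACE FUNCTION t_fill_concat() RETURNS trigger AS $$
BEGIN
    NEW.abc := NEW.a || NEW.b || NEW.c;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_fill_concat_trg
    BEFORE INSERT OR UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE t_fill_concat();

-- The functional-index alternative: no stored column, but the
-- query must use the identical expression to hit the index:
CREATE INDEX t_concat_idx ON t ((a || b || c));
SELECT * FROM t WHERE (a || b || c) = 'xyz';
```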

--
Craig Ringer
