Re: [HACKERS] how to deal with sparse/to-be populated tables

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alfred Perlstein <bright(at)wintelcom(dot)net>
Cc: chris(at)bitmead(dot)com, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] how to deal with sparse/to-be populated tables
Date: 2000-02-04 06:06:53
Message-ID: 1706.949644413@sss.pgh.pa.us
Lists: pgsql-hackers

Alfred Perlstein <bright(at)wintelcom(dot)net> writes:
> (yes, I just thought about only indexing, and trying the update
> first and only on failure doing an insert; however, we really can't
> determine whether the initial update failed because no record matched (ok),
> or possibly because of some other error (ouch))

Uh ... why not? "UPDATE 0" is a perfectly recognizable result
signature, it seems like. (I forget just how that looks at the
libpq API level, but if psql can see it so can you.)
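For what it's worth, a rough libpq sketch of the update-first approach
could look like the following. The "counters(key, val)" table and the
helper name are made up purely for illustration, quoting/escaping of the
key is ignored, and error handling is abbreviated; the point is just that
PQcmdTuples() hands back the "UPDATE n" count as a string, so "0" tells
you nothing matched and you should insert instead:

/* Hypothetical update-then-insert helper; assumes an open PGconn *conn
 * and a table counters(key text primary key, val int).  The key string
 * is assumed to need no quoting. */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static void
bump_counter(PGconn *conn, const char *key)
{
    PGresult   *res;
    char        query[256];

    snprintf(query, sizeof(query),
             "UPDATE counters SET val = val + 1 WHERE key = '%s'", key);
    res = PQexec(conn, query);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "UPDATE failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return;
    }

    /* PQcmdTuples() returns the affected-row count as a string;
     * "0" is the "UPDATE 0" case: no matching row exists yet. */
    if (strcmp(PQcmdTuples(res), "0") == 0)
    {
        PQclear(res);
        snprintf(query, sizeof(query),
                 "INSERT INTO counters (key, val) VALUES ('%s', 1)", key);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
    }
    PQclear(res);
}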

Alternatively, if you think the insert is more likely to be the
right thing, try it first and look to see if you get a "can't
insert duplicate key into unique index" error.
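A similarly hypothetical sketch of the insert-first variant (same made-up
table as above).  Matching on the error text is crude but catches the
message quoted above; where available, checking the SQLSTATE ("23505",
unique_violation) via PQresultErrorField(res, PG_DIAG_SQLSTATE) is more
robust:

/* Hypothetical insert-then-update helper; try the INSERT first and fall
 * back to an UPDATE if the unique index rejects it as a duplicate. */
static void
bump_counter_insert_first(PGconn *conn, const char *key)
{
    PGresult   *res;
    char        query[256];

    snprintf(query, sizeof(query),
             "INSERT INTO counters (key, val) VALUES ('%s', 1)", key);
    res = PQexec(conn, query);

    if (PQresultStatus(res) != PGRES_COMMAND_OK &&
        strstr(PQresultErrorMessage(res), "duplicate key") != NULL)
    {
        /* The row already exists, so update it instead. */
        PQclear(res);
        snprintf(query, sizeof(query),
                 "UPDATE counters SET val = val + 1 WHERE key = '%s'", key);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "UPDATE failed: %s", PQerrorMessage(conn));
    }
    PQclear(res);
}

One caveat worth keeping in mind: inside an explicit transaction block the
failed INSERT aborts the whole transaction, so the insert-first trick is
really only clean when each statement commits on its own.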

You're right that SQL provides no combination statement that would
allow these sequences to be done with only one index probe. But
FWIW, I'd think that the amount of wasted I/O would be pretty minimal;
the relevant index pages should still be in the buffer cache when
the second query gets to the backend.

regards, tom lane
