From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: "Michael A(dot) Olson" <mao(at)sleepycat(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance (was: The New Slashdot Setup (includes MySql server))
Date: 2000-05-19 17:36:29
Message-ID: 8774.958757789@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
> All the sequential catalog scans that return one row are gone. What has
> not been done is adding indexes for scans returning more than one row.
I've occasionally wondered whether we can't find a way to use the
catcaches for searches that can return multiple rows. It'd be easy
enough to add an API for catcache that could return multiple rows given
a nonunique search key. The problem is how to keep the catcache up to
date with the underlying reality for this kind of query. Deletions of rows
are handled by the existing catcache invalidation mechanism, but
how can we know when some other backend has added a row that would match
a search condition? I haven't seen an answer short of scanning the table
every time, which makes the catcache no win at all.
regards, tom lane