| From: | Enrico Sirola <enrico(dot)sirola(at)gmail(dot)com> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | index organized tables use case |
| Date: | 2007-12-12 12:27:30 |
| Message-ID: | 475FD3B2.7060002@gmail.com |
| Lists: | pgsql-general |
Hello,
I'm thinking about migrating from another DBMS to PostgreSQL. I have an
almost-working proof of concept, but I still have some doubts about the
following use case.
I have a table like the following:
CREATE TABLE test
(
code character varying(32) NOT NULL,
tag integer NOT NULL,
value double precision,
CONSTRAINT test_pkey PRIMARY KEY (code, tag)
);
It represents a sequence (with holes) of values associated with a code.
The application code usually performs selection queries like
select tag, value from test where code='XXX';
also, deletions are like
delete from test where code='XXX';
and insertions follow the same pattern (all the data for a code is
inserted using a loop in a single transaction). That's more or less all.
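For instance, the per-code load looks more or less like this (the tag
values and the number of rows here are just illustrative):

```sql
-- All rows for one code are inserted in a single transaction;
-- the tag sequence may contain holes.
BEGIN;
INSERT INTO test (code, tag, value) VALUES ('XXX', 1, 0.5);
INSERT INTO test (code, tag, value) VALUES ('XXX', 2, 0.7);
INSERT INTO test (code, tag, value) VALUES ('XXX', 5, 1.2);
COMMIT;
```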
So this type of workload benefits greatly from an index-organized table
(Oracle) or a clustered index (SQL Server/Sybase).
From what I understand, this kind of table is presently not supported by
PostgreSQL (is it?), so what do you advise?
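The closest thing I found so far is the CLUSTER command, which
physically reorders the table along an index; but as far as I
understand, the ordering is a one-time operation and is not maintained
as new rows come in:

```sql
-- One-time physical reordering of the heap along the primary-key
-- index; it is not kept up to date, so it must be re-run periodically.
CLUSTER test USING test_pkey;
```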
I would think about something like
CREATE TABLE testnew
(
code character varying(32) NOT NULL,
first_tag integer, /* the tag value associated with the first value */
"values" double precision[], /* the datum, or NaN if not valid */
valid_values bit(1)[], /* true if a datum is present */
CONSTRAINT testnew_pkey PRIMARY KEY (code)
);
but this would require refactoring the application. Any ideas?
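Just to make the idea concrete, reading one code back out of that
layout could look something like this (a sketch only; note that letting
generate_series reference t inside the FROM list needs a reasonably
recent PostgreSQL):

```sql
-- Expand the arrays back into (tag, value) rows for one code,
-- skipping positions whose valid_values bit is not set.
SELECT t.first_tag + s.i - 1 AS tag,
       t."values"[s.i]       AS value
FROM testnew AS t,
     generate_series(1, array_upper(t."values", 1)) AS s(i)
WHERE t.code = 'XXX'
  AND t.valid_values[s.i] = B'1';
```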
TIA,
e.