From: Yudie Gunawan <yudiepg(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Table partition for very large table
Date: 2005-03-28 19:02:45
Message-ID: e460d0c05032811024ad44f31@mail.gmail.com
Lists: pgsql-general
I actually need to join 2 tables. Both of them are similar, and each has
more than 4 million records.
CREATE TABLE prdt_old (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150)
);

CREATE TABLE prdt_new (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150) NOT NULL
);
The query returns the group number and sku from the old table for rows
that have no url in the prdt_new table.
INSERT INTO prdtexpired
SELECT po.groupnum, po.sku    -- old-table columns; pn.* would be NULL for unmatched rows
FROM prdt_old po
LEFT OUTER JOIN prdt_new pn
  ON (pn.groupnum = po.groupnum AND pn.sku = po.sku)
WHERE pn.url IS NULL OR pn.url = '';
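For what it is worth, an equivalent way to write the same anti-join is with
NOT EXISTS; this is just a sketch of the same logic, assuming (groupnum, sku)
is unique in prdt_new:

INSERT INTO prdtexpired
SELECT po.groupnum, po.sku
FROM prdt_old po
WHERE NOT EXISTS (
    -- a matching new-table row with a non-empty url means the sku is still current
    SELECT 1
    FROM prdt_new pn
    WHERE pn.groupnum = po.groupnum
      AND pn.sku = po.sku
      AND pn.url <> ''
);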
I already have a resolution for this problem where I separate the query
for each group (a sketch of that is below).
But in asking this question, I hope that PostgreSQL has some kind of
table optimization for very large tables. In my experience it is
faster to query several smaller, chopped-up tables than to query a
single huge table. I heard Oracle has some kind of table partitioning
that still acts like a single table.
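Roughly, the per-group workaround I use now looks like the query below,
run once per group number (groupnum = 1 is just an example value, and an
index on (groupnum, sku) on both tables is assumed):

INSERT INTO prdtexpired
SELECT po.groupnum, po.sku
FROM prdt_old po
LEFT OUTER JOIN prdt_new pn
  ON (pn.groupnum = po.groupnum AND pn.sku = po.sku)
WHERE po.groupnum = 1               -- repeated once for each group number
  AND (pn.url IS NULL OR pn.url = '');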