From: Alvaro Herrera <alvherre(at)atentus(dot)com>
To: Boris Köster <koester(at)x-itec(dot)de>
Cc: Curt Sampson <cjs(at)cynic(dot)net>, Gunther Schadow <gunther(at)aurora(dot)regenstrief(dot)org>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: Mass-Data question
Date: 2002-04-16 20:18:02
Message-ID: Pine.LNX.4.33L2.0204161613120.5690-100000@aguila.protecne.cl
Lists: pgsql-general
On Tue, 16 Apr 2002, Boris Köster wrote:
> Normally it doesn't sound very complex to do parallelized
> reading/writing, but getting the results back in the right order is the
> problem. Maybe I could collect data in parallel from several machines
> via threads, writing the content to a (new) machine (?) if the number
> of rows is not higher than x rows, to avoid disk overrun. The advantage
> would be that if this works, it's possible to use that feature with
> pgsql+mysql.
Maybe you can use dblink to retrieve the results from the various
"parallel servers" into one central server and then merge them (UNION,
maybe?). That would work for simple SELECTs, but when you have a couple
of triggers you start getting into trouble.
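A minimal sketch of what I mean, assuming the dblink contrib module is
installed on the central server (the connection strings, table name, and
the id-range split here are just illustrative):

```sql
-- Fan a query out to two "parallel servers" and merge the results
-- centrally.  Each dblink() call opens a connection, runs the remote
-- SELECT, and returns its rows; UNION ALL concatenates them, and the
-- final ORDER BY restores a global order.
SELECT * FROM dblink('host=server1 dbname=logs',
                     'SELECT id, payload FROM events WHERE id <= 1000')
         AS t1(id integer, payload text)
UNION ALL
SELECT * FROM dblink('host=server2 dbname=logs',
                     'SELECT id, payload FROM events WHERE id > 1000')
         AS t2(id integer, payload text)
ORDER BY id;
```

Plain UNION would also deduplicate across servers, at the cost of an
extra sort; UNION ALL is enough if the servers hold disjoint row ranges.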
Obviously you would have to split UPDATEs and INSERTs appropriately.
Who knows, maybe you can even get it to actually work.
--
Alvaro Herrera (<alvherre[(at)]atentus(dot)com>)
"On the other flipper, one wrong move and we're Fatal Exceptions"
(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)