From: | "A(dot)M(dot)" <agentm(at)themactionfaction(dot)com> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: An idea for parallelizing COPY within one backend |
Date: | 2008-02-27 14:35:27 |
Message-ID: | 16377CE3-7581-4E94-BB3A-5846440D09CC@themactionfaction.com |
Lists: | pgsql-hackers |
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:
> Dimitri Fontaine wrote:
>> Of course, the backends still have to parse the input given by
>> pgloader, which only pre-processes data. I'm not sure having the
>> client prepare the data some more (binary format or whatever) is a
>> wise idea, as you mentioned and wrt Tom's follow-up. But maybe I'm
>> all wrong, so I'm all ears!
>
> As far as I understand, pgloader starts N threads or processes that
> open up N individual connections to the server. In that case, moving
> the text->binary conversion from the backend into the loader won't
> give any additional performance, I'd say.
>
> The reason that I'd love some within-one-backend solution is that
> it'd allow you to utilize more than one CPU for a restore within a
> *single* transaction. This is something that a client-side solution
> won't be able to deliver, unless major changes to the architecture
> of postgres happen first...
It seems like multiple backends should be able to take advantage of
two-phase commit (2PC) for transaction safety.
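A rough sketch of what I mean (the table name, file paths, and the
two-connection split are made up, and the server needs
max_prepared_transactions > 0): each loader connection COPYs its slice
of the data and then prepares instead of committing, so a coordinator
can commit every slice or roll every slice back:

```sql
-- Connection 1: load the first slice, then prepare.
BEGIN;
COPY mytable FROM '/data/part1.csv' WITH CSV;  -- hypothetical file
PREPARE TRANSACTION 'load_part1';

-- Connection 2: load the second slice, then prepare.
BEGIN;
COPY mytable FROM '/data/part2.csv' WITH CSV;  -- hypothetical file
PREPARE TRANSACTION 'load_part2';

-- Coordinator: once every slice has prepared successfully,
-- commit them all; if any slice failed, roll them all back.
COMMIT PREPARED 'load_part1';
COMMIT PREPARED 'load_part2';
-- On failure:
-- ROLLBACK PREPARED 'load_part1'; ROLLBACK PREPARED 'load_part2';
```

Since prepared transactions survive a crash, the coordinator can still
resolve the slices afterwards, which a plain multi-connection commit
can't guarantee.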
Cheers,
M