From: | "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Performace Optimization for Dummies |
Date: | 2006-09-29 04:37:37 |
Message-ID: | efi7u5$265b$1@news.hub.org |
Lists: pgsql-performance
> imo, the key to high performance big data movements in postgresql is
> mastering sql and pl/pgsql, especially the latter. once you get good
> at it, your net time of copy+plpgsql is going to be less than
> insert+tcl.
If this implies bulk inserts, I'm afraid I have to consider something else.
Any data that has been imported and deduplicated has to be placed back into
the database so that it is available for the next imported row (there
are currently 16 tables affected, and more to come). If I were to cache all
inserts in a separate resource, then I would have to search 32 tables -
the local pending resources, as well as the data already in the system. I am
not even mentioning that imports do not just insert rows, they can also update
rows, adding their own complexity.
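
For what it's worth, here is a minimal sketch of the COPY + PL/pgSQL pattern
the quoted advice points at. The schemas and names (staged_contacts, contacts,
name, phone) are purely illustrative, not from this thread. Note that because
each staged row is applied directly to the live table before the next one is
read, later rows in the same run do see earlier ones - there is no separate
pending resource to search:

    -- Illustrative schemas only:
    CREATE TABLE contacts (
        id    serial PRIMARY KEY,
        name  text NOT NULL,
        phone text
    );
    CREATE TABLE staged_contacts (
        name  text,
        phone text
    );

    CREATE OR REPLACE FUNCTION import_staged_contacts() RETURNS integer AS $$
    DECLARE
        r        staged_contacts%ROWTYPE;
        existing integer;
        inserted integer := 0;
    BEGIN
        FOR r IN SELECT * FROM staged_contacts LOOP
            SELECT id INTO existing
              FROM contacts
             WHERE lower(name) = lower(r.name);  -- simplistic dedup key
            IF FOUND THEN
                -- Duplicate: update the live row instead of inserting,
                -- so imports that modify existing rows are covered too.
                UPDATE contacts SET phone = r.phone WHERE id = existing;
            ELSE
                INSERT INTO contacts (name, phone) VALUES (r.name, r.phone);
                inserted := inserted + 1;
            END IF;
        END LOOP;
        RETURN inserted;  -- number of new rows added
    END;
    $$ LANGUAGE plpgsql;

Usage would be a bulk COPY into the staging table followed by one function
call, e.g.:

    COPY staged_contacts FROM '/path/to/import.csv' WITH CSV;
    SELECT import_staged_contacts();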