From: Ben Brehmer <benbrehmer(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Thom Brown <thombrown(at)gmail(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, craig_james(at)emolecules(dot)com, kbuckham(at)applocation(dot)net, scott(dot)lists(at)enterprisedb(dot)com, Greg Smith <greg(at)2ndquadrant(dot)com>
Subject: Re: Load experimentation
Date: 2009-12-08 07:22:10
Message-ID: 4B1DFEA2.3070804@gmail.com
Lists: pgsql-performance
Thanks for all the responses. I have one more thought:
Since my input data is split into about 200 files (3GB each), I could
potentially spawn one load command for each file. What would be the
maximum number of input connections Postgres can handle without bogging
down? When I say 'input connection' I mean "psql -U postgres -d dbname
-f one_of_many_sql_files".
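
Roughly what I have in mind, with the file path and the cap of 8 parallel
jobs just placeholders for whatever the box can actually sustain:

  ls /data/load/*.sql | xargs -P 8 -n 1 psql -U postgres -d dbname -f
  # -P 8 keeps at most 8 psql sessions loading at once;
  # -n 1 hands each session a single .sql file to run with -f.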
Thanks,
Ben
On 07/12/2009 12:59 PM, Greg Smith wrote:
> Ben Brehmer wrote:
>> By "Loading data" I am implying: "psql -U postgres -d somedatabase -f
>> sql_file.sql". The sql_file.sql contains table creates and insert
>> statements. There are no indexes present nor created during the load.
>> COPY command: Unfortunately I'm stuck with INSERTs due to the way
>> this data was generated (Hadoop/MapReduce).
> Your basic options here are to batch the INSERTs into bigger chunks,
> and/or to split your data file up so that it can be loaded by more
> than one process at a time. There are some comments and links to more
> guidance at http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores
>
> --
> Greg Smith 2ndQuadrant Baltimore, MD
> PostgreSQL Training, Services and Support
> greg(at)2ndQuadrant(dot)com www.2ndQuadrant.com
>
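
On the batching side, assuming the generated files themselves can't be
changed, one option I'm looking at is psql's --single-transaction (-1)
switch, so each file commits once instead of autocommitting every INSERT:

  psql -U postgres -d dbname --single-transaction -f one_of_many_sql_files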