From: "Gary Doades" <gpd(at)gpdnet(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: how much ram do i give postgres?
Date: 2004-10-20 17:47:25
Message-ID: 4176B2BD.12806.675A446D@localhost
Lists: pgsql-general
On 20 Oct 2004 at 11:37, Josh Close wrote:
> On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <smarlowe(at)qwest(dot)net> wrote:
> > 1: Is the bulk insert being done inside of a single transaction, or as
> > individual inserts?
>
> The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
> rows at a time, then disconnects, reconnects, and copies 100k more,
> and repeats 'till done. There are no indexes on the tables that the
> copy is being done into either, so it won't be slowed down by that at
> all.
>
> >
What about triggers? And constraints (check constraints, integrity
constraints)? All of these will slow the inserts/updates down.
If you have integrity constraints, make sure you have indexes on the
referenced columns in the referenced tables, and make sure the data
types match on both sides of the constraint.
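As a quick sketch of what I mean (table and column names here are just
made-up examples):

    -- Referenced table: the referenced column should be indexed.
    -- A PRIMARY KEY or UNIQUE constraint gives you that index.
    CREATE TABLE customers (id integer PRIMARY KEY);

    -- Referencing table: customer_id uses the same type (integer)
    -- as customers.id, so the FK checks don't have to cast.
    CREATE TABLE orders (
        id       integer PRIMARY KEY,
        customer_id integer REFERENCES customers (id)
    );

    -- An index on the referencing column also helps when rows in
    -- the referenced table are updated or deleted.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);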
How long does 100,000 rows take to insert exactly?
How many updates are you performing each hour?
Regards,
Gary.