From: Hervé Piedvache <herve(at)elma(dot)fr>
To: Ezra Nugroho <ezran(at)goshen(dot)edu>
Cc: Michael Loftis <mloftis(at)wgops(dot)com>, pgsql-general(at)postgresql(dot)org, pgreplication-general(at)gborg(dot)postgresql(dot)org
Subject: Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...
Date: 2003-03-31 15:56:56
Message-ID: 200303311756.56638.herve@elma.fr
Lists: pgsql-general
Hi Ezra,
On Monday, 31 March 2003 at 17:27, Ezra Nugroho wrote:
> Try doing it without replication, check the time.
I wrote that in my mail ... 16 minutes without the DBMirror trigger ... ;o)
> I think your problem has nothing to do with replication. It is simply
> because you have one huge one-shot transaction. Each time you run
> something in a transaction, the db needs to keep the SQL in a
> rollback-able segment instead of in permanent storage. That means you
> are eating virtual memory like nuts...
> After a while, page swaps happen so frequently that your performance
> drops.
Hum ... I looked at my monitoring for yesterday ... no memory trouble, I only
lost 5% of memory (I have 2 GB of RAM) ... swap stayed 100% free during the
whole run ... (9 hours of processing)
Why use a transaction? ... To have secure activity on my database ;*)
> Do you really have to run those 320 000 inserts in a transaction?
What other idea do you have? A secure approach, I mean, please ... I remind you
that during this time ... other actors, like users, automated jobs, etc., are
using my database ... and once a month I have to insert this data ... so how do
I do that securely without a transaction? ...
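This question is not answered in the thread; a hypothetical sketch (in Python rather than the Perl of the original script, with all names illustrative) of one common compromise between a single 320 000-row transaction and no transaction at all: commit in fixed-size batches, so each batch is still atomic even though the whole load no longer is.

```python
# Sketch only: a batching helper. The DB calls in the comment below are
# illustrative, not from the original thread.

def chunked(rows, batch_size):
    """Yield successive batches of at most batch_size rows."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# With a DB driver, usage would look roughly like:
#   for batch in chunked(all_rows, 5000):
#       cur.execute("BEGIN")
#       for row in batch:
#           cur.execute("INSERT INTO t VALUES (%s)", (row,))
#       cur.execute("COMMIT")
```

The trade-off: a crash mid-load leaves some batches committed, so the loader needs to be restartable, but each transaction stays small enough that per-transaction overhead cannot accumulate for hours.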
Regards,
> On Mon, 2003-03-31 at 06:33, Hervé Piedvache wrote:
> > Hi Michael,
> >
> > On Monday, 31 March 2003 at 12:22, Michael Loftis wrote:
> > > Couple of things here. If I read you correctly after you COMMIT the
> > > first part of the transaction goes quickly, but begins to slow down?
> >
> > No, it's inside my transaction ... I haven't reached the COMMIT point yet ...
> > only INSERT commands ... and it's going slowly ... just doing INSERTs ... and
> > some UPDATEs, about 2000 updates are done during the time of the 320 000
> > inserts in the same transaction.
> > I mean ... I do:
> > BEGIN;
> > INSERT (xxx);
> > INSERT (xxx);
> > ....
> > UPDATE (xxx);
> > ...
> > INSERT (xxx);
> > INSERT (xxx);
> > ...
> > ... x 320 000
> > Then I COMMIT ...
> >
> > In my Perl script ... I do a for loop containing the INSERTs, and every 100
> > iterations I print the elapsed time and the current loop counter so I know
> > where I am ... without DBMirror each 100 takes 0.3 seconds ... it never
> > moves from that value ... with DBMirror ... I get 0.3 for the first 1000,
> > then I lose 1 second every 3000 or so ... so I get 2 seconds ... then 3
> > seconds, etc. ... and after 9 hours I was up to 45 seconds to get through
> > the 100 INSERTs of my loop ... Without DBMirror (only the trigger dropped)
> > it takes about 15 minutes to run the transaction fully ...
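A minimal sketch (in Python, not the original Perl; all names are illustrative) of the kind of per-100-iteration timing loop described above, with the actual INSERT replaced by a caller-supplied placeholder:

```python
import time

def run_batch_timed(n_total, batch, do_insert):
    """Call do_insert n_total times; record elapsed time every `batch` calls.

    Returns a list of (loop counter, seconds for that batch of calls),
    which is how a growing per-batch time like 0.3 s -> 45 s shows up.
    """
    timings = []
    t0 = time.perf_counter()
    for i in range(1, n_total + 1):
        do_insert(i)  # placeholder for the real INSERT
        if i % batch == 0:
            t1 = time.perf_counter()
            timings.append((i, t1 - t0))
            t0 = t1
    return timings
```

With a steady workload every entry stays near the same value; a per-batch time that climbs monotonically, as reported here, points at per-row cost growing with transaction size.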
> >
> > > Have you considered removing any indexes you have on the tables prior
> > > to doing such a large insert and recreating them afterwards? It's much
> > > cheaper that way.
> >
> > I have only one index, my primary key index on a serial ... I can't
> > drop it ... because during this script other programs may access the
> > table ... for reading ... and as I told you previously, without the
> > DBMirror trigger it runs perfectly ...
> >
> > > Does this happen if you don't run a DBMirror (i.e. local-only) copy? I've
> > > never used such large transactions myself before.
> >
> > It runs perfectly if I drop the DBMirror trigger; I get a constant
> > rate of 0.3 seconds per 100 INSERT commands.
> >
> > Thanks in advance for your help ... or ideas ;o)
> >
> > Regards,
> >
> > > --On Monday, March 31, 2003 9:46 AM +0100 Hervé Piedvache
> > > <herve(at)elma(dot)fr> wrote:
> > > > Hi,
> > > >
> > > > Who can give us some help with DBMirror??
> > > >
> > > > We made some tests with DBMirror ... for us it runs perfectly ;o)
> > > >
> > > > Only one big problem ... inserting data in a transaction ...
> > > > We tried to make one transaction with 320 000 inserts ... if the
> > > > DBMirror trigger is not attached, we get 100 inserts done in 0.3
> > > > sec; with DBMirror it starts quickly, but after 1000 inserts we
> > > > lose more and more time ... 1 second lost per 3000/4000 inserts ...
> > > > after 9 hours we were at 45 sec for 100 inserts ... :o(
> > > >
> > > > Any idea ? Update ? Patch ? ...
> > > >
> > > > Thanks in advance for your help ... ! :o)
> > > >
> > > > Regards,
> > > > --
> > > > Hervé
> > > > _______________________________________________
> > > > Pgreplication-general mailing list
> > > > Pgreplication-general(at)gborg(dot)postgresql(dot)org
> > > > http://gborg.postgresql.org/mailman/listinfo/pgreplication-general
> >
> > --
> > Hervé
--
Hervé Piedvache
Elma Ingénierie Informatique
6 rue du Faubourg Saint-Honoré
F-75008 - Paris - France
Pho. 33-144949901
Fax. 33-144949902