Re: max_expr_depth

From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: Doug McNaught <doug(at)wireboard(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: max_expr_depth
Date: 2001-06-19 02:57:35
Message-ID: 3B2EBF9F.CEC12521@selectacast.net
Lists: pgsql-general

Doug McNaught wrote:
>
> Joseph Shraibman <jks(at)selectacast(dot)net> writes:
>
> > Doug McNaught wrote:
> > >
> > > Joseph Shraibman <jks(at)selectacast(dot)net> writes:
> > >
> > > > Compared to 1000 updates that took between 25 and 47 seconds, an update
> > > > with 1000 items in the IN() took less than three seconds.
> > >
> > > Did you wrap the 1000 separate updates in a transaction?
> > >
> > > -Doug
> >
> > No, at a high level in my application I was calling the method to do the
> > update. How would putting it in a transaction help?
>
> If you don't, every update is its own transaction, and Postgres will
> sync the disks (and wait for the sync to complete) after every one.
> Doing N updates in one transaction will only sync after the whole
> transaction is complete. Trust me; it's *way* faster.
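
For reference, the batched version being described would look something
like this (the table and column names here are made up):

  BEGIN;
  UPDATE items SET processed = true WHERE id = 1;
  UPDATE items SET processed = true WHERE id = 2;
  -- ... 998 more single-row updates ...
  COMMIT;

With autocommit, each UPDATE is its own transaction and forces its own
sync; wrapped in BEGIN/COMMIT there is a single sync at commit time.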

I thought WAL did away with most of the syncing.

Do you really think I should do 1000 updates in a transaction instead of
an IN with 1000 items? I can do my buffer flush any way I want, but I'd
have to think the overhead of making 1000 calls to the backend would
far outweigh the cost of the big OR statement (especially if the
server and client aren't on the same machine).
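
For comparison, what I'm doing now is a single statement along these
lines (same made-up names):

  UPDATE items SET processed = true
  WHERE id IN (1, 2, 3 /* , ... up to 1000 ids */ );

That's one round trip to the backend and one transaction, though the
parser expands the IN list into a big chain of ORs, which is where
max_expr_depth comes in.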

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com
