Re: printing results of query to file at different times

From: Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com>
To: George Neuner <gneuner2(at)comcast(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: printing results of query to file at different times
Date: 2017-09-05 11:28:44
Message-ID: CA+t6e1kS59uQ7HYSrx-KH2ieFLhJUsnprJ1rGBNPvO1n0YjvbA@mail.gmail.com
Lists: pgsql-performance

I'm using an extension called orafce.
Yes, I added the raise notices in order to see what is happening, but it
doesn't run any faster. The execution plan isn't relevant because it happens
for many queries, not for a specific one. I didn't understand what you mean
by REPEATABLE_READ.

2017-08-31 16:24 GMT+03:00 George Neuner <gneuner2(at)comcast(dot)net>:

> On Thu, 24 Aug 2017 16:15:19 +0300, Mariel Cherkassky <
> mariel(dot)cherkassky(at)gmail(dot)com> wrote:
>
> >I'm trying to understand what postgresql is doing in an issue that I'm
> >having. Our app team wrote a function that runs with a cursor over the
> >results of a query and, via the utl_file functions, writes some columns
> >to a file. I don't understand why, but postgresql writes the data into
> >the file on the filesystem in parts. I mean that it runs the query, it
> >takes time to get the results back, and once the results are back
> >postgresql writes some of the data to the file and then suddenly stops
> >for X minutes. After those X minutes it starts writing the data again,
> >and it continues that way until it's done. The query returns *100* rows
> >in total. I want to understand why it stops suddenly. There aren't any
> >locks in the database during this operation.
> >
> >my function looks like this:
> >
> >func(a, b, c, ...)
> >declare
> >    cr cursor for select ab, c, d, e, ...;
> >begin
> >    raise notice 'starting loop time - %', timeofday();
> >    for cr_record in cr loop
> >        raise notice 'print to file - %', timeofday();
> >        utl_file.write(file, cr_record);
> >    end loop;
> >end;
> >
> >When it runs, I see the following output in the log:
> >
> >starting loop 16:00
> >print to file : 16:03
> >print to file : 16:03
> >print to file : 16:07
> >print to file : 16:07
> >print to file : 16:07
> >print to file : 16:10
> >
> >......
> >
> >Can somebody explain this kind of behavior to me? Why is it taking so
> >much time to write, and at different minutes, after the query has already
> >been executed and finished? Maybe I'm getting only part of the rows from
> >the cursor?
>
>
> First I'd ask where you got utl_file from. PostgreSQL has no such
> facility, so you must be using an extension - and not one I'm familiar
> with either. EnterpriseDB has a utl_file implementation in their Oracle
> compatibility stuff, but it uses "get" and "put" calls rather than "read"
> and "write".
>
> Second, raising notices can be slow - I assume you added them to see what
> was happening? How does the execution time compare if you remove them?
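>
> As a quick check, you could time the whole loop with a single notice at
> the end instead of one per row - just a sketch, reusing the names from
> your function:
>
>     declare
>         t0 timestamptz := clock_timestamp();
>         n  bigint := 0;
>     begin
>         for cr_record in cr loop
>             utl_file.write(file, cr_record);
>             n := n + 1;
>         end loop;
>         -- one notice for the whole loop instead of one per row
>         raise notice 'wrote % rows in %', n, clock_timestamp() - t0;
>     end;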
>
> I saw someone else asked about the execution plan, but I'm not sure that
> will help here because it would affect only the initial select ... the
> cursor would be working with the result set and should be able to skip
> directly to the target rows. I might expect several seconds for the loop
> with I/O ... but certainly not minutes unless the server is severely
> overloaded.
>
> One thing you might look at is the isolation level of the query. If you
> are using READ_COMMITTED or less, and the table is busy, other writing
> queries may be stepping on yours and even potentially changing your result
> set during the cursor loop. I would try using REPEATABLE_READ and see what
> happens.
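>
> For example (just a sketch - the point is that the function has to run
> inside an explicit transaction for the isolation level to apply to it):
>
>     begin;
>     set transaction isolation level repeatable read;
>     select func(a, b, c);   -- however you normally invoke your function
>     commit;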
>
>
> George
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>

Browse pgsql-performance by date

  From Date Subject
Next Message Soni M 2017-09-05 13:24:24 slow index scan performance
Previous Message 우성민 2017-09-04 22:54:03 Re: Handling small inserts from many connections.