From: "Brian Moran" <brianm(at)crosstier(dot)com>
To: pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: setting up a trace through extended stored procedures
Date: 2002-02-26 13:17:53
Message-ID: #sQLpfsvBHA.2468@tkmsftngp04
Lists: pgsql-general pgsql-hackers
I don't have Books Online in front of me right now, and don't remember the
syntax, but... you can set a maximum size for a trace file. You can set this
up from the Profiler GUI and then save the trace definition to see the
correct syntax. I know this isn't a lot of detail, but I think it should
help you. I'll try to post more detail when I get to work and have BOL
handy...
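If the server is SQL Server 2000, the documented sp_trace_create procedure takes a @maxfilesize argument directly; a rough sketch, with the file path and the 50 MB cap as placeholder values (a GUI-saved trace definition will show the full event/column setup):

```sql
-- Sketch only: SQL Server 2000 server-side trace with a size cap.
-- The path and 50 MB cap are placeholder values.
DECLARE @traceid int
EXEC sp_trace_create
    @traceid OUTPUT,
    2,                     -- option 2 = TRACE_FILE_ROLLOVER: start a new
                           -- file when the cap is reached
    N'C:\traces\deadlock', -- the .trc extension is appended automatically
    50                     -- @maxfilesize, in megabytes
-- ...then add events with sp_trace_setevent and start the trace with
-- sp_trace_setstatus @traceid, 1
```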
--
Brian Moran
SQL Server MVP
"Adam" <nospam(at)nospam(dot)com> wrote in message
news:Lkhe8(dot)104765$s43(dot)24721268(at)typhoon(dot)columbus(dot)rr(dot)com(dot)(dot)(dot)
> This question is for the hardcore SQL Server gurus out there.
>
> In summary, the question pertains to setting up a custom trace using
> extended stored procedures in production. The idea is to find the source
> of a deadlock that occurs on an occasional basis. My question deals
> mainly with the following stored procedures:
> xp_trace_setqueryhistory
> xp_trace_flushqueryhistory
> xp_trace_addnewqueue
>
>
> Here is the detail: every other day or so, the insert to this one table
> fails in the production environment. Unfortunately, the error handling
> code doesn't trap the error number, so I am not sure what error is
> occurring and causing the insert to fail. I have been unable to
> reproduce the problem in the development environment.
>
> I am guessing that it is due to a deadlock and would like to find the
> source of the problem through SQL Profiler. Since I am not at the site,
> I can't just fire up SQL Profiler and try to trace the source of the
> problem.
>
> After some research, I found a way to set up a custom trace that will log
> the output of the profiler to a log file (or can output to a table)....
> please see xp_trace_addnewqueue in BOL.
>
> My concern is that the log file may get too big and take up all the
> disk space if it runs for too long. I also saw another extended
> procedure that flushes the last 100 events from the trace to a log
> file... but that is only for a particular trace (please see
> xp_trace_setqueryhistory and xp_trace_flushqueryhistory).
>
> My question therefore is: can I make the trace dump only the last, say,
> 5,000 events to the file when the deadlock occurs? I could set up an
> alert for the deadlock error number. This alert could then execute a
> command similar to xp_trace_flushqueryhistory, showing me only the last
> 5,000 events at the time the error occurs.
>
> Any help would be appreciated.
>
> Thank you.
>
>
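A rough sketch of the alert-driven flush the poster describes, using the SQL Server 7.0 xp_trace_* procedures. The calling syntax here is recalled from memory and should be verified against BOL; the file path is a placeholder. The deadlock victim error number is 1205, so an alert on that error could run a job containing the flush call:

```sql
-- Hypothetical sketch (SQL Server 7.0 xp_trace_* syntax recalled from
-- memory -- verify the exact parameters in Books Online).

-- Turn on the in-memory query history (a ring buffer of recent events):
EXEC master..xp_trace_setqueryhistory 1

-- In the job fired by an alert on error 1205 (deadlock victim),
-- dump the buffered history to a file; the path is a placeholder:
EXEC master..xp_trace_flushqueryhistory 'C:\traces\deadlock_history.trc'
```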