From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: four minor proposals for 9.5
Date: 2014-03-20 00:46:19
Message-ID: CAFj8pRAz_B2g4B9kvKp+2Q3PToesNnKJdodwsKcGMrn9-KfxQg@mail.gmail.com
Lists: pgsql-hackers
2014-03-20 1:28 GMT+01:00 Josh Berkus <josh(at)agliodbs(dot)com>:
> Pavel,
>
> > I wrote a few patches that we use in our production. These patches are
> > small, but I hope they can be interesting for upstream:
> >
> > 1. cancel time - we log the execution time of cancelled statements
>
> Manually cancelled? statement_timeout?
>
Manually cancelled - we have several levels: user cancel at 3..10 minutes,
query executor timeout at 20 minutes, and a hard statement limit at 25
minutes. Logging the execution time helps us identify the reason for the
cancel, and helps us identify impatient users (and where their limit is).
It is the same idea as logging query duration.
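For illustration, today's timeout cancel message and the kind of duration detail this patch adds (the DETAIL line below is only a sketch of the idea, not the patch's actual output):

```sql
-- existing behaviour: the server logs the cancel, but not the elapsed time
SET statement_timeout = '20min';
-- server log today:    ERROR:  canceling statement due to statement timeout
-- proposed (sketch):   ERROR:  canceling statement due to statement timeout
--                      DETAIL:  execution time: 1200.52 s
```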
>
> Anyway, +1 to add the time to the existing log message, but not in an
> additional log line.
>
> BTW, what do folks think of the idea of adding a new log column called
> "timing", which would record duration etc.? It would be really nice not
> to have to parse this out of the text message all the time ...
>
> > 2. fatal verbose - this patch ensures a verbose log for fatal errors. It
> > simplifies investigation of the reasons for an error.
>
> Configurable, or not?
>
We have no configuration for it, but it should naturally be configurable.
The main motivation for this feature was seeing the same message for
several different errors - verbose detail at the FATAL level helps us
identify the source. But we cannot enable verbose level as the default due
to log size and logging overhead.
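For context, PostgreSQL already has a global verbosity setting; the proposal amounts to a per-severity variant of it. A sketch, where the second parameter name is hypothetical:

```sql
-- existing, applies to messages of every severity:
SET log_error_verbosity = verbose;
-- the proposal is roughly a per-severity variant; a hypothetical form:
-- SET log_fatal_verbosity = verbose;   -- verbose detail for FATAL only
```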
>
> > 3. relation limit - possibility to set a session limit for the maximum
> > size of relations. No relation can be extended past this limit in the
> > session when the value is higher than zero. Motivation - we use a lot of
> > queries like CREATE TABLE AS SELECT ..., and some very big results
> > reduced the free disk space too much. That was a high risk in our
> > multi-user environment. The motivation is similar to temp_file_limit.
>
> I'd think the size of the relation you were creating would be difficult
> to measure. Also, would this apply to REINDEX/VACUUM FULL/ALTER? Or
> just CREATE TABLE AS/SELECT INTO?
>
It was only a limit on the relation itself, without indexes or anything
else. It is just an early stop for queries where statement_timeout is too
late (and the space allocated on disk by then is too large). Our statement
limit is 20 minutes, and in that time a query can create a table of about
100GB - just ten unlimited users could consume all our free space on the
Amazon disc.
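The closest existing analogue is temp_file_limit, which caps temporary file usage per session. A sketch of how the proposed limit might look, where the parameter name relation_size_limit is hypothetical:

```sql
-- existing analogue for temporary files (PostgreSQL 9.2+):
SET temp_file_limit = '20GB';
-- hypothetical session limit from this proposal:
-- SET relation_size_limit = '100GB';
-- any statement extending a relation past the limit would be cancelled,
-- stopping a runaway CREATE TABLE AS long before statement_timeout fires
```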
>
> > 4. track statement lock - we are able to track the locking time for a
> > query and print this data in the slow query log and the auto_explain
> > log. It helps us with later slow query log analysis.
>
> I'm very interested in this. What does it look like?
>
We divided locks into three kinds (levels): tables, tuples, and others. As
a result we print three numbers for every SQL statement - time waiting for
table locks, time waiting for tuple locks, and time waiting for other
locks (page extension locks and similar). I don't remember that we ever
needed this level of detail, but it is interesting for slow queries. You
don't spend time analysing mysteriously fast/slow queries - you see
clearly that the problem was in locks.
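For illustration, the existing auto_explain setup this would extend, with the three proposed wait counters shown only as a sketch of the log output:

```sql
-- auto_explain as it exists today:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '5s';
-- with the patch, each logged statement would additionally carry three
-- wait totals (names and format here are a sketch, not the patch output):
--   lock waits: tables 1250 ms, tuples 40 ms, others 3 ms
```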
Regards
Pavel
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
>