From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: anj patnaik <patna73(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: duplicate key errors in log file
Date: 2015-11-18 17:35:47
Message-ID: 564CB6F3.2090105@BlueTreble.com
Lists: pgsql-general
On 11/17/15 5:33 PM, anj patnaik wrote:
> The pg log files apparently log error lines every time a user inserts a
> duplicate. I implemented a composite primary key, and when I see the
> exception in my client app I update the row with the recent data.
>
> However, I don't want the log file to fill up with these error messages,
> since they're handled by the client.
>
> Is there a way to stop logging certain messages?
>
> Also, do any of you use any options to keep log files from filling up the
> disk over time?
Not really. You could do something like SET log_min_messages = PANIC
around that statement (note that changing log_min_messages requires
superuser privileges), but then you won't get a log for any other errors
in that session either.
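To illustrate the trade-off, here is a minimal sketch of that approach; the table and column names are hypothetical:

```sql
-- Suppress server-side logging of the expected duplicate-key error.
-- WARNING: this silences ALL non-panic messages for the session,
-- and requires superuser privileges to change.
SET log_min_messages = PANIC;
INSERT INTO mytable (id, val) VALUES (1, 'x');  -- may raise unique_violation, unlogged
RESET log_min_messages;
```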
In any case, the real issue is that you shouldn't do this in the client.
I'll bet $1 that your code has race conditions. Even if you got rid of
those, the overhead of the back-and-forth with the database is huge
compared to doing this in the database.

So really you should create a plpgsql function a la Example 40-2 at
http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
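For reference, here is a sketch along the lines of that example (the merge_db function from the docs); the table db(a, b) and the argument names are placeholders for your own schema:

```sql
CREATE FUNCTION merge_db(k integer, data text) RETURNS void AS $$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- Not there: try to insert. If another session inserts the same
        -- key concurrently, trap the error and retry the UPDATE instead
        -- of letting the duplicate-key error reach the client (or the log).
        BEGIN
            INSERT INTO db(a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; loop back and try the UPDATE again.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Because the retry loop lives inside the function, the race between UPDATE and INSERT is handled server-side in a single round trip, and no error ever propagates to the log or the client.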
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com