From: Christophe Pettus <xof(at)thebuild(dot)com>
To: Dominique Devienne <ddevienne(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Advice on efficiently logging outputs to PostgreSQL
Date: 2024-10-15 17:56:25
Message-ID: CFFA29A0-24DC-44EA-A27B-89ECE130C8BD@thebuild.com
Lists: pgsql-general
> On Oct 15, 2024, at 07:17, Dominique Devienne <ddevienne(at)gmail(dot)com> wrote:
> Am I worrying too much? :)
Probably. :-) The main things I'd worry about are:
1. What's the ratio of log lines to database updates? You want this to be as low as is usefully possible, since in effect you are doing write amplification by writing to the logs as well as to the "real" database.
2. One thing to watch out for when writing log lines to the database: if you write a log line in a transaction and that transaction rolls back, you lose the log line. That may or may not be what you want: if it's reporting an error (such as the reason the transaction rolled back), you want to preserve that data. One way of handling this is to have the application use a separate session for logging, although you are now making 2x the number of connections.
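A minimal sketch of the separate-session pattern. It uses Python's stdlib sqlite3 purely so the example is self-contained and runnable; against PostgreSQL you would open two connections the same way (e.g. with psycopg) and the transactional behavior is identical. The table and message names are made up for illustration.

```python
import os
import sqlite3
import tempfile

# Two sessions to the same database: one for "real" work, one for logging.
# (sqlite3 stands in for two PostgreSQL connections here.)
db = os.path.join(tempfile.mkdtemp(), "demo.db")
main = sqlite3.connect(db)   # application session
logc = sqlite3.connect(db)   # dedicated logging session

main.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
main.execute("CREATE TABLE app_log (msg TEXT)")
main.commit()

# Same-session logging: the log line dies with the rollback.
main.execute("INSERT INTO orders (item) VALUES ('widget')")
main.execute("INSERT INTO app_log VALUES ('inserting widget')")
main.rollback()  # both the order AND its log line are gone

# Separate-session logging: roll the work back, then record why on the
# logging session, which commits independently of the failed transaction.
main.execute("INSERT INTO orders (item) VALUES ('gadget')")
main.rollback()  # simulate the transaction failing
logc.execute("INSERT INTO app_log VALUES ('order failed: <reason>')")
logc.commit()    # survives, because it is its own transaction

print(main.execute("SELECT COUNT(*) FROM orders").fetchone()[0])   # 0
print(logc.execute("SELECT COUNT(*) FROM app_log").fetchone()[0])  # 1
```

The log connection here commits only after the main session has rolled back; in a real application the logging session's transactions are simply independent of the application's, which is the whole point of paying for the second connection.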
If the write volume is very high, you might consider using a dedicated log-ingestion service (there are tons) rather than PostgreSQL, so that you aren't overburdening the database with log activity.