From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: redtux1(at)gmail(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Advice on logging strategy
Date: 2018-10-11 15:36:06
Message-ID: CAMkU=1w9BU4Z7tsoB0xrQuAROB61TD0FunCVNqt58OnD1gw=5A@mail.gmail.com
Lists: pgsql-general
On Thu, Oct 11, 2018 at 6:27 AM Mike Martin <redtux1(at)gmail(dot)com> wrote:
> I have a question on logging strategy
>
> I have logging set to log_statement = 'all' on a network database, with
> log output in CSV format so I can import it into a logging table.
>
> However, the database is populated via a nightly routine downloading data
> via a REST API using prepared statements.
>
> This results in enormous log files which take ages to import using COPY,
> because each EXECUTE statement is logged with the parameters chosen.
>
> Is there any way around this?
>
One option is to convert to using COPY ... FROM STDIN rather than prepared
INSERTs. That way the bulk load shows up in the log as a single statement
instead of one entry per row.
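As a rough sketch of that first option (the staging table, its columns, and the file name are illustrative, not from the original thread):

```sql
-- Hypothetical staging table for the nightly REST payload.
CREATE TABLE IF NOT EXISTS staging_api_data (
    id         bigint,
    payload    jsonb,
    fetched_at timestamptz
);

-- From psql, stream all rows in one statement; with log_statement = 'all'
-- only this single \copy is logged, not one EXECUTE per row.
\copy staging_api_data FROM 'nightly_dump.csv' WITH (FORMAT csv)
```

Client drivers expose the same protocol-level COPY (e.g. via their copy-from-stdin APIs), so the nightly routine can stream rows without writing an intermediate file.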
Another is to create a user specifically for bulk population, and do an
`ALTER USER bulk_load SET log_statement = 'none'` to override the global
log_statement setting for that user only.
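Spelled out, the second option looks like this (the role name is illustrative; note that setting log_statement per role requires superuser privileges):

```sql
-- Dedicated role for the nightly load, so its statements can be
-- exempted from logging without touching the global setting.
CREATE ROLE bulk_load LOGIN;

-- Per-role override of log_statement; takes effect on the role's
-- next login. Must be run as a superuser.
ALTER ROLE bulk_load SET log_statement = 'none';
```

Everything else on the server keeps logging at 'all'; only sessions logged in as bulk_load are exempt.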
Cheers,
Jeff