From: Jerry Sievers <gsievers19(at)comcast(dot)net>
To: Mark Moellering <markmoellering(at)psyberation(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: best way to write large data-streams quickly?
Date: 2018-04-10 18:09:52
Message-ID: 8737039bm7.fsf@jsievers.enova.com
Lists: pgsql-general
Mark Moellering <markmoellering(at)psyberation(dot)com> writes:
<snip>
>
> How long can you run COPY? I have been looking at it more closely.
> In some ways, it would be simple just to take data from stdin and
> send it to postgres but can I do that literally 24/7? I am
> monitoring data feeds that will never stop and I don't know if that
> is how Copy is meant to be used or if I have to let it finish and
> start another one at some point?
Launching a single COPY and piping data into it for an extended period and/or in bulk is fine, but none of the rows will be visible to other sessions until the statement finishes and, if it was run inside a transaction block, until that block is committed.
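Since a never-ending COPY means never-visible rows, the usual pattern for a 24/7 feed is to restart the COPY periodically: group the incoming stream into batches, run one COPY per batch, and commit. A minimal sketch of the batching part in Python follows; the table name, connection string, and psycopg2 usage in the comments are assumptions for illustration, not anything from this thread.

```python
import itertools

def batches(stream, batch_size):
    """Group a (possibly endless) iterator of rows into fixed-size batches.

    Each batch would be sent as one COPY statement and then committed,
    making those rows visible to other sessions while the feed itself
    never has to stop.
    """
    it = iter(stream)
    while True:
        chunk = list(itertools.islice(it, batch_size))
        if not chunk:
            return
        yield chunk

# Hypothetical usage with psycopg2 (names and sizes are placeholders):
#
# import io, psycopg2
# conn = psycopg2.connect("dbname=feeds")
# for chunk in batches(feed_lines(), 10000):
#     buf = io.StringIO("".join(chunk))
#     with conn.cursor() as cur:
#         cur.copy_expert("COPY events FROM STDIN", buf)
#     conn.commit()  # rows from this chunk become visible here
```

A time-based cutoff (flush every N seconds as well as every N rows) works the same way and bounds how stale readers can be.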
HTH
>
> Thanks for everyone's help and input!
>
> Mark Moellering
--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres(dot)consulting(at)comcast(dot)net
p: 312.241.7800