From: Victor Hooi <victorhooi(at)yahoo(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Breaking up a PostgreSQL COPY command into chunks?
Date: 2013-11-08 02:06:42
Message-ID: CAMnnoU+2=ZjcouydmqAXu7dG0HzjVF5mV3+-bs3icWu9-TV=DQ@mail.gmail.com
Lists: pgsql-general
Hi,
We're using psycopg2 with COPY to dump CSV output from a large query.
The SELECT query itself is large (both in the number of records/columns and
in the width of the values in the columns), but it still completes in under
a minute on the server.
However, if we then wrap it in a COPY, it will often time out.
We're using psycopg2 to run the command; the traceback we get is something like:
Traceback (most recent call last):
  File "foo.py", line 259, in <module>
    jobs[job].run_all()
  File "foo.py", line 127, in run_all
    self.export_to_csv()
  File "foo.py", line 168, in export_to_csv
    cur.copy_expert(self.export_sql_statement, f)
psycopg2.extensions.TransactionRollbackError: canceling statement due to conflict with recovery
DETAIL: User was holding shared buffer pin for too long.
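For reference, the relevant part of export_to_csv() boils down to roughly this
(simplified; the real DSN, query and file path are placeholders here):

import psycopg2

# Placeholder connection string, table and output path -- the real ones
# come from the job configuration.
conn = psycopg2.connect("dbname=reporting host=replica")
cur = conn.cursor()
with open("export.csv", "w") as f:
    cur.copy_expert(
        "COPY (SELECT * FROM big_table) TO STDOUT WITH CSV HEADER", f)
cur.close()
conn.close()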
My question is: what are some simple ways we could use to chunk up the query?
Could we pull down a list of all the ids (an auto-incrementing integer),
break that list into ranges, and then run multiple COPY commands with a
WHERE clause covering each range?
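Something along these lines is what we had in mind (a rough sketch only; the
table/column names and chunk size are placeholders, and it uses min/max id
ranges rather than an explicit list of ids):

CHUNK_SIZE = 100000  # arbitrary; we'd tune this

# Find the id range, then COPY it out one slice at a time.
# Assumes the table is non-empty and id is the auto-incrementing key.
cur.execute("SELECT min(id), max(id) FROM big_table")
min_id, max_id = cur.fetchone()

with open("export.csv", "w") as f:
    for start in range(min_id, max_id + 1, CHUNK_SIZE):
        end = start + CHUNK_SIZE - 1
        sql = ("COPY (SELECT * FROM big_table "
               "WHERE id BETWEEN %d AND %d ORDER BY id) "
               "TO STDOUT WITH CSV" % (start, end))
        # No HEADER here, so the chunks concatenate into one valid CSV file.
        cur.copy_expert(sql, f)

Each chunk would then be its own short-lived COPY statement, which is what
we're hoping avoids the recovery-conflict cancellation.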
Or would it be better to use LIMIT/OFFSET to break it up? I'm not sure how
we'd figure out when we'd reached the end of the result set, though (apart
from just counting the results?).
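The LIMIT/OFFSET version we're imagining would look roughly like this (again
just a sketch with placeholder names; it detects the end by noticing that a
chunk wrote nothing to the file):

CHUNK_SIZE = 100000  # arbitrary
offset = 0

with open("export.csv", "w") as f:
    while True:
        # ORDER BY keeps the paging deterministic across chunks.
        sql = ("COPY (SELECT * FROM big_table ORDER BY id "
               "LIMIT %d OFFSET %d) TO STDOUT WITH CSV"
               % (CHUNK_SIZE, offset))
        before = f.tell()
        cur.copy_expert(sql, f)
        if f.tell() == before:
            # The last COPY wrote nothing, so we've run off the end.
            break
        offset += CHUNK_SIZE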
Or are there other approaches you could recommend?
Cheers,
Victor