From: Gmail <robjsargent(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries
Date: 2016-12-04 16:52:45
Message-ID: 00F6777D-4500-4FEF-9BCE-0D1A22C1A2AB@gmail.com
Lists: pgsql-general
> On Dec 4, 2016, at 9:32 AM, Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
>
> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
>
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).
>
> I tried to work around the upsert problem by using pygresql inline=True
> (instead of default PREPAREd statements) but both have the same issue.
>
> I created a test script which demonstrates the problem (attached).
>
> It seems to me that there's currently no way to "upsert" such a wide table?
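A likely explanation for the ~830-column threshold reported above: in an INSERT ... ON CONFLICT DO UPDATE, each data column appears once in the INSERT target list and again in the DO UPDATE SET list, so a table with n columns needs roughly 2n target-list entries, and n > ~830 exceeds PostgreSQL's 1664-entry limit. The sketch below (hypothetical table and column names, not from the original script) generates such a statement so the entry count can be seen directly:

```python
# Sketch with assumed names: build an INSERT ... ON CONFLICT DO UPDATE
# ("upsert") statement for a wide table. Each data column contributes
# one entry to the INSERT column list and one SET assignment, so a
# table with n data columns needs about 2*n target-list entries --
# which is presumably why tables with more than ~830 columns hit the
# "target lists can have at most 1664 entries" error.

def build_upsert(table, key_cols, data_cols):
    """Generate an upsert statement with %s placeholders (hypothetical helper)."""
    all_cols = list(key_cols) + list(data_cols)
    col_list = ", ".join(all_cols)
    placeholders = ", ".join("%s" for _ in all_cols)
    # EXCLUDED refers to the row proposed for insertion.
    set_list = ", ".join(f"{c} = EXCLUDED.{c}" for c in data_cols)
    key_list = ", ".join(key_cols)
    return (
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders}) "
        f"ON CONFLICT ({key_list}) DO UPDATE SET {set_list}"
    )

# 900 data columns: 901 entries in the INSERT list plus 900 SET
# assignments is well over the 1664-entry limit, so PostgreSQL would
# reject this statement even though each list alone is under the cap.
sql = build_upsert("wide_tbl", ["id"], [f"c{i}" for i in range(900)])
```

This is consistent with Justin's observation that plain INSERTs up to the 1600-column limit work, while upserts fail past roughly half that.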
Pardon my intrusion here, but I'm really curious what sort of datum has so many attributes?