From: pf(at)pfortin(dot)com
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: cache lookup failed for function 0
Date: 2023-09-30 16:33:06
Message-ID: 20230930123306.12d10df2@pfortin.com
Lists: pgsql-general
On Fri, 29 Sep 2023 18:21:02 -0400 Tom Lane wrote:
>pf(at)pfortin(dot)com writes:
>> As a test, rather than use INSERT, I recently wrote a python test script
>> to import some 8M & 33M record files with COPY instead. These worked with
>> last weekend's data dump. Next, I wanted to look into importing a subset
>> of columns using the below logic; but I'm getting "ERROR: cache lookup
>> failed for function 0". Re-running the same full imports that worked
>> Saturday, I now get the same error.
>
>> Could something in the DB cause this "function" error?
>
>"cache lookup failed" certainly smells like a server internal error,
>but we'd have heard about it if the trivial case you show could reach
>such a problem. I'm thinking there's things you haven't told us.
>What extensions do you have installed? Maybe an event trigger?
I have one production DB with fuzzystrmatch installed, but it's not in
any other DB. I'm trying to import into a test DB and am not yet at the
point of understanding or using triggers. This is a very simple setup,
other than the volume of data. The production DB has many tables, mostly
in the range of 8M-33M rows.
>Also, the reference to ENCODING 'ISO-8859-1' makes me wonder what
>encoding conversion is being performed.
The source files are mostly UTF-8; some files contain the ½ character
(0xBD) in street addresses, hence the ISO-8859-1 option on the COPY.
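
For context, a minimal sketch of the kind of COPY-based import being
discussed, importing a subset of columns with the server converting the
input from ISO-8859-1. The table name, column list, and file name below
are hypothetical, since the actual script isn't shown in this message;
only the ENCODING option and the column-subset idea come from the thread.

# Minimal sketch (assumed names): COPY a subset of columns from a CSV,
# letting the server convert the input from ISO-8859-1 to the DB encoding.
import psycopg2

conn = psycopg2.connect("dbname=testdb")  # hypothetical test database
with conn, conn.cursor() as cur, open("addresses.csv", "rb") as f:
    cur.copy_expert(
        """
        COPY addresses (name, street_address, city)  -- hypothetical table/columns
        FROM STDIN
        WITH (FORMAT csv, HEADER true, ENCODING 'ISO-8859-1')
        """,
        f,
    )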
> regards, tom lane
Thanks,
Pierre