Getting "cache lookup failed for aggregate" error

From: Patrick Krecker <patrick(at)judicata(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Getting "cache lookup failed for aggregate" error
Date: 2014-06-24 22:54:06
Message-ID: CAK2mJFNtrxRg8Cm3Y30nJ_vEat4DNnFvJfNFTgyF8FWETFMNWw@mail.gmail.com
Lists: pgsql-general

Hello everyone -- We received a strange error today on our production write
master. While running a routine maintenance script, we got the following error:

"ERROR: cache lookup failed for aggregate 5953992"

I think I found the culprit. We have a cron script that (among other
things) recreates the aggregate function array_cat_aggregate() once every
minute. My guess is that a) a transaction's snapshot does not extend to the
system catalogs, so dropping a custom function becomes visible to running
queries immediately, and b) we got unlucky and the aggregate was dropped
while the UPDATE was executing.

I was just wondering if my analysis is correct, so I can make the necessary
changes to production to prevent this from happening again.
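
For context, the cron job does the equivalent of this every minute (I'm
reconstructing the definition here; the exact signature may differ):

    DROP AGGREGATE IF EXISTS array_cat_aggregate(anyarray);
    CREATE AGGREGATE array_cat_aggregate(anyarray) (
        SFUNC = array_cat,   -- concatenate each row's array onto the state
        STYPE = anyarray
    );

so any query that had already resolved the aggregate's old OID could hit the
failed cache lookup once the DROP commits.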

Here is the query that produced the error:

UPDATE marbury_case
SET components_vector = (
        SELECT array_cat_aggregate(component)
        FROM (
            SELECT ARRAY[id,
                         type_id,
                         "offset",
                         length,
                         internal_id,
                         parent_id,
                         right_sibling_id] AS component
            FROM marbury_component
            WHERE case_id = marbury_case.id
            ORDER BY id
        ) AS foo),
    attributes_json = (
        SELECT array_to_json(array_agg(attributes || hstore('_ind', ind::text)))
        FROM (
            SELECT (rank() OVER (ORDER BY id)) - 1 AS ind, attributes
            FROM marbury_component
            WHERE case_id = marbury_case.id
            ORDER BY id
        ) AS foo
        WHERE attributes IS NOT NULL
          AND array_length(akeys(attributes), 1) > 0),
    vectors_updated = timeofday()::timestamp
WHERE id = 71865;
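
The change I'm considering is to stop dropping the aggregate unconditionally
and only create it when it's missing, along these lines (a sketch; it assumes
the (anyarray) signature above):

    DO $$
    BEGIN
        IF NOT EXISTS (
            SELECT 1
            FROM pg_aggregate a
            JOIN pg_proc p ON p.oid = a.aggfnoid
            WHERE p.proname = 'array_cat_aggregate'
        ) THEN
            CREATE AGGREGATE array_cat_aggregate(anyarray) (
                SFUNC = array_cat,
                STYPE = anyarray
            );
        END IF;
    END
    $$;

That way nothing gets dropped while a running query may still be using it.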
