From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Frank Joerdens <frank(at)x9media(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: [SQL] How to avoid "Out of memory" using aggregate functions?
Date: 1999-12-06 16:23:27
Message-ID: 18807.944497407@sss.pgh.pa.us
Lists: pgsql-sql
Frank Joerdens <frank(at)x9media(dot)com> writes:
> [ complex query ]
> which stopped working after the table grew over the 1 million mark
> (approximately) - I get the error
> FATAL 1: Memory exhausted in AllocSetAlloc()
> This sounds to me like the problem described below . . . my question is:
> Would the changes you already made to the current sources fix it for me?
Afraid not. I think what's killing you is all those date_part() and
date_trunc() operations --- the resultant datetime or float8 value from
each one occupies memory that won't get reclaimed till end of statement
:-(.
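For concreteness, a query shaped like this (just a sketch, not the query that was trimmed above) runs the hour checks first, so a row from some other day typically pays for both date_part() calls and the date_trunc() call before it gets thrown away, and every one of those per-row results hangs around until the statement finishes:

    -- hypothetical query of the same general shape; each row evaluated
    -- allocates fresh results for the date functions below, and none of
    -- that memory is reclaimed until the statement completes
    SELECT domains.internet, count(*)
    FROM stunde_test, domains
    WHERE
        date_part('hour', stunde_test.date) > 6
        AND date_part('hour', stunde_test.date) < 23
        AND date_trunc('day', stunde_test.date) = 'yesterday'
        AND domains.internet = stunde_test.destination
    GROUP BY domains.internet;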
Silly as it sounds, you might be able to put off the problem by
rearranging the order of the WHERE clauses, remembering that AND stops
evaluating its subclauses as soon as it finds a FALSE. Presumably the
day check eliminates many more rows than the time-of-day checks, so
    WHERE
        domains.internet = stunde_test.destination
        AND date_trunc('day', stunde_test.date) = 'yesterday'
        AND date_part('hour', stunde_test.date) > 6
        AND date_part('hour', stunde_test.date) < 23
would probably about halve the number of date_part+date_trunc
calculations done.
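Going a step further (a hypothetical rewrite, not something proposed in the reply above), the day check can be expressed as a plain range comparison on the column, which drops the per-row date_trunc() call entirely and leaves only the two date_part() calls for rows that fall inside the range:

    -- hypothetical alternative: compare the raw timestamp against the
    -- day boundaries instead of truncating it for every row; 'yesterday'
    -- and 'today' are the standard special date/time input strings
    WHERE
        domains.internet = stunde_test.destination
        AND stunde_test.date >= 'yesterday'
        AND stunde_test.date < 'today'
        AND date_part('hour', stunde_test.date) > 6
        AND date_part('hour', stunde_test.date) < 23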
Of course the real fix is to recycle temporary memory for all
expressions intra-query, but I do not know if that will get done for
7.0. It will get done eventually.
regards, tom lane