From: DANTE ALEXANDRA <ALEXANDRA(dot)DANTE(at)BULL(dot)NET>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: out of memory during query execution
Date: 2005-12-20 12:35:03
Message-ID: 43A7FA77.9010402@BULL.NET
Lists: pgsql-general
Hello,
The postmaster is launched by the user "pg_810", who is not the root user.
When I run the "ulimit -a" command, I get:
$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 2000
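For completeness, here is a sketch of how the soft and hard limits can be compared for this account (this assumes the shell's ulimit builtin accepts the -S and -H flags; a plain "ulimit -a" normally reports only the soft limits):

$ ulimit -Sa    # soft limits: what each process actually starts with
$ ulimit -Ha    # hard limits: the ceiling up to which the soft limits may be raised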
You will find below the explain plan of one of the queries that finished with
"out of memory". This query contains an aggregate and a sub-select with 6
joins:
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=103283274.03..103283274.07 rows=1 width=76)
   ->  Sort  (cost=103283274.03..103283274.04 rows=1 width=76)
         Sort Key: nation.n_name, date_part('year'::text, (orders.o_orderdate)::timestamp without time zone)
         ->  Nested Loop  (cost=2447049.00..103283274.02 rows=1 width=76)
               Join Filter: ("outer".s_nationkey = "inner".n_nationkey)
               ->  Nested Loop  (cost=2447049.00..103283272.45 rows=1 width=55)
                     ->  Nested Loop  (cost=2447049.00..103283267.25 rows=1 width=59)
                           ->  Hash Join  (cost=2447049.00..103256685.03 rows=4800 width=80)
                                 Hash Cond: ("outer".l_suppkey = "inner".s_suppkey)
                                 ->  Hash Join  (cost=2311445.00..102985544.04 rows=2880228 width=64)
                                       Hash Cond: ("outer".l_partkey = "inner".p_partkey)
                                       ->  Seq Scan on lineitem  (cost=0.00..69142803.64 rows=1800142464 width=56)
                                       ->  Hash  (cost=2311205.00..2311205.00 rows=96000 width=8)
                                             ->  Seq Scan on part  (cost=0.00..2311205.00 rows=96000 width=8)
                                                   Filter: ((p_name)::text ~~ '%green%'::text)
                                 ->  Hash  (cost=110525.00..110525.00 rows=3000000 width=16)
                                       ->  Seq Scan on supplier  (cost=0.00..110525.00 rows=3000000 width=16)
                           ->  Index Scan using i_ps_partkey_suppkey on partsupp  (cost=0.00..5.52 rows=1 width=27)
                                 Index Cond: ((partsupp.ps_partkey = "outer".l_partkey) AND (partsupp.ps_suppkey = "outer".l_suppkey))
                     ->  Index Scan using i_o_orderkey on orders  (cost=0.00..5.19 rows=1 width=12)
                           Index Cond: (orders.o_orderkey = "outer".l_orderkey)
               ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=37)
(22 rows)
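If it is useful, here is a sketch of how the memory-related server settings could be checked with psql (the parameter names are standard PostgreSQL settings, and the database name below is only a placeholder, not the one used on this system):

$ psql -d <database> -c "SHOW shared_buffers;"
$ psql -d <database> -c "SHOW work_mem;"
$ psql -d <database> -c "SHOW maintenance_work_mem;"

The hash joins in this plan build their hash tables in memory, so work_mem is one of the settings that influences how much memory each of those steps may try to allocate.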
Regards,
Alexandra DANTE
Tom Lane wrote:
>DANTE ALEXANDRA <ALEXANDRA(dot)DANTE(at)BULL(dot)NET> writes:
>
>
>>I am a PostgreSQL newbie. I work with the 8.1.0 release on AIX 5.3, with
>>300GB of data.
>>Some of the queries launched on this database finish with an "*out of
>>memory*" error. The queries that failed contain a lot of joins (between 6
>>tables), sub-selects and aggregates. For these queries, the log file
>>contains:
>>psql:Q9.sql:40: ERROR: out of memory
>>DETAIL: Failed on request of size 148.
>>
>>
>
>Hmm ... what ulimit settings are you running the postmaster under?
>Could we see the EXPLAIN plans for some of the failing queries?
>
> regards, tom lane
>