From: Gary Doades <gpd(at)gpdnet(dot)co(dot)uk>
To: pgsql-performance(at)postgresql(dot)org
Subject: Postgres backend using huge amounts of ram
Date: 2004-11-25 20:35:25
Message-ID: 41A6420D.2030307@gpdnet.co.uk
Lists: pgsql-performance
How much RAM can a single postgres backend use?
I've just loaded a moderately sized dataset into postgres and was
applying RI constraints to the tables (using pgAdmin on Windows). Part
way through I noticed the (single) postgres backend had shot up to using
300+ MB of my RAM!
The two tables are:
create table reqt_dates
(
    reqt_date_id serial,
    reqt_id integer not null,
    reqt_date date not null,
    primary key (reqt_date_id)
) without oids;
and
create table booking_plan
(
    booking_plan_id serial,
    reqt_date_id integer not null,
    booking_id integer not null,
    booking_date date not null,
    datetime_from timestamp not null,
    datetime_to timestamp not null,
    primary key (booking_plan_id)
) without oids;
and I was trying to do:
alter table booking_plan add
    foreign key (reqt_date_id)
    references reqt_dates (reqt_date_id)
    on delete cascade;
Since I can't get an explain of what the alter table was doing, I used this instead:
select count(*) from booking_plan,reqt_dates where
booking_plan.reqt_date_id = reqt_dates.reqt_date_id
and sure enough this query caused the backend to use 300+ MB of RAM. The
plan for this was:
QUERY PLAN
Aggregate (cost=37.00..37.00 rows=1 width=0) (actual
time=123968.000..123968.000 rows=1 loops=1)
-> Hash Join (cost=15.50..36.50 rows=1000 width=0) (actual
time=10205.000..120683.000 rows=1657709 loops=1)
Hash Cond: ("outer".reqt_date_id = "inner".reqt_date_id)
-> Seq Scan on booking_plan (cost=0.00..15.00 rows=1000
width=4) (actual time=10.000..4264.000 rows=1657709 loops=1)
-> Hash (cost=15.00..15.00 rows=1000 width=4) (actual
time=10195.000..10195.000 rows=0 loops=1)
-> Seq Scan on reqt_dates (cost=0.00..15.00 rows=1000
width=4) (actual time=0.000..6607.000 rows=2142184 loops=1)
Total runtime: 124068.000 ms
I then analysed the database. Note: there are no indexes at this stage
except the primary keys.
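For reference, the analyse step was nothing fancier than the bare commands (run from pgAdmin's query window), i.e. something like:

```sql
-- plain ANALYZE, no VACUUM: just refreshes the planner statistics
analyze reqt_dates;
analyze booking_plan;
```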
The same query then gave:
QUERY PLAN
Aggregate (cost=107213.17..107213.17 rows=1 width=0) (actual
time=57002.000..57002.000 rows=1 loops=1)
-> Hash Join (cost=35887.01..106384.32 rows=1657709 width=0)
(actual time=9774.000..54046.000 rows=1657709 loops=1)
Hash Cond: ("outer".reqt_date_id = "inner".reqt_date_id)
-> Seq Scan on booking_plan (cost=0.00..22103.55 rows=1657709
width=4) (actual time=10.000..19648.000 rows=1657709 loops=1)
-> Hash (cost=24355.92..24355.92 rows=2142184 width=4)
(actual time=9674.000..9674.000 rows=0 loops=1)
-> Seq Scan on reqt_dates (cost=0.00..24355.92
rows=2142184 width=4) (actual time=0.000..4699.000 rows=2142184 loops=1)
Total runtime: 57002.000 ms
This is the same set of hash joins, BUT the backend only used 30 MB of
private RAM.
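My guess (and it is only a guess) is that the hash table is sized from the planner's row estimate, so the default rows=1000 guess on the never-analysed tables let it grow unchecked well past work_mem. The stale/default statistics can be seen in pg_class, e.g.:

```sql
-- relpages/reltuples are only maintained by VACUUM and ANALYZE;
-- before the first analyse the planner falls back on defaults
select relname, relpages, reltuples
from pg_class
where relname in ('reqt_dates', 'booking_plan');
```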
Platform is Windows XP, Postgres 8.0 beta 5
shared_buffers = 4000
work_mem = 8192
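(Assuming 8.0's units, shared_buffers in 8 kB pages and work_mem in kB, that excerpt of postgresql.conf works out to roughly:)

```
# postgresql.conf excerpt
shared_buffers = 4000   # 4000 x 8 kB pages, about 32 MB
work_mem = 8192         # kB, i.e. 8 MB per sort/hash operation
```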
Any explanations?
Thanks,
Gary.