From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: The database is very slow !
Date: 2003-08-08 14:46:58
Message-ID: 60ekzw2om5.fsf@dev6.int.libertyrms.info
Lists: pgsql-general

proghome(at)silesky(dot)com (krystoffff) writes:
> I currently have PostgreSQL 7.1 installed on a server with about 700
> MB of RAM.
>
> I have many problems of speed with a database I created. For example,
> it took almost 12 sec to run the query "select * from table" directly
> from PostgreSQL, on a table with 4000 records and 60 fields ...
> And the whole application built on this database is very very slow
> (some pages take almost 20 seconds to load !)
>
> I verified the indexes, I think they are ok, and I tried to make my
> queries as short as possible (without select * but with select
> field1, field2, ...)
>
> But anyway, I guess there is a problem of speed directly with the
> database, because I think it is not normal to need 12 sec to run a
> query on a table with only 4000 records ...

The perpetual first question...

"Have you run VACUUM ANALYZE?"

You may want to run VACUUM VERBOSE on the database and see how many
dead tuples get deleted; if a lot of data gets thrown away, that's a
good sign that things ought to get faster shortly.
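
For instance, a run might look like this ("yourdb" and "yourtable"
are placeholders for your own names, not anything from your post):

yourdb=# VACUUM VERBOSE yourtable;

The NOTICE output should report, table by table, how many dead tuples
were removed and how many pages were reclaimed; a large removal count
on a table with only 4000 live rows would point straight at bloat.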

ANALYZE will recalculate statistics that are used for query planning,
and that, too, is likely to be helpful.
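
If I recall correctly, ANALYZE isn't available as a standalone
command until after 7.1, so on your version the usual idiom is to
fold it into the vacuum pass (again, the names are placeholders):

yourdb=# VACUUM VERBOSE ANALYZE yourtable;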

If you can, head over to the pgsql-performance list, and supply some
sample queries along with the output of running "EXPLAIN ANALYZE" on
them.

Here's a sample for a query that nests together a whole bunch of
views.

portfolio=# explain analyze select * from cdnportfolio;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan cdnportfolio (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.63..3330.07 rows=25 loops=1)
-> Aggregate (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.62..3329.91 rows=25 loops=1)
-> Group (cost=6311.41..11473.17 rows=224 width=153) (actual time=3327.38..3328.26 rows=39 loops=1)
-> Merge Join (cost=6311.41..11472.61 rows=224 width=153) (actual time=3327.36..3327.99 rows=39 loops=1)
Merge Cond: ("outer".symbol = "inner".symbol)
-> Sort (cost=263.39..263.41 rows=8 width=102) (actual time=50.49..50.51 rows=39 loops=1)
Sort Key: stocks.symbol
-> Hash Join (cost=261.57..263.27 rows=8 width=102) (actual time=49.58..50.16 rows=39 loops=1)
Hash Cond: ("outer".symbol = "inner".symbol)
-> Seq Scan on stocktxns (cost=0.00..1.40 rows=40 width=31) (actual time=0.08..0.23 rows=40 loops=1)
-> Hash (cost=261.55..261.55 rows=5 width=71) (actual time=49.40..49.40 rows=0 loops=1)
-> Hash Join (cost=260.10..261.55 rows=5 width=71) (actual time=49.05..49.33 rows=25 loops=1)
Hash Cond: ("outer".exchange = "inner".exchange)
-> Seq Scan on stocks (cost=0.00..1.25 rows=25 width=15) (actual time=0.08..0.16 rows=25 loops=1)
-> Hash (cost=260.10..260.10 rows=1 width=56) (actual time=48.87..48.87 rows=0 loops=1)
-> Nested Loop (cost=220.18..260.10 rows=1 width=56) (actual time=48.71..48.84 rows=4 loops=1)
-> Merge Join (cost=220.18..221.97 rows=7 width=29) (actual time=48.62..48.65 rows=4 loops=1)
Merge Cond: ("outer".from_curr = "inner".currency)
-> Sort (cost=219.10..219.94 rows=338 width=22) (actual time=48.35..48.35 rows=2 loops=1)
Sort Key: lastcurrdate.from_curr
-> Subquery Scan lastcurrdate (cost=0.00..204.92 rows=338 width=22) (actual time=0.51..48.24 rows=2 loops=1)
-> Aggregate (cost=0.00..204.92 rows=338 width=22) (actual time=0.50..48.22 rows=2 loops=1)
-> Group (cost=0.00..196.47 rows=3377 width=22) (actual time=0.41..43.22 rows=3376 loops=1)
-> Index Scan using currency_conversion_idx on currency_conversion (cost=0.00..179.59 rows=3377 width=22) (actual time=0.40..28.93 rows=3376 loops=1)
Filter: (to_curr = 'CDN'::bpchar)
-> Sort (cost=1.08..1.09 rows=4 width=14) (actual time=0.24..0.24 rows=4 loops=1)
Sort Key: exchanges.currency
-> Seq Scan on exchanges (cost=0.00..1.04 rows=4 width=14) (actual time=0.10..0.12 rows=4 loops=1)
-> Index Scan using currency_conversion_idx on currency_conversion (cost=0.00..5.63 rows=1 width=27) (actual time=0.03..0.04 rows=1 loops=4)
Index Cond: (("outer".from_curr = currency_conversion.from_curr) AND (currency_conversion.to_curr = 'CDN'::bpchar) AND (currency_conversion.date = "outer".lastdate))
-> Materialize (cost=11203.81..11203.81 rows=990 width=51) (actual time=3276.83..3276.87 rows=53 loops=1)
-> Merge Join (cost=6048.02..11203.81 rows=990 width=51) (actual time=1797.46..3276.67 rows=39 loops=1)
Merge Cond: (("outer".symbol = "inner".symbol) AND ("outer".date = "inner".lastdate))
-> Index Scan using stockprice_by_date on stockprices (cost=0.00..4827.31 rows=104549 width=27) (actual time=0.46..921.54 rows=104549 loops=1)
-> Sort (cost=6048.02..6074.15 rows=10455 width=16) (actual time=1713.56..1713.63 rows=39 loops=1)
Sort Key: lastdate.symbol, lastdate.lastdate
-> Subquery Scan lastdate (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.81..1713.10 rows=39 loops=1)
-> Aggregate (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.80..1712.77 rows=39 loops=1)
-> Group (cost=0.00..5088.68 rows=104549 width=16) (actual time=0.03..1550.08 rows=104549 loops=1)
-> Index Scan using stockprice_by_date on stockprices (cost=0.00..4827.31 rows=104549 width=16) (actual time=0.02..1062.58 rows=104549 loops=1)
Total runtime: 3332.18 msec
(41 rows)
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/postgresql.html
It's easy to identify people who can't count to ten. They're in front
of you in the supermarket express lane. -June Henderson
