From: | "Richard Huxton" <dev(at)archonet(dot)com> |
---|---|
To: | "Alex Howansky" <alex(at)wankwood(dot)com>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: "critical mass" reached? |
Date: | 2001-03-13 09:26:22 |
Message-ID: | 009401c0ab9f$b18c8520$1001a8c0@archonet.com |
Lists: | pgsql-general |
From: "Alex Howansky" <alex(at)wankwood(dot)com>
> Using 7.0.3, I've got a database that has about 30 tables. One in
> particular seems to be giving us problems. It's a pretty simple table
> with one index that logs certain web site activity. It gets about 100K
> insertions per day, and now has about 6 million records.
>
> All of a sudden (yesterday), we seem to have reached a "critical mass".
> No other tables or code have changed significantly (or more than
> normal). However, the database performance has abruptly become abysmal
> -- the server which previously hadn't broken a load average of 4 now
> spikes continuously between 20 and 40, rarely dropping below 10. Web
> server logs show normal activity. Also, in the pgsql/data/base/dbname
> directory, I'm getting hundreds of pg_sorttemp and pg_noname files
> lying around.
Presumably you're running VACUUM ANALYZE regularly (at least once a day,
I'd guess), so I can only suspect that something has tipped the balance
in the planner's cost estimations. Is there a particular query that's
slow, and can you post an EXPLAIN?
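If it helps, the daily maintenance is just the following (the per-table
form uses a table name I've made up -- substitute your own):

    -- whole database: refresh row counts and statistics so the
    -- planner's cost estimates stay sane
    VACUUM ANALYZE;
    -- or just the busy log table
    VACUUM ANALYZE activity_log;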
> I thought there might be some data or index corruption, so I've even
> gone so far as to initdb and recreate the database from scratch, using
> a previous pg_dump output, but it has not helped.
Looks like you've ruled out damage to the DB. What happens if you delete
3 million of the records in your log table?
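As a rough sketch, assuming the table has a timestamp column (both names
here are my guesses, not your schema):

    -- drop everything older than 30 days, then clean up the dead tuples
    DELETE FROM activity_log
     WHERE logged_at < now() - '30 days'::interval;
    VACUUM ANALYZE activity_log;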
> Six million tables doesn't seem like it should be too much of a
> problem, but we suspect this table might be the cause of the problem
> because it's the only one that changes significantly from day to day.
> Memory is ok, there is no swapping, disk space is plentiful, I don't
> know where else to look. Any ideas?
Six million _tables_ is a lot, but you're right, 6M records is pretty
small compared to what some people are using.
See if you can't post an EXPLAIN of a problem query and the relevant table
defs.
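From psql, something along these lines will do (the table name is my
guess again, and the SELECT is only a placeholder for whichever query
is slow):

    \d activity_log                 -- table and index definitions
    -- the plan the optimizer has picked for the slow query
    EXPLAIN SELECT * FROM activity_log WHERE userid = 42;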
- Richard Huxton