From: "Jorge Montero" <jorge_montero(at)homedecorators(dot)com>
To: "kuopo" <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: how to handle a big table for data log
Date: 2010-07-19 15:37:55
Message-ID: 4C442B03.2E1C.0042.0@homedecorators.com
Lists: pgsql-performance
Large tables, by themselves, are not necessarily a problem. The problem is what you might be trying to do with them. Depending on the operations, partitioning the table might help performance or make it worse.
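For illustration, here is a minimal sketch of date-based partitioning as it was typically done on the PostgreSQL releases current at the time of this thread (8.x/9.0), using table inheritance and CHECK constraints. The table and column names (log, log_time, log_type) are hypothetical and not taken from the original post:

```sql
-- Parent table plus one daily child partition (all names are illustrative).
CREATE TABLE log (
    log_time  timestamp NOT NULL,
    log_type  integer   NOT NULL,
    payload   text
);

CREATE TABLE log_2010_07_19 (
    CHECK (log_time >= DATE '2010-07-19' AND log_time < DATE '2010-07-20')
) INHERITS (log);

CREATE INDEX log_2010_07_19_time_idx ON log_2010_07_19 (log_time);

-- New rows go directly into the current day's child table
-- (or are routed there by a trigger on the parent).

-- With constraint exclusion enabled, the planner skips child tables whose
-- CHECK constraints rule out the queried date range.
SET constraint_exclusion = partition;  -- the default on 8.4 and later
```

Whether this helps depends on the queries actually filtering on the partitioning key; if they do not, every child table still gets scanned and the partitioning overhead only adds cost.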
What kind of queries are you running? How many days of history are you keeping? Could you post the EXPLAIN ANALYZE output of a problematic query?
Given the amount of data you hint at, your server configuration and any custom statistics targets for the big tables in question would also be useful.
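As an example of what is being asked for here, and assuming the hypothetical log table sketched above, the plan output and a per-column statistics target adjustment might look like this:

```sql
-- The kind of plan output being requested (the query itself is made up):
EXPLAIN ANALYZE
SELECT count(*)
FROM log
WHERE log_time >= '2010-07-18' AND log_time < '2010-07-19';

-- A higher per-column statistics target gives the planner better row
-- estimates on very large tables; re-run ANALYZE afterwards to refresh them.
ALTER TABLE log ALTER COLUMN log_time SET STATISTICS 500;
ANALYZE log;
```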
>>> kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw> 7/19/2010 1:27 AM >>>
Hi,
I have to handle a log table that accumulates a large amount of log records. This table only involves insert and query operations. To limit the table size, I tried to split this table by date. However, the number of log records is still large (46 million records per day). To further limit its size, I also tried to split this log table by log type, but that did not improve performance; it is much slower than the single big table solution. I guess this is because of the extra auto-vacuum/analyze cost for all the split tables.
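One way to check whether autovacuum/analyze activity on the many split tables really is the cost, and to loosen its thresholds for insert-mostly partitions, is sketched below; the partition name is hypothetical, the per-table storage parameters require 8.4 or later, and the values are only illustrative:

```sql
-- See when autovacuum/analyze last touched each partition:
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname LIKE 'log_%';

-- Loosen per-table autovacuum thresholds on an insert-mostly partition:
ALTER TABLE log_2010_07_19 SET (
    autovacuum_vacuum_scale_factor = 0.4,
    autovacuum_analyze_scale_factor = 0.2
);
```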
Can anyone comment on this situation? Thanks in advance.
kuopo.