From: Chris Ernst <cernst(at)esoft(dot)com>
To: Developer <dev002(at)pas-world(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: More speed counting rows
Date: 2009-07-27 14:59:08
Message-ID: 4A6DC0BC.309@esoft.com
Lists: pgsql-performance
Developer wrote:
> Hello,
>
> I am trying to optimize counting rows when using filters
> (selecting by one or more column parameters).
>
> In this case I think that PostgreSQL really counts all rows,
> resulting in unacceptable times of 4 seconds in the HTTP server
> response. Triggers + a stored count do not seem very workable here,
> because I would need to store 1.5 million counting possibilities.
>
> My question is:
> Is there any method for indirect counting, like ordered indexes +
> quadratic count?
> Any module?
> Any suggestions?
>
I had a similar problem where HTTP requests triggered a count(*) over a
table that was growing rapidly. The bigger the table got, the longer
the count took. In my case, however, the counts only have to be a
reasonable estimate of the current state, so I solved this problem with
a count_sums table that gets updated every 30 minutes by a simple
Perl script in a cron job. The HTTP requests now trigger a very fast
select from a tiny 9-row, 2-column table.
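For illustration, here's a minimal sketch of that setup in plain SQL
(table, column, and filter names are made up; my actual refresh runs
from a Perl cron script):

    -- Summary table: one row per count we care about
    -- (mine has 9 rows and 2 columns).
    CREATE TABLE count_sums (
        filter_name text PRIMARY KEY,
        row_count   bigint NOT NULL
    );

    -- Refresh from cron every 30 minutes; "files" and the WHERE
    -- clause are hypothetical stand-ins for your real table/filter.
    UPDATE count_sums
       SET row_count = (SELECT count(*) FROM files
                         WHERE status = 'active')
     WHERE filter_name = 'active_files';

    -- The HTTP request then does a cheap primary-key lookup:
    SELECT row_count FROM count_sums WHERE filter_name = 'active_files';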
How "up to date" do the counts need to be? If the count takes 4
seconds, can you run it every minute and store the counts in a table for
retrieval by the HTTP requests? Or does it absolutely have to be the
exact count at the moment of the request?
If it needs to be more real-time, you could expand on this by adding
AFTER INSERT/DELETE triggers that automatically update the counts table
to keep it current. In my case it just wasn't necessary.
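If you do go that route, a rough sketch (untested, same hypothetical
names as above) could look like this:

    -- Keep a counter of all rows in "files" current on every
    -- insert/delete. A filtered counter would also need to inspect
    -- the NEW/OLD columns before incrementing or decrementing.
    CREATE OR REPLACE FUNCTION maintain_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE count_sums SET row_count = row_count + 1
             WHERE filter_name = 'all_files';
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE count_sums SET row_count = row_count - 1
             WHERE filter_name = 'all_files';
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER files_count_trig
        AFTER INSERT OR DELETE ON files
        FOR EACH ROW EXECUTE PROCEDURE maintain_count();

One caveat: every writer then contends on that single counter row, so
under heavy insert/delete load the trigger itself can become a
bottleneck.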
- Chris