From: Brian Hurt <bhurt(at)janestcapital(dot)com>
To: cyber-postgres(at)midnightfantasy(dot)com
Cc: pgsql-performance(at)postgresql.org
Subject: Re: 121+ million record table perf problems
Date: 2007-05-18 18:53:42
Message-ID: 464DF636.40104@janestcapital.com
Lists: pgsql-performance
cyber-postgres(at)midnightfantasy(dot)com wrote:
> I need some help on recommendations to solve a perf problem.
>
> I've got a table with ~121 million records in it. Select count on it
> currently takes ~45 minutes, and an update to the table to set a value
> on one of the columns I finally killed after it ran 17 hours and had
> still not completed. Queries into the table are butt slow, and
>
This is way too long. I just did a select count(*) on a table of mine
that has 48 million rows, and it took only 178 seconds. And that's on a
serious POS disk subsystem that's giving me about half the read speed of
a single off-the-shelf SATA disk.
Since select count(*) has to read the whole table sequentially, the time it
takes is linear in the size of the table (once the table is too large
to fit entirely in cache). So I'd be surprised
if a 121 million record table took more than 500 or so seconds to read,
and would expect it to be less.
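(Back of the envelope, scaling my measurement linearly: 178 seconds *
(121M / 48M) comes out to roughly 450 seconds, which is where the ~500
second ceiling comes from.)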
So my advice: vacuum. I'll bet you've got a whole boatload of dead
tuples kicking around. Then analyze. Then consider firing off a
reindex and/or cluster against the table. The other thing I'd consider
is dropping the money on some more hardware: a few hundred bucks for
a battery-backed RAID card and half a dozen SATA drives would probably
do wonders for your performance.
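Something along these lines, where bigtable and bigtable_pkey are just
stand-ins for your actual table and one of its indexes:

    VACUUM VERBOSE bigtable;            -- reclaim dead tuples, report how many there were
    ANALYZE bigtable;                   -- refresh the planner's statistics
    REINDEX TABLE bigtable;             -- rebuild any bloated indexes
    CLUSTER bigtable_pkey ON bigtable;  -- rewrite the table in index order (locks the table)

Note that CLUSTER rewrites the whole table, so make sure you have disk
space for a second copy of it while it runs.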
>
> shared_buffers = 24MB
Up your shared buffers. This is a mistake I made originally as well:
shared_buffers is the total amount of shared buffer memory used by the
whole system. I had originally assumed it was multiplied by the number
of backends, but it's not.

With 2GB of memory, I'd start with shared_buffers at 512MB, and consider
upping it to 768MB or even 1024MB. This will also really help performance.
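That is, in postgresql.conf (this one needs a server restart to take
effect):

    shared_buffers = 512MB    # up from 24MB; roughly a quarter of RAM is a sane starting point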
> stats_start_collector = off
> stats_row_level = off
>
I think I'd also recommend turning these on.
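That is:

    stats_start_collector = on  # enables the statistics collector
    stats_row_level = on        # row-level stats; autovacuum needs these to know what to clean up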
Brian