From: | "Peter Koczan" <pjkoczan(at)gmail(dot)com> |
---|---|
To: | "sathiya psql" <sathiya(dot)psql(at)gmail(dot)com> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: postgresql is slow with larger table even it is in RAM |
Date: | 2008-03-26 23:48:56 |
Message-ID: | 4544e0330803261648l6b299d0ckc938f44a822d4d0e@mail.gmail.com |
Lists: pgsql-performance
On Tue, Mar 25, 2008 at 3:35 AM, sathiya psql <sathiya(dot)psql(at)gmail(dot)com> wrote:
> Dear Friends,
>
> I have a table with 32 lakh (3.2 million) records in it. The table size
> is nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a
> tablespace on a RAM disk and then created this table in it.
>
> So now everything is in RAM. Yet if I do a count(*) on this table it
> returns 327600 in 3 seconds. Why is it taking 3 seconds? I am sure that
> no disk I/O is happening (I confirmed with vmstat that there is no disk
> I/O and swap is not being used).
>
> Any idea on this?
>
> I searched a lot in newsgroups but can't find anything relevant, because
> everywhere people are talking about disk access speed, and here I don't
> need to worry about disk access.
>
> If required I will give more information on this.
Two things:
- Are you VACUUMing regularly? You may have a lot of dead rows, leaving
the table spread over many pages of mostly dead space. That would make
sequential scans *very* slow.
- What is shared_buffers set to? If it is very low, Postgres would be
constantly copying pages from the RAM-disk tablespace into its buffer
cache; little would stay cached, and performance would suffer. The
sketch below shows how to check both.
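A minimal sketch of how to check, assuming your table is called
"my_table" (a stand-in for your actual table name):

    -- VACUUM VERBOSE reports how many dead row versions it finds:
    VACUUM VERBOSE my_table;

    -- relpages is the number of 8 kB pages the table occupies; compare
    -- it against what ~330,000 live rows ought to need:
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'my_table';

    -- and the current size of the buffer cache:
    SHOW shared_buffers;

If relpages is far larger than the live row count justifies, a VACUUM
FULL or CLUSTER will compact the table.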
FWIW, I did a SELECT count(*) on a table with just over 300,000 rows,
and it only took 0.28 seconds.
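If you want to see where your 3 seconds go, psql's \timing plus EXPLAIN
ANALYZE shows the actual plan and runtime (again, "my_table" is a
stand-in):

    \timing
    EXPLAIN ANALYZE SELECT count(*) FROM my_table;

A seq scan touching far more pages than the data warrants would point
back at the dead-space theory above.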
Peter