Re: Huge Data

From: Richard Huxton <dev(at)archonet(dot)com>
To: Sezai YILMAZ <sezai(dot)yilmaz(at)pro-g(dot)com(dot)tr>, pgsql-general(at)postgresql(dot)org
Subject: Re: Huge Data
Date: 2004-01-14 11:48:15
Message-ID: 200401141148.15286.dev@archonet.com
Lists: pgsql-general

On Wednesday 14 January 2004 11:11, Sezai YILMAZ wrote:
> Hi,
>
> I use PostgreSQL 7.4 to store a huge amount of data, for example 7
> million rows. But when I run the query "select count(*) from table;", it
> returns after about 120 seconds. Is this normal for such a large
> table? Are there any methods to speed up the query? The huge
> table has an integer primary key and some other indexes on other columns.

PG uses MVCC to manage concurrency. A downside of this is that to verify the
exact number of rows in a table you have to visit them all.

There's plenty on this in the archives, and probably the FAQ too.

What are you using the count() for?
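
If an approximate figure is good enough (e.g. for a dashboard), a common workaround is to read the planner's statistics instead of scanning the table. This is a sketch, assuming a table named mytable; reltuples is only as fresh as the last VACUUM or ANALYZE:

```sql
-- Approximate row count from the system catalog.
-- reltuples is maintained by VACUUM/ANALYZE, so it is an
-- estimate, not a transactionally exact count.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';

-- Refresh the statistics when the estimate drifts too far:
ANALYZE mytable;
```

If you need an exact, always-current count, the usual approach is a separate counter table kept up to date by triggers on INSERT/DELETE.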

--
Richard Huxton
Archonet Ltd

In response to

  • Huge Data at 2004-01-14 11:11:42 from Sezai YILMAZ
