Re: Performance with very large tables

From: "Shoaib Mir" <shoaibmir(at)gmail(dot)com>
To: "Richard Huxton" <dev(at)archonet(dot)com>
Cc: "Jan van der Weijde" <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Performance with very large tables
Date: 2007-01-15 11:24:24
Message-ID: bf54be870701150324j3ec5126blcb02c362c73dbff6@mail.gmail.com
Lists: pgsql-general

You can also opt for partitioning the tables; that way a SELECT will only
read the data from the required partition.
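In 8.2 this means table inheritance plus constraint exclusion (declarative
partitioning does not exist yet). A minimal sketch, assuming a date-keyed
table -- the table and column names here are illustrative, not from the
original thread:

```sql
-- Parent table; the child tables hold the actual rows.
CREATE TABLE measurements (
    id      bigint NOT NULL,
    logdate date   NOT NULL,
    value   numeric
);

-- One child per month, with a CHECK constraint the planner can use.
CREATE TABLE measurements_2007_01 (
    CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
) INHERITS (measurements);

CREATE TABLE measurements_2007_02 (
    CHECK (logdate >= DATE '2007-02-01' AND logdate < DATE '2007-03-01')
) INHERITS (measurements);

-- With constraint exclusion on, this scans only the 2007-01 child.
SET constraint_exclusion = on;
SELECT * FROM measurements WHERE logdate = DATE '2007-01-15';
```

Inserts normally go through a trigger or rule (or directly into the right
child) so each row lands in the matching partition.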

--------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/15/07, Richard Huxton <dev(at)archonet(dot)com> wrote:
>
> Jan van der Weijde wrote:
> > Hello all,
> >
> > one of our customers is using PostgreSQL with tables containing millions
> > of records. A simple 'SELECT * FROM <table>' takes way too much time in
> > that case, so we have advised him to use the LIMIT and OFFSET clauses.
>
> That won't reduce the time to fetch millions of rows.
>
> It sounds like your customer doesn't want millions of rows at once, but
> rather a few rows quickly and then to fetch more as required. For this
> you want to use a cursor. You can do this via SQL, or perhaps via your
> database library.
>
> In SQL:
> http://www.postgresql.org/docs/8.2/static/sql-declare.html
> http://www.postgresql.org/docs/8.2/static/sql-fetch.html
> In pl/pgsql:
> http://www.postgresql.org/docs/8.2/static/plpgsql-cursors.html
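> A minimal sketch of the SQL-level cursor approach (the cursor name and
> table are illustrative). DECLARE must run inside a transaction unless
> you use WITH HOLD:
>
> ```sql
> BEGIN;
> DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
> FETCH 100 FROM big_cur;   -- first batch of rows, returned quickly
> FETCH 100 FROM big_cur;   -- next batch, and so on as required
> CLOSE big_cur;
> COMMIT;
> ```
>
> Unlike repeated LIMIT/OFFSET queries, the cursor keeps its position, so
> each FETCH is cheap instead of rescanning the skipped rows.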
>
> HTH
> --
> Richard Huxton
> Archonet Ltd
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend
>
