From: Vick Khera <vivek(at)khera(dot)org>
To: Postgres General Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Transaction table
Date: 2010-03-21 19:30:30
Message-ID: 2968dfd61003211230n1e1545ddte8a14c4b7ace1f81@mail.gmail.com
Lists: pgsql-general

On Sat, Mar 20, 2010 at 4:47 AM, Deepa Thulasidasan
<deepatulsidasan(at)yahoo(dot)co(dot)in> wrote:
> transaction table to grow by 10 times in near future. In this regard, we would like to know if this same structure of the transaction table and the indexing would be sufficient for quick retrieval of data, or do we have to partition this table? If so, what kind of partition would be suitable?

My experience has been that things tend to slow down once a table
approaches the 100-million-row mark. Running REINDEX and VACUUM on
tables that size also takes much longer, since you tend not to have
enough memory to do those operations efficiently.
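
If you do have RAM to spare, raising maintenance_work_mem for just
that session before the heavy maintenance helps; a minimal sketch
(the table name txn and the '1GB' value are my assumptions, size it
to your machine):

    -- give the maintenance commands more memory, this session only
    SET maintenance_work_mem = '1GB';
    REINDEX TABLE txn;
    VACUUM ANALYZE txn;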

I like to partition tables such that they end up with under 10
million records each. I just (a few hours ago...) finished
partitioning and migrating the data from a single table of about
120 million records into 100 partitions of about 1.2 million rows
each. For this particular case, I partitioned on a mod-100
operation over one of the ID keys on which I do the bulk of my searches.
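
For reference, a rough sketch of that kind of layout using
inheritance-based partitioning; the txn table, the cust_id key, and
showing only 2 of the 100 children are my assumptions, not my actual
schema:

    -- empty parent; all rows live in the children
    CREATE TABLE txn (
        id      bigint      NOT NULL,
        cust_id integer     NOT NULL,
        amount  numeric,
        created timestamptz NOT NULL DEFAULT now()
    );

    -- one child per bucket; a script would generate all 100
    CREATE TABLE txn_p00 (CHECK (cust_id % 100 = 0)) INHERITS (txn);
    CREATE TABLE txn_p01 (CHECK (cust_id % 100 = 1)) INHERITS (txn);

    -- index each child on the key you search by
    CREATE INDEX txn_p00_cust_idx ON txn_p00 (cust_id);
    CREATE INDEX txn_p01_cust_idx ON txn_p01 (cust_id);

With constraint_exclusion enabled, a query that repeats the bucket
expression (e.g. WHERE cust_id = 4242 AND cust_id % 100 = 42) lets
the planner skip the other 99 children; the application or an insert
trigger has to route new rows to the right child.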

As the two Scott M.'s recommended, figure out your usage patterns and
partition along those vectors to optimize those searches. I would
not worry about optimizing for the insert pattern.

You really *never* delete this data? I suspect, then, that a
partitioning scheme where the number of partitions can grow over
time is going to be important to you.
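
Date-range children you keep adding are one such scheme, using the
same inheritance trick as above (assuming a created timestamp
column; you would pick one scheme, not both):

    -- one child per month, added as time goes on
    CREATE TABLE txn_2010_03 (
        CHECK (created >= DATE '2010-03-01' AND created < DATE '2010-04-01')
    ) INHERITS (txn);

    CREATE TABLE txn_2010_04 (
        CHECK (created >= DATE '2010-04-01' AND created < DATE '2010-05-01')
    ) INHERITS (txn);

A cron job can create next month's child ahead of time, so the number
of partitions grows with the data instead of being fixed up front.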
