From: | "Davor J(dot)" <DavorJ(at)live(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Optimizer: ranges and partial indices? Or use partitioning? |
Date: | 2010-06-21 17:13:17 |
Message-ID: | hvo6lj$g7v$1@news.hub.org |
Lists: pgsql-general
I have the same kind of table as yours, with the potential to grow to over
50 billion records once operational. But our hardware is currently very
limited (8 GB RAM).
I concur with Tom Lane that partial indexes aren't really an option here,
but what about partitioning?
I read from the Postgres docs that "The exact point at which a table will
benefit from partitioning depends on the application, although a rule of
thumb is that the size of the table should exceed the physical memory of the
database server."
http://www.postgresql.org/docs/current/static/ddl-partitioning.html
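As a quick way to see where a table stands relative to that rule of thumb,
one can compare its on-disk footprint against physical memory. A minimal
sketch, assuming a hypothetical table "measurements" and index
"measurements_source_ts_idx" (the names are placeholders, not from the
original thread):

-- Total on-disk size of the table, including its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('measurements'));

-- Size of a single index, to compare against available RAM.
SELECT pg_size_pretty(pg_relation_size('measurements_source_ts_idx'));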
Now, a table with 500 million records would already exceed our RAM, so I
wonder what impact a 50-billion-record table would have on simple lookup
performance (i.e. source = fixed, timestamp = range), taking into account
that a global index would exceed our RAM at around 1 billion records.
Has anyone done any testing? Is partitioning a viable option in such a
scenario?
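For concreteness, here is a minimal sketch of the inheritance-based scheme
the linked docs describe, assuming a hypothetical readings(source, ts, value)
table partitioned by month (table and column names are illustrative only):

-- Parent table; it holds no rows itself.
CREATE TABLE readings (
    source integer     NOT NULL,
    ts     timestamptz NOT NULL,
    value  double precision
);

-- One child table per month; the CHECK constraint lets the planner
-- skip partitions whose range cannot match the query.
CREATE TABLE readings_2010_06 (
    CHECK (ts >= '2010-06-01' AND ts < '2010-07-01')
) INHERITS (readings);

-- Index each child on the lookup pattern (source = fixed, ts = range);
-- each per-partition index stays small enough to be cached.
CREATE INDEX readings_2010_06_source_ts ON readings_2010_06 (source, ts);

-- Inserts must be routed to the right child (a trigger can automate this).
INSERT INTO readings_2010_06 VALUES (42, '2010-06-21 17:13', 1.0);

-- With constraint exclusion enabled, a range query touches only the
-- children whose CHECK constraints overlap the requested range.
SET constraint_exclusion = on;
SELECT * FROM readings
 WHERE source = 42
   AND ts >= '2010-06-01' AND ts < '2010-06-08';

EXPLAIN on the final query should show only the matching child (and its
index) being scanned, which is what would keep the working set per lookup
within RAM.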
"Adrian von Bidder" <avbidder(at)fortytwo(dot)ch> wrote in message
news:201003020849(dot)19133(at)fortytwo(dot)ch(dot)(dot)(dot)