From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
Cc: Navaneethan R <nava(at)gridlex(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Scaling 10 million records in PostgreSQL table
Date: 2012-10-08 22:42:39
Message-ID: CAEV0TzDCeAA1Tvej6rprk8agRcRPJk3DzmPVMDv+5dK99Z_55g@mail.gmail.com
Lists: pgsql-performance
On Mon, Oct 8, 2012 at 1:27 PM, Craig Ringer <ringerc(at)ringerc(dot)id(dot)au> wrote:
>
> If you already have appropriate indexes and have used `explain analyze` to
> verify that the query isn't doing anything slow and expensive, it's
> possible the easiest way to improve performance is to set up async
> replication or log shipping to a local hot standby on real physical
> hardware, then do the query there.
>
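To make the quoted advice concrete, here is a minimal sketch of the kind of check Craig is describing. The table, column, and index names below are hypothetical, since the schema and query were not posted in this thread:

    -- Hypothetical names; substitute the real table and filter.
    -- First, see what the planner actually does and where the time goes:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM big_table
    WHERE active = true;

    -- If the plan shows a sequential scan over a selective filter, an index
    -- (a partial one here) built without blocking writes may help:
    CREATE INDEX CONCURRENTLY big_table_active_idx
        ON big_table (active)
        WHERE active;

Only once the plan confirms the query itself is reasonable does it make sense to reach for a hot standby on separate hardware.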
I've run PostgreSQL on medium instances using Elastic Block Store (EBS) for
storage and had no difficulty running queries like this one on tables of
comparable (and larger) size. It might not come back in 10ms, but such
queries weren't so slow that I would describe the wait as "a lot of time"
either. My guess is that this is a sequential scan on a 10 million row
table with lots of bloat due to updates. Without more info about the table
structure and explain analyze output, we are all just guessing, though.
Please read the wiki page that describes how to submit performance
problems, and then restate your question.
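For what it's worth, a quick way to see whether update bloat is plausible, again using a placeholder table name, is to look at the dead-tuple counts the statistics collector keeps:

    -- Placeholder table name; substitute the real one.
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'big_table';

    -- An n_dead_tup that is large relative to n_live_tup points to bloat;
    -- vacuuming (and tuning autovacuum so it keeps up) is usually the fix.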