From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Anibal David Acosta <aa(at)devshock(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how fast index works?
Date: 2011-09-08 17:35:58
Message-ID: CAHyXU0wgqZ6V5MR+n4Yzt9=VQ_ccU_kuPm8WdgGWjeywvzA0HQ@mail.gmail.com
Lists: pgsql-performance
On Tue, Sep 6, 2011 at 1:31 PM, Anibal David Acosta <aa(at)devshock(dot)com> wrote:
> Hi everyone,
>
>
>
> My question is: if I have a table with 500,000 rows and a SELECT of one row
> is returned in 10 milliseconds, then if the table has 6,000,000 rows and
> everything is OK (statistics, vacuum, etc.),
>
> can I suppose that the elapsed time will still be near 10 milliseconds?
The problem with large datasets doesn't come from the index itself, but
from the increased cache pressure. On today's typical servers it's all
about cache, and the fact that disks (at least non-SSD drives) are
several orders of magnitude slower than memory. Supposing you had
infinite memory holding your data files in cache, or infinitely fast
disks, looking up a record in a trillion-row table would still be
faster than reading a record from a hundred-row table that had to
fault to a spinning disk to pull up the data.
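
For what it's worth, a quick way to see this for yourself is to build two
throwaway tables of different sizes and compare primary-key lookups with
EXPLAIN (ANALYZE, BUFFERS); the table names and row counts below are just
made up for the test, not anything from your schema:

-- illustrative throwaway tables
CREATE TABLE small_t (id int PRIMARY KEY, payload text);
CREATE TABLE big_t   (id int PRIMARY KEY, payload text);

INSERT INTO small_t SELECT g, md5(g::text) FROM generate_series(1, 500000) g;
INSERT INTO big_t   SELECT g, md5(g::text) FROM generate_series(1, 6000000) g;

ANALYZE small_t;
ANALYZE big_t;

-- BUFFERS reports "shared hit" vs. "read", i.e. whether the index and heap
-- pages were already cached or had to come from disk
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM small_t WHERE id = 12345;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_t   WHERE id = 12345;

The btree on the 6,000,000-row table is typically only one level deeper than
the one on the 500,000-row table, so the index traversal itself costs roughly
one extra page touch; what actually moves the timings is whether those pages
show up as shared hits or as reads.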
merlin