From: Suya Huang <shuang(at)connexity(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: what's the slowest part in the SQL
Date: 2016-08-10 02:02:10
Message-ID: 3793BB13-D4D7-41CF-8C8B-FEB6EE20234B@connexity.com
Lists: pgsql-performance
Not really; the server has 2 GB of memory (PROD has a lot more than this dev box), so both tables should fit in memory if we preload them.
MemTotal: 2049572 kB
dev=# select pg_size_pretty(pg_relation_size('data'));
pg_size_pretty
----------------
141 MB
(1 row)
Time: 2.640 ms
dev=# select pg_size_pretty(pg_relation_size('order'));
pg_size_pretty
----------------
516 MB
(1 row)
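For reference, preloading the two tables would look something like the sketch below (assuming the pg_prewarm contrib extension is installed and shared_buffers is configured large enough to hold both relations; otherwise the pages just spill back out of the buffer cache):

```sql
-- pg_prewarm ships as a contrib module; enable it once per database
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Read each table's main fork into shared buffers.
-- The function returns the number of blocks loaded.
-- "order" must be double-quoted because it is a reserved word.
SELECT pg_prewarm('data');
SELECT pg_prewarm('"order"');
```

Note that with 141 MB + 516 MB of heap, the default shared_buffers of 128 MB would not hold either table fully, so the setting would need to be raised for the prewarm to stick.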
Thanks,
Suya
On 8/10/16, 11:57 AM, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
Suya Huang <shuang(at)connexity(dot)com> writes:
> Thank you Tom very much, that’s the piece of information I missed.
> So, should I expect the nested loop join to be much faster if I cache both tables into memory (using pg_prewarm), since that avoids the disk reads?
pg_prewarm is not going to magically fix things if your table is bigger
than RAM, which it apparently is.
regards, tom lane