From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Yeu <david(dot)yeu(at)skype(dot)net>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>, "ops(at)groupme(dot)com" <ops(at)groupme(dot)com>
Subject: Re: Performance on large, append-only tables
Date: 2012-02-10 15:35:52
Message-ID: 13841.1328888152@sss.pgh.pa.us
Lists: pgsql-performance
David Yeu <david(dot)yeu(at)skype(dot)net> writes:
> Our queries essentially fall into the following cases:
> * WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
> * WHERE group_id = ? AND id > ? ORDER BY created_at DESC;
> * WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;
> * WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;
All of those should be extremely cheap if you've got the right
indexes, with the exception of the last one. Large OFFSET values are
never a good idea, because Postgres always has to scan and discard
that many rows.
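
For instance, the first pattern can be answered straight off a
composite index along these lines (a sketch only; the table name
"messages" is assumed, since the original queries don't show it):

    CREATE INDEX messages_group_created_idx
        ON messages (group_id, created_at DESC);

With that in place, Postgres can walk the index from the newest entry
for the given group and stop after 20 rows, with no sort step at all.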
If you need to fetch successive pages, consider using a cursor with a
series of FETCH commands.
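
A minimal sketch of the cursor approach (again assuming a "messages"
table, with "?" standing for a parameter as in your queries; cursors
without WITH HOLD only live inside a transaction):

    BEGIN;
    DECLARE recent_msgs CURSOR FOR
        SELECT * FROM messages
        WHERE group_id = ?
        ORDER BY created_at DESC;
    FETCH 20 FROM recent_msgs;  -- first page
    FETCH 20 FROM recent_msgs;  -- next page, continues the same scan
    CLOSE recent_msgs;
    COMMIT;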
Another possibility, if the data is sufficiently constrained, is to
move the limit point with each new query, i.e. instead of OFFSET use
something like
    WHERE group_id = ? AND created_at < last-previous-value
    ORDER BY created_at DESC LIMIT 20;
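
Spelled out, the application remembers the created_at of the last row
it displayed and feeds it back as the bound for the next page (same
assumed table name as above):

    SELECT * FROM messages
    WHERE group_id = ? AND created_at < ?  -- last created_at shown
    ORDER BY created_at DESC
    LIMIT 20;

This matches the composite index sketched above, so every page is an
equally cheap index scan, no matter how deep the user pages back.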
regards, tom lane