From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Message queue table..
Date: 2008-04-18 19:57:10
Message-ID: 60mynros15.fsf@dba2.int.libertyrms.com
Lists: pgsql-performance
jesper(at)krogh(dot)cc (Jesper Krogh) writes:
> I have this "message queue" table.. currently with 8m+
> records. Picking the top priority messages seem to take quite
> long.. it is just a matter of searching the index.. (just as explain
> analyze tells me it does).
>
> Can anyone digest further optimizations out of this output? (All
> records have funcid=4)
>
> # explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey,
> job.insert_time, job.run_after, job.grabbed_until, job.priority,
> job.coalesce FROM workqueue.job WHERE (job.funcid = 4) AND
> (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668)
> AND (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1
> ;
There might be value in having one or more extra indices...
Here are *plausible* candidates:
1. If "funcid = 4" is a highly selective filter (e.g. - you are always
running this query, and funcid is often <> 4), then you might add
partial indexes such as:
create index job_funcid_run_after on workqueue.job (run_after) where funcid = 4;
create index job_funcid_grabbeduntil on workqueue.job (grabbed_until) where funcid = 4;
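Going a step further, here's a *plausible* sketch of a single partial
index that covers the remaining equality filter and the ORDER BY column
together (the index name is just illustrative; whether the planner
prefers it depends on your data distribution):

```sql
-- Hypothetical combined partial index: matches the coalesce filter
-- and keeps matching rows ordered by priority, for funcid = 4 only.
create index job_funcid4_coalesce_priority
    on workqueue.job (coalesce, priority)
    where funcid = 4;
```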
2. Straight indices like the following:
create index job_run_after on workqueue.job(run_after);
create index job_grabbed_until on workqueue.job(grabbed_until);
create index job_funcid on workqueue.job(funcid);
create index job_coalesce on workqueue.job(coalesce);
Note that it is _possible_ (though by no means guaranteed) that
several of these might prove useful in combination, if you're running
8.1+, where PostgreSQL supports bitmap index scans and can AND
multiple indexes together in a single plan.
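You can check whether the planner actually combines them by looking
for a BitmapAnd node over several Bitmap Index Scans in the plan; just
re-run your original query under EXPLAIN ANALYZE after creating the
indexes (column list abbreviated here):

```sql
explain analyze
select job.jobid, job.priority
from workqueue.job
where job.funcid = 4
  and job.run_after <= 1208442668
  and job.grabbed_until <= 1208442668
  and job.coalesce = 'Efam'
order by funcid, priority asc
limit 1;
```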
Another possibility...
3. You might change your process to process multiple records in a
"run" so that you might instead run the query (perhaps via a cursor?)
with LIMIT [Something Bigger than 1].
It does seem mighty expensive to run a 245ms query to find just one
record. It seems quite likely that you could return the top 100 rows
(LIMIT 100) without the query necessarily taking any longer.
Returning 100 tuples in 245ms seems rather more acceptable, no? :-)
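A minimal sketch of the cursor approach (the batch of 100 and the
FETCH size of 10 are just illustrative numbers, and the cursor name is
made up):

```sql
begin;

declare job_batch cursor for
    select job.jobid, job.funcid, job.arg, job.uniqkey,
           job.insert_time, job.run_after, job.grabbed_until,
           job.priority, job.coalesce
    from workqueue.job
    where job.funcid = 4
      and job.run_after <= 1208442668
      and job.grabbed_until <= 1208442668
      and job.coalesce = 'Efam'
    order by funcid, priority asc
    limit 100;

-- Pull records as the worker is ready for them,
-- paying the expensive sort only once per batch.
fetch 10 from job_batch;
fetch 10 from job_batch;

close job_batch;
commit;
```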
--
(format nil "~S(at)~S" "cbbrowne" "linuxfinances.info")
http://linuxdatabases.info/info/linuxdistributions.html
Rules of the Evil Overlord #32. "I will not fly into a rage and kill a
messenger who brings me bad news just to illustrate how evil I really
am. Good messengers are hard to come by."
<http://www.eviloverlord.com/>