From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Mike Charnoky <noky(at)nextbus(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: intermittant performance problem
Date: 2009-03-10 12:22:56
Message-ID: 87ljrd7ejj.fsf@oxford.xeocode.com
Lists: pgsql-general
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Mike Charnoky <noky(at)nextbus(dot)com> writes:
>> The sampling query which runs really slow on some days looks something
>> like this:
>
>> INSERT INTO sampled_data
>> (item_name, timestmp, ... )
>> SELECT item_name, timestmp, ...
>> FROM raw_data
>> WHERE timestmp >= ? and timestmp < ?
>> AND item_name=?
>> AND some_data_field NOTNULL
>> ORDER BY random()
>> LIMIT ?;
>
> Hmph, I'd expect that that would run pretty slowly *all* the time :-(.
> There's no good way to optimize "ORDER BY random()".
This seems kind of unlikely, but does the parameter to the LIMIT vary a lot? If
the limit is small enough for all the chosen records to fit in work_mem, you'll
avoid a disk sort and get a top-k heapsort instead. If it overflows work_mem,
it falls back to a full disk sort of all the records picked from raw_data.
It seems much more likely that whichever index it's using on timestmp,
item_name, or some_data_field is sometimes being chosen and sometimes not.
Perhaps the planner is switching from an index on one of those columns to an
index on some other column, and that's what's throwing the runtime off.
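That should be easy to spot too (same placeholder parameters as above; plain
EXPLAIN doesn't actually run the query): grab the plan on a fast day and on a
slow day and compare them.

    EXPLAIN
    SELECT item_name, timestmp
    FROM raw_data
    WHERE timestmp >= '2009-03-09' AND timestmp < '2009-03-10'
      AND item_name = 'some_item'
      AND some_data_field IS NOT NULL
    ORDER BY random()
    LIMIT 10000;

If the "Index Scan using ..." node names a different index on the slow days,
the plan is flipping and it's probably a statistics problem rather than the
sort itself.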
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!