From: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
To: Jonathan Morra <jonmorra(at)gmail(dot)com>
Cc: Steve Crawford <scrawford(at)pinpointresearch(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance of complicated query
Date: 2013-05-23 19:23:37
Message-ID: CAB=Je-FDkDGR6sN2O=4rLS07SK9EzNA-Yv8bmAWD8+CV6ghXgg@mail.gmail.com
Lists: pgsql-performance
> This leads to the WHERE clause, WHERE read_datetime = max_read, and hence
> I'm only summing the last read for each device for each patient.
Is the "reads" table insert-only? Do you have updates/deletes of the
"historical" rows?
> 3. Can I modify my tables to make this query (which is the crux of my
> application) run faster?
Can you maintain a second "reads" table that stores only the up-to-date
values? That will eliminate the max-over-window computation completely,
enable efficient use in other queries, and make your queries much easier
for both humans and the planner to understand.
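For example, a minimal sketch of such a "latest values" table (the column
names here are assumptions, not taken from your schema; ON CONFLICT
requires PostgreSQL 9.5+, on older versions a trigger doing
UPDATE-then-INSERT achieves the same thing):

```sql
-- Hypothetical schema: one row per (patient, device), always the newest read.
CREATE TABLE latest_reads (
    patient_id    bigint      NOT NULL,
    device_id     bigint      NOT NULL,
    read_datetime timestamptz NOT NULL,
    value         numeric     NOT NULL,
    PRIMARY KEY (patient_id, device_id)
);

-- Keep it current on every insert into "reads":
INSERT INTO latest_reads (patient_id, device_id, read_datetime, value)
VALUES ($1, $2, $3, $4)
ON CONFLICT (patient_id, device_id) DO UPDATE
   SET read_datetime = EXCLUDED.read_datetime,
       value         = EXCLUDED.value
 WHERE EXCLUDED.read_datetime > latest_reads.read_datetime;

-- The per-patient sum then needs no window functions at all:
SELECT patient_id, sum(value) AS total
  FROM latest_reads
 GROUP BY patient_id;
```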
PS. read_datetime = max_read is prone to "what if two measurements have
the same date" errors.
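One way to make the "latest row per device" deterministic even when two
reads share a timestamp is DISTINCT ON with an explicit tie-breaker in the
ORDER BY (a sketch; the surrogate key column "id" is an assumption about
your schema):

```sql
SELECT DISTINCT ON (patient_id, device_id)
       patient_id, device_id, read_datetime, value
  FROM reads
 ORDER BY patient_id, device_id, read_datetime DESC, id DESC;
```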
PPS. distinct MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read
looks like a complete mess. Why not just use GROUP BY?
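That is, the per-patient latest timestamp can come from a plain aggregate
instead of a window function plus DISTINCT (column names assumed, as
above):

```sql
SELECT patient_id, MAX(read_datetime) AS latest_read
  FROM reads
 GROUP BY patient_id;
```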
Regards,
Vladimir