From: Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
To: Venki Ramachandran <venki_ramachandran(at)yahoo(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Parallel Scaling of a pgplsql problem
Date: 2012-04-26 16:13:36
Message-ID: CAEtnbpU3=saTXKcmr5n3uZ9Z+OTWjWkt7MsdRxiyOst1F0ckKQ@mail.gmail.com
Lists: pgsql-performance
On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran <venki_ramachandran(at)yahoo(dot)com> wrote:
>
> Now I have to run the same pgplsql on all possible combinations of
> employees and with 542 employees that is about say 300,000 unique pairs.
>
> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and
> show it on a screen. No user wants to wait for 3 hours, they can probably
> wait for 10 minutes (even that is too much for a UI application). How do I
> solve this scaling problem? Can I have multiple parallel sessions, each
> with multiple processes that do a pair each at 40 ms, and then
> collate the results? Does Postgres or PL/pgSQL have any parallel computing
> capability?
>
Interesting problem.
How frequently does the data change? Hourly, daily, monthly?
How granular are the time frames in the typical query? Seconds, minutes,
hours, days, weeks?
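On the parallel-sessions question: PL/pgSQL has no built-in parallelism (each function call runs inside a single backend), so the fan-out has to happen client-side, with several connections each working a slice of the ~300k pairs and the results collated afterward. A rough sketch of that chunk-and-collate shape, using threads (the work is I/O-bound on the database) and a stub in place of the real per-pair query; `compare_employees` and the scoring are placeholders, not your actual function:

```python
from itertools import combinations
from concurrent.futures import ThreadPoolExecutor

def compare_pair(pair):
    # Stand-in for the real ~40 ms per-pair PL/pgSQL call; a real worker
    # would run e.g. SELECT compare_employees(%s, %s) on its own connection.
    a, b = pair
    return (a, b, abs(a - b))  # dummy similarity score

def rank_all_pairs(employee_ids, workers=8):
    # Every unique unordered pair: C(n, 2) of them.
    pairs = list(combinations(employee_ids, 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(compare_pair, pairs))
    # Collate and rank, best score first.
    return sorted(scores, key=lambda r: r[2], reverse=True)
```

With N workers the wall-clock time divides by roughly N (until the server's CPUs or I/O saturate), so 8 sessions would take ~3.33 h / 8 ≈ 25 minutes; still not interactive, which is why precomputing is the better angle.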
I'm thinking that if you can prepare the data ahead of time, as it changes, via a
trigger or client-side code, then your problem goes away pretty quickly.
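The trigger approach could look something like the sketch below: keep a precomputed score table, and whenever one employee's data changes, re-score only the ~541 pairs involving that employee instead of all ~150k unique pairs. Table and function names (`employees`, `pair_scores`, `compare_employees`) are placeholders for whatever you actually have:

```
-- Precomputed result table the UI reads from.
CREATE TABLE pair_scores (
    emp_a  integer,
    emp_b  integer,
    score  numeric,
    PRIMARY KEY (emp_a, emp_b)
);

CREATE OR REPLACE FUNCTION refresh_pairs_for_emp() RETURNS trigger AS $$
BEGIN
    -- Drop and re-score only the pairs involving the changed employee.
    DELETE FROM pair_scores
     WHERE emp_a = NEW.emp_id OR emp_b = NEW.emp_id;
    INSERT INTO pair_scores (emp_a, emp_b, score)
    SELECT LEAST(e.emp_id, NEW.emp_id),
           GREATEST(e.emp_id, NEW.emp_id),
           compare_employees(e.emp_id, NEW.emp_id)
      FROM employees e
     WHERE e.emp_id <> NEW.emp_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER emp_changed
AFTER INSERT OR UPDATE ON employees
FOR EACH ROW EXECUTE PROCEDURE refresh_pairs_for_emp();
```

The ranking screen then becomes a simple `SELECT ... FROM pair_scores ORDER BY score DESC`, which returns instantly.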
-Greg