| From: | Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my> |
|---|---|
| To: | "Billings, John" <John(dot)Billings(at)PAETEC(dot)com>, <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Using the power of the GPU |
| Date: | 2007-06-16 14:54:42 |
| Message-ID: | 200706161459.l5GExRWX077399@smtp4.jaring.my |
| Lists: | pgsql-general |
At 01:26 AM 6/9/2007, Billings, John wrote:
>Does anyone think that PostgreSQL could benefit from using the video
>card as a parallel computing device? I'm working on a project using
>Nvidia's CUDA with an 8800 series video card to handle non-graphical
>algorithms. I'm curious if anyone thinks that this technology could
>be used to speed up a database? If so which part of the database,
>and what kind of parallel algorithms would be used?
>Thanks,
>-- John Billings
>
I'm sure people can think of many ways to do it, BUT my concern is how
accurate and consistent the calculations would be.
So far, in the usual _display_only_ applications, if there's an error
in the GPU calculations people might not really notice it in the
output. A few small "artifacts" in one frame? No big deal to most
people's eyes.
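To make the consistency concern concrete, here's a minimal sketch of my
own (not from any real driver or from PostgreSQL): sum the same array
once serially on the CPU and once with atomic adds on the GPU. Float
addition isn't associative, so the nondeterministic accumulation order
on the GPU can give a slightly different answer than the CPU loop -- the
kind of discrepancy that would matter for something like a SUM()
aggregate. It assumes a card with hardware float atomicAdd (compute
capability 2.0 or later, i.e. newer than the 8800 mentioned above).

```c
// sum_compare.cu -- illustrative only: CPU vs GPU accumulation order.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread adds one element into a single accumulator. The order in
// which the atomic additions land is nondeterministic, unlike the
// fixed-order serial loop on the CPU below.
__global__ void sum_kernel(const float *data, int n, float *result)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(result, data[i]);
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n);
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f / (i + 1);          // values of very different magnitude

    // Serial CPU sum: one fixed accumulation order.
    float cpu_sum = 0.0f;
    for (int i = 0; i < n; ++i)
        cpu_sum += host[i];

    // GPU sum over the same data, nondeterministic accumulation order.
    float *d_data, *d_sum, gpu_sum = 0.0f;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMalloc(&d_sum, sizeof(float));
    cudaMemcpy(d_data, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_sum, &gpu_sum, sizeof(float), cudaMemcpyHostToDevice);
    sum_kernel<<<(n + 255) / 256, 256>>>(d_data, n, d_sum);
    cudaMemcpy(&gpu_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);

    // The two results typically differ in the low-order bits.
    printf("cpu: %.9f  gpu: %.9f  diff: %g\n", cpu_sum, gpu_sum, cpu_sum - gpu_sum);
    cudaFree(d_data);
    cudaFree(d_sum);
    return 0;
}
```

A real GPU reduction would use shared-memory trees rather than one
atomic accumulator, but that reorders the additions just the same.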
There have been cases where, if you rename the application executable,
you get different output and "performance".
Sure, that's more of a "driver issue", BUT if those vendors have that
sort of attitude and priorities, I wouldn't recommend using their
products for anything where calculation accuracy is important, no
matter what sort of buzzwords they throw at you (in fact, the more
buzzwords they use, the less likely I'd want to use their stuff for
that purpose).
I'd wait for other people to get burnt first.
But go ahead, I'm sure it can speed up _your_ database ;).
Regards,
Link.