Re: How the Planner in PGStrom differs from PostgreSQL?

From: Mark Anns <aishwaryaanns(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: How the Planner in PGStrom differs from PostgreSQL?
Date: 2016-11-21 13:15:17
Message-ID: 1479734117063-5931271.post@n3.nabble.com
Lists: pgsql-general

Which functions (for example) can and cannot be transformed into GPU source code?

What factor do you multiply with the actual CPU cost? For example, the default cpu_tuple_cost is 0.01.

Consider, for example, a sequential scan with cost=0.00..458.00: how is that cost adjusted to obtain the GPU cost, assuming a single GPU card?

Is there any documentation covering these GPU costing details?
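For context on what such a factor might look like, here is a minimal sketch of PostgreSQL's standard sequential-scan cost formula, together with a purely hypothetical GPU analogue. The GPU parameter names and values below are illustrative assumptions for the sake of the question, not PG-Strom's actual GUCs or defaults; only seq_page_cost and cpu_tuple_cost are real PostgreSQL parameters.

```python
# PostgreSQL's CPU sequential-scan cost is roughly:
#   pages * seq_page_cost + tuples * cpu_tuple_cost
SEQ_PAGE_COST = 1.0    # PostgreSQL default seq_page_cost
CPU_TUPLE_COST = 0.01  # PostgreSQL default cpu_tuple_cost

def seq_scan_cost(pages, tuples):
    """Total cost of a plain CPU sequential scan (I/O + per-tuple CPU)."""
    return pages * SEQ_PAGE_COST + tuples * CPU_TUPLE_COST

# Hypothetical GPU model (assumed names/values): a one-off setup cost for
# kernel launch / data transfer, plus a much cheaper per-tuple cost.
GPU_SETUP_COST = 4000.0   # assumption: fixed startup overhead
GPU_TUPLE_COST = 0.001    # assumption: per-tuple cost on the GPU

def gpu_scan_cost(pages, tuples):
    """Pages are still read from disk; per-tuple work moves to the GPU."""
    return pages * SEQ_PAGE_COST + GPU_SETUP_COST + tuples * GPU_TUPLE_COST

# Example: 358 pages and 10,000 tuples reproduce the 458.00 total cost
# mentioned above (358 * 1.0 + 10000 * 0.01 = 458.0).
print(seq_scan_cost(358, 10_000))
print(gpu_scan_cost(358, 10_000))
```

Under a model like this, the GPU plan is not a simple multiple of the CPU cost: the fixed setup cost makes it lose on small tables, while the smaller per-tuple cost makes it win once the row count is large enough.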

--
View this message in context: http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5931271.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
