From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: pi(dot)songs(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Hadoop backend?
Date: 2009-02-23 02:09:16
Message-ID: 603c8f070902221809w21b084ddra1b1636f31699959@mail.gmail.com
Lists: pgsql-hackers
On Sun, Feb 22, 2009 at 5:18 PM, pi song <pi(dot)songs(at)gmail(dot)com> wrote:
> One more problem is that data placement on HDFS is inherent, meaning you
> have no explicit control. Thus, you cannot place two sets of data which are
> likely to be joined together on the same node = uncontrollable latency
> during query processing.
> Pi Song
It would only be possible to have the actual PostgreSQL backends
running on a single node anyway, because they use shared memory to
hold lock tables and things. The advantage of a distributed file
system would be that you could access more storage (and more system
buffer cache) than would be possible on a single system (or perhaps
the same amount but at less cost). Assuming some sort of
per-tablespace control over the storage manager, you could put your
most frequently accessed data locally and the less frequently accessed
data into the DFS.
But you'd still have to pull all the data back to the master node to
do anything with it. Being able to actually distribute the
computation would be a much harder problem. Currently, we don't even
have the ability to bring multiple CPUs to bear on (for example) a
large sequential scan (even though all the data is on a single node).
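
To make the missing capability concrete, here is a minimal sketch (in Python, not PostgreSQL internals) of what "bringing multiple CPUs to bear on a sequential scan" means: the table is split into chunks, each worker scans one chunk, and the partial results are combined. The names `scan_chunk` and `TABLE` are hypothetical stand-ins, not anything in the PostgreSQL source.

```python
# Sketch: parallelizing a sequential scan by partitioning the data
# across worker processes. TABLE stands in for on-disk tuples.
from multiprocessing import Pool

TABLE = list(range(1_000_000))
N_WORKERS = 4

def scan_chunk(bounds):
    """Scan one contiguous slice of the table, counting tuples
    that match a predicate (here: divisible by 7)."""
    lo, hi = bounds
    return sum(1 for row in TABLE[lo:hi] if row % 7 == 0)

def parallel_seqscan():
    # Divide the table into N_WORKERS contiguous ranges; the last
    # range absorbs any remainder.
    step = len(TABLE) // N_WORKERS
    bounds = [(i * step,
               len(TABLE) if i == N_WORKERS - 1 else (i + 1) * step)
              for i in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        # Combine per-chunk partial counts into the final result.
        return sum(pool.map(scan_chunk, bounds))

if __name__ == "__main__":
    print(parallel_seqscan())
```

The hard part in a real database is not the partitioning itself but coordinating the workers' access to shared state (locks, buffers), which is exactly why the shared-memory architecture mentioned above ties the backends to a single node.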
...Robert