Re: postgres external table

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, Amy Smith <vah123(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: postgres external table
Date: 2010-01-18 15:10:22
Message-ID: 407d949e1001180710g61b7541ch7198f59bec9d73c4@mail.gmail.com
Lists: pgsql-general

On Mon, Jan 18, 2010 at 2:57 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I'm finding it hard to visualize a use-case for that.  We must postulate
> that the table is so big that you don't want to import it, and yet you
> don't feel a need to have any index on it.  Which among other things
> implies that every query will seqscan the whole table.  Where's the
> savings?

I think it's usually more a case of "my data is updated by other tools and
it would be hard/impossible/annoying to insert another step into the
pipeline to copy it into yet another place". The main benefit is that
you can query the authoritative data directly, without having to copy
it and without having to keep some process in place to do that copying
regularly.

Text files by themselves are kind of useless, but they're a baseline bit
of functionality on top of which to add more sophisticated external
sources: data available at some URL or over some kind of RPC -- to
which various conditions could be pushed down using external indexes --
or ultimately data in another database, to which whole joins could be
pushed.
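
For concreteness, the text-file baseline could look something like the
following. This is just a sketch borrowing PostgreSQL's file_fdw syntax;
the server, table, column, and file names are made up:

    -- register a wrapper for flat files and point a foreign table at one
    CREATE EXTENSION file_fdw;
    CREATE SERVER logfiles FOREIGN DATA WRAPPER file_fdw;

    CREATE FOREIGN TABLE access_log (
        ts      timestamptz,
        client  inet,
        request text
    ) SERVER logfiles
      OPTIONS (filename '/var/log/app/access.csv', format 'csv', header 'true');

    -- every query rescans the file, but it always sees the live data
    SELECT client, count(*) FROM access_log GROUP BY client;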

--
greg
