From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Analyzing foreign tables & memory problems
Date: 2012-04-30 15:23:25
Message-ID: 8329.1335799405@sss.pgh.pa.us
Lists: pgsql-hackers
"Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> writes:
> Tom Lane wrote:
>> I'm fairly skeptical that this is a real problem, and would prefer not
>> to complicate wrappers until we see some evidence from the field that
>> it's worth worrying about.
> If I have a table with 100000 rows and default_statistics_target
> at 100, then a sample of 30000 rows (300 * default_statistics_target)
> will be taken.
> If each row contains binary data of 1 MB (an image), then the
> data structure returned will use about 30 GB of memory, which
> will probably exceed maintenance_work_mem.
> Or is there a flaw in my reasoning?
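
[For concreteness, the quoted arithmetic can be checked with a minimal
SQL sketch. The factor of 300 rows per statistics target is ANALYZE's
standard sampling rule; the 1 MB-per-row width is the hypothetical
scenario from the quote, not a measured figure.]

-- Sample size: 300 rows per unit of statistics target.
-- At default_statistics_target = 100, that is 30000 rows.
-- Assuming a hypothetical 1 MB of binary data per sampled row:
SELECT 300 * 100 AS sample_rows,
       pg_size_pretty((300 * 100)::bigint * 1024 * 1024) AS sample_size;
-- => sample_rows = 30000, sample_size = 29 GB
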
Only that I don't believe this is a real-world scenario for a foreign
table. If you have a foreign table in which all, or even many, of the
rows are that wide, its performance is going to suck so badly that
you'll soon look for a different schema design anyway.
I don't want to complicate FDWs for this until it's an actual bottleneck
in real applications, which it may never be, and certainly won't be
until we've gone through a few rounds of performance refinement for
basic operations.
regards, tom lane