From: | "Mark Harris" <mharris(at)esri(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: reading large BYTEA type is slower than expected |
Date: | 2007-05-18 19:37:00 |
Message-ID: | D7BFFE348C53EF4E8AA0698B1E395FA9085ABEE3@flybywire.esri.com |
Lists: pgsql-performance
Tom,
Actually, the 120 records I quoted was a mistake. Since it is a three-band
image, the number of records should be 360: 120 records for each band.
Mark
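
As a rough sketch of how the initial read could be timed from a client: the
snippet below fetches the BYTEA blocks for one image and reports rows, bytes,
and elapsed time. The connection string, table, and column names here are
placeholders for illustration, not the actual schema discussed in this thread.

# Rough timing sketch for the initial BYTEA read described above.
# The DSN, table, and column names are placeholders, not the real schema.
import time
import psycopg2

conn = psycopg2.connect("dbname=rasterdb")  # placeholder connection string
cur = conn.cursor()

start = time.time()
# Fetch the ~360 rows (3 bands x 120 blocks) holding one image's BYTEA data.
cur.execute("SELECT block_data FROM raster_blocks WHERE raster_id = %s", (1,))
rows = cur.fetchall()
elapsed = time.time() - start

total_bytes = sum(len(r[0]) for r in rows)
print(f"fetched {len(rows)} rows, {total_bytes} bytes in {elapsed:.3f} s")

cur.close()
conn.close()

Running it once against a cold cache and again immediately afterward would
separate the disk-read cost from the time spent transferring the values to
the client.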
-----Original Message-----
From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Friday, May 18, 2007 10:48 AM
To: Mark Harris
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] reading large BYTEA type is slower than expected
"Mark Harris" <mharris(at)esri(dot)com> writes:
> We have recently ported our application to the postgres database. For
> the most part performance has not been an issue; however, there is one
> situation that is a problem, and that is the initial read of rows
> containing BYTEA values that have an average size of 2 kilobytes or
> greater. For BYTEA values postgres requires as much as 3 seconds to read
> the values from disk into its buffer cache.
How large is "large"?
(No, I don't believe it takes 3 sec to fetch a single 2Kb value.)
regards, tom lane