From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: "Sander, Ingo (NSN - DE/Munich)" <ingo(dot)sander(at)nsn(dot)com>
Cc: ext Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Runtime dependency from size of a bytea field
Date: 2010-10-07 17:16:36
Message-ID: AANLkTinYcfrnkr66HPzA7x0HL-h-NFjn+OTAASjJRvhA@mail.gmail.com
Lists: pgsql-performance
On Thu, Oct 7, 2010 at 10:49 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> On Thu, Oct 7, 2010 at 12:11 AM, Sander, Ingo (NSN - DE/Munich)
> <ingo(dot)sander(at)nsn(dot)com> wrote:
>> As written before, I have rerun the test a) without compression and b)
>> with enlarged BLOCK_SIZE. The result was the same.
>
> Using libpqtypes (source code follows after sig), stock postgres,
> stock table, I was not able to confirm your results. 4000-byte bytea
> blocks, in loops of 1000, I was able to send in about 600ms; 50000-byte
> blocks I was able to send in around 2 seconds on workstation-class
> hardware -- maybe something else is going on?
I re-ran the test, initializing the bytea data to random values (I
wondered whether uninitialized data getting awesome compression was
skewing the results).
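The concern about compressible test data can be illustrated with a quick sketch. This uses Python's zlib as a stand-in for PostgreSQL's internal pglz TOAST compression (not the actual backend code), and the 50000-byte size simply mirrors the test above:

```python
import os
import zlib

SIZE = 50000  # one bytea payload, matching the test size above

# Zero-filled (e.g. uninitialized or constant) data compresses almost away...
zero_block = b"\x00" * SIZE
zero_compressed = len(zlib.compress(zero_block))

# ...while random data is essentially incompressible, so it exercises
# the full transfer and storage path.
random_block = os.urandom(SIZE)
random_compressed = len(zlib.compress(random_block))

print(zero_compressed, random_compressed)
```

A benchmark fed zero-filled blocks would thus measure compression of a trivial input rather than realistic bytea throughput.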
This slowed the 50000-byte case down to around 3.5-4 seconds. That's
12-15 MB/sec from a single thread, which is IMNSHO not too shabby. If
your data compresses decently and you hack a good bang-for-the-buck
compression algorithm like lzo into the backend, you can easily double
that number.
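As a sanity check on that figure (my arithmetic, not from the original mail; it assumes the run was 1000 sends of 50000 bytes each, as in the earlier test): 50 MB over 3.5-4 seconds works out to roughly 12-15 MB/sec:

```python
payload = 50_000                   # bytes per bytea block (assumed)
loops = 1_000                      # sends per test run (assumed)
total_mb = payload * loops / 1e6   # 50 MB transferred in total

# 3.5-4 seconds per run brackets the quoted 12-15 MB/sec throughput
print(total_mb / 4.0, total_mb / 3.5)
```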
merlin