From: wieck(at)debis(dot)com (Jan Wieck)
To: tgl(at)sss(dot)pgh(dot)pa(dot)us (Tom Lane)
Cc: frankpit(at)pop(dot)dn(dot)net, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] MAX Query length
Date: 1999-07-15 08:36:16
Message-ID: m114h04-0003kMC@orion.SAPserv.Hamburg.dsh.de
Lists: pgsql-hackers
Tom Lane wrote:
>
> Bernard Frankpitt <frankpit(at)pop(dot)dn(dot)net> writes:
> > Tom Lane wrote:
> >> Sure: you want to be able to INSERT a tuple of maximum size. In the
> >> absence of dynamically sized text buffers, a reasonable estimate of
> >> the longest INSERT command of interest is going to depend on BLCKSZ.
>
> > Perhaps it would be a good idea to increase
> > the multiplier in
> > #define MAX_QUERY_SIZE (BLCKSZ * 2)
> > to something larger than 2.
>
> This entire chain of logic will fall to the ground anyway once we support
> tuples larger than a disk block, which I believe is going to happen
> before too much longer. So, rather than argue about what the multiplier
> ought to be, I think it's more productive to just press on with making
> the query buffers dynamically resizable...
Yes, even if we choose some other limit (Vadim suggested around
64K), a query operating on such tuples could still be much bigger.
I have already made some progress on a data type that uses a
simple, byte-oriented LZ compression buffer as its internal
representation.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#========================================= wieck(at)debis(dot)com (Jan Wieck) #