From: "Steven Flatt" <steven(dot)flatt(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Very long SQL strings
Date: 2007-06-21 18:33:01
Message-ID: 357fa7590706211133u2ae43d09id1653ca63d7a7838@mail.gmail.com
Lists: pgsql-performance
I can't seem to find a definitive answer to this.
It looks like Postgres does not enforce a limit on the length of an SQL
string. Great. However, is there some point at which a query string becomes
so long that it affects performance? Here's my particular case: consider an
INSERT statement where you're using the new multi-row VALUES clause or
SELECT ... UNION ALL to group tuples together. Is it always better to group
as many together as possible?
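For reference, the two bulking styles in question might look like this (the `toy` table and its columns are made up for illustration):

```sql
-- Multi-row VALUES clause (new in PostgreSQL 8.2)
INSERT INTO toy (id, label) VALUES
    (1, 'one'),
    (2, 'two'),
    (3, 'three');

-- Older SELECT ... UNION ALL equivalent
INSERT INTO toy (id, label)
SELECT 1, 'one'
UNION ALL SELECT 2, 'two'
UNION ALL SELECT 3, 'three';
```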
For example, on a toy table with two columns, I noticed about a 20%
performance improvement when bulking 1000 tuples into one INSERT statement
as opposed to running 1000 individual INSERTs. Would it be the same for
10000? 100000?
Does it depend on the width of the tuples or the data types?
Are there any values A and B such that running two statements, one grouping
A tuples and one grouping B tuples, would be faster than grouping A+B
tuples in one statement?
Steve