From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: philb(at)vodafone(dot)ie
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Query plan for very large number of joins
Date: 2005-06-03 23:23:57
Message-ID: 1117841037.3844.1250.camel@localhost.localdomain
Lists: pgsql-performance
On Fri, 2005-06-03 at 13:22 +0100, philb(at)vodafone(dot)ie wrote:
>
> >>> I am using PostgreSQL (7.4) with a schema that was generated
> >>> automatically (using hibernate). The schema consists of about 650
> >>> relations. One particular query (also generated automatically)
> >>> consists of left joining approximately 350 tables.
> Despite being fairly restricted in scope,
> the schema is highly denormalized hence the large number of tables.
Do you mean normalized? Or do you mean you've pushed the superclass
details down onto each of the leaf classes?

I guess I'm interested in what type of modelling led you to have so many
tables in the first place?
Gotta say, I've never seen a 350-table join before in a real app.

Wouldn't it be possible to smooth out the model and end up with fewer
tables? Or simply break things up somewhere slightly down from the root
of the class hierarchy?
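[Editorial note: to illustrate the suggestion above, here is a minimal sketch of the tradeoff between a joined-table (table-per-subclass) mapping, which costs one LEFT JOIN per subclass table, and a single-table mapping, which pulls subclass columns up as nullable columns. All table and column names are hypothetical, and SQLite stands in for the PostgreSQL 7.4 setup in the thread.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Joined-table mapping: one table per class in a hypothetical
# Party -> Person hierarchy; every read needs a LEFT JOIN per subclass.
cur.executescript("""
CREATE TABLE party  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE person (id INTEGER PRIMARY KEY REFERENCES party(id),
                     email TEXT);
INSERT INTO party  VALUES (1, 'Alice');
INSERT INTO person VALUES (1, 'alice@example.com');
""")
joined = cur.execute("""
    SELECT pa.name, pe.email
    FROM party pa LEFT JOIN person pe ON pe.id = pa.id
""").fetchall()

# Single-table mapping: subclass columns become nullable columns on one
# wide table; the equivalent read is a plain scan with no joins at all.
cur.executescript("""
CREATE TABLE party_flat (id INTEGER PRIMARY KEY, kind TEXT,
                         name TEXT, email TEXT);
INSERT INTO party_flat VALUES (1, 'person', 'Alice',
                               'alice@example.com');
""")
flat = cur.execute("SELECT name, email FROM party_flat").fetchall()

# Both mappings return the same rows; only the join count differs.
assert joined == flat == [('Alice', 'alice@example.com')]
```

With ~350 generated tables in play, collapsing even part of the hierarchy this way shrinks the join count the planner has to deal with, at the cost of nullable columns and a discriminator (`kind` above).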
Best Regards, Simon Riggs