From: Greg Stark <gsstark(at)mit(dot)edu>
To: Phil Endecott <spam_from_postgresql_general(at)chezphil(dot)org>
Cc: pgsql-general(at)postgresql(dot)org, Greg Stark <gsstark(at)mit(dot)edu>
Subject: Re: Megabytes of stats saved after every connection
Date: 2005-07-30 14:23:47
Message-ID: 87hdec74p8.fsf@stark.xeocode.com
Lists: pgsql-general
Phil Endecott <spam_from_postgresql_general(at)chezphil(dot)org> writes:
> Greg Stark wrote:
>
> > You're omitting the time spent finding the actual table for the correct
> > user in your current scheme. That's exactly the same as the log(u) factor
> > above.
>
> I hope not - can anyone confirm?
>
> I have the impression that within a plpgsql function, the table lookup cost
> happens once, and subsequent accesses to the same table are cheap. In fact this
> characteristic has caused problems for me in the past, see
> http://archives.postgresql.org/pgsql-general/2004-09/msg00316.php
>
> I hope that the same is true of PQexecPrepared - can anyone confirm?
Are you really keeping prepared queries for each of your thousands of users?
Then I have to wonder about the time to look up the relevant prepared query
from amongst the thousands of prepared queries in the system.
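(To make that concrete, here's a minimal libpq sketch of the pattern I
understand you to be describing; the connection string, table name and
statement name are made up for illustration, not taken from your schema.
PQprepare registers a named statement with the backend, and each
PQexecPrepared call then has to locate that name among however many
statements the session has prepared -- that's the lookup I mean.)

    /* minimal sketch -- "items_1234" and "fetch_items_user_1234" are
     * hypothetical per-user names, purely for illustration */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* prepare one statement against this user's private table */
        PGresult *res = PQprepare(conn, "fetch_items_user_1234",
                                  "SELECT id, value FROM items_1234 WHERE id = $1",
                                  1, NULL);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
        PQclear(res);

        /* later: execute by statement name, reusing the parsed statement
         * but still paying the lookup of that name in the session */
        const char *params[1] = { "42" };
        res = PQexecPrepared(conn, "fetch_items_user_1234",
                             1, params, NULL, NULL, 0);
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("value = %s\n", PQgetvalue(res, 0, 1));
        PQclear(res);

        PQfinish(conn);
        return 0;
    }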
I'm not saying it's a problem; it's (presumably) a small cost, just like
looking up the table in the system tables (using indexes) is a small cost.
And just like having another level in the btree index would be a small cost.
I'm just saying you're not getting something for free here by having lots of
small indexes instead of one big one. There can be some small linear gains,
like the database using a sequential scan instead of an index scan for some
queries, but there's no algorithmic gain here.
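(To put rough numbers on it, with u users and n rows per user:

    one big index:       ~ log(n * u) = log(n) + log(u)  comparisons
    u per-user indexes:  ~ log(u) to find the right table or prepared
                           statement, then ~ log(n) within its index

Either way the log(u) term gets paid somewhere; it just moves around.)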
> I could use something like "CONNECT BY", though last time I investigated I
> believe there were some stability concerns with the patch.
I think the main problem was that it changed some internal structures such
that a database created with a postgres with that patch was incompatible with
a postgres without the patch. And if you switched back and forth you corrupted
the database.
> Thanks for your suggestions, Greg, but I think I know what I'm doing. The
> PostgreSQL core copes well with this setup. It's just peripheral things, like
> autovacuum and this stats writing issue, where poor big-O complexity had
> gone unnoticed.
Well, that's useful for Postgres development in a "guinea pig" sort of way at
least :)
--
greg