From: "Rod Taylor" <rbt(at)zort(dot)ca>
To: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>, "Arguile" <arguile(at)lucentstudios(dot)com>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Intervening in Parser -> Planner Stage
Date: 2002-03-06 01:03:23
Message-ID: 04a001c1c4aa$fd67d790$b002000a@jester
Lists: pgsql-hackers
> > I need to give community access directly to the database, so I need
> > to impose some sane controls. As I can't hold any of that class of
> > users accountable, I need to impose the limits in the software.
>
> For comparison, I know that MySQL allows you to set the max queries
> per hour for a particular user...
Which is completely useless. A single table with a few thousand
records joined to itself 40 or so times as a Cartesian product will
completely wipe out most MySQL boxes, and that's only one query :)
PostgreSQL has the same problem.
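A quick back-of-envelope sketch (plain Python, not PostgreSQL code; the 2,000-row / 40-copy figures are only illustrative) of why a queries-per-hour quota misses the point: result-set size grows exponentially with the number of table copies in an unconstrained join.

```python
def cartesian_rows(n_rows: int, n_copies: int) -> int:
    """Rows produced by a FROM list containing n_copies copies of an
    n_rows-row table with no join condition (a pure Cartesian product)."""
    return n_rows ** n_copies

# A tiny 3-row table joined to itself once already yields 9 rows...
print(cartesian_rows(3, 2))
# ...and a few thousand rows across ~40 copies is astronomically large,
# far beyond what any single query quota would ever catch.
print(cartesian_rows(2000, 40) > 10**100)
```

One pathological query like this does more damage than thousands of well-behaved ones, which is why a per-query limit is more interesting than a per-hour count.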
What would be interesting is the ability to limit a query to a maximum
cost of X. This requires knowing the cost of a function call, which the
planner currently estimates as zero.
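Much later PostgreSQL releases expose the planner's estimate in machine-readable form via EXPLAIN (FORMAT JSON), so a client-side gatekeeper along these lines can at least be sketched. The threshold and helper name below are invented for illustration, and the caveat above still applies: function calls contribute little to the estimate, so a cheap-looking plan can still be expensive.

```python
import json

# Hypothetical cost ceiling -- any real deployment would tune this.
MAX_COST = 100_000.0

def query_allowed(explain_json: str, max_cost: float = MAX_COST) -> bool:
    """Accept a query only if the planner's top-level Total Cost estimate,
    as reported by EXPLAIN (FORMAT JSON), is under the ceiling."""
    plan = json.loads(explain_json)[0]["Plan"]
    return plan["Total Cost"] <= max_cost

# Abridged sample EXPLAIN output for a cheap sequential scan...
cheap = json.dumps([{"Plan": {"Node Type": "Seq Scan", "Total Cost": 35.5}}])
# ...and for a runaway Cartesian product.
huge = json.dumps([{"Plan": {"Node Type": "Nested Loop", "Total Cost": 1e12}}])

print(query_allowed(cheap))  # True
print(query_allowed(huge))   # False
```

The same check could in principle live server-side, between planner and executor, which is where this thread's parser -> planner discussion points.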
Perhaps even 'nice' processes running for certain users / groups.
Offline or background processes could be niced to 20 to allow the
interactive sessions to run unnoticed. Not so good if the background
process locks a bunch of stuff though -- so that may be more trouble
than it's worth.
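The OS primitive involved can be shown in a few lines of Python; note this is only the generic mechanism -- PostgreSQL does not actually renice backends per user, and a process can raise but not lower its own niceness without privileges.

```python
import os

# Read the current niceness (an increment of 0 changes nothing).
current = os.nice(0)
# A background worker could voluntarily drop its priority so that
# interactive sessions win the CPU; no special privileges needed.
lowered = os.nice(5)
print(lowered - current)  # typically 5; niceness clamps at the OS maximum of 19
```

Lower scheduling priority only helps with CPU contention; as noted above, it does nothing about a background process sitting on locks.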
Either way, the user can simply attempt to make thousands of database
connections, which would effectively block anyone else from connecting
at all.
Next message: Neil Conway, 2002-03-06 01:27:54, "Re: ext2 vs ext3 vs RAID5 (software) benchmark"
Previous message: Manuel Sugawara, 2002-03-06 01:01:24, "ext2 vs ext3 vs RAID5 (software) benchmark"