From: Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl>
To: Stefan Keller <sfkeller(at)gmail(dot)com>
Cc: pgsql-general List <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to guard PostgreSQL from overcharged and/or malicious (read-only) queries?
Date: 2010-12-20 11:43:38
Message-ID: AE8DEB67-A7DA-40E2-A386-4559E5E99B47@solfertje.student.utwente.nl
Lists: pgsql-general
On 20 Dec 2010, at 10:05, Stefan Keller wrote:
> I'd like to guard postgres from overcharged and/or malicious queries.
>
> The queries are strictly read-only (from a SQL user's perspective).
> For doing this I'd like to code two functions (preferably pl/pgsql):
>
> 1. Filter out all SQL commands which are *not* read-only (no
> DROP/DELETE/UPDATE/TRUNCATE).
Most people do this using permissions.
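For example, a role that can only read (the role, database and schema names here are just placeholders; GRANT ... ON ALL TABLES IN SCHEMA needs 9.0 or later, on older versions you grant table by table):

CREATE ROLE readonly_user LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE mydb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;

With that in place, DROP/DELETE/UPDATE/TRUNCATE simply fail with a permission error and there's no need to filter the SQL text yourself.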
> 2. Get the estimated time (units) from PostgreSQL planner in a
> reliable way (if possible standard/ANSI).
I don't think there's a way to do that directly, not without hacking the source.
What you can do is have all users go through a SECURITY DEFINER type of function that does this for them. That function can then read the output of EXPLAIN <query> for its estimates. Those aren't times but cost estimates, expressed in arbitrary planner units. The actual time taken depends on your hardware; you would need to do some measurements to see how planned costs and actual times relate.
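A minimal sketch of such a wrapper (the function name is mine, and the regular expression assumes the default text format of EXPLAIN output):

CREATE OR REPLACE FUNCTION estimated_cost(p_query text)
RETURNS numeric
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
DECLARE
    plan_line  record;
    total_cost numeric;
BEGIN
    -- EXPLAIN without ANALYZE only plans the query, it never executes it.
    FOR plan_line IN EXECUTE 'EXPLAIN ' || p_query LOOP
        -- The top line of the plan looks like:
        --   Seq Scan on foo  (cost=0.00..431.00 rows=10000 width=4)
        -- so take the number after the '..' as the total estimated cost.
        total_cost := substring(plan_line."QUERY PLAN"
                                from 'cost=[0-9.]+[.][.]([0-9]+[.][0-9]+)');
        EXIT WHEN total_cost IS NOT NULL;
    END LOOP;
    RETURN total_cost;
END;
$$;

You would compare the returned cost against whatever threshold you pick and only run the query if it stays below that.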
I'm not sure this is a good idea though.
Firstly, testing the query plan adds a little overhead to every query coming in. It's not a lot, but if someone fires a lot of small fast queries it could become a problem. You would be hurting the people who're using your database correctly, instead of the people who're "abusing" it.
Secondly, you could achieve a similar effect by limiting the amount of time a query is allowed to run. The statement_timeout setting cancels any statement that runs longer than the configured limit.
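For example, per role or per session (30 seconds is an arbitrary number, and readonly_user is the role from the earlier example):

ALTER ROLE readonly_user SET statement_timeout = '30s';
-- or only for the current session:
SET statement_timeout = '30s';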
Thirdly... Reliable estimates??? Lol!
Seriously, as hard as Postgres tries to keep statistics that make sense, I don't think they can ever be considered entirely reliable. You may not be vacuuming frequently enough, your statistics target may be too small or your data might not be suitable for statistical analysis (random numbers and very unbalanced distributions are good examples).
Therefore, if you plan to rely on the estimated time a query takes, you're going to be wrong sometimes.
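If you end up relying on the estimates anyway, at least give the planner decent statistics to work with, for example (the table, column and target of 1000 are only examples):

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;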
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.