From: Sim Zacks <sim(at)compulab(dot)co(dot)il>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: plpythonu / using pg as an application server
Date: 2010-06-01 10:56:17
Message-ID: 4C04E751.2090906@compulab.co.il
Lists: pgsql-general
On 6/1/2010 11:12 AM, Szymon Guz wrote:
>
>
> 2010/6/1 Sim Zacks <sim(at)compulab(dot)co(dot)il>
>
> PG 8.2
>
> I am using plpythonu to add application server functionality to my
> postgresql database.
>
> For example, I have triggers and functions that FTP files, send
> email, process files, etc.
>
>
> Is there any good reason not to include this functionality directly in
> the database? (Too much parallel processing, engine not equipped for
> that kind of processing, threading issues...)
>
>
> Thanks
> Sim
>
>
> The problem is that such a trigger can run for a long time and
> performs non-transactional operations. When you perform an insert or
> update and the trigger sends an email, the insert/update takes much
> longer while blocking other transactions. As a result the overall
> database efficiency is much worse.
> Another problem is that sending an email can sometimes fail; should
> the insert/update then be rolled back?
> I'd rather use some message queue, so the trigger just inserts the
> email info into an `emails` table instead of sending it. Another
> trigger would just insert some information into an `ftpsites` table
> to indicate an FTP address to download from. There should also be a
> background process that selects the information from those tables,
> sends the emails, processes the FTP sites and so on.
>
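The queue-only trigger Szymon describes would look roughly like this in plpythonu (a minimal sketch; the emails table, the orders table and all column names are just placeholders):

CREATE OR REPLACE FUNCTION queue_email() RETURNS trigger AS $$
    # Don't send anything inside the transaction, just record what to send.
    # TD["new"] is the row that fired the trigger; the column names here
    # stand in for whatever the real tables look like.
    plan = plpy.prepare(
        "INSERT INTO emails (recipient, subject, body) VALUES ($1, $2, $3)",
        ["text", "text", "text"])
    plpy.execute(plan, [TD["new"]["email"],
                        "Order received",
                        "We received order %s" % TD["new"]["orderid"]])
    return None   # let the original INSERT proceed unchanged
$$ LANGUAGE plpythonu;

CREATE TRIGGER orders_queue_email
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE queue_email();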
I am actually using a number of methods.
Triggers are only used when the function has to be completed as part of
the transaction or its failure is considered an error. A big advantage
of plpythonu is also that you can use try..except blocks, so that if
something fails you can handle the failure and still allow the
transaction to complete.
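For example, an FTP step that should not kill the transaction can be wrapped like this (a rough sketch only; the host, credentials and the errors table are invented for illustration):

CREATE OR REPLACE FUNCTION ftp_file_trigger() RETURNS trigger AS $$
    # Try to FTP the file named in the new row; on failure, log the error
    # instead of aborting the transaction that fired the trigger.
    import ftplib
    try:
        ftp = ftplib.FTP("ftp.example.com", "user", "password")
        f = open(TD["new"]["filepath"], "rb")
        try:
            ftp.storbinary("STOR " + TD["new"]["filename"], f)
        finally:
            f.close()
            ftp.quit()
    except Exception, e:
        plan = plpy.prepare(
            "INSERT INTO errors (source, message) VALUES ($1, $2)",
            ["text", "text"])
        plpy.execute(plan, ["ftp_file_trigger", str(e)])
    return None
$$ LANGUAGE plpythonu;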
For all other functions, such as email and FTP, I am using either
queues or the Listen/Notify mechanism.
The queues are processed by a cron job that calls a database function
to complete the task. I have a database function called
SendQueuedEmails which loops over the emaildetails table and sends each
email one by one. Any errors are written to the errors table and a
"bounce" email is sent to the user.
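SendQueuedEmails is roughly along these lines (simplified sketch; the real table has more columns, and the SMTP host and addresses here are placeholders):

CREATE OR REPLACE FUNCTION SendQueuedEmails() RETURNS void AS $$
    # Cron runs: SELECT SendQueuedEmails();
    # Loop over the queued rows, send each one, mark it sent, log failures.
    import smtplib
    from email.MIMEText import MIMEText

    rows = plpy.execute("SELECT id, recipient, subject, body "
                        "FROM emaildetails WHERE sent = false")
    for row in rows:
        try:
            msg = MIMEText(row["body"])
            msg["Subject"] = row["subject"]
            msg["From"] = "noreply@example.com"
            msg["To"] = row["recipient"]
            server = smtplib.SMTP("localhost")
            server.sendmail("noreply@example.com", [row["recipient"]],
                            msg.as_string())
            server.quit()
            plpy.execute("UPDATE emaildetails SET sent = true "
                         "WHERE id = %d" % row["id"])
        except Exception, e:
            plan = plpy.prepare(
                "INSERT INTO errors (source, message) VALUES ($1, $2)",
                ["text", "text"])
            plpy.execute(plan, ["SendQueuedEmails", str(e)])
$$ LANGUAGE plpythonu;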
I use Listen/Notify for what I call "asynchronous triggers": something
that I want to happen immediately upon a specific transaction, but
where I don't want to wait for the result and the transaction does not
depend on it. Errors are written to an error table so I can review them
later, and a "bounce" email is sent when relevant.
I just prefer to have all the functionality in the database, so I have a
single location for all server code and a single standard method of
calling those functions.
Sim