From: | "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com> |
---|---|
To: | Stuart McGraw <smcg4191(at)mtneva(dot)com> |
Cc: | pgsql-general <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: existence of a savepoint? |
Date: | 2018-05-30 03:36:38 |
Message-ID: | CAKFQuwZpA7n6vw5Ms8CY8Qi6JSq5_C2LjCEWL58v12SKryPxbQ@mail.gmail.com |
Lists: pgsql-general
On Tuesday, May 29, 2018, Stuart McGraw <smcg4191(at)mtneva(dot)com> wrote:
> But in my case I don't control the size of the input data
>
Not in production, but you do have an idea of both the size and the complexity
of the data, and should be able to generate performance test scenarios, plus
the related monitoring queries (system and service), to get some idea. The
specifics are beyond my experience, but this is not brand new technology and
people have done similar things with it before.
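For example, here is a rough psql sketch of one such scenario, under the
assumption that the question is how cost grows with the number of savepoints
held open in a single transaction (the count of 50000 is just a placeholder to
vary between runs):

    \timing on
    BEGIN;
    -- emit one SAVEPOINT statement per simulated input record and run each
    -- of them inside the open transaction via \gexec
    SELECT format('SAVEPOINT sp_%s', i)
    FROM generate_series(1, 50000) AS i \gexec
    COMMIT;

Re-running that with different counts against a copy of the real schema should
give a first-order idea of where per-savepoint overhead starts to matter.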
And, as an extension to what you said: given that lack of control, you are
going to want to monitor performance in production anyway, even with a
supposedly bullet-resistant solution.
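As a starting point on the monitoring side, something as simple as watching for
long-running transactions in pg_stat_activity can show when things start to
drag; the one-minute threshold here is an arbitrary assumption:

    SELECT pid, usename, state,
           now() - xact_start AS xact_age,
           query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '1 minute'
    ORDER BY xact_age DESC;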
David J.