Re: unorthodox use of PG for a customer

From: Andrew Kerber <andrew(dot)kerber(at)gmail(dot)com>
To: richter(at)simkorp(dot)com(dot)br
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: unorthodox use of PG for a customer
Date: 2018-08-24 19:35:24
Message-ID: CAJvnOJaSgzHWuROzMTWBrduqQcLPQ_UA+WxQSRuPLG-_1P+mQw@mail.gmail.com
Lists: pgsql-general

Unless I am missing something, it sounds like you might be able to do this
with an NFS export shared to each workstation. But I am not sure I
understood what you were describing either.
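For what it's worth, a minimal sketch of what such an export could look like (hostnames, paths, and the allowed network are made up; the options would need tuning for real use):

```
# /etc/exports on the workstation (hypothetical path and client network)
/work/areas  10.0.0.0/24(rw,sync,no_subtree_check)

# on each batch node, mount the exported directory
mount -t nfs workstation01:/work/areas /mnt/workareas
```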

On Fri, Aug 24, 2018 at 2:22 PM Edson Carlos Ericksson Richter <
richter(at)simkorp(dot)com(dot)br> wrote:

> > On 24/08/2018 16:07, David Gauthier wrote:
> > I tried to convince him of the wisdom of one central DB. I'll try again.
> >
> > >>So are the 58 databases (stores) on the workstations going to be
> > working with data independent of each other, or is the data
> > shared/synced between instances?
> >
> > No, 58 workstations, each with its own DB. There's a concept of a
> > "workarea" (really a dir with a lot of stuff in it) where the script
> > runs. He wants to tie all the runs for any one workarea together and
> > is stuck on the idea that there should be a separate DB per workarea.
> > I told him you could just stick all the data in the same table just
> > with a "workarea" column to distinguish between the workareas. He
> > likes the idea of a separate DB per workarea. He just doesn't get it.
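To make the "workarea column" suggestion concrete, here is a minimal sketch (using SQLite in-memory purely to keep it self-contained and runnable; in PG it would be the same schema as one shared table in one central database, and the table/column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE runs (
        id       INTEGER PRIMARY KEY,
        workarea TEXT NOT NULL,   -- distinguishes the workareas
        payload  TEXT
    )
""")
conn.executemany(
    "INSERT INTO runs (workarea, payload) VALUES (?, ?)",
    [("wa1", "run A"), ("wa1", "run B"), ("wa2", "run C")],
)
# All runs for any one workarea come out of the single shared table:
rows = conn.execute(
    "SELECT payload FROM runs WHERE workarea = ?", ("wa1",)
).fetchall()
print([r[0] for r in rows])  # ['run A', 'run B']
```

One table plus a filter column gives the same per-workarea grouping as 58 separate databases, without 58 things to administer.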
> >
> > >>I'm no expert, but I've had dozens of PostgreSQL databases running
> > mostly without manual maintenance for years.
> >
> > Ya, I've sort of had the same experience with PG DBs. Like the
> > Eveready bunny, they just keep on running. But these workstations
> > are pretty volatile, as they keep overloading and crashing them. Of
> > course any DB running there would die too and have to be
> > restarted/recovered. So the place for the DB is really elsewhere, on
> > an external server that wouldn't be subject to this volatility and
> > crashing. I told him about transactions and how you could prevent
> > partial writing of data sets.
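The transaction point can be shown in a few lines (again SQLite, only so the example is self-contained; PG gives the same all-or-nothing behavior with BEGIN/COMMIT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (job TEXT, value INTEGER)")

# Write a data set atomically: either every row lands, or none do.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO results VALUES ('job1', 1)")
        raise RuntimeError("simulated crash mid-write")
except RuntimeError:
    pass

count = conn.execute("SELECT count(*) FROM results").fetchone()[0]
print(count)  # 0 - the partial write was rolled back
```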
> >
> > So far, I'm not hearing of anything that looks like a solution given
> > the constraints he's put on this. Don't get me wrong, he's a very
> > smart and sharp software engineer. Very smart. But for some reason,
> > he doesn't like the client/server DB model which would work so nicely
> > here. I'm just trying to make sure I didn't miss some sort of
> > solution, PG or not, that would work here.
> >
> > Thanks for your interest and input everyone !
> >
> >
> >
> >
> > On Fri, Aug 24, 2018 at 2:39 PM Edson Carlos Ericksson Richter
> > <richter(at)simkorp(dot)com(dot)br> wrote:
> >
> > On 24/08/2018 15:18, David Gauthier wrote:
> > > Hi Everyone:
> > >
> > > I'm going to throw this internal customer request out for ideas,
> > > even though I think it's a bit crazy. I'm on the brink of telling
> > > him it's impractical and/or inadvisable. But maybe someone has a
> > > solution.
> > >
> > > He's writing a script/program that runs on a workstation and needs
> > > to write data to a DB. This process also sends work to a batch
> > > system on a server farm external to the workstation that will
> > > create multiple, parallel jobs/processes that also have to write
> > > to the DB as well. The workstation may have many of these jobs
> > > running at the same time. And there are 58 workstations, which all
> > > have/use locally mounted disks for this work.
> > >
> > > At first blush, this is easy. Just create a DB on a server and
> > > have all those clients work with it. But he's also adamant about
> > > having the DB on the same server(s) that ran the script AND on the
> > > locally mounted disk. He said he doesn't want the overhead,
> > > dependencies and worries of anything like an external DB with a
> > > DBA, etc. He also wants this to be fast.
> > > My first thought was SQLite. Apparently, they now have some sort
> > > of multiple, concurrent write ability. But there's no way those
> > > batch jobs on remote machines are going to be able to get at the
> > > locally mounted disk on the workstation. So I dismissed that idea.
> > > Then I thought about having 58 PG installs, one per workstation,
> > > each serving all the jobs pertaining to that workstation. That
> > > could work. But 58 DB instances? If he didn't like the idea of one
> > > DBA, 58 can't be good. Still, the DB would be on the workstation,
> > > which seems to be what he wants.
> > > I can't think of anything better. Does anyone have any ideas?
> > >
> > > Thanks in Advance !
> > >
> >
> > I'm no expert, but I've had dozens of PostgreSQL databases running
> > mostly without manual maintenance for years; just do the backups
> > and you are fine.
> > In any case, if you need any kind of maintenance, you can program
> > it in your app (even backup, restore and vacuum) - it is easy to
> > issue administrative commands through the available interfaces.
> > And if the database becomes inaccessible, no matter whether it is
> > centralized or local, you will need someone physically there to
> > fix it.
> > AFAIK, you don't even need the PostgreSQL installer - you can run
> > it embedded if you wish.
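To illustrate Edson's point about scripting the maintenance, a hedged crontab sketch (database name and paths are hypothetical; `pg_dump` and `vacuumdb` are the standard PostgreSQL client tools):

```
# nightly compressed backup at 02:00 (hypothetical db name and path)
0 2 * * *  pg_dump -Fc workdb > /backups/workdb.dump
# weekly manual vacuum/analyze (autovacuum usually makes this unnecessary)
0 3 * * 0  vacuumdb --analyze workdb
```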
> >
> > Just my2c,
> >
> > Edson
> >
> >
> I think it's worth adding that, PG or not, if the workstation
> crashes, you will be in trouble with ANY database or file solution
> you choose - but with PG you can minimize the risk by fine-tuning the
> flush to disk (both in PG and in the OS). When correctly tuned, it
> works like a tank, and is hard to defeat.
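For reference, the flush-to-disk tuning Edson alludes to lives in postgresql.conf; a sketch of the usual knobs (values are illustrative, not recommendations - check the docs for your version):

```
fsync = on                 # never turn off if you care about crash safety
synchronous_commit = off   # trades a small window of recent commits for speed
full_page_writes = on      # protects against torn pages after a crash
```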
>
> Regards,
>
> Edson.
>
>

--
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'
