Re: Really unique session ID - PID + connection timestamp?

From: Durumdara <durumdara(at)gmail(dot)com>
To:
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Really unique session ID - PID + connection timestamp?
Date: 2016-04-10 07:07:34
Message-ID: CAEcMXhnqiZ8O7OZKY=fyu7q54VkfbRUhGUSkhDjAASUgHo3TMw@mail.gmail.com
Lists: pgsql-general

Dear Adrian!

Again: as far as I can see, the leading blocks of the code were stripped by
the mailing system.

We have an "ourlocks" table which holds records (TableName, RecordID,
SessionInnerID, TimeStamp, etc., with a primary key on TableName/RecordID).

If anybody wants to lock a record "for a long time", i.e. across
transactions, it tries to insert a new record here.
If another process wants to lock the same record, it either sees the
existing record or gets a unique-violation error, so it stops.
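As a sketch, the table and the lock attempt could look like this (the
column names and types are my assumptions from the description above, not
the real schema):

```sql
-- Hypothetical sketch of the "ourlocks" table described above;
-- the actual columns in our system may differ.
CREATE TABLE ourlocks (
    tablename      text        NOT NULL,
    recordid       bigint      NOT NULL,
    sessioninnerid integer     NOT NULL,
    locked_at      timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tablename, recordid)
);

-- Trying to take a long-lived lock: a unique violation here means
-- another session already holds the lock on (Offers, 117).
INSERT INTO ourlocks (tablename, recordid, sessioninnerid)
VALUES ('Offers', 117, 42);
```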

This is not meant to protect all tables, only the main entities that have
many subtables, like "Products", "Offers", etc.
We can't use transactions, because in the editor the user must post/commit
subdata.
And PG is different from other DBs: if ANY statement fails, it rolls back
the whole transaction automatically (see my prior mail).
In Firebird we can do simple record-lock protection with a second
transaction on the main connection, but in MS SQL and PG we can't.

So we used real records in a real table. But how do we clean up if a client
disconnected without deleting its own records?
For this we created our own sessioninfo table with inner id, user id,
timestamp, and [connectionid, connectiontime].
The locking mechanism checks for an existing lock (e.g. Offer, 117); if
anybody holds the record, it checks whether that session is still alive.
If the connection (PID + timestamp) is still active, the lock is valid, and
we can show information like "who is editing, please ask him to release
it, etc.".
If not, we can delete the stale record and insert our own.
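The liveness check can be expressed as a query joining our sessioninfo
table against pg_stat_activity (the sessioninfo column names and the inner
id value here are assumptions for illustration):

```sql
-- Does the session recorded in sessioninfo still exist with the same
-- PID and the same backend start time? If not, the lock is stale.
SELECT EXISTS (
    SELECT 1
    FROM pg_stat_activity a
    JOIN sessioninfo s
      ON s.connectionid   = a.pid
     AND s.connectiontime = a.backend_start
    WHERE s.innerid = 42    -- the lock holder's inner session id (example)
) AS holder_is_alive;
```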

The main goal is to protect the main entities. This works in MS SQL.
My question was how to get my client's connection timestamp, the same way
pg_backend_pid() returns the PID.
But as you wrote, I can get it from the activity view. Because no other
PID can be the same as mine, I can select my own row from that view.
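Concretely, the backend's own start timestamp can be read like this (I
assume the "activity log" is the pg_stat_activity view):

```sql
-- backend_start is the connection timestamp of the current session.
SELECT backend_start
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```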

You said this has dangers (like GUID repetition). Yes, it has. But it's
not the BAD SYSADMIN that worries me; the summer/winter (daylight-saving)
time change is dangerous (the backward jump). In PG we can extend our
"guid" with IP and port too, and that could be safe enough for us.
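A sketch of the extended identifier (PID + start time + client
address/port), all available from the same view:

```sql
-- All four values together form the session "guid" described above.
SELECT pid,
       backend_start,
       client_addr,
       client_port
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```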

Thanks

2016-04-09 16:05 GMT+02:00 Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>:

> On 04/09/2016 01:30 AM, Durumdara wrote:
>
>> Dear Everybody!
>>
>>
>> In MS we had a "persistent lock" structure and method.
>> This works across transactions because it is based on real records in a
>> real table with a primary key (tablename + id).
>>
>> For garbage collection we had a special session info.
>> In MS the session id is a smallint, so it can repeat after server
>> restarts, but my colleague found a "session creation timestamp".
>> This is a key which is unique.
>> With this we can check for died sessions and we can clean their records.
>>
>
> It might help to explain more what it is you are trying to achieve.
>
> First I am not sure what you mean by 'persistent lock', especially as it
> applies to Postgres?
>
> Second, I assume by garbaging you mean garbage collection of something?
> If that is the case what exactly are you garbage collecting?
> I see 'clean records', what records would these be?
> In particular, on Postgres, where are you going to do this?
>
>
>> We want to create the same mechanism.
>>
>
> If the above questions did not already cover this, what mechanism?
>
>
>> I know there are advisory locks in PG, but I want to use the session id.
>>
>> This could be:
>> pg_backend_pid()
>>
>> But the PID may repeat.
>> Where can I get a timestamp or some other unique data with which I can
>> create a combined primary key?
>>
>> Thanks for your help!
>>
>> dd
>>
>>
>
> --
> Adrian Klaver
> adrian(dot)klaver(at)aklaver(dot)com
>
