From: Philipp Kraus <philipp(dot)kraus(at)flashpixx(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: logger table
Date: 2012-12-24 03:01:42
Message-ID: 38526465-08D1-40A0-B2AB-A4DAA3ACB858@flashpixx.de
Lists: pgsql-general
Hello,
I need some ideas for creating a PG-based logger. I have a job which can run more than once, so the PK is at the moment (job id, cycle number).
The inserts into this table happen in parallel, under the same username, from different hosts (clustering). The user calls the executable "myprint" and the message
is inserted into this table, but at the moment I don't have a good structure for the table. Each print call can have a different length, so I think a text field is a good
choice, but I don't know how to create a good PK value. IMHO a sequence can create problems because I'm logged in with the same user on multiple
hosts, and a hash key value like SHA-1 over the content is not a good choice either, because the content is not unique, so I can get key collisions.
I would like each "print" call to create its own record in the table, but how can I create a good key value without problems under parallel access?
I think there can be more than 1000 inserts per second.
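To make the current structure concrete, here is a rough sketch of the table as described above (table and column names are my guesses, not a fixed design). Note the open problem: with (jobid, cycle) as the PK, there is no room for one row per "print" call.

```sql
-- Sketch of the logger table as described above.
-- jobid + cycle number form the current PK; the text column
-- holds the variable-length message from each "myprint" call.
CREATE TABLE job_log (
    jobid   bigint  NOT NULL,
    cycle   integer NOT NULL,
    message text,
    PRIMARY KEY (jobid, cycle)   -- problem: only one row per (jobid, cycle)
);
```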
Can anybody post a good idea?
Thanks
Phil