From: Herouth Maoz <herouth(at)unicell(dot)co(dot)il>
To: pgsql-general(at)postgresql(dot)org
Subject: Question about optimizing access to a table.
Date: 2013-12-10 16:23:05
Message-ID: 8897EA3F-AA53-4321-ABB4-033334256D7A@unicell.co.il
Lists: pgsql-general
Hello.
I have one particular table with a very specialized use. I am sending messages to a partner. The partner processes them asynchronously and then returns a status report to me. The table stores a serialized version of the message object, together with a few identifiers, an expiry date, and a reference from the partner. When the status report comes back from the partner, we:
1. Select the record using the partner's reference number.
2. Reconstruct and queue the object.
3. Delete the record from the database using the serial primary key ID.
Every once in a while we run a garbage collection process which selects expired messages, reconstructs and queues them for processing, and deletes the record using the primary key.
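In SQL terms, the two paths look roughly like this (just a sketch; the real statements run from application code with bound parameters, the table name is inferred from the index names, and the exact lookup predicate may differ):

-- Status report arrives: find the message by the partner's reference
-- (presumably via the (msisdn, smsc_reference) index)
SELECT id, sm
  FROM transient
 WHERE msisdn = $1
   AND smsc_reference = $2;

-- ...reconstruct and queue the object in the application, then...
DELETE FROM transient WHERE id = $3;

-- Garbage collection: pick up expired messages (via the validity index),
-- queue them, then delete each one by primary key
SELECT id, sm
  FROM transient
 WHERE validity < now();

DELETE FROM transient WHERE id = $1;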
This works wonderfully as long as the table remains small - a few thousand records waiting for their status reports, and that's it. The table is set to have frequent auto-analyze runs.
The problem starts when our partner has some glitch under high load and fails to send back a few hundred thousand reports. In that case the table grows to a few hundred thousand records, and they are not deleted until they hit their expiry date, at which point the "garbage collector" takes care of them and everything goes back to normal. While it contains hundreds of thousands of records, performance deteriorates considerably.
I am trying to figure out a solution that will keep the system working well even when there is a large number of records in the table. At first I thought of partitioning the table on the partner's reference field, but the DELETE would actually slow down if I did that, right?
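For concreteness, the kind of inheritance-based partitioning I had in mind would look roughly like the following sketch (the child-table names and the four-way modulo split are made up, and the per-child indexes and insert-routing trigger are omitted):

-- Children inherit the parent's columns and each hold a slice of the
-- reference space; the parent itself stays empty.
CREATE TABLE transient_p0 (CHECK (smsc_reference % 4 = 0)) INHERITS (transient);
CREATE TABLE transient_p1 (CHECK (smsc_reference % 4 = 1)) INHERITS (transient);
CREATE TABLE transient_p2 (CHECK (smsc_reference % 4 = 2)) INHERITS (transient);
CREATE TABLE transient_p3 (CHECK (smsc_reference % 4 = 3)) INHERITS (transient);

-- My worry: a DELETE by the serial id alone carries no information about
-- the partitioning column, so it has to look at every child:
DELETE FROM transient WHERE id = $1;
-- (The status-report SELECT carries smsc_reference and so could be pruned
-- to one child, but the DELETE by id is the part I'm worried about.)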
Any ideas?
Here is the table structure (some irrelevant fields cut):
Column | Type | Modifiers
-----------------+-----------------------------+--------------------------------------------------------
id | bigint | not null default nextval('transient_id_seq'::regclass)
time_submitted | timestamp without time zone | not null
validity | timestamp without time zone |
msisdn | character varying(16) | not null
sme_reference | integer | not null
smsc_reference | numeric(21,0) | not null
sm | text |
Indexes:
"transient_pkey" PRIMARY KEY, btree (id)
"transient_msisdn_ref" btree (msisdn, smsc_reference)
"transient_sm_vp" btree (validity)
Has OIDs: no
Options: autovacuum_enabled=true, autovacuum_analyze_threshold=200, autovacuum_analyze_scale_factor=0.001
TIA,
Herouth