From: "Rajesh Kumar Mallah" <mallah(dot)rajesh(at)gmail(dot)com>
To: "Kenji Morishige" <kenjim(at)juniper(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: optimizing db for small table with tons of updates
Date: 2006-04-03 18:36:45
Message-ID: a97c77030604031136m4579129dw26b5f032c5b2b42a@mail.gmail.com
Lists: pgsql-performance
Dear Kenji,

We had a similar issue with a banner-impression update system
that had high concurrency. We modified the system to use inserts
instead of updates of the same row. Performance-wise, things are
much better, but you have to keep deleting old data.

Hope you can extrapolate what I mean if it's applicable to your case.

Regds
Rajesh Kumar Mallah
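A minimal sketch of the insert-instead-of-update pattern described above, adapted to the `cuty` table from the quoted message. The `resource_lock_log` table name and the pruning interval are illustrative assumptions, not details from the original system:

```sql
-- Hypothetical append-only table: each lock event becomes a new row,
-- so no single hot row accumulates dead tuples under MVCC.
CREATE TABLE resource_lock_log (
    resource_id integer NOT NULL,
    lock_start  timestamp with time zone NOT NULL DEFAULT now(),
    lock_by     character varying(12)
);

-- Record a lock by inserting rather than updating in place:
INSERT INTO resource_lock_log (resource_id, lock_by)
VALUES (2262, 'tlim');

-- The current state is simply the newest row per resource:
SELECT DISTINCT ON (resource_id) resource_id, lock_start, lock_by
FROM resource_lock_log
ORDER BY resource_id, lock_start DESC;

-- Periodically prune old rows (e.g. from a cron job):
DELETE FROM resource_lock_log
WHERE lock_start < now() - interval '1 day';
```

`DISTINCT ON` is PostgreSQL-specific; an index on `(resource_id, lock_start DESC)` would keep the current-state lookup cheap.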
On 4/3/06, Kenji Morishige <kenjim(at)juniper(dot)net> wrote:
> I am using postgresql to be the central database for a variety of tools for
> our testing infrastructure. We have web tools and CLI tools that require access
> to machine configuration and other states for automation. We have one tool that
> uses a table that looks like this:
>
> systest_live=# \d cuty
>                    Table "public.cuty"
>    Column    |           Type           | Modifiers
> -------------+--------------------------+-----------
>  resource_id | integer                  | not null
>  lock_start  | timestamp with time zone |
>  lock_by     | character varying(12)    |
>  frozen      | timestamp with time zone |
>  freeze_end  | timestamp with time zone |
>  freeze_by   | character varying(12)    |
>  state       | character varying(15)    |
> Indexes:
>     "cuty_pkey" PRIMARY KEY, btree (resource_id)
>     "cuty_main_idx" btree (resource_id, lock_start)
> Foreign-key constraints:
>     "cuty_resource_id_fkey" FOREIGN KEY (resource_id) REFERENCES resource(resource_id) ON UPDATE CASCADE ON DELETE CASCADE
>
> Various users run a tool that updates this table to determine if the particular
> resource is available or not. Within a course of a few days, this table can
> be updated up to 200,000 times. There are only about 3500 records in this
> table, but the update and select queries against this table start to slow
> down considerably after a few days. Ideally, this table doesn't even need
> to be stored and written to the filesystem. After I run a vacuum against this
> table, the overall database performance seems to rise again. When the database
> is running with a recent vacuum the average server load is about .40, but after
> this table is updated 200,000+ times, the server load can go up to 5.0.
>
> here is a typical update query:
> 2006-04-03 10:53:39 PDT testtool systest_live kyoto.englab.juniper.net(4888) LOG: duration: 2263.741 ms statement: UPDATE cuty SET
> lock_start = NOW(),
> lock_by = 'tlim'
> WHERE resource_id='2262' and (lock_start IS NULL OR lock_start < (NOW() - interval '3600 second'))
>
> We used to use MySQL for these tools and we never had any issues, but I believe
> it is due to the transactional nature of Postgres that is adding an overhead
> to this problem. Are there any table options that enable the table contents
> to be maintained in ram only or have delayed writes for this particular table?
>
> Thanks in advance,
> Kenji
>
> ---------------------------(end of broadcast)---------------------------
> TIP 9: In versions below 8.0, the planner will ignore your desire to
> choose an index scan if your joining column's datatypes do not
> match
>
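One mitigation consistent with the vacuuming behaviour Kenji describes is to vacuum just the hot table far more often than the rest of the database, since PostgreSQL's MVCC keeps each superseded row version around until a vacuum reclaims it. A sketch; the cron schedule and database name are illustrative:

```sql
-- A VACUUM of a single 3500-row table is cheap and can run very
-- frequently; ANALYZE keeps the planner's statistics fresh too.
VACUUM ANALYZE cuty;

-- e.g. driven from cron every few minutes:
--   psql -d systest_live -c 'VACUUM ANALYZE cuty;'
```

This does not eliminate the update overhead, but it keeps the dead-tuple count bounded so scans of the table stay fast between vacuums.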
Next message:     Tom Lane     2006-04-03 18:39:10  Re: optimizing db for small table with tons of updates
Previous message: Josh Berkus  2006-04-03 18:29:42  Re: optimizing db for small table with tons of updates