From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Kenji Morishige <kenjim(at)juniper(dot)net>
Subject: Re: optimizing db for small table with tons of updates
Date: 2006-04-03 18:29:42
Message-ID: 200604031129.42194.josh@agliodbs.com
Lists: pgsql-performance
Kenji,
> We used to use MySQL for these tools and we never had any issues, but I
> believe it is due to the transactional nature of Postgres that is adding
> an overhead to this problem.
You're correct.
> Are there any table options that enables
> the table contents to be maintained in ram only or have delayed writes
> for this particular table?
No. That's not really the right solution anyway; if you want
non-transactional data, why not just use a flat file? Or Memcached?
Possible solutions:
1) if the data is non-transactional, consider using pgmemcached.
2) if you want to maintain transactions, use a combination of autovacuum
and vacuum delay to do more-or-less continuous low-level vacuuming of the
table. Upgrading to Postgres 8.1, which builds autovacuum into the server
itself, will make this much easier to manage.
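For option 1, a rough sketch of what the memcached route looks like from SQL,
using pgmemcache (the usual packaging of the interface mentioned above). The
function names here are taken from the pgmemcache docs and the key/value
strings are made up for illustration; check them against the version you
install:

```sql
-- Sketch only: assumes pgmemcache is installed and a memcached
-- daemon is listening on localhost:11211.
SELECT memcache_server_add('localhost:11211');

-- Store and fetch a frequently-updated status value outside
-- the transactional table, avoiding dead-tuple buildup.
SELECT memcache_set('port_status:42', 'up');
SELECT memcache_get('port_status:42');
```

Updates then cost a cache write instead of a dead row plus a later vacuum.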
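For option 2, the relevant knobs in 8.1 look something like the fragment
below. The specific values are illustrative assumptions, not recommendations;
tune the threshold and cost delay against your actual update rate:

```ini
# postgresql.conf (8.1) -- continuous low-level vacuuming sketch
autovacuum = on
autovacuum_naptime = 60             # wake up and check for work every 60s
autovacuum_vacuum_threshold = 500   # vacuum after roughly 500 dead rows
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_cost_delay = 10   # ms sleep per cost batch ("vacuum delay")
autovacuum_vacuum_cost_limit = 200  # work done between sleeps
```

The cost delay is what keeps the near-constant vacuuming from stealing I/O
from your updates; a small hot table vacuumed every minute stays small.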
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco