From: AJ Weber <aweber(at)comcast(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Partition table in 9.0.x?
Date: 2013-01-04 21:31:31
Message-ID: 50E74A33.5000204@comcast.net
Lists: pgsql-performance
Hi all,
I have a table with about 73 million rows in it, and growing. Running
9.0.x on a server that is unfortunately a little I/O constrained. Some
possibly pertinent settings:
default_statistics_target = 50
maintenance_work_mem = 512MB
constraint_exclusion = on
effective_cache_size = 5GB
work_mem = 18MB
wal_buffers = 8MB
checkpoint_segments = 32
shared_buffers = 2GB
The server has 12GB RAM and 4 cores, but is shared with a big webapp
running in Tomcat -- and I only have a RAID1 disk to work on. Woe is me...
Anyway, this table is going to continue to grow, and it's used
frequently (reads and writes). From what I've read, this table is a
candidate to be partitioned for performance and scalability. I have
tested some scripts to build the "inherits" child tables with their
CHECK constraints, plus the trigger/function to route inserts to the
right child (see the sketch below).
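Roughly along these lines -- a simplified sketch, where the table and
column names (orders, created_at) are placeholders rather than my real
schema:

    CREATE TABLE orders (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    );

    -- One child per month; the CHECK constraint is what lets
    -- constraint_exclusion prune children at plan time.
    CREATE TABLE orders_2013_01 (
        CHECK (created_at >= DATE '2013-01-01'
           AND created_at <  DATE '2013-02-01')
    ) INHERITS (orders);

    -- Routing function: send each insert on the parent to the right child.
    CREATE OR REPLACE FUNCTION orders_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.created_at >= DATE '2013-01-01'
           AND NEW.created_at < DATE '2013-02-01' THEN
            INSERT INTO orders_2013_01 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for created_at = %', NEW.created_at;
        END IF;
        RETURN NULL;  -- row already went to a child, so skip the parent insert
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_insert_before
        BEFORE INSERT ON orders
        FOR EACH ROW EXECUTE PROCEDURE orders_insert_trigger();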
Am I doing the right thing by partitioning this? If so, given that I
can afford some downtime, is dumping the table via pg_dump and then
loading it back in the best way to do this?
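As an alternative to pg_dump, I was also considering reloading it in
place through the routing trigger, something like the following
(placeholder names again, and assuming the trigger sketched above is
already in place on the new parent):

    BEGIN;
    ALTER TABLE orders RENAME TO orders_flat;
    -- create the new partitioned parent "orders", its children, and the
    -- routing trigger here, then reload through the trigger:
    INSERT INTO orders SELECT * FROM orders_flat;
    COMMIT;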
Should I run a CLUSTER or VACUUM FULL after all is done?
Is there a major benefit to upgrading to 9.2.x that I haven't realized?
Finally, if anyone has any comments about my settings listed above that
might help improve performance, I thank you in advance.
-AJ