From: luckyjackgao <luckyjackgao(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects
Date: 2013-10-11 07:24:59
Message-ID: 1381476299799-5774252.post@n5.nabble.com
Lists: pgsql-admin pgsql-sql
Hello
I have run into PostgreSQL crashes when it has to deal with very large amounts
of data. It seems that PostgreSQL tries to finish its task as quickly as it can
and will use as much of the machine's resources as it can get.
I then tried cgroups to limit resource usage, so that PostgreSQL cannot consume
too much memory too quickly, and with that in place it works fine.
I edited the following files:
/etc/cgconfig.conf
mount {
    cpuset  = /cgroup/cpuset;
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    blkio   = /cgroup/blkio;
}

group test1 {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    memory {
        memory.limit_in_bytes = 300M;
    }
}
/etc/cgrules.conf
# <user>        <controller(s)>   <destination cgroup>
postgres          memory            test1/
# End of file
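
If you just want to try the limit without editing the config files first, the
libcgroup command line tools can create the same group on the fly (just a
sketch, assuming the memory controller is mounted at /cgroup/memory as above;
"mydb" is only a placeholder database name):

cgcreate -g memory:/test1
cgset -r memory.limit_in_bytes=300M test1
cgexec -g memory:test1 pg_dump -Fc mydb -f mydb.dump

With the two files above in place, the group and the classification of the
postgres user's processes are handled automatically by the cgconfig and cgred
services instead.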
Then enable the two services, restart them, and log in as postgres:
chkconfig cgconfig on
chkconfig cgred on
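
Note that chkconfig only makes the services start at boot; to pick up the new
configuration right away they also need to be restarted (assuming
RHEL/CentOS-style init scripts):

service cgconfig restart
service cgred restart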
After that I can see PostgreSQL running under the 300M memory limit.
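
One way to double-check that the limit really applies (a sketch, assuming the
mount points above and a running server):

# the configured limit, 300M = 314572800 bytes
cat /cgroup/memory/test1/memory.limit_in_bytes
# current memory usage of the group
cat /cgroup/memory/test1/memory.usage_in_bytes
# confirm the postmaster was put into test1 by cgred
cat /proc/$(pgrep -o -u postgres)/cgroup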
Best Regards
jian gao