From: Craig James <craig_james(at)emolecules(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: pg_restore causes 100% CPU
Date: 2007-07-12 07:34:45
Message-ID: 4695D995.9040207@emolecules.com
Lists: pgsql-performance
Here's an oddity. I have 10 databases, each with about a dozen connections to Postgres (about 120 connections total), and at midnight they're all idle. These are mod_perl programs (like a FastCGI -- they stay connected so they're ready for instant service). So using "ps -ef" and grep, we find one of the databases looks like this:
postgres 22708 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46915) idle
postgres 22709 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46916) idle
postgres 22710 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46917) idle
postgres 22711 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46918) idle
postgres 22712 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46919) idle
postgres 22724 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42440) idle
postgres 22725 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42441) idle
postgres 22726 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42442) idle
postgres 22727 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42443) idle
postgres 22728 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42444) idle
postgres 22731 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42447) idle
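The listing above was gathered with ps(1) and grep as described; a sketch of that pipeline (the exact pattern is an assumption based on the process titles shown, and the bracketed [p] keeps grep from matching its own entry in the process list):

```shell
# Count the backends serving the chemmega database; the [p] trick
# prevents grep from matching the grep process itself.
ps -ef | grep '[p]ostgres: chemmega' | wc -l
```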
Now here's the weird thing. I'm running a pg_restore of a database (on the order of 4GB compressed, maybe 34M rows of ordinary data, and 15M rows in one BLOB table that's typically 2K per blob). When I do this, ALL of the postgres backends start working at about 1% CPU apiece. This means that the 120 "idle" postgres backends are together using almost 100% of one CPU on top of the 100% CPU being used by pg_restore. See the output of top(1) below.
Is this normal? All I can guess is that something's going on in shared memory that every Postgres backend has to respond to.
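One way to confirm from inside the server that those backends really are idle while the restore runs (a sketch; the procpid and current_query column names match the 8.x releases current at the time of this post, where idle backends report '<IDLE>'):

```sql
-- List every backend and what it is executing; on 8.x, idle
-- backends show '<IDLE>' in current_query.
SELECT procpid, usename, datname, current_query
FROM pg_stat_activity
ORDER BY datname, procpid;
```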
Thanks,
Craig
Tasks: 305 total, 1 running, 304 sleeping, 0 stopped, 0 zombie
Cpu(s): 33.5% us, 1.5% sy, 0.0% ni, 57.8% id, 6.6% wa, 0.2% hi, 0.4% si
Mem: 4151456k total, 4011020k used, 140436k free, 10096k buffers
Swap: 2104504k total, 94136k used, 2010368k free, 3168596k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6681 postgres 16 0 217m 188m 161m D 50.4 4.6 4:29.30 postmaster
1577 root 10 -5 0 0 0 S 1.0 0.0 108:01.97 md0_raid1
8487 postgres 15 0 187m 8704 4996 S 1.0 0.2 0:06.56 postmaster
8506 postgres 15 0 187m 8604 4892 S 1.0 0.2 0:06.37 postmaster
8507 postgres 15 0 187m 8708 5004 S 1.0 0.2 0:06.42 postmaster
8512 postgres 15 0 187m 8612 4904 S 1.0 0.2 0:06.65 postmaster
8751 postgres 15 0 187m 10m 7520 S 1.0 0.3 0:07.95 postmaster
8752 postgres 15 0 187m 10m 7492 S 1.0 0.3 0:07.84 postmaster
14053 postgres 15 0 187m 8752 5044 S 1.0 0.2 0:06.53 postmaster
16515 postgres 15 0 187m 8156 4452 S 1.0 0.2 0:06.33 postmaster
25351 postgres 15 0 187m 9772 6064 S 1.0 0.2 0:06.75 postmaster
25387 postgres 15 0 187m 8444 4752 S 1.0 0.2 0:06.45 postmaster
25425 postgres 15 0 187m 9.8m 6340 S 1.0 0.2 0:06.75 postmaster
30626 postgres 15 0 187m 8472 4792 S 1.0 0.2 0:06.52 postmaster
30628 postgres 15 0 187m 8536 4840 S 1.0 0.2 0:06.50 postmaster
30630 postgres 15 0 187m 8524 4844 S 1.0 0.2 0:06.49 postmaster
30637 postgres 15 0 187m 8692 4880 S 1.0 0.2 0:06.25 postmaster
31679 postgres 15 0 187m 8544 4860 S 1.0 0.2 0:06.39 postmaster
31681 postgres 15 0 187m 8528 4848 S 1.0 0.2 0:06.25 postmaster
1751 postgres 15 0 187m 8432 4748 S 1.0 0.2 0:06.26 postmaster
11620 postgres 15 0 187m 8344 4644 S 1.0 0.2 0:06.23 postmaster
11654 postgres 15 0 187m 8316 4624 S 1.0 0.2 0:06.36 postmaster
19173 postgres 15 0 187m 9372 5668 S 1.0 0.2 0:06.49 postmaster
19670 postgres 15 0 187m 9236 5528 S 1.0 0.2 0:06.29 postmaster
20380 postgres 15 0 187m 8656 4956 S 1.0 0.2 0:06.20 postmaster
20649 postgres 15 0 187m 8280 4584 S 1.0 0.2 0:06.16 postmaster
22731 postgres 15 0 187m 8408 4700 S 1.0 0.2 0:06.03 postmaster
11045 postgres 15 0 185m 71m 68m S 0.7 1.8 0:19.35 postmaster
6408 postgres 15 0 187m 11m 7520 S 0.7 0.3 0:07.89 postmaster
6410 postgres 15 0 187m 10m 7348 S 0.7 0.3 0:07.53 postmaster
6411 postgres 15 0 187m 10m 7380 S 0.7 0.3 0:07.83 postmaster
6904 postgres 15 0 187m 8644 4788 S 0.7 0.2 0:06.15 postmaster
6905 postgres 15 0 187m 8288 4596 S 0.7 0.2 0:06.15 postmaster
6906 postgres 15 0 187m 8488 4764 S 0.7 0.2 0:06.18 postmaster
6907 postgres 15 0 187m 8580 4856 S 0.7 0.2 0:06.37 postmaster
7049 postgres 15 0 187m 8488 4800 S 0.7 0.2 0:06.07 postmaster
7054 postgres 15 0 187m 8376 4672 S 0.7 0.2 0:06.28 postmaster
7188 postgres 15 0 187m 8588 4868 S 0.7 0.2 0:06.39 postmaster
7190 postgres 15 0 187m 8832 5120 S 0.7 0.2 0:06.52 postmaster
7191 postgres 15 0 187m 8632 4916 S 0.7 0.2 0:06.48 postmaster
7192 postgres 15 0 187m 8884 5176 S 0.7 0.2 0:06.55 postmaster
8511 postgres 15 0 187m 8612 4904 S 0.7 0.2 0:06.39 postmaster
8513 postgres 15 0 187m 8776 5064 S 0.7 0.2 0:06.60 postmaster
8750 postgres 15 0 187m 10m 7220 S 0.7 0.3 0:07.72 postmaster
8768 postgres 15 0 187m 10m 7508 S 0.7 0.3 0:07.77 postmaster
8769 postgres 15 0 187m 10m 7448 S 0.7 0.3 0:07.81 postmaster
8775 postgres 15 0 187m 10m 7064 S 0.7 0.3 0:07.72 postmaster
8782 postgres 15 0 187m 10m 7316 S 0.7 0.3 0:07.84 postmaster
13947 postgres 15 0 187m 8500 4780 S 0.7 0.2 0:06.36 postmaster
13949 postgres 15 0 187m 8536 4824 S 0.7 0.2 0:06.36 postmaster
13951 postgres 15 0 187m 8504 4804 S 0.7 0.2 0:06.35 postmaster
14041 postgres 15 0 187m 8548 4828 S 0.7 0.2 0:06.45 postmaster
14046 postgres 15 0 187m 8560 4812 S 0.7 0.2 0:06.39 postmaster
14052 postgres 15 0 187m 8744 5024 S 0.7 0.2 0:06.54 postmaster
14055 postgres 15 0 187m 8580 4868 S 0.7 0.2 0:06.52 postmaster
14061 postgres 15 0 187m 8464 4760 S 0.7 0.2 0:06.45 postmaster
14092 postgres 15 0 187m 8624 4920 S 0.7 0.2 0:06.52 postmaster
16358 postgres 15 0 187m 8284 4596 S 0.7 0.2 0:06.54 postmaster
16367 postgres 15 0 187m 8392 4568 S 0.7 0.2 0:06.24 postmaster