From: Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #6088: copy to stdout cannot be stopped with kill(pid) or pg_terminate_backend
Date: 2011-07-03 01:23:31
Message-ID: CAK-MWwQnisEFMu4935wDO0kWD0pZKyuL7h1rO6hUy6K+BPSR_Q@mail.gmail.com
Lists: pgsql-bugs
After some experiments I found that not only COPY ... TO stdout but also a simple
SELECT count(*) FROM changes cannot be stopped.
However, the SELECT count(*) finishes successfully, and no long I/O locks or
waits were found in the ktrace results.
Because the situation seemed unusual, I tried to find out what is unusual about
the table itself.
Table size:
SELECT pg_size_pretty(pg_relation_size('changes'));
pg_size_pretty
----------------
152 GB
However, and here is the unusual part: the first 135GB of the table is
completely dead/empty space without a single live tuple.
(How that happened is another story; in short, it is the result of many
unsuccessful attempts by Slony to populate the table on a slave
server. It went: BEGIN; DELETE FROM changes; COPY the table FROM STDIN;
hit an error (cluster-wide statement_timeout); ABORT; and repeat N times
over the night.)
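
(For reference, a rough way to quantify that dead/empty space, assuming the
contrib/pgstattuple module is installed in the database; this is only a sketch
of the query, not output from my server:)

    -- requires contrib/pgstattuple
    SELECT tuple_count,                          -- live tuples
           dead_tuple_count,                     -- dead tuples
           pg_size_pretty(free_space) AS free,   -- empty space inside the relation
           free_percent                          -- % of the relation that is free
    FROM pgstattuple('changes');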
So (and from here on it is pure theory): while the code performs a seq scan over
that large empty area, it does not check for interrupts while looping over the
completely dead/empty pages.
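
(Just to illustrate what I mean -- this is only a hedged sketch of the pattern,
not the real heapam.c code; process_page() is a hypothetical helper. The idea is
that if the per-page loop contains no CHECK_FOR_INTERRUPTS(), and no live tuple
is ever returned to the caller, the backend never notices the pending signal:)

    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */
    #include "storage/block.h"      /* BlockNumber */

    extern void process_page(BlockNumber blkno);   /* hypothetical: read + scan one page */

    static void
    scan_pages(BlockNumber nblocks)
    {
        BlockNumber blkno;

        for (blkno = 0; blkno < nblocks; blkno++)
        {
            /*
             * Without a check like this inside the page loop, a scan over
             * millions of all-dead pages cannot be cancelled until it
             * finally returns a tuple to higher-level code.
             */
            CHECK_FOR_INTERRUPTS();

            process_page(blkno);
        }
    }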
PS: just to compare, here is the backtrace of the unstoppable count(*):
#0 0x0000000801961b5c in read () from /lib/libc.so.7
#1 0x0000000000658968 in FileRead (file=7, buffer=0xc8edc6ae0
"╫\023", amount=8192) at fd.c:1151
#2 0x000000000066b2b2 in mdread (reln=0xdd08895e0,
forknum=MAIN_FORKNUM, blocknum=762045, buffer=0xc8edc6ae0 "╫\023") at
md.c:618
#3 0x000000000066ce5a in smgrread (reln=0xdd08895e0,
forknum=MAIN_FORKNUM, blocknum=762045, buffer=0xc8edc6ae0 "╫\023")
at smgr.c:401
#4 0x0000000000653495 in ReadBuffer_common (smgr=0xdd08895e0,
isLocalBuf=0 '\000', forkNum=MAIN_FORKNUM, blockNum=762045,
mode=RBM_NORMAL, strategy=0xdd08b0410, hit=0x7fffffffde53 "") at
bufmgr.c:426
#5 0x0000000000652eea in ReadBufferExtended (reln=0xdd0849f38,
forkNum=MAIN_FORKNUM, blockNum=762045, mode=RBM_NORMAL,
strategy=0xdd08b0410) at bufmgr.c:240
#6 0x0000000000468ec9 in heapgetpage (scan=0xdd08b0000, page=762045)
at heapam.c:221
#7 0x000000000046a706 in heapgettup_pagemode (scan=0xdd08b0000,
dir=ForwardScanDirection, nkeys=0, key=0x0) at heapam.c:808
#8 0x000000000046b006 in heap_getnext (scan=0xdd08b0000,
direction=ForwardScanDirection) at heapam.c:1320
-- the part above this frame is the same as what I see in the COPY TO stdout backtrace
#9 0x00000000005a4e5a in SeqNext (node=0xdd08afa10) at nodeSeqscan.c:93
#10 0x0000000000593a10 in ExecScan (node=0xdd08afa10,
accessMtd=0x5a4d40 <SeqNext>) at execScan.c:68
#11 0x00000000005a4eba in ExecSeqScan (node=0xdd08afa10) at nodeSeqscan.c:131
#12 0x0000000000588f85 in ExecProcNode (node=0xdd08afa10) at execProcnode.c:363
#13 0x0000000000599cec in agg_retrieve_direct (aggstate=0xdd08af550)
at nodeAgg.c:945
#14 0x0000000000599b10 in ExecAgg (node=0xdd08af550) at nodeAgg.c:837
#15 0x0000000000589093 in ExecProcNode (node=0xdd08af550) at execProcnode.c:431
#16 0x0000000000586794 in ExecutePlan (estate=0xdd08af030,
planstate=0xdd08af550, operation=CMD_SELECT, numberTuples=0,
direction=ForwardScanDirection, dest=0xdd08ab790) at execMain.c:1504
#17 0x00000000005849a5 in standard_ExecutorRun (queryDesc=0xdd0830c30,
direction=ForwardScanDirection, count=0) at execMain.c:309
#18 0x00000000005848a5 in ExecutorRun (queryDesc=0xdd0830c30,
direction=ForwardScanDirection, count=0) at execMain.c:258
#19 0x0000000000674e18 in PortalRunSelect (portal=0xdd0885030,
forward=1 '\001', count=0, dest=0xdd08ab790) at pquery.c:953
#20 0x0000000000674ab9 in PortalRun (portal=0xdd0885030,
count=9223372036854775807, isTopLevel=1 '\001', dest=0xdd08ab790,
altdest=0xdd08ab790, completionTag=0x7fffffffe570 "") at pquery.c:779
#21 0x000000000066f0e6 in exec_simple_query (query_string=0x8037e7030
"SELECT count(*) from changes\n;") at postgres.c:997
#22 0x0000000000672f72 in PostgresMain (argc=4, argv=0x803732930,
username=0x803732900 "pgsql") at postgres.c:3676
#23 0x000000000063b5cd in BackendRun (port=0x803703c00) at postmaster.c:3467
#24 0x000000000063ab2a in BackendStartup (port=0x803703c00) at postmaster.c:3081
#25 0x00000000006382dc in ServerLoop () at postmaster.c:1387
#26 0x0000000000637abb in PostmasterMain (argc=3, argv=0x7fffffffeb88)
at postmaster.c:1040
#27 0x00000000005c0b5a in main (argc=3, argv=0x7fffffffeb88) at main.c:188
On Sat, Jul 2, 2011 at 10:00 PM, Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com> wrote:
> Hi and thanks for responding...
>
> While performing my tests I used pg_dump via a local socket
> (
> pg_dump -F c -Z 0 -t changes -a db > /dev/null
> )
>
>
> Now:
>
> billing=# SELECT * from db_activity ;
> age | datname | procpid | usename | waiting |
> query_start | client_addr | client_port |
> current_query
> -----------------+---------+---------+---------+---------+-------------------------------+-------------+-------------+----------------------------------------------------------------------------------------------
> 00:03:07.609308 | billing | 36353 | pgsql | f | 2011-07-02
> 15:50:21.508394+04 | | -1 | COPY bill.changes
> (id, cdate, mdate, status, table_name, pk_id, old_row, new_row) TO
> stdout;
> (1 row)
>
> billing=# SELECT pg_terminate_backend(36353);
> pg_terminate_backend
> ----------------------
> t
> (1 row)
>
> billing=# SELECT * from db_activity ;
> age | datname | procpid | usename | waiting |
> query_start | client_addr | client_port |
> current_query
> -----------------+---------+---------+---------+---------+-------------------------------+-------------+-------------+----------------------------------------------------------------------------------------------
> 00:07:59.735749 | billing | 36353 | pgsql | f | 2011-07-02
> 15:50:21.508394+04 | | -1 | COPY bill.changes
> (id, cdate, mdate, status, table_name, pk_id, old_row, new_row) TO
> stdout;
> (1 row)
>
> The backtrace looks completely the same every time; I attached to the postgres
> process 5 times over a 20-minute run:
>
> /usr/local/bin/gdb72 -pid=36353
> --directory=/usr/local/pgsql/src/postgresql-8.4.7/src
> ...
> #0 0x0000000801961b5c in read () from /lib/libc.so.7
> #1 0x0000000000658968 in FileRead (file=102, buffer=0x8b246cae0
> "╫\023", amount=8192) at fd.c:1151
> #2 0x000000000066b2b2 in mdread (reln=0xdd088a518,
> forknum=MAIN_FORKNUM, blocknum=2428854, buffer=0x8b246cae0 "╫\023") at
> md.c:618
> #3 0x000000000066ce5a in smgrread (reln=0xdd088a518,
> forknum=MAIN_FORKNUM, blocknum=2428854, buffer=0x8b246cae0 "╫\023")
> at smgr.c:401
> #4 0x0000000000653495 in ReadBuffer_common (smgr=0xdd088a518,
> isLocalBuf=0 '\000', forkNum=MAIN_FORKNUM, blockNum=2428854,
> mode=RBM_NORMAL, strategy=0xdd08e0650, hit=0x7fffffffddb3 "") at
> bufmgr.c:426
> #5 0x0000000000652eea in ReadBufferExtended (reln=0xdd091e070,
> forkNum=MAIN_FORKNUM, blockNum=2428854, mode=RBM_NORMAL,
> strategy=0xdd08e0650) at bufmgr.c:240
> #6 0x0000000000468ec9 in heapgetpage (scan=0xdd08b7a18, page=2428854)
> at heapam.c:221
> #7 0x000000000046a706 in heapgettup_pagemode (scan=0xdd08b7a18,
> dir=ForwardScanDirection, nkeys=0, key=0x0) at heapam.c:808
> #8 0x000000000046b006 in heap_getnext (scan=0xdd08b7a18,
> direction=ForwardScanDirection) at heapam.c:1320
> #9 0x00000000005315c8 in CopyTo (cstate=0xdd0831030) at copy.c:1410
> #10 0x000000000053112d in DoCopyTo (cstate=0xdd0831030) at copy.c:1268
> #11 0x0000000000530c29 in DoCopy (stmt=0x8037e7cc8,
> queryString=0x8037e7030 "COPY bill.changes (id, cdate, mdate,
> status, table_name, pk_id, old_row, new_row) TO stdout;")
> at copy.c:1163
> #12 0x0000000000676888 in ProcessUtility (parsetree=0x8037e7cc8,
> queryString=0x8037e7030 "COPY bill.changes (id, cdate, mdate,
> status, table_name, pk_id, old_row, new_row) TO stdout;",
> params=0x0, isTopLevel=1 '\001', dest=0x8037e7850,
> completionTag=0x7fffffffe570 "") at utility.c:600
> #13 0x00000000006753d4 in PortalRunUtility (portal=0xdd0885030,
> utilityStmt=0x8037e7cc8, isTopLevel=1 '\001', dest=0x8037e7850,
> completionTag=0x7fffffffe570 "") at pquery.c:1192
> #14 0x000000000067554f in PortalRunMulti (portal=0xdd0885030,
> isTopLevel=1 '\001', dest=0x8037e7850, altdest=0x8037e7850,
> completionTag=0x7fffffffe570 "") at pquery.c:1297
> #15 0x0000000000674bd0 in PortalRun (portal=0xdd0885030,
> count=9223372036854775807, isTopLevel=1 '\001', dest=0x8037e7850,
> altdest=0x8037e7850, completionTag=0x7fffffffe570 "") at pquery.c:823
> #16 0x000000000066f0e6 in exec_simple_query (
> query_string=0x8037e7030 "COPY bill.changes (id, cdate, mdate,
> status, table_name, pk_id, old_row, new_row) TO stdout;")
> at postgres.c:997
> #17 0x0000000000672f72 in PostgresMain (argc=4, argv=0x803732930,
> username=0x803732900 "pgsql") at postgres.c:3676
> #18 0x000000000063b5cd in BackendRun (port=0x803703c00) at postmaster.c:3467
> #19 0x000000000063ab2a in BackendStartup (port=0x803703c00) at postmaster.c:3081
> #20 0x00000000006382dc in ServerLoop () at postmaster.c:1387
> #21 0x0000000000637abb in PostmasterMain (argc=3, argv=0x7fffffffeb88)
> at postmaster.c:1040
> #22 0x00000000005c0b5a in main (argc=3, argv=0x7fffffffeb88) at main.c:188
>
>
> There seem to be no blocking issues at all in the backtrace.
> Postgres just continues running and ignores the signals.
>
>
> On Sat, Jul 2, 2011 at 7:29 PM, Craig Ringer
> <craig(at)postnewspapers(dot)com(dot)au> wrote:
>> On 2/07/2011 12:42 PM, Maksym Boguk wrote:
>>
>>> But I found I can not stop
>>> COPY bill.changes (id, cdate, mdate, status, table_name, pk_id, old_row,
>>> new_row) TO stdout;
>>> with pg_terminate_backend(procpid) or kill (procpid).
>>
>> Which FreeBSD are you on? Version and architecture.
>>
>> Can you obtain a backtrace of the backend you've sent the kill signal to,
>> after you've sent the signal? Given the rest of what you've said, I'll bet
>> it's blocked in the kernel waiting to send data down a socket. If the
>> kernel's socket writes aren't interrupted by signals, there's nothing much
>> Pg can do about it.
>>
>>> At the same time copy to stdout completely ignore fact other side (pg_dump
>>> itself) was killed long time ago.
>>
>> That's got to be a buffering issue, surely. Once the TCP send buffers fill
>> up the backend would block in write() as the OS tries to send enough data to
>> make room in the send buffer for the next write.
>>
>> If your OS keeps big buffers, Pg might've buffered many megabytes of data
>> before you killed the client side. It might keep on trying to send that data
>> for quite a long time if it doesn't get notified that the other peer
>> (pg_dump in this case) has vanished.
>>
>> How were you running pg_dump? ssh was involved, so I'm wondering if you were
>> running pg_dump over an ssh tcp tunnel. Was that it? Or were you running
>> pg_dump on the server side and dumping the SQL text down the ssh connection
>> to the client? If the latter, then unless you killed ssh as well the OS
>> would merrily keep on sending the already-buffered data from before pg_dump
>> was killed; you'd have to kill sshd or the client side of ssh to actually
>> stop the data flow.
>>
>> To help with this sort of thing, reduce your send and receive buffer sizes.
>> You'll pay a price in terms of throughput, but you'll gain responsiveness.
>>
>> Without more information, it's hard to know if this is anything except a
>> buffering issue. Possibly one that doesn't even have anything to do with Pg
>> - it could easily just be sshd writes being buffered.
>>
>> --
>> Craig Ringer
>>
>
>
>
> --
> Maxim Boguk
> Senior Postgresql DBA.
>
> Skype: maxim.boguk
> Jabber: maxim(dot)boguk(at)gmail(dot)com
>
> LinkedIn profile: http://nz.linkedin.com/in/maximboguk
> If they can send one man to the moon... why can't they send them all?
>
> МойКруг: http://mboguk.moikrug.ru/
> Strength breaks straw, but not everything in our life is straw, and strength is far from everything.
>
--
Maxim Boguk
Senior Postgresql DBA.
Skype: maxim.boguk
Jabber: maxim(dot)boguk(at)gmail(dot)com
LinkedIn profile: http://nz.linkedin.com/in/maximboguk
If they can send one man to the moon... why can't they send them all?
МойКруг: http://mboguk.moikrug.ru/
Strength breaks straw, but not everything in our life is straw, and strength is far from everything.