From: Tom Mercha <mercha_t(at)hotmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Understanding TupleQueue impact and overheads?
Date: 2019-10-16 01:24:04
Message-ID: DB6PR0201MB24552647BB3C7850A6289E24F4920@DB6PR0201MB2455.eurprd02.prod.outlook.com
Lists: pgsql-hackers
I have been looking at PostgreSQL's Tuple Queue
(src/include/executor/tqueue.h), which provides functionality for queuing
tuples between processes on top of shm_mq. I am still familiarising myself
with the bigger picture and with TupleTableSlots. As far as I can tell, a
copy (not a reference) of a HeapTuple (obtained from a TupleTableSlot,
SPI_TupTable, etc.) is sent through the queue, and another process can then
receive these HeapTuples, presumably placing them in 'output'
TupleTableSlots later on. A sketch of my current understanding follows.
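To make the question concrete, this is roughly how I understand the two
ends of a tuple queue are wired together, per tqueue.h as of PG 12. This
is a sketch, not working code: the shm_mq/DSM setup is omitted, and "mqh"
and "outslot" are hypothetical names for an already-attached queue handle
and a heap-tuple slot.

```c
#include "postgres.h"
#include "executor/tqueue.h"
#include "executor/tuptable.h"

/* Sending side: wrap the queue in a DestReceiver and feed it slots. */
static void
writer_side(shm_mq_handle *mqh, TupleTableSlot *slot)
{
    DestReceiver *dest = CreateTupleQueueDestReceiver(mqh);

    /* rStartup is a no-op for tqueue, so it is skipped here */
    (void) dest->receiveSlot(slot, dest);   /* copies the tuple into the queue */
    dest->rShutdown(dest);                  /* detaches from the shm_mq */
    dest->rDestroy(dest);
}

/* Receiving side: pull HeapTuples out and park them in an 'output' slot. */
static void
reader_side(shm_mq_handle *mqh, TupleTableSlot *outslot)
{
    TupleQueueReader *reader = CreateTupleQueueReader(mqh);
    bool        done = false;

    for (;;)
    {
        HeapTuple   tup = TupleQueueReaderNext(reader, false, &done);

        if (done || tup == NULL)
            break;
        /* shouldFree = true: the reader returns a palloc'd copy */
        ExecStoreHeapTuple(tup, outslot, true);
        /* ... consume outslot here ... */
    }
    DestroyTupleQueueReader(reader);
}
```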
What I am having difficulty understanding is what happens to the
location of the HeapTuple as it moves from one TupleTableSlot to the
other as described above. Since a physical tuple is presumably involved,
am I incurring a disk-access overhead with each copy of a tuple? That
would seem like a massive overhead; how can I keep such overheads to a
minimum?
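The copy I am asking about is the one made on the sending side; my
condensed reading of that path (tqueueReceiveSlot in
src/backend/executor/tqueue.c, PG 12) is sketched below, with "mqh" a
hypothetical queue handle and error handling omitted. If this reading is
right, the send itself is a pure memory copy, and any disk access would
have to come from materialising the slot in the first place.

```c
#include "postgres.h"
#include "access/htup_details.h"
#include "executor/tuptable.h"
#include "storage/shm_mq.h"

static void
send_one_tuple(shm_mq_handle *mqh, TupleTableSlot *slot)
{
    bool        should_free;

    /* Materialise the slot's contents as a HeapTuple in local memory */
    HeapTuple   tuple = ExecFetchSlotHeapTuple(slot, true, &should_free);

    /* Copy t_len bytes of the tuple body into the shared-memory ring */
    (void) shm_mq_send(mqh, tuple->t_len, tuple->t_data, false);

    if (should_free)
        heap_freetuple(tuple);
}
```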
Furthermore, to what extent can other modules affect a queued HeapTuple?
If some external process updates the tuple, when will I see the change?
Is it possible that the update is not reflected in the queued HeapTuple,
while the external process is also not blocked or delayed from updating,
in other words, that the two effectively operate on separate snapshots?
And when does logging (WAL) kick in whilst I am transferring a tuple
from one TupleTableSlot to another?
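For what it is worth, my reading of the receive path (TupleQueueReaderNext
in tqueue.c, PG 12), condensed below, is that the reader already hands
back a detached, palloc'd copy; "data" and "nbytes" stand for what
shm_mq_receive handed back. That makes me suspect a concurrent UPDATE
would create a new tuple version elsewhere and never touch the queued
bytes, but I would like to confirm that this is the right mental model.

```c
#include "postgres.h"
#include "access/htup_details.h"
#include "storage/itemptr.h"

static HeapTuple
copy_tuple_out_of_queue(void *data, Size nbytes)
{
    HeapTupleData htup;

    /* Transient header over the bytes still sitting in the shm_mq */
    ItemPointerSetInvalid(&htup.t_self);
    htup.t_tableOid = InvalidOid;
    htup.t_len = nbytes;
    htup.t_data = (HeapTupleHeader) data;

    /* Detached copy: later heap updates will not touch these bytes */
    return heap_copytuple(&htup);
}
```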
Thanks,
Tom