From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>, Joel Jacobson <joel(at)trustly(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_*_columns?
Date: 2015-06-20 23:05:08
Message-ID: 20150620230508.GE133018@postgresql.org
Lists: pgsql-hackers
Robert Haas wrote:
> If we arranged things so that the processes could use the data in the
> DSM directly rather than having to copy it out, we'd presumably save
> quite a bit of memory, since the whole structure would be shared
> rather than each backend having its own copy. But if the structure
> got too big to map (on a 32-bit system), then you'd be sort of hosed,
> because there's no way to attach just part of it. That might not be
> worth worrying about, but it depends on how big it's likely to get - a
> 32-bit system is very likely to choke on a 1GB mapping, and maybe even
> on a much smaller one.
How realistic is it that you would get a 1 GB mapping on a 32-bit
system? Each table entry is 106 bytes at the moment, if my count is
right, so you would need on the order of ten million tables before the
mapping got that large. It doesn't seem realistic to run a database of
that size on a smallish machine.
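
(Back-of-the-envelope, taking the 106-byte entry size at face value and
ignoring any allocator or hash-table overhead:

    1 GiB / 106 B ≈ 1,073,741,824 / 106 ≈ 10.1 million entries.)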
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services