From: Deepak Balasubramanyam <deepak(dot)balu(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Postgresql jsonb
Date: 2015-08-14 12:09:49
Message-ID: CAAerrx-hOAzCGQ3vrKGVEszF7EJuR1210354ZYLibuE4GPHdRA@mail.gmail.com
Lists: pgsql-general
Hi,
I have a table (20 million rows) in PostgreSQL 9.4 with a bigint id as the
primary key and a second column containing jsonb data. Queries against this
table look like this:
------------
## Query
------------
SELECT ... FROM table
WHERE table.column -> 'item' ->> 'name' = 'value'
------------
I'd like to get PostgreSQL to keep all of this table's data, and any
indexes on it, in memory. That would keep sequential or index scans over
the data fairly fast.
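One thing I plan to try is the pg_prewarm contrib module that ships with
9.4, which as I understand it loads a relation into shared buffers on
demand. A sketch; the object names are placeholders for my actual table and
its primary key index:
------------
## Example: pre-loading a relation into shared buffers
------------
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the table and its primary key index into the buffer cache.
-- 'my_table' and 'my_table_pkey' are assumed names.
SELECT pg_prewarm('my_table');
SELECT pg_prewarm('my_table_pkey');
------------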
That said, my research indicates there is no reliable way to make
PostgreSQL run entirely from RAM
(http://stackoverflow.com/a/24235439/830964). Assume the table and its
indexes amount to about 15 GB on disk, and that the machine has 64 GB of
RAM with shared_buffers set somewhere between 16 and 24 GB.
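(Anyone wanting to reproduce the 15 GB figure can check the on-disk
footprint with the standard size functions; the table name below is a
placeholder:)
------------
## Example: measuring table + index size
------------
-- pg_total_relation_size includes the table, its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('my_table'));
------------
With that in mind, here are my questions: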
1. When PostgreSQL returns data for this query, how can I tell how much of
it was served from cache? (See the EXPLAIN sketch after this list.)
2. I'm aware that I can raise shared_buffers so that more data is cached.
Is there a way to monitor how effective that setting is? (See the hit-ratio
query after this list.)
3. Is there a reliable way, or a calculation close to one, to determine the
point past which PostgreSQL will go to disk for data rather than to the
caches? (See the pg_buffercache sketch after this list.)
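On question 1: my understanding is that EXPLAIN (ANALYZE, BUFFERS) reports,
per plan node, how many pages were found in shared_buffers ("shared hit")
versus fetched from outside them ("read"). Is that the right tool? A
sketch, with table and column names assumed:
------------
## Example: checking cache hits for the query
------------
-- "shared hit" = pages found in shared_buffers,
-- "read"       = pages fetched from the OS page cache or disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM my_table
WHERE my_table.data -> 'item' ->> 'name' = 'value';
------------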
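On question 2: I gather the statistics views expose block hits and reads
per table, from which a cache hit ratio can be computed. A sketch, table
name assumed:
------------
## Example: monitoring buffer cache effectiveness
------------
-- Hit ratio over heap and index blocks for one table.
SELECT relname,
       heap_blks_hit, heap_blks_read,
       idx_blks_hit, idx_blks_read,
       round((heap_blks_hit + idx_blks_hit)::numeric
             / nullif(heap_blks_hit + heap_blks_read
                      + idx_blks_hit + idx_blks_read, 0), 4) AS hit_ratio
FROM pg_statio_user_tables
WHERE relname = 'my_table';
------------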
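On question 3: I don't expect an exact formula, but the pg_buffercache
contrib module can apparently show how much of each relation currently sits
in shared buffers, which might let me watch for the tipping point. A sketch
(assumes the default 8 kB block size):
------------
## Example: inspecting shared buffers with pg_buffercache
------------
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count buffer pages held per relation in the current database.
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
------------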
Thank you for taking the time to read my question.
- Deepak