From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Index creation running now for 14 hours
Date: 2015-08-26 22:42:02
Message-ID: 55DE40BA.5000401@2ndquadrant.com
Lists: pgsql-performance
On 08/26/2015 10:26 PM, Tory M Blue wrote:
>
> the table is 90GB without indexes, 285GB with indexes and bloat, The
> row count is not actually completing.. 125Million rows over 13 months,
> this table is probably close to 600million rows.
You don't need to do SELECT COUNT(*) if you only need an approximate
number. You can look at pg_class.reltuples:
SELECT reltuples FROM pg_class WHERE relname = 'impressions';
That should be a sufficiently accurate estimate.
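If the table lives outside the default search_path, a schema-qualified variant avoids accidentally matching a same-named table in another schema (the schema name here is illustrative; the bigint cast just makes the float estimate easier to read):

```sql
-- Sketch: look up the table by qualified name instead of relname.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE oid = 'public.impressions'::regclass;
```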
> The above is when it had finished copying the table and started on the
> index..
>
> Well as I said I'm running out of storage as the index is creating some
> serious data on the filesystem, I'll have to kill it, try to massage the
> data a bit and increase the maintenance_work mem to use some of my 256GB
> of ram to try to get through this. Right now the 100% cpu process which
> is this index is only using 3.5GB and has been for the last 15 hours
Please post details on the configuration (shared_buffers, work_mem,
maintenance_work_mem and such).
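One way to collect those values in a single query (a sketch; extend the IN list with any other GUCs you consider relevant):

```sql
-- Dump the current values of the memory-related settings.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');
```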
BTW while the CREATE INDEX process is reporting 3.5GB, it most likely
wrote a lot of data into on-disk chunks when sorting. So it's actually
using the memory through the page cache (i.e. don't increase
maintenance_work_mem too much; you don't want to force the data to disk
needlessly).
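A moderate increase can be done per-session rather than globally, so it only affects the index build (the index name and column below are hypothetical, for illustration only):

```sql
-- Sketch: raise the sort memory for this session only, then build
-- the index without blocking writes to the table.
SET maintenance_work_mem = '2GB';
CREATE INDEX CONCURRENTLY idx_impressions_date
    ON impressions (imp_date);   -- hypothetical name and column
RESET maintenance_work_mem;
```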
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services