From: Fd Habash <fmhabash(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: RE: Select count(*) on a 2B Rows Tables Takes ~20 Hours
Date: 2018-09-13 19:35:23
Message-ID: 5b9abbfa.1c69fb81.ecad0.d1fd@mx.google.com
Lists: pgsql-performance
Just checked metrics while the count was running …
Read latency < 3.5 ms
Write latency < 4 ms
Read throughput ~ 40 MB/sec, with sporadic peaks at 100 MB/sec
Read IOPS ~ 5000
QDepth < 3
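A way to see the same split from inside Postgres (a minimal sketch, assuming Aurora exposes track_io_timing and EXPLAIN (ANALYZE, BUFFERS) the same way stock Postgres does; note the ANALYZE run takes as long as the query itself):

  SET track_io_timing = on;
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*) FROM jim.sttyations;
  -- "I/O Timings: read=..." plus the shared hit/read buffer counts show how
  -- much of the elapsed time is spent waiting on storage vs. cached pages.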
----------------
Thank you
From: Tom Lane
Sent: Thursday, September 13, 2018 2:12 PM
To: Fd Habash
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours
Fd Habash <fmhabash(at)gmail(dot)com> writes:
> Based on my research in the forums and Google, it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query, leading to a table scan. Also, the elapsed time increases linearly with table size.
> However, I do not know if the elapsed time I’m getting is to be expected.
> Table reltuples in pg_class = 2,266,649,344 (pretty close)
> Query = select count(*) from jim.sttyations ;
> Elapsed time (ET) = 18.5 hrs
That's pretty awful. My recollection is that in recent PG releases,
SELECT COUNT(*) runs at something on the order of 100ns/row given an
all-in-memory table. Evidently you're rather badly I/O bound.
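As a rough sanity check, assuming that per-row figure: 2,266,649,344 rows at
~100 ns/row is about 227 seconds, i.e. under four minutes for an all-in-memory
scan, versus the ~18.5 hours observed, roughly a 300x gap.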
> This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g).
Don't know much about Aurora, but I wonder whether you paid for
guaranteed (provisioned) IOPS, and if so what service level.
> refpep-> select count(*) from jim.sttyations;
> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------
> Aggregate (cost=73451291.77..73451291.78 rows=1 width=8)
> Output: count(*)
> -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0)
> Output: vsr_number
> (4 rows)
Oh, hmm ... the 100ns figure I mentioned was for a seqscan. IOS
could be a lot worse for a number of reasons, foremost being that
if the table isn't mostly all-visible then it'd involve a lot of
random heap access. It might be interesting to try forcing a
seqscan plan (see enable_indexscan).
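A minimal sketch of both checks (assuming the table name quoted in the plan
above, and that the standard planner switches behave the same on Aurora):

  -- How much of the table is marked all-visible?  Low coverage means an
  -- index-only scan must do many random heap fetches to check visibility.
  SELECT relpages, relallvisible,
         round(100.0 * relallvisible / nullif(relpages, 0), 1) AS pct_all_visible
  FROM pg_class
  WHERE oid = 'jim.sttyations'::regclass;

  -- Force a seqscan for comparison, scoped to a single transaction.
  BEGIN;
  SET LOCAL enable_indexonlyscan = off;
  SET LOCAL enable_indexscan = off;
  SET LOCAL enable_bitmapscan = off;
  EXPLAIN SELECT count(*) FROM jim.sttyations;  -- should now show a Seq Scan
  ROLLBACK;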
regards, tom lane