From: Raj kumar <rajkumar820999(at)gmail(dot)com>
To: Florents Tselai <florents(dot)tselai(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>, Raj kumar <rajkumar820999(at)gmail(dot)com>
Subject: Re: Improve "select count(*)" query - takes more than 30 mins for some large tables
Date: 2022-07-11 09:15:27
Message-ID: CACxU--VB9_7ou_C0AJ7f2acKN7XvrHsabnCRx4fdo=i5ukVWHA@mail.gmail.com
Lists: pgsql-admin
Thanks Florents,

I tried psql -c "select count(*)". It brought the time down from 30
minutes to 2 minutes. Thanks a lot.

Thanks Holger,

I'm going to try this query now.

Thanks,
Raj Kumar
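For the migration-validation use case described above, the per-table counts on source and target can be compared in one loop. A minimal sketch, assuming placeholder database names (src_db, dst_db) and a placeholder table list -- adjust both for your environment:

```shell
#!/bin/sh
# Compare row counts between source and target databases during a migration.
# src_db, dst_db, and the table names below are placeholders.
# -A = unaligned output, -t = tuples only, -c = run a single command.
if command -v psql >/dev/null 2>&1; then
    for tbl in orders customers line_items; do
        src=$(psql -Atc "select count(*) from ${tbl}" src_db)
        dst=$(psql -Atc "select count(*) from ${tbl}" dst_db)
        if [ "$src" = "$dst" ]; then
            echo "OK   ${tbl}: ${src} rows"
        else
            echo "DIFF ${tbl}: source=${src} target=${dst}"
        fi
    done
fi
```

The `-At` flags matter: without them psql prints column headers, row-count footers, and padding, so the two outputs may differ even when the counts match.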
On Mon, Jul 11, 2022 at 12:53 PM Florents Tselai <florents(dot)tselai(at)gmail(dot)com>
wrote:
>
>
> > On 11 Jul 2022, at 10:16 AM, Raj kumar <rajkumar820999(at)gmail(dot)com> wrote:
> >
> > Hi,
> >
> > How can I improve "select count(*)" for larger tables? I'm doing a db
> migration and need to validate the data count.
> > "select count(*) " queries are taking more than 30 minutes for some
> tables which is more than the downtime we have.
> > Will work_mem increase help? or how can i speed up this row count?
>
> Personally, whenever I’ve had slow count(*) or count (distinct id),
> eventually
> I’ve resorted to Unix tools.
>
> psql -Atc "select id from my_table" | sort -u | wc -l
>
> The convenience/performance tradeoff depends heavily on your schema.
> After all unix streams don’t know much about your integrity requirements.
>
> >
> > Thanks,
> > Raj
>
>
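The pipeline Florents describes can be written out in full. A sketch, assuming placeholder names my_db and my_table; the final `sort -u | wc -l` stage is shown separately on sample data since it is the part that does the distinct counting:

```shell
# Stream bare ids out of psql, then count distinct values with Unix tools.
# my_db and my_table are placeholder names; -A = unaligned, -t = tuples only.
if command -v psql >/dev/null 2>&1; then
    psql -Atc "select id from my_table" my_db | sort -u | wc -l
fi

# The sort/count stage alone, on sample input with one duplicate:
printf '1\n2\n2\n3\n' | sort -u | wc -l   # -> 3
```

Note the trade-off mentioned above: `sort -u` compares ids as text, so it knows nothing about types, NULLs, or collation rules on the database side.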
From | Date | Subject
Next Message | Thomas Kellerer | 2022-07-11 12:11:24 | Re: Storing large large JSON objects in JSONB
Previous Message | Holger Jakobs | 2022-07-11 08:33:41 | Re: Improve "select count(*)" query - takes more than 30 mins for some large tables