Best partition type for billions of addresses

From: Arya F <arya6000(at)gmail(dot)com>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Best partition type for billions of addresses
Date: 2020-05-02 13:20:06
Message-ID: CAFoK1ayT3Ma8e_d+AXEBVO6wGffH0bs_LDeY2+9X5NRcGYp_SA@mail.gmail.com
Thread:
Lists: pgsql-performance

I need to store about 600 million rows of property addresses across
multiple counties. I need to set up partitioning on the table, since
updates and inserts will be performed frequently and I want the
queries to have good performance.

From what I understand, hash partitioning would not be the right
approach in this case, since for each query PostgreSQL would have to
check the indexes of all partitions?

Would list partitioning be suitable? I want PostgreSQL to know which
partition a row is in, so it can go directly to the relevant index
without having to check the other partitions. Should I be including
the partition key in the WHERE clause?
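
For comparison, a rough sketch of list partitioning by county (again,
names are placeholders), where including the partition key in the
WHERE clause should let the planner prune to a single partition:

    CREATE TABLE addresses_by_county (
        id        bigint NOT NULL,
        county_id int    NOT NULL,
        street    text
    ) PARTITION BY LIST (county_id);

    CREATE TABLE addresses_county_1 PARTITION OF addresses_by_county
        FOR VALUES IN (1);
    CREATE TABLE addresses_county_2 PARTITION OF addresses_by_county
        FOR VALUES IN (2);

    -- An index created on the parent is created on each partition too.
    CREATE INDEX ON addresses_by_county (street);

    -- With the partition key in the WHERE clause the planner can prune
    -- to the single matching partition and use only its index:
    EXPLAIN SELECT * FROM addresses_by_county
     WHERE county_id = 1 AND street = 'Main St';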

I'd like to hear some recommendations on the best way to approach
this. I'm using PostgreSQL 12.
