Re: Read performance on Large Table

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
Cc: Kido Kouassi <jjkido(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Read performance on Large Table
Date: 2015-05-21 15:21:59
Message-ID: CAOR=d=08=D-nui40rhsk7dM36L8L99ocvHXJ75Agyvd-_hazpQ@mail.gmail.com
Lists: pgsql-admin

On Thu, May 21, 2015 at 9:18 AM, Scott Ribe <scott_ribe(at)elevated-dev(dot)com> wrote:
> On May 21, 2015, at 9:05 AM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
>>
>> I've done a lot of partitioning of big data sets in postgresql and if
>> there's some common field, like date, that makes sense to partition
>> on, it can be a huge win.
>
> Indeed. I recently did it on exactly this kind of thing, a log of activity. And the common queries weren’t slow at all.
>
> But if I wanted to upgrade via dump/restore with minimal downtime, rather than set up Slony or try my luck with pg_upgrade, I could dump the historical partitions, drop those tables, then dump/restore, then restore the historical partitions at my convenience. (In this particular db, history is unusually huge compared to the live data.)

I use an interesting method to set up partitioning. I set up my
triggers, then insert the data in chunks from the master table back
into itself:
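
For reference, a minimal sketch of what the trigger setup might look
like (the created_at column, the monthly boundaries, and all of the
names below are illustrative assumptions, not taken from this thread):

create table master_table_2015_05 (
    check (created_at >= date '2015-05-01' and created_at < date '2015-06-01')
) inherits (master_table);

create or replace function master_table_insert_trigger()
returns trigger as $$
begin
    if new.created_at >= date '2015-05-01' and new.created_at < date '2015-06-01' then
        insert into master_table_2015_05 values (new.*);
    else
        raise exception 'created_at % has no matching partition', new.created_at;
    end if;
    return null;  -- returning null keeps the row out of the parent table
end;
$$ language plpgsql;

create trigger master_table_partition_trigger
    before insert on master_table
    for each row execute procedure master_table_insert_trigger();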

insert into master_table select * from only master_table limit 10000;

and run that over and over. To the application the data all stays in
the same "table", but it's slowly moving into the partitions without
interruption.
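
One caveat, added here as an aside rather than something spelled out
above: since the trigger returns null, each insert lands in a
partition, but the original rows stay in the parent until they are
deleted from only master_table. A writable CTE can move a batch in a
single statement (the ctid-based limit is just one assumed way to pick
a chunk):

with moved as (
    delete from only master_table
    where ctid in (select ctid from only master_table limit 10000)
    returning *
)
insert into master_table select * from moved;

Repeat until "select count(*) from only master_table" reaches zero.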

Note: ALWAYS use triggers for partitioning. Rules are way too slow.
