From: Shital A <brightuser2019(at)gmail(dot)com>
To: Kenneth Marshall <ktm(at)rice(dot)edu>
Cc: Ron <ronljohnsonjr(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Compression In Postgresql 9.6
Date: 2019-08-06 08:14:59
Message-ID: CAMp7vw_CFcxE9i223M_6TkFpoy4GJMCacoEv1t3TG-1wcotrVw@mail.gmail.com
Lists: pgsql-general
On Mon, 5 Aug 2019, 18:57 Kenneth Marshall, <ktm(at)rice(dot)edu> wrote:
> > >Hi,
> > >
> > >On RHEL/Centos you can use VDO filesystem compression to make an archive
> > >tablespace to use for older data. That will compress everything.
> >
> > Doesn't this imply that either his table is partitioned or he
> > regularly moves records from the main table to the archive table?
> >
>
> Hi,
>
> Yes, he will need to do something to meet his goal of both a 100k TPS
> and have older archives online. He could also use something like
> postgres_fdw to have the archives on a separate server completely.
>
> Regards,
> Ken
Thanks for the suggestions, guys!
After checking, I am thinking about the following approach:
1. Create a FS on a separate drive on the server with VDO
2. Create a tablespace on the FS created above for storing the historical /
less update-intensive data
3. Other tablespaces remain on non compressed FS
4. Use table partitioning and create the tables in tablespace created in
step 2.
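As a rough sketch of steps 2 and 4, assuming the VDO-backed filesystem is already mounted at a hypothetical path /mnt/vdo_archive: note that on 9.6 there is no declarative partitioning (that arrived in PostgreSQL 10), so partitioning is done with table inheritance plus CHECK constraints. Table and column names below are illustrative only.

```sql
-- Tablespace on the compressed (VDO-backed) filesystem; the directory
-- must exist and be owned by the postgres OS user.
CREATE TABLESPACE archive_ts LOCATION '/mnt/vdo_archive/pgdata';

-- Parent table stays on the default (uncompressed) tablespace.
CREATE TABLE measurements (
    id          bigserial,
    recorded_at timestamptz NOT NULL,
    payload     text
);

-- Historical partition (9.6-style inheritance) placed on the
-- compressed tablespace; the CHECK constraint enables
-- constraint-exclusion pruning at query time.
CREATE TABLE measurements_2018 (
    CHECK (recorded_at >= '2018-01-01' AND recorded_at < '2019-01-01')
) INHERITS (measurements) TABLESPACE archive_ts;
```

New rows would still need to be routed to the right child table (e.g. with an INSERT trigger on the parent), and each tablespace location has to exist on every replica, which is part of the replication/backup complexity you mention below.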
- Will this complicate the DB design in terms of replication, backups, and
restores?
- Can this give optimum performance?
Let me know your views!
Thank you!