| From: | John R Pierce <pierce(at)hogranch(dot)com> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Pg and compress |
| Date: | 2011-09-26 19:33:35 |
| Message-ID: | 4E80D38F.3010500@hogranch.com |
| Lists: | pgsql-general |
On 09/26/11 6:59 AM, Jov wrote:
>
> Hi all,
> We are going to use pg as a data warehouse, but after some testing we
> found that plain text in CSV format is 3 times bigger when loaded into
> pg. We use COPY to load the data. We tried some optimizations, which
> reduced it to 2.5 times bigger. Other databases can compress to about
> 1/3 of the plain text on average. Bigger data means heavier I/O.
> So my question is: how can we make data compressed in pg? Can a
> filesystem with a compression feature, such as ZFS or btrfs, work
> well with pg?
>
Your source data is CSV; what data types are the fields in the
table(s)? Do you have a lot of indexes on those table(s)?
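For what it's worth, here's a rough way to see how much of that 3x is
index and TOAST overhead rather than the heap itself (just a sketch;
'yourtable' is a placeholder for your actual table name):

    -- break down on-disk size: heap only, indexes only, and grand total
    SELECT pg_size_pretty(pg_relation_size('yourtable'))       AS heap,
           pg_size_pretty(pg_indexes_size('yourtable'))        AS indexes,
           pg_size_pretty(pg_total_relation_size('yourtable')) AS total;
           -- total includes heap + indexes + TOAST

If indexes account for most of the difference, compressing the table
data alone won't buy you much.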
--
john r pierce N 37, W 122
santa cruz ca mid-left coast