From: Jov <zhao6014(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Pg and compress
Date: 2011-09-27 00:53:41
Message-ID: CADyrUxMDxbiRh4rgz9xwbUCAiXx7tkAOyqKVr59sH8BTo7S1nw@mail.gmail.com
Lists: pgsql-general
Most of the fields are bigint, and one field is varchar.
There are no indexes.
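As a point of reference, here is a minimal sketch (the table, column, and file names are hypothetical, assuming a mostly-bigint table like the one described) of how one could measure the on-disk footprint after a COPY load and compare it against the source CSV:

    -- hypothetical four-column table matching the description above
    CREATE TABLE fact_rows (
        id_a  bigint,
        id_b  bigint,
        id_c  bigint,
        label varchar
    );

    -- load from psql; the file path is a placeholder
    \copy fact_rows FROM 'data.csv' WITH CSV

    -- heap plus TOAST size (there are no indexes to add here)
    SELECT pg_size_pretty(pg_total_relation_size('fact_rows'));

Comparing that number against the size of data.csv gives the blow-up ratio under discussion.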
On 2011-9-27 at 3:34 AM, "John R Pierce" <pierce(at)hogranch(dot)com> wrote:
>
> On 09/26/11 6:59 AM, Jov wrote:
>>
>> Hi all,
>> We are going to use pg as a data warehouse, but after some testing we
>> found that plain-text CSV data is about 3 times bigger once loaded into
>> pg. We use COPY to load the data. We tried some optimizations, which
>> reduced it to 2.5 times bigger. Other databases can compress to about
>> 1/3 of the plain-text size on average, and bigger data means heavier IO.
>> So my question is: how can data be compressed in pg? Can a filesystem
>> with a compression feature, such as zfs or btrfs, work well with pg?
>>
>
> Your source data is CSV. What data types are the fields in the table(s)?
> Do you have a lot of indexes on the table(s)?
>
>
>
> --
> john r pierce N 37, W 122
> santa cruz ca mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
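On the zfs/btrfs question quoted above: the inflation is largely structural. Each heap tuple carries a header of roughly 23 bytes plus a 4-byte line pointer on its page, and bigint values are stored as fixed 8-byte binary, so a row whose CSV text is only a few dozen characters can easily take 2-3 times that on disk. TOAST compression only applies to large variable-length values, so small bigint and varchar fields stay uncompressed inside PostgreSQL itself. A rough way to inspect the per-row payload (reusing the hypothetical table from the sketch above):

    -- a whole-row reference reports the size of the row as a
    -- composite value, a reasonable proxy for the tuple data
    -- (the ~23-byte header and line pointer come on top of this)
    SELECT pg_column_size(t.*) AS row_bytes
    FROM fact_rows AS t
    LIMIT 1;

Since PostgreSQL will not compress this kind of data on its own, filesystem-level compression is the usual route, and zfs with compression enabled is generally reported to work fine under PostgreSQL.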