From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: rabt(at)dim(dot)uchile(dot)cl
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: BIG files
Date: 2005-06-19 12:48:08
Message-ID: 20050619124808.GC32482@wolff.to
Lists: pgsql-novice
On Sat, Jun 18, 2005 at 13:45:42 -0400,
rabt(at)dim(dot)uchile(dot)cl wrote:
> Hi all Postgresql users,
>
> I've been using MySQL for years and now I have decided to switch to Postgresql,
> because I needed more robust "enterprise" features like views and triggers. I
> work with VERY large datasets: 60 monthly tables with 700,000 rows and 99
> columns each, with mostly large numeric values (15 digits) (NUMERIC(15,0)
> datatypes, not all filled). So far, I've migrated 2 of my tables to a dedicated
>
> The main problem is disk space. The database files stored in postgres take 4 or
> 5 times more space than in Mysql. Just to be sure, after each bulk load, I
> performed a VACUUM FULL to reclaim any possible lost space, but nothing gets
> reclaimed. My plain text dump files with INSERTs are just 150 MB in size, while
> the files in the Postgres directory are more than 1 GB each! I've tested other
> free DBMSs like Firebird and Ingres, but Postgresql consumes far more disk
> space than the others.

From discussions I have seen here, MySQL implements NUMERIC using a floating
point type. Postgres stores it using something like a base-10000 digit for
each 4 bytes of storage, plus some overhead for storing the precision and
scale. You might be better off using bigint to store your data. That takes a
fixed 8 bytes per datum and is probably the same size as what MySQL was using.
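
A minimal sketch of that change, using a made-up table and column name rather
than the poster's actual schema (ALTER ... TYPE requires PostgreSQL 8.0 or
later):

    -- current definition: NUMERIC(15,0) is variable length and carries
    -- per-value overhead for its precision and scale
    CREATE TABLE monthly_data (
        amount NUMERIC(15,0)
    );

    -- bigint holds the full 15-digit range in a fixed 8 bytes per value;
    -- with scale 0 the conversion is exact
    ALTER TABLE monthly_data
        ALTER COLUMN amount TYPE bigint USING amount::bigint;

Note that the ALTER rewrites the whole table, so it will take a while on
700,000-row tables; alternatively, recreate the tables with bigint columns
and reload the dumps.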