From: Fernan Aguero <fernan(at)iib(dot)unsam(dot)edu(dot)ar>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Large files on linux
Date: 2000-12-12 17:59:06
Message-ID: p04320401b65c1476dad6@[200.10.117.179]
Lists: pgsql-general
>Bruce Momjian wrote:
>
>This has nothing to do with PostgreSQL. We don't need large files to go
>over 4gig tables. We split them on our own.
>
>Peter Eisentraut wrote:
>
>But, unless you do chunking, it _does_ affect dumpfile size. Someone
>posted awhile back a script that did dumpchunking. Should be in the
>archives.
>--
>Lamar Owen
>WGCR Internet Radio
>1 Peter 4:11
Sorry for not being clear in my first post. The problems I am having
occur when dumping data, as Peter rightly guessed.
I remembered reading something about this on the list, which is why I
asked my questions here.
Although I considered writing a script to split the dumped data into
smaller chunks, I preferred (for other reasons) to go for a cleaner
solution, namely a system that can cope with large files.
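In case the chunking route is still useful to someone, here is a minimal
sketch of that approach (my own, not the script mentioned in the thread,
which I haven't seen): it reads a pg_dump stream on stdin and writes it
out as numbered pieces, each kept under the old 2 GB per-file limit. The
output prefix and the 1 GiB piece size are arbitrary choices of mine.

    #!/usr/bin/env python3
    # Split a dump read from stdin into numbered pieces under 2 GB each.
    import sys

    CHUNK_SIZE = 1024 ** 3      # 1 GiB per piece, comfortably under 2 GB
    BUF_SIZE = 1 << 20          # copy in 1 MiB blocks

    def split_stream(stream, prefix="dump.part"):
        part, written, out = 0, 0, None
        while True:
            buf = stream.read(BUF_SIZE)
            if not buf:
                break
            pos = 0
            while pos < len(buf):
                # start a new piece when none is open or the current one is full
                if out is None or written >= CHUNK_SIZE:
                    if out is not None:
                        out.close()
                    out = open("%s.%03d" % (prefix, part), "wb")
                    part += 1
                    written = 0
                take = min(len(buf) - pos, CHUNK_SIZE - written)
                out.write(buf[pos:pos + take])
                written += take
                pos += take
        if out is not None:
            out.close()

    if __name__ == "__main__":
        # usage:  pg_dump mydb | python3 split_dump.py
        split_stream(sys.stdin.buffer)

The pieces can be put back together with plain cat (cat dump.part.* >
dump.sql) before restoring; the zero-padded suffix keeps the glob in the
right order.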
Thanks to all for your replies.
Here's a brief summary of things:
- The Linux 2.4 kernel supports large files.
- As mentioned, SuSE and RedHat (both at version 7.0) ship patched
  2.2.x kernels that support large files. SuSE uses ReiserFS, while
  RedHat uses ext2fs.
- Since I'm using RH 6.2, the best thing to do is to upgrade to 7.0 and
  see if that helps. BTW, RH 7.0 is ready for 2.4 kernels (Thanks Trond).
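Not something from the thread, but a quick way to verify that a given
kernel/libc/filesystem combination really handles files past the 2 GB
mark is to seek beyond that offset and write a single byte; the file
stays sparse, so it costs almost no disk. A minimal sketch (the /tmp
path is just an example location):

    # Check large-file support by writing one byte past the 2 GB boundary.
    import os

    TWO_GB = 2 * 1024 ** 3
    path = "/tmp/largefile-test"

    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.lseek(fd, TWO_GB, os.SEEK_SET)   # position just past 2 GB
        os.write(fd, b"\0")                 # file size becomes 2 GB + 1 byte
        print("large files look OK (wrote a byte at offset %d)" % TWO_GB)
    except OSError as err:
        print("no large-file support here: %s" % err)
    finally:
        os.close(fd)
        os.unlink(path)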
--
Lic. Fernan Aguero                              Tel: (54-11) 4752-0021
Instituto de Investigaciones Biotecnologicas    Fax: (54-11) 4752-9639
Universidad Nacional de General San Martin