From: Pierre-Frédéric Caillaud <lists(at)boutiquenumerique(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: hundreds of millions row dBs
Date: 2005-01-05 00:20:50
Message-ID: opsj3sk0f7cq72hf@musicbox
Lists: pgsql-general
To speed up the load (a rough sketch of the settings and the gunzip pipe follows this list):
- take fewer checkpoints (raise the checkpoint interval and related parameters in postgresql.conf)
- disable fsync (not sure if it really helps)
- put the source data, the database tables, and the WAL on three physically separate disks
- put temporary files on a different disk too, or on a RAM disk
- gunzip while restoring, to read less data from disk
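
A minimal sketch of what that might look like. Values are illustrative, not
tuned for any particular box; dump.sql.gz and mydb are placeholder names;
parameter names are from the 8.0-era postgresql.conf:

    # postgresql.conf -- fewer, larger checkpoints during the bulk load
    checkpoint_segments = 30    # default 3; more WAL segments between checkpoints
    checkpoint_timeout = 900    # seconds between forced checkpoints; default 300
    fsync = off                 # unsafe; re-enable as soon as the load is done

    # restore straight from the compressed dump, reading less from disk
    gunzip -c dump.sql.gz | psql mydb
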
> "Dann Corbit" <DCorbit(at)connx(dot)com> writes:
>> Here is an instance where a really big ram disk might be handy.
>> You could create a database on a big ram disk and load it, then build
>> the indexes.
>> Then shut down the database and move it to hard disk.
>
> Actually, if you have a RAM disk, just change the
> $PGDATA/base/nnn/pgsql_tmp
> subdirectory into a symlink to some temp directory on the RAM disk.
> Should get you pretty much all the win with no need to move stuff around
> afterwards.
>
> You have to be sure the RAM disk is bigger than your biggest index
> though.
>
> regards, tom lane
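
For reference, a rough sketch of that symlink trick. The paths are
assumptions: /mnt/ramdisk stands for whatever RAM disk you have mounted
(it must be bigger than your biggest index, as noted above), and nnn is
the database's OID under $PGDATA/base. Stop the server before touching
the directory:

    pg_ctl -D "$PGDATA" stop
    mkdir -p /mnt/ramdisk/pgsql_tmp
    rm -rf "$PGDATA/base/nnn/pgsql_tmp"    # holds temp files only, safe to drop
    ln -s /mnt/ramdisk/pgsql_tmp "$PGDATA/base/nnn/pgsql_tmp"
    pg_ctl -D "$PGDATA" start
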