| From: | Andrew Dunstan <andrew(at)dunslane(dot)net> |
|---|---|
| To: | Shalu Gupta <sgupta5(at)unity(dot)ncsu(dot)edu> |
| Cc: | pgsql-general(at)postgresql(dot)org, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: [HACKERS] TPC H data |
| Date: | 2004-04-26 17:00:35 |
| Message-ID: | 408D4033.9060504@dunslane.net |
| Lists: | pgsql-general pgsql-hackers |
Shalu Gupta wrote:
>Hello,
>
>We are trying to import the TPC-H data into postgresql using the COPY
>command and for the larger files we get an error due to insufficient
>memory space.
>
>We are using a linux system with Postgresql-7.3.4
>
>Is it that Postgresql cannot handle such large files or is there some
>other possible reason.
>
>Thanks
>Shalu Gupta
>NC State University.
>
Shalu,
I loaded the largest TPC-H table (lineitem, roughly 6 million rows) the
other day into a completely untuned 7.5devel PostgreSQL instance running
on RH 9, and it didn't break a sweat. I delayed creating the indexes
until after the load. The data load took roughly 10 minutes; index
creation took a further 35 minutes (there are 13 of them).
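
The pattern Andrew describes, bulk-loading with COPY first and building indexes only afterwards, can be sketched roughly as below. The file path, delimiter, and index names are illustrative assumptions (dbgen emits '|'-delimited .tbl files; the actual index set was not given in the thread):

```sql
-- Bulk-load first, with no indexes yet on the table.
-- Path and delimiter are assumptions; dbgen produces '|'-delimited .tbl files.
COPY lineitem FROM '/tmp/lineitem.tbl' WITH DELIMITER '|';

-- Create indexes only after the load completes; maintaining them
-- row-by-row during COPY is far slower than building them in one pass.
CREATE INDEX i_l_orderkey ON lineitem (l_orderkey);  -- hypothetical name
CREATE INDEX i_l_partkey  ON lineitem (l_partkey);   -- hypothetical name
-- ... remaining indexes ...
```

Splitting the load and the index builds this way also keeps COPY's memory footprint down, which is relevant to the original out-of-memory report.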
HTH. (I'm just down the road from NCSU, would be happy to help out)
cheers
andrew