From: | "Zhang, Anna" <azhang(at)verisign(dot)com> |
---|---|
To: | "'Christopher Smith'" <christopherl_smith(at)yahoo(dot)com>, pgsql-admin(at)postgresql(dot)org |
Subject: | Re: large table support 32,000,000 rows |
Date: | 2002-03-25 17:39:58 |
Message-ID: | 5511D658682A7740BA295CCF1E1233A635A881@vsvapostal2.bkup3 |
Lists: | pgsql-admin |
I have multiple tables with over 10,000,000 rows; the biggest one has 70,000,000 rows. Each table has several indexes, and almost all columns are varchar2. In my experience, with many indexes on a large table, data insertion is painful. In my case, I have about 30,000 rows to insert into each table every day, and it takes hours per table. If I drop the indexes, insertion speeds up, but recreating those indexes takes 7 hours. Querying the data is usually not the problem, but you should consider write performance if, like me, you have to insert and update data frequently.
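For illustration, a minimal sketch of that drop/reload/recreate workflow (the table, column, and index names are hypothetical, and the COPY ... FORMAT csv syntax assumes a reasonably recent PostgreSQL):

```sql
-- Drop the index so the bulk load skips per-row index maintenance.
DROP INDEX idx_events_name;

-- Load the new rows in one pass; COPY is far faster than row-by-row INSERTs.
COPY events (id, name) FROM '/tmp/new_rows.csv' WITH (FORMAT csv);

-- Rebuild the index once, after all rows are in place.
CREATE INDEX idx_events_name ON events (name);

-- Refresh planner statistics so queries use the new index sensibly.
ANALYZE events;
```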
Hope it helps!
Anna Zhang
-----Original Message-----
From: Christopher Smith [mailto:christopherl_smith(at)yahoo(dot)com]
Sent: Wednesday, March 20, 2002 5:26 PM
To: pgsql-admin(at)postgresql(dot)org
Subject: [ADMIN] large table support 32,000,000 rows
I have a set of data that will compose a table with 32 million rows. I currently run PostgreSQL with tables as large as 750,000 rows. Does anyone have experience with tables this large? In addition, I have been reading about moving PostgreSQL tables to another hard drive; can anyone advise me on that?
Thanks
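On the second question, a minimal sketch using tablespaces, the mechanism PostgreSQL (8.0 and later) provides for placing tables on another drive; the tablespace name, path, and table names below are illustrative:

```sql
-- Create a tablespace pointing at a directory on the second drive
-- (the directory must already exist and be owned by the postgres OS user).
CREATE TABLESPACE seconddisk LOCATION '/mnt/disk2/pgdata';

-- Move an existing table onto it; its indexes can be moved the same way.
ALTER TABLE big_table SET TABLESPACE seconddisk;
ALTER INDEX big_table_pkey SET TABLESPACE seconddisk;
```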