From: | nair rajiv <nair331(at)gmail(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | splitting data into multiple tables |
Date: | 2010-01-25 17:23:41 |
Message-ID: | d67ff5e61001250923n36f6a31cl6f21616f1d7379dd@mail.gmail.com |
Lists: | pgsql-performance |
Hello,
I am working on a project that will extract structured content from Wikipedia and load it into our database. Before loading the data, I wrote a script to estimate the number of rows each table would hold, and I found that one table will have approximately 5 crore (50 million) entries after data harvesting. Is it advisable to keep so much data in one table?
I have read about 'partitioning' a table. Another idea I have is to break the table into separate tables once the number of rows reaches a certain limit, say 10 lakh (1 million); for example, splitting a table 'datatable' into 'datatable_a', 'datatable_b', and so on, each holding 10 lakh entries. I would like advice on whether I should go for partitioning or the approach I have described.
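For reference, this is roughly what I understand built-in partitioning to look like (a minimal sketch using inheritance-based partitioning, which I believe is the mechanism available in PostgreSQL 8.x; the column names and range boundaries here are made up for illustration):

```sql
-- Parent table; it holds no rows itself.
CREATE TABLE datatable (
    id       bigint NOT NULL,
    payload  text
);

-- Child partitions inherit the parent's columns.
-- CHECK constraints let the planner skip partitions (constraint exclusion).
CREATE TABLE datatable_a (
    CHECK (id >= 0        AND id < 10000000)
) INHERITS (datatable);

CREATE TABLE datatable_b (
    CHECK (id >= 10000000 AND id < 20000000)
) INHERITS (datatable);

-- Trigger to route inserts on the parent into the right child.
CREATE OR REPLACE FUNCTION datatable_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.id < 10000000 THEN
        INSERT INTO datatable_a VALUES (NEW.*);
    ELSE
        INSERT INTO datatable_b VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- the row has already been stored in a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_datatable
    BEFORE INSERT ON datatable
    FOR EACH ROW EXECUTE PROCEDURE datatable_insert_trigger();

-- Needed so that SELECTs on the parent can prune partitions
-- using the CHECK constraints.
SET constraint_exclusion = on;
```

If I understand correctly, with constraint_exclusion enabled a query such as SELECT * FROM datatable WHERE id = 42 should scan only datatable_a, which seems to be the main advantage over splitting the data into unrelated tables by hand.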
We have an HP server with 32 GB of RAM and 16 processors. The storage has 24 TB of disk space (1 TB per disk), configured as RAID-5. It would be great if we could know which parameters in the postgres configuration file can be changed so that the database makes maximum use of this server, for example parameters that would increase the speed of inserts and selects.
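To give an idea of what I mean, these are the kinds of settings I have been looking at (rough guesses for a 32 GB machine, not yet tested; please correct the values if they are off):

```
# postgresql.conf -- guesses for 32 GB RAM, bulk loads plus read queries
shared_buffers = 8GB               # roughly 25% of RAM
effective_cache_size = 24GB        # what the OS is likely to cache
work_mem = 64MB                    # per-sort / per-hash memory
maintenance_work_mem = 1GB         # speeds up CREATE INDEX and VACUUM
checkpoint_segments = 32           # fewer checkpoints during bulk loads
checkpoint_completion_target = 0.9
wal_buffers = 16MB
```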
Thank you in advance
Rajiv Nair