From: Rüdiger Sörensen <r(dot)soerensen(at)mpic(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: optimizing advice
Date: 2009-12-01 09:34:29
Message-ID: 4B14E325.5020104@mpic.de
Lists: pgsql-general
dear all,
I am building a database that will be really huge and grow rapidly. It
holds data from satellite observations. Data is imported via a Java
application; the import is organized via files that are parsed by the
application, and each file holds the data of one orbit of the satellite.
One of the tables will grow by about 40,000 rows per orbit, and there are
roughly 13 orbits a day. The import of one day (13 orbits) into the
database currently takes 10 minutes. I will have to import data going
back to the year 2000 or even earlier.
I think there will be a performance issue when the table in question
grows, so I partitioned it using a timestamp column, with one
child table per quarter. Unfortunately, the import of 13 orbits now
takes 1 hour instead of the previous 10 minutes. I can live with that,
provided the import time does not grow significantly as the table grows
further.
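For context, the partitioning is the usual inheritance setup with an
insert trigger on the parent routing rows to the quarterly children.
Below is a minimal sketch only: "observations" and "obs_time" are
placeholder names, one example quarter is shown, and the real schema
has more columns.

-- Parent table; the children hold the actual data.
CREATE TABLE observations (
    obs_time  timestamp NOT NULL,
    value     double precision
);

-- One child per quarter, constrained to its date range.
CREATE TABLE observations_2009q4 (
    CHECK (obs_time >= '2009-10-01' AND obs_time < '2010-01-01')
) INHERITS (observations);

-- Rows inserted into the parent are redirected to the matching child.
CREATE OR REPLACE FUNCTION observations_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.obs_time >= '2009-10-01' AND NEW.obs_time < '2010-01-01' THEN
        INSERT INTO observations_2009q4 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for timestamp %', NEW.obs_time;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER observations_insert_trg
    BEFORE INSERT ON observations
    FOR EACH ROW EXECUTE PROCEDURE observations_insert();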
anybody with comments/advice?
tia,
Ruediger.