From: wstrzalka <wstrzalka(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Feature proposal
Date: 2010-08-26 07:18:36
Message-ID: a9653f24-130a-4418-81f7-15754a140550@l6g2000yqb.googlegroups.com
Lists: pgsql-general
On 26 Aug, 08:06, wstrzalka <wstrza(dot)(dot)(dot)(at)gmail(dot)com> wrote:
> On 26 Aug, 01:28, pie(dot)(dot)(dot)(at)hogranch(dot)com (John R Pierce) wrote:
>
> > On 08/25/10 11:47 AM, Wojciech Strzałka wrote:
>
> > > The data set is 9mln rows - about 250 columns
>
> > Having 250 columns in a single table sets off the 'normalization' alarm
> > in my head.
>
>
> Yep - but it is what it is.
> I need to migrate to PG first - then start thinking about schema changes.
So after turning off fsync & synchronous_commit (which I can afford, as
I'm populating the database from scratch),
I'm stuck at 43 minutes for the mentioned table. There is no PK, no
constraints, no indexes, ... - nothing except the data.
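For the record, this is roughly what I changed - just a sketch of the two
relevant postgresql.conf lines, everything else is left at its previous value:

    # bulk-load only settings
    fsync = off                 # acceptable here - the whole DB is rebuilt from scratch anyway
    synchronous_commit = off    # could also be turned off per session instead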
The behaviour has changed - I'm now utilizing one core at 100%, iostat shows
write peaks of about 70MB/s, and the table size shown by \d+ keeps growing
all the time, just as it grew before.
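(In case it matters, I'm watching the growth roughly like this - the table
name below is just a placeholder:)

    -- total size including TOAST (there are no indexes here),
    -- close to the Size column that \d+ reports
    SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));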
Is there anything I can look at?
Anyway, the load into PG is much faster than the dump from the old database,
and the current load time is acceptable for me.