Re: performance problem on big tables

From: Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Daniel Blanch Bataller <daniel(dot)blanch(dot)bataller(at)gmail(dot)com>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: performance problem on big tables
Date: 2017-08-20 11:00:51
Message-ID: CA+t6e1n6nqwyh1Jb1WTU6kYr9kgkdZe+GABY9XgDozixoS08DA@mail.gmail.com
Lists: pgsql-performance

I realized something weird. When I perform PostgreSQL's COPY in order to
create a dump from a local table in my PostgreSQL database, a 32 GB table
takes 20 minutes. When I try to use COPY on a foreign table (backed by an
Oracle database), it takes more than 2 hours. During the copy operation
from the foreign table I don't see a lot of write activity; with iotop I
see that it writes about 3 MB/s. What else can I check?
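
To separate the remote-fetch cost from the local write cost, it may help to
time the two paths on their own. A minimal sketch, assuming psql with
\timing enabled and illustrative table names (local_big_table,
ora_big_table):

    \timing on

    -- Baseline: dump the local 32 GB table (reportedly ~20 minutes).
    COPY local_big_table TO '/tmp/local_big_table.copy';

    -- The same data pulled through the foreign table (reportedly > 2 hours).
    COPY (SELECT * FROM ora_big_table) TO '/tmp/ora_big_table.copy';

    -- Remote fetch alone, with no dump file being written locally:
    EXPLAIN (ANALYZE, VERBOSE) SELECT count(*) FROM ora_big_table;

If the count(*) alone already takes hours, the time is going into the
foreign data wrapper's round trips to Oracle rather than into local disk
writes, which would also explain the low 3 MB/s write rate seen in iotop.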

2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com>:

> This server is dedicated to being a PostgreSQL production database, so
> PostgreSQL is the only thing that runs on the server. The filesystem I'm
> using is xfs. I'll add two different disks - one for the WAL and one for
> the temp tablespace (a setup sketch is at the bottom of this message).
> Regarding the disks, what size should they be, considering that the
> database size is about 250 GB? Is 16 GB of RAM considered little? I
> installed iotop and I see that the PostgreSQL writer is writing most of
> the time, more than anything else.
>
> I mentioned that I perform a lot of INSERT INTO table SELECT * FROM table
> operations. Before that I remove the indexes and constraints and truncate
> the target table. Should I run VACUUM before or after the operation? (A
> sketch of this load pattern is at the bottom of this message.)
>
> 2017-08-17 19:37 GMT+03:00 Claudio Freire <klaussfreire(at)gmail(dot)com>:
>
>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky
>> <mariel(dot)cherkassky(at)gmail(dot)com> wrote:
>> > I checked with the storage team in the company and they saw that I
>> > have a lot of I/O on the server. How should I reduce the I/O that
>> > PostgreSQL uses?
>>
>> Do you have concurrent activity on that server?
>>
>> What filesystem are you using wherever the data is sitting?
>>
>> If you've got concurrent fsyncs happening, some filesystems handle
>> that poorly. When you've got WAL and data mixed in a single disk, or
>> worse, filesystem, it happens often that the filesystem won't handle
>> the write barriers for the WAL efficiently. I/O gets intermingled with
>> bulk operations, and even small fsyncs will have to flush writes from
>> bulk operations, which makes a mess of things.
>>
>> It is a very good idea, and in fact a recommended practice, to put WAL
>> on its own disk for that reason mainly.
>>
>> With that little RAM, you'll also probably cause a lot of I/O in temp
>> files, so I'd also recommend setting aside another disk for a temp
>> tablespace so that I/O doesn't block other transactions as well.
>>
>> This is all assuming you've got concurrent activity on the server. If
>> not, install iotop and try to see who's causing that much I/O.
>>
>
>
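
Regarding the quoted question about VACUUM around bulk loads: ANALYZE after
the load matters most, so the planner has fresh statistics; right after a
TRUNCATE and a clean INSERT there are no dead rows for VACUUM to clean up.
A minimal sketch of the load pattern, with illustrative table, column and
index names:

    BEGIN;
    -- Drop constraints and indexes first so the load is not slowed down
    -- by index maintenance (names are illustrative).
    ALTER TABLE target_table DROP CONSTRAINT target_table_pkey;
    DROP INDEX IF EXISTS target_table_col_idx;

    TRUNCATE target_table;
    INSERT INTO target_table SELECT * FROM source_table;

    -- Rebuild constraints and indexes once the data is in place.
    ALTER TABLE target_table ADD PRIMARY KEY (id);
    CREATE INDEX target_table_col_idx ON target_table (col);
    COMMIT;

    -- Refresh planner statistics for the reloaded table.
    ANALYZE target_table;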

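As for the separate-disk advice in the quoted message: moving the WAL is a
filesystem-level change (stop the server, move the pg_xlog directory - pg_wal
from version 10 on - to the new disk, and symlink it back into the data
directory), while the temp tablespace can be set up in SQL. A sketch,
assuming the new disk is mounted at /mnt/pg_temp and the directory is owned
by the postgres user (path and tablespace name are illustrative):

    CREATE TABLESPACE temp_space LOCATION '/mnt/pg_temp';
    ALTER SYSTEM SET temp_tablespaces = 'temp_space';
    SELECT pg_reload_conf();

    -- From now on, sorts, hashes and temporary tables that spill to disk
    -- go to /mnt/pg_temp instead of competing with the data disk for I/O.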