From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Mike Sofen <msofen(at)runbox(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres bulk insert/ETL performance on high speed servers - test results
Date: 2016-09-02 20:26:49
Message-ID: CAGTBQpa+go_gT2=H7JJROTY-qpo8x12JLNP-L46kaRMuk14zZg@mail.gmail.com
Lists: pgsql-performance
On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen <msofen(at)runbox(dot)com> wrote:
> PASS 2:
> Process: Transform/Load (all work local to the server - read,
> transform, write as a single batch)
> Num Source Rows: 10,554,800 (one batch from just a single source table
> going to a single target table)
> Avg Rowcount Compression: 31.5 (jsonb row compression resulting in
> 31.5x fewer rows)
> AWS Time in Secs: 2,493 (41.5 minutes)
> Cisco Time in Secs: 661 (10 minutes)
> Difference: 3.8x
> Comment: AWS: 4.2k rows/sec; Cisco: 16k rows/sec
>
> It's obvious the size of the batch exceeded the AWS server's memory, resulting
> in profoundly slower processing. This was a true, apples-to-apples
> comparison between Pass 1 and Pass 2: average row lengths were within 7% of
> each other (1121 vs 1203) using identical table structures and processing
> code, the only difference was the target server.
>
> I'm happy to answer questions about these results.
Are you sure it's a memory thing and not an EBS bandwidth thing?
EBS has significantly less bandwidth than direct-attached flash.
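As a quick sanity check of the figures under discussion, the throughput numbers can be rederived from the quoted message. The block below uses only values reported above (row count, elapsed times, average row length); treating the workload as a pure sequential read of the source rows is a simplification for illustration, since the transform also writes jsonb output.

```python
# Rederive the reported throughput from the numbers in the quoted message.
# Nothing here is measured; all inputs come from Mike's benchmark report.

rows = 10_554_800          # source rows in the batch
aws_secs = 2_493           # AWS elapsed time
cisco_secs = 661           # Cisco elapsed time
avg_row_bytes = 1_203      # upper of the two reported averages (1121 vs 1203)

aws_rows_per_sec = rows / aws_secs      # ~4,234 -> matches the "4.2k" figure
cisco_rows_per_sec = rows / cisco_secs  # ~15,968 -> matches the "16k" figure

# Implied sequential-read bandwidth on the source side only (simplification):
aws_mb_per_sec = rows * avg_row_bytes / aws_secs / 1e6
cisco_mb_per_sec = rows * avg_row_bytes / cisco_secs / 1e6

print(f"AWS:   {aws_rows_per_sec:,.0f} rows/s, ~{aws_mb_per_sec:.1f} MB/s read")
print(f"Cisco: {cisco_rows_per_sec:,.0f} rows/s, ~{cisco_mb_per_sec:.1f} MB/s read")
```

Worth noting when weighing memory against EBS as the bottleneck: EBS volumes are capped on both throughput and IOPS, so random I/O during the transform (index maintenance, spill files) can be the binding constraint even when the implied sequential bandwidth looks modest.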