Re: Processing very large TEXT columns (300MB+) using C/libpq

From: Aldo Sarmiento <aldo(at)bigpurpledot(dot)com>
To: Cory Nemelka <cnemelka(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Processing very large TEXT columns (300MB+) using C/libpq
Date: 2017-10-19 23:20:11
Message-ID: CAHX=r6wgC=WvEtgRYbhvNoK80-fqS9QBaJBwqyyU_xv9NpAfuw@mail.gmail.com
Lists: pgsql-admin

I believe large columns get stored out-of-line in a TOAST table. The maximum
page size is 8 kB, so a value that size is split across a great many TOAST
pages that have to be fetched and reassembled for every row:
https://www.postgresql.org/docs/9.5/static/storage-toast.html

*Aldo Sarmiento*
President & CTO

8687 Research Dr, Irvine, CA 92618
*O*: (949) 223-0900 - *F*: (949) 727-4265
aldo(at)bigpurpledot(dot)com | www.bigpurpledot.com

On Thu, Oct 19, 2017 at 2:03 PM, Cory Nemelka <cnemelka(at)gmail(dot)com> wrote:

> I am getting very poor performance using libpq to process very large
> TEXT columns (300MB+). I suspect it is I/O related but can't be sure.
>
> Anyone had experience with same issue that can help me resolve?
>
> --cnemelka
>
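One approach worth trying on the libpq side is to avoid pulling the whole 300 MB value back in a single result. The sketch below reads the column in 8 MB slices with substring(); the table and column names ("docs", "body", "id") are made up for illustration, and it assumes connection settings come from the usual PG* environment variables:

```c
/* Sketch only: fetch a huge TEXT column in slices instead of one giant
   allocation.  Build with:  cc fetch.c -lpq -o fetch  */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");          /* PGHOST/PGDATABASE/etc. */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    const long slice = 8L * 1024 * 1024;     /* 8 MB per round trip */
    long offset = 1;                         /* substring() is 1-based */

    for (;;) {
        char from_buf[32], for_buf[32];
        snprintf(from_buf, sizeof from_buf, "%ld", offset);
        snprintf(for_buf, sizeof for_buf, "%ld", slice);
        const char *const params[2] = { from_buf, for_buf };

        PGresult *res = PQexecParams(conn,
            "SELECT substring(body FROM $1::int FOR $2::int)"
            "  FROM docs WHERE id = 42",
            2, NULL, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }

        int len = PQgetlength(res, 0, 0);
        if (len > 0)
            fwrite(PQgetvalue(res, 0, 0), 1, len, stdout); /* process slice */
        PQclear(res);

        if (len < slice)                     /* short slice = end of value */
            break;
        offset += slice;
    }

    PQfinish(conn);
    return 0;
}
```

One caveat: if the column is compressed in the TOAST table, each substring() call may decompress the whole value again. Setting the column to uncompressed out-of-line storage (ALTER TABLE ... ALTER COLUMN ... SET STORAGE EXTERNAL) lets PostgreSQL fetch only the chunks a given slice needs.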
