Re: large object write performance

From: "Graeme B(dot) Bell" <graeme(dot)bell(at)nibio(dot)no>
To: Bram Van Steenlandt <bram(at)diomedia(dot)be>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: large object write performance
Date: 2015-10-08 09:45:16
Message-ID: 3D2F06B8-297D-48CF-AA8C-47FD70925863@skogoglandskap.no
Lists: pgsql-performance

Seems a bit slow.

1. Can you share the script (the portion that does the file transfer) with the list? Maybe you’re doing something unusual there by mistake.
Likewise, please share the settings you’re using for scp.
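
For reference, a minimal large-object import with psycopg2 might look like the sketch below (the table/column names, DSN and chunk size are assumptions, not your actual code). If your script writes in very small chunks or commits once per chunk, that alone could explain part of the slowdown.

# Minimal sketch, assuming psycopg2 and a table backups(filename text, data oid).
import psycopg2

def store_file(conn, path):
    lobj = conn.lobject(0, 'wb')              # create a new large object for writing
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(8 * 1024 * 1024)   # write in large (8 MB) chunks
            if not chunk:
                break
            lobj.write(chunk)
    lobj.close()
    with conn.cursor() as cur:
        cur.execute("INSERT INTO backups (filename, data) VALUES (%s, %s)",
                    (path, lobj.oid))
    conn.commit()                             # one commit per file, not per chunk

conn = psycopg2.connect("dbname=backupdb")    # hypothetical DSN
store_file(conn, "/path/to/bigfile")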

2. What’s the network like?
For example, what if the underlying network is only capable of 10MB/s peak, and scp is using compression and the files are highly compressible?
Have you tried storing zip’d or gzip’d versions of the files in postgres? (That’s probably a good idea anyway.)
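
If you do go the compression route, a rough sketch of compressing on the client before the large-object write (table names and buffer handling are assumptions) is:

# Rough sketch, assuming psycopg2; compresses in memory, so use a temp file for huge inputs.
import gzip, io, shutil
import psycopg2

def store_file_gzipped(conn, path):
    buf = io.BytesIO()
    with open(path, 'rb') as src, gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        shutil.copyfileobj(src, gz, length=8 * 1024 * 1024)   # stream-compress the file
    lobj = conn.lobject(0, 'wb')
    lobj.write(buf.getvalue())                # store the compressed bytes
    lobj.close()
    with conn.cursor() as cur:
        cur.execute("INSERT INTO backups (filename, data) VALUES (%s, %s)",
                    (path, lobj.oid))
    conn.commit()

Compressing first also shrinks what WAL and the ZFS mirror have to absorb, which is where your bottleneck seems to be (filesystem at 100%, CPU idle).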

3. ZFS performance can depend heavily on available memory and caches (ARC in memory + L2ARC for reads, the ZIL for writes).
Maybe put an Intel SSD in there (or a mirrored pair) and use it as a dedicated ZIL/log device.

4. Use dd to measure the write performance of ZFS with a local write on the machine itself. What speed do you get?
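
If you'd rather measure it from Python (so the comparison goes through the same code path as your script), a crude stand-in for dd is to time a ~1 GB sequential write with an fsync at the end; the target path below is an assumption, point it at the ZFS mirror:

# Crude write-throughput check (Python stand-in for dd), ~1 GB sequential write.
import os, time

target = "/tank/test.bin"          # hypothetical path on the ZFS mirror
chunk = b"\0" * (1024 * 1024)      # 1 MB of zeros
count = 1024                       # ~1 GB total

start = time.time()
with open(target, "wb") as f:
    for _ in range(count):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())           # make sure the data actually reaches disk
elapsed = time.time() - start
print("%.1f MB/s" % (count / elapsed))
os.remove(target)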

5. Transfer a zip’d file over the network using scp. What speed do you get?

6. Is your postgres running all the time, or do you start it just before this test? Perhaps check whether any background tasks are running while you use postgres - autovacuum, autoanalyze, etc.
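
A quick way to check is to look at pg_stat_activity during a transfer, e.g. (a sketch, with a hypothetical DSN):

# List non-idle backends, to spot autovacuum/autoanalyze workers competing for I/O.
import psycopg2

conn = psycopg2.connect("dbname=backupdb")    # hypothetical DSN
cur = conn.cursor()
cur.execute("SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle'")
for pid, state, query in cur.fetchall():
    print(pid, state, query)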

Graeme Bell

> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <bram(at)diomedia(dot)be> wrote:
>
> Hi,
>
> I use postgresql often but I'm not very familiar with how it works internally.
>
> I've made a small script to back up files from different computers to a postgresql database.
> Sort of a versioned, networked backup system.
> It works with large objects (an oid in a table, linked to a large object), which I import using psycopg.
>
> It works well, but it's slow.
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp, I get 37MB/sec.
> My script achieves something like 7 or 8MB/sec on large (+100MB) files.
>
> I've never used postgresql for something like this; is there something I can do to speed things up?
> It's not a huge problem as it's only the initial run that takes a while (after that, most files are already in the db).
> Still, it would be nice if it were a little faster.
> The cpu is mostly idle on the server; the filesystem is running at 100%.
> This is a separate postgresql server (I've used freebsd profiles to run 2 postgresql servers), so I can change this setup to work better for this application.
>
> I've read different suggestions online, but I'm unsure which is best; they all speak of files that are only a few KB, not 100MB or bigger.
>
> P.S. English is not my native language.
>
> thx
> Bram
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
