From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Eric Hill <Eric(dot)Hill(at)jmp(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: storing large files in database - performance
Date: 2017-05-16 14:36:41
Message-ID: a2c0a694-e7e7-5255-2fb1-5b32059107df@aklaver.com
Lists: pgsql-general
On 05/16/2017 05:25 AM, Eric Hill wrote:
> Hey,
>
> I searched and found a few discussions of storing large files in the
> database in the archives, but none that specifically address performance
> and how large of files can realistically be stored in the database.
>
> I have a node.js application using PostgreSQL to store uploaded files.
> The column in which I am storing the file contents is of type “bytea”
> with “Storage” type set to “EXTENDED”. Storing a 12.5 MB file is taking
> 10 seconds, and storing a 25 MB file is taking 37 seconds. Two notable
> things about those numbers: it seems like a long time, and the time
> seems to grow faster than linearly with file size.
>
> Do these numbers surprise you? Are these files just too large for
> storage in PostgreSQL to be practical? Could there be something about
> my methodology that is slowing things down?
Yes, it does surprise me. I just tested inserting an 11 MB file using
psycopg2 (Python) and it took less than a second.
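
For reference, a timing test along those lines might look like the sketch
below (the table name `files`, the column name `data`, and the connection
details are assumptions; the message does not show the actual schema used):

```python
import os
import time


def make_payload(mb: int) -> bytes:
    """Build a random payload of `mb` megabytes to stand in for a file."""
    return os.urandom(mb * 1024 * 1024)


def time_insert(conn, payload: bytes) -> float:
    """Insert `payload` into a bytea column and return elapsed seconds.

    Assumes a table created with:
        CREATE TABLE files (id serial PRIMARY KEY, data bytea);
    psycopg2 adapts a Python `bytes` parameter to bytea directly.
    """
    import psycopg2  # imported here so make_payload() works without the driver

    start = time.monotonic()
    with conn.cursor() as cur:
        cur.execute("INSERT INTO files (data) VALUES (%s)", (payload,))
    conn.commit()
    return time.monotonic() - start


# Example usage (requires a running PostgreSQL server):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   print(time_insert(conn, make_payload(11)))
```

Because the parameter is passed to the driver rather than spliced into the
SQL string, no hex/escape encoding of the payload happens in Python, which
keeps the insert close to the raw transfer time.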
>
> I do have the Sequelize ORM and the pg driver in between my code and the
> database.
>
> Thanks,
>
> Eric
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com