From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: lup <robjsargent(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org >> PG-General Mailing List" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Is it reasonable to store double[] arrays of 30K elements
Date: 2014-02-15 05:02:06
Message-ID: CAFj8pRD+PGOwxSzQzwiPNFHH2Kf1wU8JiZ=3x8QZaELWJ0-WZg@mail.gmail.com
Lists: pgsql-general
Hello

I have worked with 80K float fields without any problem.

There are some possible issues:

* detoasting needs a lot of memory - this can be a problem when many
queries run in parallel
* there is a risk of repeated detoasting - some unlucky usage patterns in
plpgsql can be slow; this is solvable, but you have to identify the issue first
* any update of a large array is slow - so these arrays are best suited for
write-once data

Regards

Pavel
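As a minimal sketch of the kind of schema this refers to (table and column names here are hypothetical, not from the thread), a large float8[] column is stored TOASTed alongside the row:

```sql
-- Hypothetical table: one row per sample, holding a large write-once array.
CREATE TABLE samples (
    id      bigserial PRIMARY KEY,
    vals    float8[] NOT NULL      -- e.g. 30K-80K elements, TOAST-compressed
);

-- Populate one row with a 30K-element array.
INSERT INTO samples (vals)
SELECT array_agg(random())
FROM generate_series(1, 30000);
```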
2014-02-14 23:07 GMT+01:00 lup <robjsargent(at)gmail(dot)com>:
> Would 10K elements of float[3] make any difference in terms of read/write
> performance?
> Or 240K byte array?
>
> Or are these all functionally the same issue for the server? If so,
> intriguing possibilities abound. :)
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Is-it-reasonable-to-store-double-arrays-of-30K-elements-tp5790562p5792099.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>