From: Andreas Brandl <ml(at)3(dot)141592654(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Array access performance
Date: 2011-08-02 15:16:09
Message-ID: 21733433.41.1312298168802.JavaMail.root@store1.zcs.ext.wpsrv.net
Lists: pgsql-performance
Hi Tom,
> > I'm looking for a hint how array access performs in PostgreSQL in
> > respect to performance. Normally I would expect access of a
> > 1-dimensional Array at slot i (array[i]) to perform in constant time
> > (random access).
>
> > Is this also true for postgres' arrays?
>
> Only if the element type is fixed-length (no strings for instance) and
> the array does not contain, and never has contained, any nulls.
> Otherwise a scan through all the previous elements is required to find
> a particular element.
We're using bigint elements here and don't have nulls, so this should be fine.
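
For concreteness, a minimal sketch of the access pattern in question (table and column names are made up for illustration; PostgreSQL array subscripts are 1-based by default):

    -- hypothetical table: bigint elements are fixed-length, so as long
    -- as no array has ever contained a NULL element, vals[i] is O(1)
    CREATE TABLE example (
        id   bigint PRIMARY KEY,
        vals bigint[] NOT NULL
    );

    -- constant-time element access
    SELECT vals[42] FROM example WHERE id = 1;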
> By and large, if you're thinking of using arrays large enough to make
> this an interesting question, I would say stop right there and
> redesign
> your database schema. You're not thinking relationally, and it's gonna
> cost ya.
In general, I agree. We have a nicely relational database but are facing some performance issues. My approach is to build a materialized view which exploits the array feature and relies heavily on constant-time access to arrays.
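
Roughly, the idea is something like this (a sketch with made-up names, not our actual schema):

    -- precompute one bigint[] per group, so a lookup becomes a single
    -- array subscript instead of a join against the detail table
    CREATE TABLE mv_vals AS
    SELECT group_id,
           array_agg(val ORDER BY pos) AS vals
    FROM detail
    GROUP BY group_id;

    -- constant-time access, given fixed-length elements and no NULLs
    SELECT vals[1000] FROM mv_vals WHERE group_id = 7;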
Thank you!
Regards,
Andreas