From: Andreas Brandl <ml(at)3(dot)141592654(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Array access performance
Date: 2011-08-02 13:00:08
Message-ID: 6332413.23.1312290008254.JavaMail.root@store1.zcs.ext.wpsrv.net
Lists: pgsql-performance
Hi,
I'm looking for a hint on how array access performs in PostgreSQL. Normally I would expect access to a one-dimensional array at slot i (array[i]) to take constant time (random access).
Is this also true for postgres' arrays?
My concrete example is a one-dimensional array d of length <= 600 (which will grow at a rate of 1 entry/day), stored in a table's column. I need to access this array twice per tuple, i.e. d[a] and d[b]. Therefore I hope access is not linear. Is this correct?
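For illustration, a minimal sketch of the access pattern described above (the table and column names here are made up, and a and b stand in for whatever indices the real query supplies):

```sql
-- Hypothetical table layout, for illustration only:
CREATE TABLE series (
    id integer PRIMARY KEY,
    d  double precision[]   -- ~600 entries, growing by one per day
);

-- Two subscript accesses per tuple, as described:
SELECT d[17], d[42] FROM series;  -- indices a, b fixed here for the example
```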
Also, I'm having some performance issues building this array. I'm doing this with a user-defined aggregate function, starting with an empty array and using array_append plus some calculation for each new entry. I assume this involves some copying/memory allocation on each call, but I could not find the implementation of array_append in the postgres source/git.
Is there an efficient way to append to an array? I could also start with a pre-initialized array of the required length, but this involves some complexity.
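The aggregate described above might look roughly like the following sketch (all names and the placeholder calculation are made up; this is not the actual code, just the shape of a user-defined aggregate that appends per row):

```sql
-- Hypothetical transition function: append one computed value per row.
CREATE OR REPLACE FUNCTION build_step(acc double precision[], x double precision)
RETURNS double precision[] AS $$
    SELECT array_append(acc, x * 2.0);  -- "some calculation" stands in here
$$ LANGUAGE sql IMMUTABLE;

-- Aggregate starting from an empty array, as described in the mail.
CREATE AGGREGATE build_series (double precision) (
    sfunc    = build_step,
    stype    = double precision[],
    initcond = '{}'
);
```

In this shape, each array_append call works on a fresh copy of the state array, which is the suspected source of the per-call copying/allocation cost mentioned above.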
Thank you
Regards,
Andreas