Re: explosion of tiny tables representing multiple

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Benjamin Weaver <benjamin(dot)weaver(at)classics(dot)oxford(dot)ac(dot)uk>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: explosion of tiny tables representing multiple
Date: 2006-11-05 10:03:24
Message-ID: 20061105100324.GA3979@svana.org
Lists: pgsql-general

On Fri, Nov 03, 2006 at 08:25:25PM +0000, Benjamin Weaver wrote:
> Dear Martijn,
>
> Wow, didn't know about arrays. Did lots of sql, but, as I think about it,
> that was 7 years ago, and we didn't know about arrays then
>
> Are there performance problems with arrays? We will not likely be working
> with more than 50,000 - 100,000 records.

If by records you mean rows in the database, then 50,000 rows is a baby
database, nothing to worry about there.

Performance of array operations scales roughly linearly with the number
of elements in the array. So if most of your arrays have only 2 or 3
elements, performance should be good. If you make a single array with
50,000 elements, it's going to suck very badly.

Note, recent versions of postgres have better support for arrays,
including indexing of array contents. In particular, the new GIN index
type may be useful for you.
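As a sketch (table and column names are made up for illustration), a
small array column with a GIN index to speed up containment queries
might look like:

    CREATE TABLE texts (
        id       serial PRIMARY KEY,
        keywords integer[]          -- typically only 2-3 elements each
    );

    -- GIN index on the array column
    CREATE INDEX texts_keywords_idx ON texts USING gin (keywords);

    -- find rows whose keywords array contains the value 42
    SELECT id FROM texts WHERE keywords @> ARRAY[42];

With many small arrays like this, the index lets the planner avoid
scanning every row for a containment match.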

Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
