Re: Some array semantics issues

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Some array semantics issues
Date: 2005-11-16 19:06:06
Message-ID: 878xvotntt.fsf@stark.xeocode.com
Lists: pgsql-hackers


Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> regression=# select '[0:2]={1,2,3}'::int[] = '{1,2,3}'::int[];
> ?column?
> ----------
> t
> (1 row)
>
> regression=# select '{1,2,3,4}'::int[] = '{{1,2},{3,4}}'::int[];
> ?column?
> ----------
> t
> (1 row)
>
> This seems pretty bogus as well.

The second case seems utterly bogus. But the first case seems maybe
justifiable. Maybe.

In the past Postgres treated the array bounds as so insignificant that they
weren't even worth preserving across a dump/restore.

And changing that would make it harder to test just the contents of an array
without having to match the bounds as well. That is, you couldn't write "list =
'{1,2}'" to test whether the array contains 1,2. You would have to... well, I'm
not even sure how you would test it, actually. Maybe something kludgy like
"'{}'::int[] || list = '{1,2}'"?
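For what it's worth, the kludge relies on concatenation renumbering the result
from 1. A sketch of what a contents-only comparison might look like, assuming
prepending an empty array does discard the original lower bound:

```sql
-- Hypothetical contents-only comparison: on the assumption that
-- concatenating with an empty array resets the lower bound to 1,
-- '[0:2]={1,2,3}' and '{1,2,3}' would compare equal by contents alone.
SELECT ('{}'::int[] || '[0:2]={1,2,3}'::int[]) = '{1,2,3}'::int[];
```

That's hardly something you'd want to write in every query, which is part of
why bounds-significant equality needs a convenient escape hatch.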

I'm not entirely against the idea of making array bounds significant but I
guess we would need some convenient way of taking them out of the picture too.
Perhaps another equality operator.

--
greg
