From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Robert Brewer" <fumanchu(at)aminus(dot)org>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: SELECT '(1, nan, 3)'::cube;
Date: 2011-03-15 17:07:16
Message-ID: 432.1300208836@sss.pgh.pa.us
Lists: pgsql-bugs
"Robert Brewer" <fumanchu(at)aminus(dot)org> writes:
> I'm working on a hypercube implementation in Postgres using contrib/cube
> and need to insert 80,000 rows in one go from Python. Doing so with
> INSERT, even multiple statements in one call, is pretty slow. I've been
> investigating if using COPY is faster. It is, but there's a problem:
> some of the cubes should include NaN. Writing:
> INSERT INTO foo (coords) VALUES (cube(ARRAY[1, 'nan', 3]::float[]));
> ...works fine. But I can't find the magic incantation to do the same
> thing using COPY FROM.
cube_in doesn't accept either 'nan' or 'inf'. It's probably a bug that
you can get those things into a cube value via cube(float8[]). Or we
could see about upgrading the datatype to allow them, but that would
require looking at all its operations, not just cube_in. It seems pretty
likely to me that there are some other things in that module that won't
behave sanely with NaN, because the original author clearly never
thought about it.
I'd suggest rethinking your design to avoid needing NaN in a cube.
regards, tom lane
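For anyone hitting the same wall: since cube(float8[]) does let NaN through while cube_in rejects it, one bulk-load route is to COPY the raw coordinates into a staging table as float8[] (whose input function does accept 'NaN') and convert from there. A minimal sketch, assuming the target table is foo(coords cube) as in the original message; the staging table name is illustrative only:

    -- Staging table holds plain float8 arrays; array_in accepts 'NaN'.
    -- (The table name cube_staging is just for illustration.)
    CREATE TEMP TABLE cube_staging (coords float8[]);

    -- Bulk-load the raw arrays; COPY text format shown, '\.' ends the data.
    COPY cube_staging (coords) FROM STDIN;
    {1,NaN,3}
    \.

    -- Convert with cube(float8[]), which the reply above notes does not
    -- reject NaN the way cube_in does.
    INSERT INTO foo (coords)
    SELECT cube(coords) FROM cube_staging;

From Python, the {1,NaN,3} lines can be streamed through whatever COPY support the driver offers; only the staging step changes, the final INSERT ... SELECT stays the same.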