From: Joe Kislo <postgre(at)athenium(dot)com>
To: Gordan Bobic <gordan(at)freeuk(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Unanswered questions about Postgre
Date: 2000-11-30 16:58:36
Message-ID: 3A26873C.9C501219@athenium.com
Lists: pgsql-general
> That is what transactions are for. If any errors occur, then the
> transaction is aborted. You are supposed to use transactions when you want
> either everything to occur (the whole transaction), or nothing, if an error
> occurs.
Yes. There are certainly times when a transaction needs to be
ABORTed. However, there are many reasons why the database should not
abort a transaction if it does not need to. There is obviously no
reason why a transaction needs to be aborted for a syntax error, or for,
say, trying to insert a duplicate primary key. The -insert- can fail,
be reported as such, and the application can determine whether a rollback is
necessary. If you don't believe me, here are two fully SQL-92 compliant
databases, Oracle and Interbase, which do not exhibit this behavior:
-Oracle-
SQL> create table test (i int, primary key (i));
Table created.
SQL> insert into test values (1);
1 row created.
SQL> insert into test values (1);
insert into test values (1)
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.SYS_C001492) violated
SQL> insert into test values (2);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test;
I
----------
1
2
SQL>
--Interbase--
SQL> create table test (i int not null, primary key (i));
SQL> insert into test values (1);
SQL> insert into test values (1);
Statement failed, SQLCODE = -803
violation of PRIMARY or UNIQUE KEY constraint "INTEG_156" on table
"TEST"
SQL> insert into test values (2);
SQL> commit;
SQL> select * from test;
I
============
1
2
SQL>
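For contrast, here is roughly what PostgreSQL does in the same scenario. This is a sketch from memory, not a pasted session; the exact error text varies by version:

```sql
-- PostgreSQL (psql), same table as above; error messages approximate
BEGIN;
INSERT INTO test VALUES (1);
INSERT INTO test VALUES (1);
-- ERROR:  Cannot insert a duplicate key into unique index test_pkey
INSERT INTO test VALUES (2);
-- ERROR:  current transaction is aborted, queries ignored until end
-- of transaction block
COMMIT;
-- the COMMIT is effectively a rollback: the insert of 2 is lost along
-- with everything else in the transaction
```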
> If you don't like this behaviour, then use auto-commit, and make every
> separate statement a transaction in itself. That way if any of the
> statements fails, the next one won't be aborted. This, however, depending
> on the error you get, could cause massive irreversible data corruption. But
> then again, if this is a risk, you should be using transactions which abort
> the whole block on any error.
Auto-commit is not the same thing though. That would make each
statement a transaction. I don't want that; I want the ability to
group a set of statements and commit them or roll them back as
a whole. I do not, however, want the transaction aborted by the server
when it does not need to be. Clearly in the above case, neither
Interbase nor Oracle decided that the transaction had to be aborted.
This has to be an option, no?
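The pattern I am after, sketched with Python and SQLite purely as an illustration (SQLite, like Oracle and Interbase, fails the offending statement but leaves the transaction usable; the table and key names are just examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (i INTEGER PRIMARY KEY)")

conn.execute("INSERT INTO test VALUES (1)")
try:
    # Duplicate primary key: the statement fails...
    conn.execute("INSERT INTO test VALUES (1)")
except sqlite3.IntegrityError as e:
    # ...and the application, not the server, decides whether to roll back.
    print("insert failed:", e)

# The transaction is still alive, so later statements succeed.
conn.execute("INSERT INTO test VALUES (2)")
conn.commit()

print([row[0] for row in conn.execute("SELECT i FROM test ORDER BY i")])
# [1, 2]
```

The point is that the error surfaces to the application as a per-statement failure, and the choice between continuing and rolling back stays with the application.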
> Several ways. You can set up a shared network area, sort out unique
> file-naming system (which shouldn't be too hard), and send some sort of a
> "URL" as a pointer to the file.
Ahhh. Unfortunately that is an unacceptable solution :(. So that means
there is no large binary storage available in postgre for me.
> Alternatively, wait for v7.1 (development tree available), which will
> support big record sizes (unlimited, AFAIK). Depending on what you are
> trying to do, BLOBS may or may not be the ideal thing, but sometimes they
> are the only way to store large chunks of data. 7.1 will hopefully sort
> that out, as I have bounced my head off the record size limit a few times
> myself.
Hmm, I really hope the 7.1 implementation of blobs is a true blob, and
not just a really long varchar or something. I need to store
arbitrarily large binary data, and be able to retrieve it over the
database connection. I'm really surprised there isn't a facility for
this already... Our application couldn't run on postgre without it!
Thanks for the reply, although I'm disappointed about the lack of blob
support :(
-Joe