From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Fernando Nasser <fnasser(at)redhat(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Fernando Nasser <fnasser(at)cygnus(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Libpq support for precision and scale
Date: 2002-03-07 04:09:59
Message-ID: 200203070409.g2749xN28955@candle.pha.pa.us
Lists: pgsql-hackers
I have reviewed this patch, and it clearly has features I would like to get
into 7.3. We have been pushing too much type knowledge into apps, and
this will give people a libpq solution that we can manage. Here are my
comments.
> > These seem okay, but I don't like the API detail that "0 is returned if
> > information is not available". 0 is a valid result, at least for
> > PQfscale. I would recommend returning -1. If you really want to
> > distinguish bad parameters from non-numeric datatype, then return -1
> > and -2 for those two cases.
> >
>
> This seems to be the libpq convention. On calls such as PQfsize and
> PQfmod, for instance, zero is a valid result and is also returned if
> the information is not available.
>
> Please note that we did not make this convention -- our original version
> did return -1. But we decided that following a different rule for these
> two routines was even more confusing. And changing the return convention
> for the whole set of functions at this point seems out of the question.
>
> P.S.: Maybe whoever originally designed the libpq interface was trying
> to accomplish some sort of "soft fail" by returning zero. Just a guess
> of course.
I think the problem stems from the fact that some of our functions
legitimately can return -1, so zero was chosen as a failure code, while
others use -1 for failure. In fact, Tom mentioned that there are now
some types that have a valid atttypmod of 0 (timestamp?), meaning we may
have a problem there anyway. Any ideas on how to fix it?
In hindsight, we should have defined a macro equal to -2 and reported that
as the failure return for all functions that need it.
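For context, here is the sort of type knowledge that currently ends up in
apps. To get precision and scale today, a client has to decode the typmod
from PQfmod() by hand. A rough sketch, assuming the numeric typmod layout
(precision in the high 16 bits, scale in the low 16 bits, both offset by
the 4-byte header) and using -1 for "not available":

    /* Rough sketch: decode precision and scale for a numeric column from
     * the typmod returned by PQfmod().  Assumes the numeric typmod layout
     * (precision in the high 16 bits, scale in the low 16 bits, both
     * offset by the 4-byte header); returns -1 when no typmod is set. */
    #include <libpq-fe.h>

    static int
    numeric_precision(const PGresult *res, int col)
    {
        int typmod = PQfmod(res, col);

        if (typmod < 4)
            return -1;          /* no typmod available */
        return ((typmod - 4) >> 16) & 0xffff;
    }

    static int
    numeric_scale(const PGresult *res, int col)
    {
        int typmod = PQfmod(res, col);

        if (typmod < 4)
            return -1;
        return (typmod - 4) & 0xffff;
    }

This is exactly the kind of per-type decoding the new functions are meant
to keep out of applications.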
> > > Most programs won't need this information and may not be willing
> > > to pay the overhead for metadata retrieval. Thus, we added
> > > an alternative function to be used instead of PQexec if one
> > > wishes extra metadata to be retrieved along with the query
> > > results:
> >
> > > PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);
> >
> > This strikes me as very ugly, and unnecessary, and inefficient since
> > it retrieves metadata for all columns even though the client might
> > only need to know about some of them.
>
> This part I would not worry about. The new routines are for result sets
> (not arbitrary columns) so the fields present in it have already been
> pre-selected. Also, this kind of information is useful for tools as
> they don't know beforehand what the fields will be. In all cases
> we can think of, the tool will always want metadata about all the
> fields.
I hesitate to add another PQexec function. That could complicate the
API.
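For reference, the usage pattern the proposed function implies would look
roughly like this. This is only a sketch (error checking omitted), and it
obviously builds only with the patch applied:

    /* Sketch of the proposed flow: collect the metadata up front, then
     * read it from the PGresult later without touching the connection.
     * PQfscale returns 0 when the information is not available, per the
     * convention discussed above.  Error checking omitted. */
    #include <libpq-fe.h>

    static int
    price_scale(PGconn *conn)
    {
        PGresult *res = PQexecIncludeMetadata(conn, "SELECT price FROM items");
        int       scale = PQfscale(res, 0);

        PQclear(res);
        return scale;
    }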
> > An even worse problem is that
> > it'll fail entirely with a multi-query query string.
> >
>
> This is a bummer. But I see no solution for this besides documenting
> the restriction in the manual. If I am not mistaken, we already have
> the limitation of returning just the last result anyway (we just
> collect the error messages).
>
>
> > What I think would be cleaner would be to do the metadata queries
> > on-the-fly as needed. With the caching that you already have in there,
> > on-the-fly queries wouldn't be any less efficient.
> >
> > But to do a metadata query we must have access to the connection.
> > We could handle it two ways:
> >
> > 1. Add a PGconn parameter to the querying functions.
> >
>
> The problem is that results may be kept longer than connections
> (see below). The current solution did not require the connection
> as the metadata is for the result set, not tables.
>
> The PGconn parameter would be reasonable for retrieving metadata
> about table columns, for instance.
I think this is the way to go. We just require the connection be valid.
If it isn't, we throw an error. I don't see this as a major restriction.
In fact, it would be interesting to just call this function
automatically when someone requests type info.
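As a hypothetical sketch of what option 1 could look like (these
signatures are illustrative, not something the patch defines):

    /* Hypothetical signatures only: the caller supplies the connection,
     * so libpq can issue catalog queries on the fly and cache the answers
     * in the PGresult.  A negative return would mean the connection is
     * gone or the column is not numeric. */
    #include <libpq-fe.h>

    extern int PQfprecision(PGconn *conn, const PGresult *res, int field_num);
    extern int PQfscale(PGconn *conn, const PGresult *res, int field_num);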
> > 2. Make use of the PGconn link that's stored in PGresults, and
> > specify that these functions can only be used on PGresults that
> > came from a still-open connection.
> >
>
> That field has been deprecated (see comments in the source code)
> because a result may be kept even after the connection is closed.
>
>
> > I think I prefer the first, since it makes it more visible to the
> > programmer that queries may get executed. But it's a judgment call
> > probably; I could see an argument for the second as well. Any comments,
> > anyone?
> >
>
> It would have to be the former (to avoid the stale pointer problem).
>
> But requiring a connection adds a restriction to the use of this info
> and makes it have a different life span than the object it refers to
> (a PGresult), which is very weird.
Yes, but how often is this going to happen? If we can throw a reliable
error message when it happens, it seems quite safe. "If you are going to
get type info, keep the connection open so we can get it."
> > > The PQftypename function returns the internal PostgreSQL type name.
> > > As some programs may prefer something more user friendly than the
> > > internal type names, we've thrown in a conversion routine as well:
> > > char *PQtypeint2ext(const char *intname);
> > > This routine converts from the internal type name to a more user
> > > friendly type name convention.
> >
> > This seems poorly designed. Pass it the type OID and typmod, both of
> > which are readily available from a PQresult without extra computation.
> > That will let you call the backend's format_type ... of course you'll
> > need a PGconn too for that.
> >
>
> Requiring the PGconn is bad. But we still could have a PQFtypeExt()
> returning the "external" type if people prefer it that way.
> We thought that this should be kept as an explicit conversion
> operation, to make clear the distinction between what the backend knows
> about and this outside-world view of things.
If they want more info about the result set, keeping the connection open
so we can get that information seems perfectly logical. If we put it in
the manual in its own section as MetaData functions, and mention that they
need a valid connection to work, I think it will be clear to everyone.
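For what it's worth, a client can already get the friendly name by handing
the OID and typmod from PQftype()/PQfmod() to the backend's format_type().
A minimal sketch, with error handling mostly omitted, and again needing a
live connection:

    /* Minimal sketch: ask the backend's format_type() for a user-friendly
     * name for column "col" of an existing result.  Needs a live PGconn,
     * which is exactly the restriction discussed above. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    static void
    friendly_type_name(PGconn *conn, const PGresult *res, int col,
                       char *buf, size_t buflen)
    {
        char      query[128];
        PGresult *meta;

        buf[0] = '\0';
        snprintf(query, sizeof(query),
                 "SELECT format_type(%u, %d)",
                 (unsigned int) PQftype(res, col), PQfmod(res, col));

        meta = PQexec(conn, query);
        if (meta != NULL && PQresultStatus(meta) == PGRES_TUPLES_OK)
        {
            strncpy(buf, PQgetvalue(meta, 0, 0), buflen - 1);
            buf[buflen - 1] = '\0';
        }
        PQclear(meta);
    }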
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026