General
How do I submit an idea for a new feature or even a patch?

You can send mail to pgsql-interfaces@postgresql.org.

How do I report a bug or other problems?

You can send mail to pgsql-interfaces@postgresql.org. When you do, however, you should attach the following:

- your odbc driver version
- your postgresql database version
- any error messages you saw on the screen
- the commlog (psqlodbc.log) file (you will have to enable the commlog driver option)
- the odbc trace log (sql.log) file (if the problem warrants it)

I've seen other Postgres odbc drivers out there. Which one is the right one?

Since development of the driver has changed hands a few times, there has been some confusion over this. The correct driver is the one from http://www.insightdist.com/psqlodbc. This driver is continually being improved, and it is actively supported through the pgsql-interfaces@postgresql.org mailing list. You may want to subscribe to this list to keep up with the latest changes; visit the PostgreSQL home page for instructions on subscribing to mailing lists.

What operating systems will the driver run on?

The PsqlODBC executable is written and compiled to run on the 32-bit Wintel platform, which includes Windows 95 and NT. The driver has not been tested on Windows 98, but it should not have any problems there. 16-bit applications are capable of using the 32-bit driver, but only on these platforms.

We do not distribute binaries for Unix. The source code, however, has been ported to compile under Unix. The required iODBC driver manager for Unix can be found at: http://www.as220.org/FreeODBC

How do I install the driver?

The easiest way to install the driver is to get the self-extracting install (postdrv.exe). Just download and run this program; the installation is a standard InstallShield process that will guide you through it. Once you have run the full install at least once, future upgrades can be even simpler: just download the postdll.zip file, which contains only the driver dll, and extract it to your \windows\system directory. The final option is to compile the driver yourself. If you are using this driver with iODBC on a Unix platform, this is currently your only option.

Do I even need to compile the driver to use it?

Probably not. If you are using Windows, all you really need to do is download and run the self-extracting install (postdrv.exe). See the above question for more information.

How do I setup a datasource?
For Windows, use the ODBC Administrator in Control Panel. Here you can add, modify, or delete data sources.

What's the difference between a File DSN, System DSN, and User DSN?
System DSNs and User DSNs differ only in who can access them on the system. A File DSN, however, is not really a datasource; it is a file that contains all the connection parameters used to connect directly to an odbc driver.

How do I access more advanced driver and/or datasource options?
First, select the datasource you want to configure. Then, in the "PostgreSQL Driver Setup" dialog, under "Options (Advanced)", select either the Driver button or the Datasource button. This will bring up another dialog box containing the options you can configure. Refer to the Configuration Help on the main page for information on all of these options.

Where can I discover more information about odbc errors?
The odbc driver logs all direct communication (queries, updates, etc.) with the backend, as well as error messages and notices, in the commlog file. In addition, it now also logs any odbc connection and statement errors in this file with detailed information. This is useful for applications that give misleading, little, or no descriptive information when something goes wrong (VisData is a good example). This feature is enabled in the Advanced driver options dialog.

What do I need to do to establish a connection to a database?

ODBC Connection Checklist
Verify that the database you are trying to connect to exists and is accessible by you. Also, see the above question regarding user name and password authentication.

Why do I get a message like "Failed to authenticate client as Postgres user using unknown authentication type: be_recvauth: unrecognized message type: 65536" when I try to connect to a datasource?
This message comes from the PostgreSQL backend, most likely because there is a protocol mismatch between the odbc driver and the backend. For example, if you are using Postgres 6.2 as the backend and try to use the odbc driver without correctly setting the protocol, this error will occur. In that case, you must check the "6.2 protocol" advanced datasource option in the odbc driver configuration dialog.

What data types does the driver support?
The driver provides full support for all PostgreSQL standard data types: bool, int2, int4, float4, float8, date, time, abstime, datetime, timestamp, char, varchar, and text.

There is partial support for all other data types; examples include point, circle, box, and arrays. Only string support is provided for these non-standard types. In other words, they are returned as SQL_VARCHAR and can be displayed and updated like any other data type. The resulting behavior will probably vary somewhat with each application and data type.
In the case of int4[] and MS Access 97, it does an effective job: the array can be displayed and updated cleanly. Arithmetic is a little tricky, though. MS Access complains about "t.a[1]" in the query builder; it does not like the syntax and never sends it to the backend. The workaround is to choose the SQL Pass-thru option, which allows you to build expressions like "t.a[0] * t.a[1]". The hassle is that every time you run the query in Access 97, it prompts you with a database connection dialog.
Why does the PostgreSQL backend complain about running out of memory when I browse tables with primary keys?

create table foo (
id integer primary key,
name varchar(20)
);
The Jet Database Engine (used by Access) and others can use "keysets" to access records. Depending on how many parts are in the key, performance can range from slow all the way to crashing the backend. Here is a keyset query using 10 rows (the typical keyset amount):
-- This is a 3 part key
select ... from foo where
(v1 = "?" AND v2 = "?" AND v3 ="?") OR -- line 1
(v1 = "?" AND v2 = "?" AND v3 ="?") OR -- line 2
...
(v1 = "?" AND v2 = "?" AND v3 ="?") OR -- line 9
(v1 = "?" AND v2 = "?" AND v3 ="?"); -- line 10
-- The question marks are replaced with the key values

Prior to Postgres 6.4, this was a major problem, but there are at least two fixes in place for it as of 6.4. One of these is called KSQO (Keyset Query Optimization). As of 6.4, the driver turns this on by default, although the setting can be changed in the Advanced Driver options.
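The shape of the keyset query above can be sketched programmatically. This is only an illustrative Python helper (the function name and column names are invented for the example), not part of the driver:

```python
# Illustrative sketch (not part of the driver): build the OR-of-ANDs
# WHERE clause that Jet generates for a keyset fetch, given the key
# columns and a batch of rows.
def keyset_where(key_cols, rows=10):
    """One parenthesized AND-group of parameter markers per row, OR'd together."""
    one_row = "(" + " AND ".join(f"{c} = ?" for c in key_cols) + ")"
    return " OR\n".join([one_row] * rows)

clause = keyset_where(["v1", "v2", "v3"])
# A 3-part key over 10 rows yields 30 parameter markers in total,
# which shows why multi-part keys made these queries so expensive.
```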
Can I use large objects or OLE?

Large objects are mapped to LONGVARBINARY in the driver to allow storing things like OLE objects in Microsoft Access. Multiple SQLPutData and SQLGetData calls are usually used to send and retrieve these objects. The driver creates a new large object and simply inserts its 'identifier' into the respective table. However, since Postgres uses an 'Oid' to identify a Large Object, it is necessary to create a new Postgres type to be able to discriminate between an ordinary Oid and a Large Object Oid. Until this new type becomes an official part of Postgres, it must be added into the desired database and looked up for each connection. The type used in the driver is simply called "lo"; here is the command used to create it:
create type lo (
internallength=4, externallength=10,
input=int4in, output=int4out,
default='', passedbyvalue
);
create table employee (
id integer,
name varchar(30),
picture lo
);
Once this is done, simply use the new 'lo' type to define columns in that database. When the driver sees an 'lo' type, it will handle it as SQL_LONGVARBINARY.
Another important note: this new type is lacking in functionality. It will not clean up after itself on updates and deletes, thus leaving orphans around and using up extra disk space. And currently, Postgres does not support the vacuuming of large objects.
It would not be too difficult to write an interim stand-alone cleanup process to run at some interval on the server. It is only a matter of searching pg_attribute for lo data type columns and building a list of lo's by querying each table that contains them. Then compare this list with the xinv.* entries in pg_class; the xinv.* entries without a pointer are orphans and should be dropped.
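The orphan-detection step of such a cleanup process boils down to set arithmetic. Here is a minimal Python illustration; the oid lists are stubbed with made-up values, since gathering them for real requires the catalog queries against pg_attribute and pg_class described above:

```python
# Sketch of the orphan-detection step only; the two input lists would
# come from live catalog queries (the oid values here are invented).
def find_orphan_los(referenced_oids, xinv_oids):
    """Return xinv.* large objects that no 'lo' column still points at."""
    return sorted(set(xinv_oids) - set(referenced_oids))

# oids collected from every table that has an 'lo' column ...
referenced = [18273, 18301]
# ... versus every xinv.* relation found in pg_class.
on_disk = [18273, 18301, 18455]
orphans = find_orphan_los(referenced, on_disk)  # these could be dropped
```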
Hopefully, a real large object data type will be available in the future as a base type. But for now, it sure is fun to stick a Word document, Visio document, or AVI of a dancing baby into a database column, even if you will fill up your server's hard disk after a while!

How do I use row versioning -OR- why do I get a message about no operator for xid and int4?
Some of the operators are missing in the current release of Postgres, so in order to use row versioning you must overload the int4eq function for use with the xid type. You also need to create an operator to compare xid with int4. You must do this for each database in which you want to use this feature. This will probably not be necessary in Postgres 6.4, since the operators will be added there. Here are the details:

create function int4eq(xid,int4)
returns bool
as ''
language 'internal';

create operator = (
leftarg=xid,
rightarg=int4,
procedure=int4eq,
commutator='=',
negator='<>',
restrict=eqsel,
join=eqjoinsel
);
With MS Access, why can't I GROUP BY, ORDER BY, or even select WHERE on columns which are of type "text"?

Text fields, by default, are mapped to SQL_LONGVARCHAR, so MS Access treats these columns as "Memo" types. The good news is that you can store up to the PostgreSQL block size limit in a text column; currently, PostgreSQL has a tuple limit of just under 8k. You can change the mapping of text fields to SQL_VARCHAR by unchecking the Advanced driver option "Text as LongVarchar" under Data Type Options. This should allow text fields to be used in these clauses.

With MS Access, why can't I index on text fields -OR- why do I get "Invalid field definition 'field' in definition of index or relationship"?
Text fields are mapped to SQL_LONGVARCHAR by default, so MS Access treats these columns as "Memo" types, and Memo fields cannot be indexed. You can change the mapping of text fields to SQL_VARCHAR by unchecking the Advanced driver option "Text as LongVarchar" under Data Type Options. This should allow text fields to be indexed, but you will be limited to the maximum size of a varchar.
Why does MS Access sometimes complain about a GROUP BY or ORDER BY not being in the target list? (UPDATED)
This message comes from the PostgreSQL backend. Postgres currently requires fields in the ORDER BY and GROUP BY clauses to be included in the target list. However, we have a patch that overcomes this limitation in the backend server; it will be included in the 6.4 release. Click here to download the patch right now for use in PostgreSQL 6.3.

More (Updated) Information on this problem
Older versions of the Microsoft Jet Database Engine, which Access is built on, have some problems that can cause this to occur. Access will insist on throwing an ORDER BY clause into a join query, even if you are not sorting on anything. Even with the above patch for Postgres, the query may no longer error out, BUT it would be sorted in a way you may not want, with no way to change it. To fix this problem, you need to update the Jet database engine to version 3.51, which is available at no charge from Microsoft. Click here to download the latest Jet Engine from the Microsoft support site.

Why does Access force me to specify the Data Source each time I run my SQL Pass-Thru query?
There is a way to specify a Data Source in the query properties so it doesn't ask you each time. Under the View menu, select Properties. For the "ODBC Connect Str" property, right after "ODBC;" add "DSN=<your_datasource_name_here>". You can also add other properties if you like, such as "ODBC;DSN=my_dsn;UID=me;PWD=test".
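The connect string shown above is just semicolon-separated key=value pairs after a leading "ODBC". As an illustrative sketch (the helper name is invented; DSN, UID, and PWD are the only keys taken from the text):

```python
# Illustrative helper (not part of the driver) that assembles an
# "ODBC Connect Str" property value like the one shown above.
def odbc_connect_str(dsn, uid=None, pwd=None):
    parts = ["ODBC", f"DSN={dsn}"]
    if uid is not None:
        parts.append(f"UID={uid}")
    if pwd is not None:
        parts.append(f"PWD={pwd}")
    return ";".join(parts)

odbc_connect_str("my_dsn", "me", "test")  # "ODBC;DSN=my_dsn;UID=me;PWD=test"
```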
Why do varchar/char datatypes not appear with the correct precision in Borland DBExplorer -OR- why do all varchar/char precisions appear as 128?

When using the 6.4 protocol, this should not be an issue. Prior to the Postgres 6.4 protocol, the backend did not return the size of varchar/char datatypes in a query result, and Borland relies heavily on this for both simple queries and the data dictionary import. Therefore, several driver options were developed to help out with this.
Consider setting the following advanced driver options if using Borland:
Data Type Options:
- Uncheck "Text as LongVarchar"
- Uncheck "Unknowns as LongVarchar"
Parse Statements option:
- Enable it, if using a protocol earlier than Postgres 6.4
Unknown Sizes option:
- Set to "Longest"