
Chapter 7. Indices and Keys

Table of Contents
Keys
Partial Indices

Indexes are commonly used to enhance database performance. They should be defined on table columns (or class attributes) which are used as qualifications in repetitive queries. Inappropriate use will result in slower performance, since update and insertion times are increased in the presence of indices.

Indexes may also be used to enforce uniqueness of a table's primary key. When an index is declared UNIQUE, multiple table rows with identical index entries won't be allowed. For this purpose, the goal is ensuring data consistency, not improving performance, so the above caution about inappropriate use doesn't apply.
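
For example, a unique index enforcing uniqueness of a key column might be created like this (the table and column names are hypothetical):

CREATE UNIQUE INDEX person_ssn_index ON person (ssn);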

Two forms of indices may be defined:

  • For a value index, the key fields for the index are specified as column names; multiple columns can be specified if the index access method supports multi-column indexes.

  • For a functional index, an index is defined on the result of a function applied to one or more attributes of a single class. This is a single-column index (namely, the function result) even if the function uses more than one input field. Functional indices can be used to obtain fast access to data based on operators that would normally require some transformation to apply them to the base data.
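
For example, a value index and a functional index might be created as follows (the table and column names are hypothetical):

-- A value index on an ordinary column:
CREATE INDEX test1_id_index ON test1 (id);

-- A functional index; the single index key is the result of lower(name):
CREATE INDEX test1_lower_name_index ON test1 (lower(name));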

Postgres provides btree, rtree and hash access methods for indices. The btree access method is an implementation of Lehman-Yao high-concurrency btrees. The rtree access method implements standard rtrees using Guttman's quadratic split algorithm. The hash access method is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these access methods are fully dynamic and do not have to be optimized periodically (as is the case with, for example, static hash access methods).

The Postgres query optimizer will consider using a btree index whenever an indexed attribute is involved in a comparison using one of: <, <=, =, >=, >

The Postgres query optimizer will consider using an rtree index whenever an indexed attribute is involved in a comparison using one of: <<, &<, &>, >>, @, ~=, &&

The Postgres query optimizer will consider using a hash index whenever an indexed attribute is involved in a comparison using the = operator.

Currently, only the btree access method supports multi-column indexes. Up to 16 keys may be specified by default (this limit can be altered when building Postgres).
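
For example, an access method other than the default btree can be selected with the USING clause, and a multi-column btree index is created by listing several columns (all names here are hypothetical, and location is assumed to be a box column):

-- An rtree index for spatial comparisons:
CREATE INDEX pointloc_index ON points USING rtree (location);

-- A multi-column btree index (btree is the default access method):
CREATE INDEX code_index ON codes (major, minor);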

An operator class can be specified for each column of an index. The operator class identifies the operators to be used by the index for that column. For example, a btree index on four-byte integers would use the int4_ops class; this operator class includes comparison functions for four-byte integers. In practice the default operator class for the field's datatype is usually sufficient. The main point of having operator classes is that for some datatypes, there could be more than one meaningful ordering. For example, we might want to sort a complex-number datatype either by absolute value or by real part. We could do this by defining two operator classes for the datatype and then selecting the proper class when making an index. There are also some operator classes with special purposes:

  • The operator classes box_ops and bigbox_ops both support rtree indices on the box datatype. The difference between them is that bigbox_ops scales box coordinates down, to avoid floating point exceptions from doing multiplication, addition, and subtraction on very large floating-point coordinates. If the field on which your rectangles lie is about 20,000 units square or larger, you should use bigbox_ops.

  • The int24_ops operator class is useful for constructing indices on int2 data, and doing comparisons against int4 data in query qualifications. Similarly, int42_ops supports indices on int4 data that is to be compared against int2 data in queries.
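
An operator class is written after the column name in CREATE INDEX. For example, to index an int2 column for comparisons against int4 values (the table and column names are hypothetical):

CREATE INDEX test_num2_index ON test (num2 int24_ops);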

The following query shows all defined operator classes:

SELECT am.amname AS acc_name,
       opc.opcname AS ops_name,
       opr.oprname AS ops_comp
    FROM pg_am am, pg_amop amop,
         pg_opclass opc, pg_operator opr
    WHERE amop.amopid = am.oid AND
          amop.amopclaid = opc.oid AND
          amop.amopopr = opr.oid
    ORDER BY acc_name, ops_name, ops_comp;

Use DROP INDEX to remove an index.
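
For example, to remove the hypothetical index created above:

DROP INDEX test1_id_index;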

Keys

Author: Written by Herouth Maoz. This originally appeared on the User's Mailing List on 1998-03-02 in response to the question: "What is the difference between PRIMARY KEY and UNIQUE constraints?".

Subject: Re: [QUESTIONS] PRIMARY KEY | UNIQUE

        What's the difference between:

              PRIMARY KEY(fields,...) and
              UNIQUE (fields,...)

       - Is this an alias?
       - If PRIMARY KEY is already unique, then why
         is there another kind of key named UNIQUE?
    

A primary key is the field(s) used to identify a specific row. For example, a Social Security number identifying a person.

A merely UNIQUE combination of fields has nothing to do with identifying the row; it is simply an integrity constraint. For example, I have collections of links. Each collection is identified by a unique number, which is the primary key. This key is used in relations.

However, my application requires that each collection also have a unique name. Why? So that a human being who wants to modify a collection will be able to identify it. It's much harder to know, if you have two collections named "Life Science", that the one tagged 24433 is the one you need, and the one tagged 29882 is not.

So, the user selects the collection by its name. We therefore make sure, within the database, that names are unique. However, no other table in the database relates to the collections table by the collection name. That would be very inefficient.

Moreover, despite being unique, the collection name does not actually define the collection! For example, if somebody decided to change the name of the collection from "Life Science" to "Biology", it will still be the same collection, only with a different name. As long as the name is unique, that's OK.

So:

  • Primary key:

    • Is used for identifying the row and relating to it.

    • Is impossible (or hard) to update.

    • Should not allow NULLs.

  • Unique field(s):

    • Are used as an alternative access to the row.

    • Are updateable, so long as they are kept unique.

    • NULLs are acceptable.
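
In table-definition terms, the collections example above might be declared like this (a sketch; the names are illustrative):

CREATE TABLE collection (
    id   int4 PRIMARY KEY,  -- identifies the row; other tables relate to it
    name text UNIQUE        -- integrity constraint only; may be renamed
);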

As for why non-unique keys are not defined explicitly in standard SQL syntax: well, you must understand that indices are implementation-dependent. SQL does not define the implementation, merely the relations between data in the database. Postgres does allow non-unique indices, but the indices used to enforce SQL keys are always unique.

Thus, you may query a table by any combination of its columns, even though you don't have an index on those columns. The indexes are merely an implementation aid that each RDBMS offers you, to make commonly used queries run more efficiently. Some RDBMSs may give you additional measures, such as keeping a key stored in main memory. They will have a special command, for example

CREATE MEMSTORE ON <table> COLUMNS <cols>
    
(this is not an existing command, just an example).

In fact, when you create a primary key or a unique combination of fields, nowhere in the SQL specification does it say that an index is created, nor that the retrieval of data by the key is going to be more efficient than a sequential scan!

So, if you want to use a combination of fields which is not unique as a secondary key, you really don't have to specify anything - just start retrieving by that combination! However, if you want to make the retrieval efficient, you'll have to resort to the means your RDBMS provider gives you - be it an index, my imaginary MEMSTORE command, or an intelligent RDBMS which creates indices without your knowledge, based on the fact that you have sent it many queries keyed on a specific combination of fields... (it learns from experience).
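
In Postgres, the concrete means is an ordinary (non-unique) index. For example, to speed up retrieval by a non-unique combination of fields (the names are hypothetical):

CREATE INDEX person_name_index ON person (last_name, first_name);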