Re: How to get good performance for very large lists/sets?

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Richard Frith-Macdonald <richard(dot)frith-macdonald(at)brainstorm(dot)co(dot)uk>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to get good performance for very large lists/sets?
Date: 2014-10-06 22:29:52
Message-ID: 543317E0.8030403@BlueTreble.com
Lists: pgsql-general

On 10/6/14, 3:02 AM, Richard Frith-Macdonald wrote:
> I'm wondering if anyone can help with advice on how to manage large lists/sets of items in a postgresql database.
>
> I have a database which uses multiple lists of items roughly like this:
>
> CREATE TABLE List (
> ID SERIAL,
> Name VARCHAR ....
> );
>
> and a table containing individual entries in the lists:
>
> CREATE TABLE ListEntry (
> ListID INT, /* Reference the List table */
> ItemID INT /* References an Item table */
> ) ;
> CREATE UNIQUE INDEX ListEntryIDX ON ListEntry(ListID, ItemID);
BTW, performance-wise, your best bet might be to forget about the listentry table entirely (and as an aside, I recommend not using CamelCase for database object names) and instead put an array in the list table:

CREATE TABLE list(
  list_id serial PRIMARY KEY
  , list_name varchar NOT NULL UNIQUE
  , list_items int[] NOT NULL
);
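
For example (untested sketch, using the table above), a GIN index on list_items lets you answer "which lists contain item X?" with an index scan, and unnest() expands a list back into rows:

-- GIN index on the array column (default array opclass)
CREATE INDEX list_items_gin ON list USING gin (list_items);

-- All lists containing item 42
SELECT list_id, list_name FROM list WHERE list_items @> ARRAY[42];

-- All items in list 1, one row per item
SELECT unnest(list_items) AS item_id FROM list WHERE list_id = 1;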

I think there's an extension/add-on that would let you enforce referential integrity between list_items and the items table, but I can't find it now.
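
In the meantime, a trigger roughly like this would do it (untested sketch; assumes an item table keyed by item_id, and note it does not protect against rows later being deleted from item):

-- Reject any list_items element that doesn't exist in item
CREATE OR REPLACE FUNCTION check_list_items() RETURNS trigger AS $$
BEGIN
  IF EXISTS (
    SELECT 1
    FROM unnest(NEW.list_items) AS li(item_id)
    LEFT JOIN item USING (item_id)
    WHERE item.item_id IS NULL
  ) THEN
    RAISE EXCEPTION 'list_items contains an item_id not present in item';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER list_items_check
  BEFORE INSERT OR UPDATE ON list
  FOR EACH ROW EXECUTE PROCEDURE check_list_items();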
--
Jim Nasby, Data Architect, Blue Treble
Data in Trouble? Get it in Treble! http://BlueTreble.com
