massive INSERT

From: Kurt Overberg <kurt(at)hotdogrecords(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: massive INSERT
Date: 2003-03-13 17:45:32
Message-ID: 3E70C3BC.40904@hotdogrecords.com
Lists: pgsql-sql


Hi everyone!

I'm trying to maintain a 'membership' table in my postgres database.
Membership is determined by a potentially costly algorithm that
basically returns a series of member ids, which I then insert into
my membership table (the xrefmembergroup table shown below). Up to now,
for speed, I've been deleting all users in a group, then re-building the
group from scratch. The table looks roughly like this:

id            | integer | not null default nextval('"xrefmgrp_id_seq"'::text)
membergroupid | integer | not null default 0
memberid      | integer | not null default 0

There's a constraint on the table saying that (membergroupid,memberid)
needs to be UNIQUE.
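
For reference, the definition boils down to something like this (a rough
sketch only; the actual DDL and constraint name may not be exact):

create table xrefmembergroup (
    id            integer not null default nextval('"xrefmgrp_id_seq"'::text),
    membergroupid integer not null default 0,
    memberid      integer not null default 0,
    unique (membergroupid, memberid)
);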

...so before re-building a group, I do a:

delete from xrefmembergroup where membergroupid = 4 ;

...then blast the ids into the table:

insert into xrefmembergroup (membergroupid, memberid)
select 4 as membergroupid, member.id as memberid
from member
where <* huge complex select *>
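
Strung together, the whole rebuild is roughly the following (a sketch; the
transaction wrapper is my assumption here, which should keep readers of the
group from seeing it half-emptied while the rebuild runs, and the complex
select is elided as above):

begin;

-- empty the group...
delete from xrefmembergroup where membergroupid = 4;

-- ...then repopulate it from the costly query
insert into xrefmembergroup (membergroupid, memberid)
select 4 as membergroupid, member.id as memberid
from member
where <* huge complex select *>;

commit;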

...I've found this to be faster than running the query, figuring out who
needs to be removed from the group, who needs to be added, and so on.
The thing that I'm worried about is that this table is going to be
pretty big (potentially millions of rows), and every time I rebuild it
I lose (or at least invalidate) all my indexes. Is that the case?
Is constantly deleting and then adding large numbers of rows to a table
really bad for performance? I'm worried this isn't going to scale well.
Does anyone know of a better way to do this? Thoughts and comments would
be appreciated.
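
For comparison, the kind of differential update I have in mind would look
something like this (a rough sketch only; the <* huge complex select *>
placeholder stands in for the real predicate):

-- drop members that the query no longer returns...
delete from xrefmembergroup
where membergroupid = 4
  and memberid not in (
      select member.id from member where <* huge complex select *>
  );

-- ...and add the ones that are new
insert into xrefmembergroup (membergroupid, memberid)
select 4 as membergroupid, member.id as memberid
from member
where <* huge complex select *>
  and not exists (
      select 1 from xrefmembergroup x
      where x.membergroupid = 4 and x.memberid = member.id
  );

Since that runs the expensive select twice (plus the correlated lookup),
the wholesale delete-and-reinsert has been the faster option for me so far.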

Thanks!

/kurt
