Re: Triggers do not fire

From: Jason Earl <jdearl(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Reiner Dassing <dassing(at)wettzell(dot)ifag(dot)de>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: Triggers do not fire
Date: 2001-10-17 18:20:51
Message-ID: 20011017182051.63377.qmail@web10006.mail.yahoo.com
Lists: pgsql-sql

I can vouch for that. I have several tables with 10
to 16 million entries in much the same sort of setup
as you are describing (primary key, timestamp, value).
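
The layout is essentially what you would get from
something like this (a minimal sketch; the column
names and types are just my guess at a matching
schema, though caseweights1_dt_idx is the real index
from the plan below):

CREATE TABLE caseweights1 (
    id    serial PRIMARY KEY,
    dt    timestamp NOT NULL,
    value numeric
);
CREATE INDEX caseweights1_dt_idx ON caseweights1 (dt);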

PostgreSQL will quite happily use the timestamp
indexes when accessing these tables, and it doesn't
default to a sequential scan until a considerable
number of tuples would be searched. For example:

processdata=> explain select count(*) from caseweights1 where dt > '2001-10-11';
NOTICE:  QUERY PLAN:

Aggregate  (cost=255053.37..255053.37 rows=1 width=0)
  ->  Index Scan using caseweights1_dt_idx on caseweights1
        (cost=0.00..254827.01 rows=90544 width=0)

EXPLAIN
processdata=> select count(*) from caseweights1 where dt > '2001-10-11';
count
--------
146773
(1 row)

processdata=> select count(*) from caseweights1;
count
----------
14984087
(1 row)

As you can see, even though my table is fairly large,
PostgreSQL happily uses indexes even when a
significant number of tuples must be accessed. The
indexed count took perhaps a second on my test server
(400MHz, 128MB RAM, ordinary IDE hard drive). The
count of all the records, on the other hand,
triggered a sequential scan that took a long time to
complete.
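
If the planner ever picks a sequential scan where you
expect an index scan, the usual first step is to
refresh its statistics; you can also toggle
enable_seqscan purely as a diagnostic to compare the
two plans (a sketch, using the table above):

VACUUM ANALYZE caseweights1;
-- diagnostic only: discourage seq scans, then compare EXPLAIN output
SET enable_seqscan TO off;
EXPLAIN SELECT count(*) FROM caseweights1 WHERE dt > '2001-10-11';
SET enable_seqscan TO on;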

In other words, chances are good that PostgreSQL will
handle your data without special modification.
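
Even the access pattern you describe (the most recent
few thousand entries) comes straight off the dt index
with a plain range query, for example (the cutoff
date here is illustrative):

SELECT id, dt, value
  FROM caseweights1
 WHERE dt > '2001-10-10';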

Jason

--- Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Reiner Dassing <dassing(at)wettzell(dot)ifag(dot)de> writes:
> > I have a table which has a lot of entries (some x
> > millions) of the kind (id, timestamp, value).
> > The access (selects) is concentrated on the most
> > recent few thousand entries.
> > To adapt to this fact I want to set up a "virtual"
> > table - test in my example - which is accessed by
> > the clients, but in reality the entries are
> > separated into different small tables. These tables
> > are dynamically created to hold the values,
> > distinguished by year.
>
> Why bother? Seems like you are just making life
> complicated for yourself. One big table with a
> suitable index ought to work fine.
>
> regards, tom lane

