From: Felson <felson123(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: full table...
Date: 2002-08-20 15:39:32
Message-ID: 20020820153932.11925.qmail@web13008.mail.yahoo.com
Lists: pgsql-novice

Actually, there are 7 inserts that take place on that
table before it can talk to the unit that is
broadcasting to me again...
There is a unique constraint on (tstamp, cd_id), but
removing it didn't fix the speed issue...
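(For concreteness, the layout is roughly like this;
"readings" and "val" are stand-ins for the real names:

    CREATE TABLE readings (
        tstamp  timestamp NOT NULL,
        cd_id   integer   NOT NULL,
        val     float8,             -- placeholder data column
        UNIQUE (tstamp, cd_id)
    );

and each of the 7 inserts is along the lines of:

    INSERT INTO readings (tstamp, cd_id, val)
        VALUES (now(), 42, 1.5);
)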
I am at about 3,000,000 rows, give or take a few
thousand. My first take is that I agree with you that
3 million rows should not be an issue at insert time,
but at this point I have no clue what else it could
be... The rest of the database responds just fine; it
is only this table. I have also done a VACUUM ANALYZE
on the table in hopes that it would help...
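(That was just the plain command, with the same
placeholder table name:

    VACUUM ANALYZE readings;
)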
As for my original fix: is there any disadvantage to
that many tables, other than \d becoming almost
useless?
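(Re the trigger/rule question below: queries along
these lines should show any triggers or rewrite rules
on the table -- "readings" again standing in for the
real name:

    -- any triggers on the table?
    -- (foreign-key checks show up here as RI_* triggers)
    SELECT tgname FROM pg_trigger
     WHERE tgrelid = (SELECT oid FROM pg_class
                       WHERE relname = 'readings');

    -- any rewrite rules on the table?
    SELECT rulename FROM pg_rewrite
     WHERE ev_class = (SELECT oid FROM pg_class
                        WHERE relname = 'readings');
)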
--- Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Felson <felson123(at)yahoo(dot)com> writes:
> > I have a table that stores a HUGE volume of data every
> > day. I am now running into a problem where when I try
> > to insert data, the remote connection times out
> > because it takes too long... (1 minute)
>
> How much is HUGE? I'm having a really hard time
> believing that a simple insert could take > 1min
> regardless of table size ... are there perhaps
> triggers or rules or foreign-key references on this
> table that could be eating the time?
>
> regards, tom lane