From:       Bill Moran <wmoran(at)collaborativefusion(dot)com>
To:         Geoffrey <esoteric(at)3times25(dot)net>
Cc:         PostgreSQL List <pgsql-general(at)postgresql(dot)org>
Subject:    Re: sequence skips 30 values, how?
Date:       2007-01-31 13:51:46
Message-ID: 20070131085146.c891c420.wmoran@collaborativefusion.com
Lists:      pgsql-general
In response to Geoffrey <esoteric(at)3times25(dot)net>:
> We are trying to track down an issue with our PostgreSQL application.
> We are running PostgreSQL 7.4.13 on Red Hat Enterprise ES 3.
>
> We have a situation where the postgres backend process drops core and
> dies. We've tracked it to an unusual situation involving a sequence
> value that is created during the operation that generates the core
> file. The bizarre part is that the sequence value skips 30+ entries.
>
> How is this even possible? Any suggestions would be greatly appreciated.
I don't know why your backends are dropping core; backtraces and the like
would probably help sort that out.
However, when a transaction requests a new sequence value and then aborts
(for whatever reason), the sequence is not going to roll back. My
understanding is that the overhead of making sequences transaction-aware,
so that they could avoid gaps, outweighs any benefit anyone has identified.
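To illustrate (a minimal sketch using a throwaway sequence name, not
anything from your schema): the value handed out inside the aborted
transaction is simply lost, leaving a gap.

    CREATE SEQUENCE demo_seq;      -- hypothetical sequence for illustration
    BEGIN;
    SELECT nextval('demo_seq');    -- returns 1
    ROLLBACK;                      -- transaction aborts...
    SELECT nextval('demo_seq');    -- ...but the next call returns 2, not 1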
What is the problem with gaps? If you're afraid of running out of numbers,
switch to BIGSERIAL.
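Something along these lines (a hypothetical table, just to show the column
type): BIGSERIAL is backed by an int8 sequence, so even with gaps you won't
realistically exhaust it.

    CREATE TABLE orders (
        id     BIGSERIAL PRIMARY KEY,              -- 64-bit sequence-backed key
        placed timestamptz NOT NULL DEFAULT now()
    );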
--
Bill Moran
Collaborative Fusion Inc.