From: Tino Wildenhain <tino(at)wildenhain(dot)de>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: match an IP address
Date: 2008-09-23 13:06:24
Message-ID: 48D8E9D0.6050703@wildenhain.de
Lists: pgsql-general
Craig Ringer wrote:
> Phoenix Kiula wrote:
> 
>> 1. What extra tax will this constraint levy on an INSERT or UPDATE on
>> this table? There are about 100,000 inserts a day, and over three
>> times as many UPDATES. The concurrency is pretty high -- I mean
>> sometimes 1,000 users at the same time but no more than that. If the
>> additional cost of insertion/updating is not too heavy, I suppose this
>> could be a nice approach.
> 
...
> I'd try a functional index first. If that didn't do the job, I'd use a
> trigger-maintained column _purely_ as an optimisation (ie I could drop
> it and lose no data) that stored text representations of the data.
> Honestly, though, I expect the functional index would be more than good
> enough and probably wouldn't have much of an INSERT/UPDATE cost.
> 
> Again, of course, I'd test before setting anything in stone.
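For illustration, a minimal sketch of the expression-index approach Craig
describes might look like the following. The table and column names are made
up, and it assumes the addresses are stored in an inet column:

  -- Hypothetical table, purely for illustration
  CREATE TABLE hits (
      id     bigserial PRIMARY KEY,
      userid text NOT NULL,
      ip     inet NOT NULL
  );

  -- Expression index on the text form of the address
  CREATE INDEX hits_ip_text_idx ON hits (host(ip));

  -- A lookup has to repeat the indexed expression to use the index
  SELECT * FROM hits WHERE host(ip) = '192.168.1.10';

Whether that actually beats a plain index on the inet column itself is
something only testing will tell, as Craig says.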
And do not forget that you know beforehand whether you are going to look up
a user or an IP, so I fail to see what problem needs to be solved here :(
Maybe we can get some enlightenment on where the data comes from and
what its usage will be? Then we might be able to work out a better
solution altogether. Peephole optimization is great, but sometimes the
effect is better if you just open the door :-)
Tino