From: Hannu Krosing <hannu(at)tm(dot)ee>
To: "Anthony W(dot) Youngman" <compp(at)thewolery(dot)demon(dot)co(dot)uk>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Dreaming About Redesigning SQL
Date: 2003-11-09 23:23:48
Message-ID: 1068420228.4486.87.camel@fuji.krosing.net
Lists: pgsql-hackers
Anthony W. Youngman wrote on Wed, 05.11.2003 at 01:15:
> >1) Your database might change over time and say a table that
> >originally had only a few rows could suddenly grow considerably.
> >Now an optimiser would insulate you from these changes, or in the
> >worst case all that would need to be done would be to create an
> >index (and, yes, check that the DBMS starts using it).
>
> Except that an optimiser is *irrelevant* to MV. What do we need to be
> insulated from? MV doesn't care whether a FILE is 4Kb or 40Gb; the cost
> of accessing a single record, AT RANDOM, from within that FILE is almost
> identical. Where would we gain from an optimiser? In practice, it would
> get in the way and slow us down!
getting a single record from any DB, AT RANDOM, follows the same rules ;)
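
To make that concrete, here is a minimal sketch in PostgreSQL syntax
(the orders table and its columns are hypothetical): a lookup by key is
an index probe whose cost grows with index depth, not with table size,
so the "4Kb or 40Gb" point holds for SQL engines too.

  CREATE TABLE orders (
      order_id    integer PRIMARY KEY,  -- hypothetical schema
      customer_id integer,
      order_date  date,
      total       numeric
  );

  EXPLAIN SELECT * FROM orders WHERE order_id = 42;
  -- the plan shape one would expect (details vary by version and data):
  --   Index Scan using orders_pkey on orders
  --     Index Cond: (order_id = 42)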
> >
> >2) You might have a product that runs in a number of sites: large
> >ones and small ones. Now you would not have to reoptimise the
> >programs for each type of site.
>
> BUT WE DON'T NEED AN OPTIMISER. IT'S A WASTE OF CPU TIME!!! WE
> *D*O*N*'*T* *N*E*E*D* ONE!!!
on slashdot this would be tagged *funny* ;)
> >3) Complex SQL queries do quite a lot of things and it might not be
> >very obvious to the programmer how best to optimise them.
>
> But a large chunk of SQL's complexity is reassembling a view of an
> entity.
perhaps "a large chunk of initial perceived complexity of SQL" is
reassembling a view of an entity. You will get over it in a day or two
;)
that is *if* the thing you are after *is* an entity.
> MV doesn't have that complexity. An MV program views the
> database the same way as a programmer views the real world.
You mean screenfuls of weird green glowing letters running down the
screen, leaving slowly fading tracks?
> So it's pretty obvious to a MV programmer how to optimise things.
I've never been very good at optimising the real world - the obvious
optimisations have very limited scope.
> >4) depending on input from user (say, a search screen) the optimal
> >access path may be different. An optimiser
> >could generate a different path depending on this input.
>
> Again, MV views the entity as a whole, so probably we don't need to
> generate a "different path" - it's just "get me this entity" regardless
> of what we need to know about it.
Not "what we need to know about it" but "what we already know about it".
So it is always a SEQUENTIAL SCAN, non?
or is there some magic by which you have all "entities" automatically
hashed by each and every attribute (or combination of attributes)?
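
For instance (a sketch, reusing the hypothetical orders table from
above): with no index on the searched attribute the planner has nothing
better than a sequential scan; add an index and the *same* query text
can get a different access path, chosen per the input at hand.

  EXPLAIN SELECT * FROM orders WHERE total > 1000;
  --   Seq Scan on orders
  --     Filter: (total > 1000::numeric)

  CREATE INDEX orders_total_idx ON orders (total);

  EXPLAIN SELECT * FROM orders WHERE total > 1000;
  -- now, if the predicate is selective enough:
  --   Index Scan using orders_total_idx on orders
  --     Index Cond: (total > 1000::numeric)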
> >> We're not interested in being able to
> >>improve the speed at which the db can find data to respond to an app
> >>request - with an access factor of 1.05 (actually, it's nearer 1.02 or
> >>1.03) we consider any effort there to be a waste of time ...
> >>
> >But isn't it better to have NO disk reads than one? I thought disk I/O
> >was rather expensive? With
> >that mentality you will always be disk bound.
>
> I'm assuming we don't have sufficient RAM to cache stuff ...
>
> Our mentality is to leave disk caching to the OS. The app says "get me
> X". The database knows *exactly* where to look and asks the OS to "get
> me disk sector Y".
How does the database map X to Y, without any extra info (meaning extra
disk accesses)?
If you can always predict your data needs that well, you don't need a
database; all you need is a file system.
> Any OS worth its salt will have that cached if it's
> been asked for recently.
Were you not talking about databases with substantially more data than
fits into RAM?
> That way, we're only caching stuff
> that's been accessed recently. But because for us the "atomic" chunk is
> an entity, there's a good chance that stuff has been accessed and is in
> cache.
depending on your point of view, anything can be an "entity" (or atomic
chunk) ;)
> SQL optimisation *seems* to be more "efficient" because it tries to
> predict what you're going to want next.
Where do you get your weird ideas about SQL optimisation from?
> But whereas SQL *guesses* that
> because you've accessed one order detail, you're likely to want other
> order details from the same invoice (a sensible guess), you cannot
> compare this to MV because it gives you those order details as a side
> effect. In order for MV optimisation to be of any use, it would need to
> guess which INVOICE I'm going to access next, and frankly a random
> number generator is probably as good an optimiser as any!
So you claim that MV is good for problems you already know the best way
to solve?
> >>Basically, the only way you can beat us in the real world is to throw
> >>hardware at the problem - and like I said with linux and macro/micro
> >>kernels, we can do the same :-)
> >>
> >Well, please do!
>
> We do. Which is why we can smoke any relational db for speed unless the
> hardware is big enough to store the entire database in RAM (and even
> then we'd beat it for speed :-) (just not that much in absolute terms,
> although probably a fair bit in percentages :-)
I guess this is the same as some ASM programmer claiming he can beat a C
compiler. This may be true for small very specific tasks on a very
well-understood hardware, but usually not in any more general sense.
Also, while it can take up to one second for a DBMS to optimise a
query, it usually takes much longer (minutes, hours or even days) for a
human to do the same.
> >>Can you improve on what I've just done? Is any improvement POSSIBLE?
> >>
> >The whole trick is to have as much stuff in memory as possible, and
> >preferably the *right* stuff. Even
> >if we have a small memory the most used rows will remain in memory and
> >hence minimise the need
> >for disk I/O. And writing to disk is nearly always asynchronous so it
> >will not affect response time.
> >"The only good I/O is a dead I/O" ;-)
> >
> Yep. Which is why our attitude of viewing the world as entities means
> we're probably going to smoke you. Statistics says the chances of us
> being right "by accident" and avoiding the need for i/o is very high.
> While you need artificial intelligence - which has a habit of getting
> things wrong :-)
>
> And anyway. Aren't you jumping to conclusions? You are *assuming* that
> there is such a thing as the "most used rows". In other words, you are
> *assuming* that normal db access only accesses a well-defined subset of
> the database. What if there is no way of predicting what the user is
> going to want next? Your "trick" is worthless ... the last time it was
> accessed is likely to be before the latest reboot ...
Are you referring to some Win32 database? Or do MV databases inherently
need rebooting?
> And, because we view the world with our "atom" of an entity, we almost
> certainly stand a better chance than you of related data "just
> happening" to be in RAM when we ask for it ...
If you read more into RAM than absolutely needed, you *may* stand a
better chance of related data "just happening" to be in RAM when you ask
for it, but you also *may* just have done unnecessary I/O and probably
pushed something useful out of cache.
> >What if your customer has accumulated, over the years, say 1000 orders?
> >Would you want to pollute
> >your cache with all those orders? Note that this is a problem that you
> >will first encounter after the
> >system has been running for quite a long time. In MV, what would you
> >do in a situation like this?
>
> Ignore it? Because it's not a problem? The only time it's likely to be a
> problem is if the question is "please get all orders for company X".
The question can be much more complicated than that. How about:
"please get all orders for company X that have items Y which are
currently out of stock and which were sold at prices higher than we sold
them to company Z in the same quarter"
(as company X recently merged with company Z and we claimed to both
that they were getting the absolute lowest prices, so we must be
prepared for damage control).
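
Just as a sketch of what the optimiser gets to chew on, one possible
SQL rendering (hypothetical schema: orders and order_items as above,
plus an items table with an on_hand stock count; :company_x and
:company_z are placeholder parameters, and I read "higher than we sold
them to company Z" as "higher than Z's cheapest price that quarter"):

  SELECT o.order_id, oi.item_id, oi.unit_price
    FROM orders o
    JOIN order_items oi ON oi.order_id = o.order_id
    JOIN items i        ON i.item_id = oi.item_id
   WHERE o.customer_id = :company_x
     AND i.on_hand = 0                   -- currently out of stock
     AND oi.unit_price > (
           SELECT min(oi2.unit_price)    -- Z's best price, same quarter
             FROM orders o2
             JOIN order_items oi2 ON oi2.order_id = o2.order_id
            WHERE o2.customer_id = :company_z
              AND oi2.item_id = oi.item_id
              AND date_trunc('quarter', o2.order_date)
                  = date_trunc('quarter', o.order_date));

Picking the join order and index usage for this by hand, for every site
size, is exactly the work an optimiser does for free.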
> And
> even then, provided the ORDERS file is indexed on "customer_id", it's
> still just a SINGLE access to the index and we have a list of EVERY
> order.
How is this different from an SQL index?
Does MV have some hitherto unknown index type, in addition to the
traditional btree, hash, bitmap, ..., that allows one to get anything
with just *one* *single* <whatever>?
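
For reference, PostgreSQL itself exposes several access methods (a
syntax sketch; the customers table is hypothetical), and none of them
delivers a row in one single page access: a btree walks root to leaf
and then visits the heap; even a hash index reads a bucket page first.

  CREATE INDEX customers_name_btree ON customers USING btree (name);
  CREATE INDEX customers_name_hash  ON customers USING hash (name);
  -- either way a lookup costs index page(s) + heap page,
  -- i.e. strictly more than *one* *single* access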
> >>>Frankly, it may well be that PICK systems run faster and cheaper than
> >>>relational ones, but certainly
> >>>not for the reasons you state.
> >>>
> >>Well, could you optimise that index any more?
> >>
> >Which index?
>
> I was thinking of that customer names index you were talking about.
> Which basically consists solely of the names, pointers to the records
> they come from, and a bit of empty space. And if I know the name, I can
> find the master record in two goes - one hit to read the index (from
> which I retrieve the record key),
How can you get the record key from an index with *one* hit?
The only way to do it would be using perfect hashes, but that would need
constant recalculation of the whole index.
> and a second hit on the main file to retrieve the company record.
> >Well, I know of a lot of large banks, insurance companies etc... that
> >are using SQL DBMS'es
> >and I think they are running just fine. Amazon uses Oracle. Would you
> >say that their performance
> >is adequate? And I have first-hand witnessed and built lots of fast
> >systems that use SQL DBMS'es.
>
> Mebbe. Why did Temenos (a major supplier of banking software) buy jBASE
> then? jBASE is an MV database.
Perhaps they had developed some software on jBASE, they were the last
customer of jBASE, and they wanted to make sure that jBASE would not go
out of business?
> Unfortunately (a) marketing budget counts, and (b) marketing budgets can
> also set the agenda. Witness that relational theory completely ignores
> performance,
OTOH, I've heard complaints that SQL largely ignores "relational theory"
;-p
> and look at the trouble I'm having trying to prove to you
> that MV is close to the *THEORETICAL* limit of performance (not helped
> by my poor grasp of stats :-)
You are showing us that MV is perhaps 1.5x ahead in speed for simple,
well-defined, never-changing tasks. For more complex tasks the 1.5x
advantage will evaporate due to Moore's law catching up with development
time, i.e. in MV you spend long enough manually programming the complex
queries that just waiting for the hardware to catch up would solve the
same problem with no work ...
> It is a *mathematical* *proof* that you cannot beat Huffman compression.
I also remember claims that "You can't beat the feeling", but Huffman
compression is one of the weakest I know of.
How do you prove the unavailability of violent means mathematically?
Do I sound as delusional as this whole thread ?
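
To put a number on how narrow that "proof" is: Huffman is optimal only
among codes that spend a whole number of bits per symbol. A worked
example for a skewed two-symbol source with $P(a)=0.99$, $P(b)=0.01$:

\[
H = -0.99 \log_2 0.99 - 0.01 \log_2 0.01 \approx 0.081 \text{ bits/symbol}
\]

Huffman cannot go below 1 bit/symbol here, more than 12 times the
entropy; arithmetic coding (or Huffman over blocks of symbols) gets
arbitrarily close to $H$.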
> It shouldn't be that hard to prove that you can't beat MV. It's just
> that we're mostly USERS of databases, not database computer scientists.
> And, like me, not skilled in the necessary maths.
So you believe anything the salespeople tell you?
"We have done the math and can assure you one can't get any faster" :)
> >>If that results in running SQL over MV then we've won, I think :-) We
> >>can do that already ...
> >>
> >Not really, because it's the SQL that is the "relational" part (well,
> >it's not purely relational). So
> >the funny thing is that, what ever lies below the surface (=whatever
> >engine we are using) relational
> >gets all the credit!! Unfair, isn't it? As long as it LOOKS like it's
> >relational to the user, it does not really matter
> >what happens under the hood.
>
> Yup, it is unfair :-( And yup, it's happening. The more I read about new
> advances in how the underlying relational engines work, the more I see
> that they are just copying 30-year-old MV technology :-(
And they have been doing so for the last 40 years :)
I don't think there is much in basic "technology" that is different
between MV and RDBMS, just that MV puts the user at a much lower level
and thus lets/forces one to do more manual labour.
----------------
Hannu