Re: Oracle vs. PostgreSQL - a comment

From: Paul Förster <paul(dot)foerster(at)gmail(dot)com>
To: Ludovico Caldara <ludovico(dot)caldara(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Oracle vs. PostgreSQL - a comment
Date: 2021-04-29 17:13:48
Message-ID: A6B08764-C2B5-443C-841F-4B80C9BA2001@gmail.com
Lists: pgsql-general

Hi Ludovico,

> Sorry for this reply, but I feel it is necessary to make it clear what is reality and what is FUD against Oracle from Paul's e-mails in this thread...

none of it was FUD. It was a comparison done on a single machine. I drew my conclusions from that and added my personal view. You don't necessarily have to agree with my opinion, nor did I ask you to. But it's definitely not FUD!

> (Note: I work for Oracle now, but I've had 20 years experience as multi-platform database consultant)

I work *with* Oracle databases too and have been doing so for 20+ years. But I do not work *for* Oracle, and I don't feel inclined to spread their advertising.

> That is... not a problem. Is it, for real?

technically, no. Still, a) it makes no sense at all to advertise a 64-bit product that still needs 32-bit support (one could even call that false advertising!), and b) it may (or may not?) cost performance.

> Although I completely agree that the Oracle installation process is much longer and more complex than PostgreSQL, I disagree with the rest.

to create a CDB, you still have to provide paths which are then hard-coded into the control file! The Oracle software takes tons of space, and the installation takes much longer.

> The CREATE PLUGGABLE DATABASE is also a single line SQL command... The scripts to create a PDB or a PostgreSQL database depend a lot on what do you want to achieve (empty database? specific users or permissions? sanity checks? pre-emptive backup? add to cmdb?)

yes, CREATE PLUGGABLE DATABASE. It takes 30+ seconds to run, while on PostgreSQL, creating a database takes a few milliseconds. But we require a certain structure in the filesystem, which makes the whole thing much more complex.
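
Just to illustrate the difference (CDB/PDB names and paths below are made up, not our real ones):

    -- Oracle side: clone the seed, provide file name mappings
    CREATE PLUGGABLE DATABASE pdb1
      ADMIN USER pdbadmin IDENTIFIED BY changeme
      FILE_NAME_CONVERT = ('/u02/oradata/CDB1/pdbseed/',
                           '/u02/oradata/CDB1/pdb1/');
    ALTER PLUGGABLE DATABASE pdb1 OPEN;
    ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;

    -- PostgreSQL side: done in milliseconds
    CREATE DATABASE appdb;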

> For a new PostgreSQL architecture in the past I have written 230 lines of code to automate the database creation in an existing PostgreSQL cluster. That included setting up application users, hardening the default permissions on the public schema, registering in the CMDB, etc. It is not much code in my opinion and it is done once for all.

again, a simple initdb or CREATE DATABASE would do. To get everything done in PostgreSQL, my script is some 30 lines and includes default user creation, revoking some default permissions, etc., nothing compared to what I need for Oracle.
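
Roughly sketched, the core of such a script boils down to something like this (database and role names are just examples, not the real thing):

    -- run via psql as a superuser
    CREATE DATABASE appdb;
    REVOKE CONNECT ON DATABASE appdb FROM PUBLIC;
    CREATE ROLE app_owner LOGIN PASSWORD 'changeme';
    CREATE ROLE app_user  LOGIN PASSWORD 'changeme';
    GRANT CONNECT ON DATABASE appdb TO app_owner, app_user;
    \connect appdb
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;  -- harden the public schema
    ALTER DATABASE appdb OWNER TO app_owner;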

> This is bashing FUD against Oracle or lack of basic Oracle knowledge. Oracle online move, reorganization and patching capabilities are far ahead from PostgreSQL.

nonsense!

> Online Datafile Movement has existed since 12cR1. 8 years! https://oracle-base.com/articles/12c/online-move-datafile-12cr1

yes, I know. But did you try to move the SYSTEM, UNDO or TEMP tablespaces or the online redo log files? Did you try to move the *whole* database? You can move all data/index tablespace files with that (one by one, which is tiresome with many files), but not the essential tablespace files! Well, you can move the online redo log files by creating new ones and dropping the old ones, but that's about it; the essential tablespace files still can't be moved. I admit that I didn't try that with 19.x, but it wasn't possible up to now.
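
For completeness, this is what the online part looks like (paths, group numbers and sizes are examples); it works one file at a time, and the redo logs can only be "moved" by adding new groups and dropping the old ones once they are inactive:

    -- one datafile at a time (12c and later)
    ALTER DATABASE MOVE DATAFILE '/u02/oradata/CDB1/users01.dbf'
      TO '/u03/oradata/CDB1/users01.dbf';

    -- online redo logs: add new groups, switch, drop the old ones
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/CDB1/redo04.log') SIZE 512M;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;  -- only once the group is INACTIVE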

> Prior to that, for many years, it was possible to offline, move, rename and online datafiles, either grouped or singularly, without stopping the instance. Online logs can be rotated to a new location online. The only exception are the controlfiles that require an ALTER SYSTEM, shutdown, move, startup.

I know all that, but it still requires far too much work! And it still doesn't move the whole database!

> PostgreSQL must be stopped in order to move the database to a new path, and if it is to a new filesystem, you need the time for a full copy of the data, unless you do it via backup and recovery to reduce the downtime.

that's not true. You can pg_basebackup it to a new destination while it is running. Set up primary_conninfo and replication and start up the copy. Once it's in sync and you have a physical copy, change the port in the copy's postgresql.conf, stop both, then start only the copy and promote it. The switch takes 2-3 seconds of downtime.
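
A minimal sketch of that, assuming both clusters live on the same host and PostgreSQL 12 or newer (paths, the second port and the replication role name are made up):

    # 1. stream a copy while the old cluster keeps running; -R writes
    #    primary_conninfo and standby.signal into the new data directory
    pg_basebackup -D /new/pgdata -R -X stream -p 5432 -U replicator
    # 2. give the copy its own port and let it catch up as a standby
    sed -i 's/^#\?port.*/port = 5433/' /new/pgdata/postgresql.conf
    pg_ctl -D /new/pgdata start
    # 3. once it is in sync: switch its port back, stop both,
    #    start only the copy and promote it
    sed -i 's/^port.*/port = 5432/' /new/pgdata/postgresql.conf
    pg_ctl -D /new/pgdata stop -m fast
    pg_ctl -D /old/pgdata stop -m fast
    pg_ctl -D /new/pgdata start      # comes up as a standby on 5432
    pg_ctl -D /new/pgdata promote    # leaves recovery, now the primary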

If downtime doesn't matter but space does, stop the database cluster, move the whole PGDATA to a new location and start it there. It only requires as much downtime as the copy process takes plus a few seconds for shutdown and startup.
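
Or, for the simple variant (paths again made up):

    pg_ctl -D /old/pgdata stop -m fast
    mv /old/pgdata /new/pgdata          # or cp -a across filesystems
    pg_ctl -D /new/pgdata start         # remember to adjust PGDATA/service files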

> Again no, you don't need to recreate the controlfile for moving the datafiles , and no: altering binary controlfiles with `sed` is nothing a production DBA would ever do...

again no, you can't move SYSTEM, UNDO and TEMP! Also, what I mentioned was a quick and dirty hack and is not recommended.

> The laziness or lack of knowledge of your developers is not a problem with Oracle technology. Still, you can get a "query which is generated and, if printed out in a 11pt. sized font, can fill a billboard on a street", give it to Oracle and get the optimal execution plan 99.9% of the times. And if the execution is not optimal, Statistics Feedback kicks in and tries to produce a better one next time. And if it still fails, you can use hints or produce a trace 10053 and pin-point the reason for the CBO choice and get better statistics (or physical structures) for it.

yes, I know all this too. The problem is not setting an event, asking the OEM tuning advisor for help, or providing a hint to the query (something I find absolutely disturbing per se!). The problem is the lazy application developer asking a DBA to do the work for them!

> Seriously, I agree to a part of this. I would never buy something as expensive as Oracle for a basic CRUD application, PostgreSQL does the job very well.

but that covers 99.9% of all applications, no matter how big the databases become during their lifetime. It's like the typical M$ Office argument: "everyone has it, so it's used in this company too!" Ok, this is whataboutism, but it applies here.

> With Oracle you buy additional features that go beyond that and it is up to you to evaluate what you will be missing when using PostgreSQL instead (you might say "not much for the price", I would not agree), and if you will still be able to respond to your business requirements.

with PostgreSQL you can install any extension you want or need, or even write one yourself. Your point being? Throwing more millions at Oracle? It can't even authenticate LDAP or Windows AD users without buying a very expensive directory service (OID, OUD, or whatever) from them.
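
PostgreSQL does that out of the box in pg_hba.conf; a sketch with made-up host names and network (search+bind against AD, or Kerberos via GSSAPI):

    # pg_hba.conf
    host  all  all  10.0.0.0/8  ldap ldapserver=ad.example.com ldapbasedn="dc=example,dc=com" ldapsearchattribute=sAMAccountName
    host  all  all  10.0.0.0/8  gss include_realm=0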

And any Oracle software has bugs, bugs, and even more bugs. I have never seen software as bug-ridden as Oracle's. After one patch is before the next patch. Many patches don't even work with each other, etc. This is far worse than even Windows was in its darkest days.

> Because with Oracle you don't buy a good DBA experience (or a good developer experience, even if I think that the developer experience with Oracle is actually pretty good), you buy an enabler for specific business critical requirements: availability, performance, concurrency...

nonsense! You sound like a marketing brochure. :-P

> Trying to minimize the value of Oracle instrumentation is as FUD as it can be...

again, nonsense! If Oracle software tells me to look at some log file and all I find is a hex dump, then it's worthless to me, both as a user and as a DBA.

> Oracle not only gives you the instruments to understand what's happening, but also what's happened in the past.
[..]
> PostgreSQL gives you the source code that is obviously better, but it gives different answers for different problems.

there's the brochure again. Look at PostgreSQL's single most important log file and you have it all, depending on what logging you have enabled or disabled.
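
What that boils down to is a handful of postgresql.conf settings (the values here are only examples):

    logging_collector = on
    log_line_prefix = '%m [%p] %u@%d '
    log_min_duration_statement = 500   # log statements taking longer than 500 ms
    log_checkpoints = on
    log_lock_waits = on
    log_autovacuum_min_duration = 0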

> I used to have procedures to create 10046 traces and analyze them. Again, instrumentation is knowledge.

again, why would I need that at all?

> > Why not just limit the downtime as drastic as can *easily* be done with PostgreSQL in the first place without the whole setup nightmare that Oracle requires? I've been asking myself that for ages and always wondered why it couldn't be just as easy as it is with PostgreSQL.
>
> I completely agree with that.

wow!

> But with RAC you have online patching, and with ADG you have rolling upgrades with almost no downtime. Not for the instance, but for the business. (I know, more money, but still it is possible).

yes, more and more money. In fact, the price of a big, expensive car per database per year. But since stopping and starting an Oracle instance takes about a minute or so, while it takes 2-3 seconds with PostgreSQL, I don't see why anyone should throw yet more money at Oracle.

Oh, and there's still the LONG datatype, which Oracle can't seem to get rid of in its own database because it's so bloated, even though it has been recommending for almost 20 years now that people not use it. SCNR.

And with Oracle, you still can't rename users, a feature that DBAs have been asking for for decades. And there's more stuff, but I'm tired now.
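
In PostgreSQL, for comparison, that's a one-liner (role names made up):

    ALTER ROLE old_name RENAME TO new_name;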

Bottom line: you don't work *at* Oracle, you don't work *with* Oracle, you work *for* Oracle as an evangelist. That's why this is my final mail in this thread. I don't discuss religion.

Cheers,
Paul
