From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Thomas Güttler <guettliml(at)thomas-guettler(dot)de>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: PG vs ElasticSearch for Logs
Date: 2016-08-19 17:51:29
Message-ID: CAHyXU0zbX7F=yJiDM8FyMKJ01yir6v5YbdLQ9cvg1rh7FVpwbQ@mail.gmail.com
Lists: pgsql-general
On Fri, Aug 19, 2016 at 2:32 AM, Thomas Güttler
<guettliml(at)thomas-guettler(dot)de> wrote:
> I want to store logs in a simple table.
>
> Here my columns:
>
> Primary-key (auto generated)
> timestamp
> host
> service-on-host
> loglevel
> msg
> json (optional)
>
> I am unsure which DB to choose: Postgres, ElasticSearch or ...?
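[The columns above can be sketched as a Postgres table. This is only an illustration, not something from the thread: the column names and types below are my guesses, since the original list leaves types unspecified.]

```sql
-- Hypothetical log table; names and types assumed from the column list above.
CREATE TABLE log (
    id       bigserial PRIMARY KEY,          -- auto-generated primary key
    ts       timestamptz NOT NULL DEFAULT now(),
    host     text NOT NULL,
    service  text NOT NULL,                  -- "service-on-host"
    loglevel text NOT NULL,
    msg      text NOT NULL,
    details  jsonb                           -- optional json payload
);

-- Logs are usually filtered by host/service over a time range,
-- so a composite index on those columns is a reasonable starting point.
CREATE INDEX ON log (host, service, ts);
```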
We use SOLR (which is similar to ElasticSearch) here for json document
retrieval. Agreeing to do this was one of the biggest mistakes of my
professional career. The choice was somewhat forced because at the
time jsonb was not fully baked. In my opinion, jsonb outclasses these
types of services, particularly if you are already invested in
Postgres. The specifics of your requirements naturally also play into
this decision. The bottom line, though, is that these kinds of systems
are not nearly as fast or robust as they claim to be, particularly if
you wander off the use cases they are engineered for (by needing
transactions or joins, for example). They also tend to be fairly
opaque in how they operate, and the supporting tooling is laughable
relative to that of established database systems.
Postgres OTOH can be made to do pretty much anything given sufficient
expertise and a progressive attitude.
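[To illustrate the jsonb point: with a GIN index, Postgres can serve the
indexed document lookups these search services are typically used for.
A sketch, not from the thread: it assumes a table named log with the
columns from Thomas's list and a jsonb column I have called details.]

```sql
-- A GIN index accelerates jsonb containment (@>) and
-- key-existence (?) queries on the json payload.
CREATE INDEX ON log USING gin (details);

-- Find recent error events whose json payload contains a given key/value.
SELECT ts, host, msg
FROM log
WHERE details @> '{"request_id": "abc123"}'
  AND loglevel = 'ERROR'
ORDER BY ts DESC
LIMIT 50;
```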
merlin