Log storage

From: Ivan Sagalaev <maniac(at)softwaremaniacs(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Log storage
Date: 2017-10-18 06:18:18
Message-ID: 2f320b61-a783-7947-8677-95a469b150c4@softwaremaniacs.org
Lists: pgsql-general

Hello everyone,

First-time poster here; sorry if I've picked the wrong list for my question.

I am planning to use PostgreSQL as storage for application logs (lines
of text) with the following properties (a rough sketch of what I have in
mind follows the list):

- Ingest logs at a high rate: 3K lines per second at a minimum, but the
more the better, since that would let us use one Postgres instance for
more than one app.

- Only store logs for a short while: days, maybe weeks.

- Efficiently query logs by an arbitrary time period.

- A "live feed" output, akin to `tail -f` on a file.

For context, I have only used Postgres for bog-standard read-heavy web
apps, so I'm completely out of my depth with this kind of workload. Here
are my questions:

- Is it even possible/advisable to use an actual ACID RDBMS for such a
load? Or, put another way, can Postgres be tuned to achieve the required
write throughput on some mid-level hardware on AWS? Maybe at the expense
of sacrificing transaction isolation or something…
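
If I read the docs right, the usual knob here trades durability rather
than isolation; this is the kind of tuning I imagine (settings taken from
the docs, and I may well be misreading their impact):

    # postgresql.conf sketch; my understanding is that a crash may lose
    # the last fraction of a second of commits, which seems fine for logs
    synchronous_commit = off
    checkpoint_timeout = 15min
    max_wal_size = 4GB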

- Is there an efficient kind of index that would allow me to do `where
"time" between ...` on a constantly updated table?
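
For instance, I came across BRIN indexes, which sound like a fit for a
table that's append-mostly and naturally ordered by time; is that the
right idea?

    -- assuming the hypothetical "logs" table sketched above
    CREATE INDEX logs_time_brin ON logs USING brin ("time");

    SELECT * FROM logs
    WHERE "time" BETWEEN '2017-10-17 00:00' AND '2017-10-18 00:00';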

- Is there such a thing as a "live cursor" in Postgres for producing the
`tail -f`-like output, or should I just query it in a loop (and skip
records if the client can't keep up)?
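
The closest built-in mechanism I've found is LISTEN/NOTIFY; here's a
sketch of what I imagine (function and channel names are made up), though
I worry a per-row trigger would eat into the ingest budget:

    -- hypothetical trigger that pings listeners about new rows
    CREATE FUNCTION notify_new_log() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('new_log', NEW.id::text);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER logs_notify AFTER INSERT ON logs
        FOR EACH ROW EXECUTE PROCEDURE notify_new_log();

    -- a consumer would LISTEN new_log, then fetch rows with id > last seen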

Thanks in advance for all the answers!
