From: Condor <condor(at)stz-bg(dot)com>
To: Ivan Sagalaev <maniac(at)softwaremaniacs(dot)org>
Cc: pgsql-general(at)postgresql(dot)org, pgsql-general-owner(at)postgresql(dot)org
Subject: Re: Log storage
Date: 2017-10-19 11:33:06
Message-ID: 456d5311e84667e5dd93120d91fca0ac@stz-bg.com
Lists: pgsql-general
On 18-10-2017 09:18, Ivan Sagalaev wrote:
> Hello everyone,
>
> An inaugural poster here, sorry if I misidentified a list for my
> question.
>
> I am planning to use PostgreSQL as a storage for application logs
> (lines of text) with the following properties:
>
> - Ingest logs at high rate: 3K lines per second minimum, but the more
> the better as it would mean we could use one Postgres instance for
> more than one app.
>
> - Only store logs for a short while: days, may be weeks.
>
> - Efficiently query logs by an arbitrary time period.
>
> - A "live feed" output, akin to `tail -f` on a file.
>
> For context, I've only used Postgres for bog-standard read-heavy web
> apps, so I'm completely out of my depth with such a case. Here are my
> questions:
>
> - Is it even possible/advisable to use an actual ACID RDBMS for such a
> load? Or, put another way, can Postgres be tuned to achieve the
> required write throughput on some mid-level hardware on AWS? Maybe at
> the expense of sacrificing transaction isolation or something…
>
> - Is there an efficient kind of index that would allow me to do `where
> 'time' between ... ` on a constantly updated table?
>
> - Is there such a thing as a "live cursor" in Postgres for doing the
> `tail -f` like output, or I should just query it in a loop (and skip
> records if the client can't keep up)?
>
> Thanks in advance for all the answers!
Hello,

Not much on the topic, but I had the same problem and solved it by using a
Redis server (memory is cheap and fast) to hold the logs for an hour or a
day, depending on the load average, and then dumping them to a CSV or SQL
file and inserting that into the PostgreSQL database.
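For what it's worth, the buffer-then-batch idea could be sketched roughly like this (assuming redis-py and psycopg2 clients; the key name "app:logs", the table app_logs(logged_at, message), and the "timestamp<TAB>message" entry format are my own illustrative choices, not Hristo's actual setup):

```python
# Rough sketch: producers RPUSH "timestamp<TAB>message" strings onto a Redis
# list; a periodic job pops a batch and bulk-loads it with COPY.
import io

def to_copy_row(ts: str, line: str) -> str:
    """Escape one log line for PostgreSQL COPY text format (tab-separated)."""
    esc = line.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
    return f"{ts}\t{esc}\n"

def flush_batch(redis_client, pg_conn, key="app:logs", batch=10000):
    """Atomically pop up to `batch` buffered lines and COPY them into Postgres."""
    pipe = redis_client.pipeline()  # MULTI/EXEC: read + trim happen atomically
    pipe.lrange(key, 0, batch - 1)
    pipe.ltrim(key, batch, -1)
    rows, _ = pipe.execute()
    if not rows:
        return 0
    buf = io.StringIO()
    for raw in rows:
        ts, _, line = raw.decode().partition("\t")
        buf.write(to_copy_row(ts, line))
    buf.seek(0)
    with pg_conn.cursor() as cur:
        cur.copy_from(buf, "app_logs", columns=("logged_at", "message"))
    pg_conn.commit()
    return len(rows)
```

COPY keeps the per-row overhead low, which is what makes an hourly or daily dump much cheaper than row-by-row INSERTs at 3K+ lines/second.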
My Redis records are structured so that I can review the current actions
of each user, much like tail -f.
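One possible shape for that tail -f side: the ingest path also PUBLISHes each line to a Redis channel, and any number of viewers SUBSCRIBE (the channel name and helper below are illustrative, assuming redis-py):

```python
# Live-feed sketch over Redis pub/sub.

def decode_message(msg):
    """Return the payload of a pub/sub data message, or None for meta events
    such as the initial subscribe confirmation."""
    if msg.get("type") != "message":
        return None
    data = msg["data"]
    return data.decode() if isinstance(data, bytes) else str(data)

def tail_forever(redis_client, channel="app:logs:feed"):
    """Print lines as they arrive, like `tail -f`."""
    pubsub = redis_client.pubsub()
    pubsub.subscribe(channel)
    for msg in pubsub.listen():
        line = decode_message(msg)
        if line is not None:
            print(line)
```

A nice property here is that Redis pub/sub is fire-and-forget: a viewer that can't keep up simply misses messages, which matches the original poster's "skip records if the client can't keep up" requirement.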
The hardware is nothing special: a Redis server with a lot of memory and a
cheap server for the database that stores the logs. I'm now even trying a
different approach that removes the database server entirely, because I
already store each day as a separate gzipped log file for backup.
Regards,
Hristo S