"fkater(at)googlemail(dot)com" <fkater(at)googlemail(dot)com> wrote:
> Simply because the test case had just < 50 rows (deleting
> all rows older than 2 minutes). Later on I would use indices.
Running a performance test with 50 rows without indexes and
extrapolating to a much larger data set with indexes won't tell you
much. The plans chosen by the PostgreSQL optimizer will probably be
completely different, and the behavior of the caches (at all levels)
will be very different.
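To get plans that are representative, it helps to load a production-sized data set and run ANALYZE before timing anything. A minimal sketch, assuming a recent PostgreSQL and an invented table (names and sizes are placeholders, not your schema):

```sql
-- Hypothetical table standing in for the FIFO buffer.
CREATE TABLE samples (
    id       bigserial PRIMARY KEY,
    captured timestamptz NOT NULL,
    payload  bytea
);
CREATE INDEX samples_captured_idx ON samples (captured);

-- Populate with a production-like row count, not 50 rows.
INSERT INTO samples (captured, payload)
SELECT now() - (g || ' seconds')::interval, '\x00'::bytea
FROM generate_series(1, 1000000) AS g;

ANALYZE samples;

-- Now the plan reflects what the optimizer would choose at scale.
EXPLAIN DELETE FROM samples
WHERE captured < now() - interval '2 minutes';
```

Comparing the EXPLAIN output at realistic scale against the 50-row case will usually show a completely different plan shape.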
>> > while it is planned to have the interval set to 6 hours in
>> > the final version (thus creating a FIFO buffer for the
>> > latest 6 hours of inserted data; so the FIFO will keep
>> > approx. 10.000 rows spanning 160-200 GB data).
This might lend itself to partitioning. Dropping a partition
containing data older than six hours would be very fast. Without
knowing what kinds of queries you want to run on the data, it's hard
to predict the performance impact on your other operations, though.
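For illustration, a sketch using declarative range partitioning (available since PostgreSQL 10; on older versions the same idea is built with table inheritance and constraint exclusion). Table and column names are invented:

```sql
-- One partition per hour; dropping the oldest is a near-instant
-- catalog operation instead of a row-by-row DELETE.
CREATE TABLE samples (
    captured timestamptz NOT NULL,
    payload  bytea
) PARTITION BY RANGE (captured);

CREATE TABLE samples_h00 PARTITION OF samples
    FOR VALUES FROM ('2024-01-01 00:00') TO ('2024-01-01 01:00');
CREATE TABLE samples_h01 PARTITION OF samples
    FOR VALUES FROM ('2024-01-01 01:00') TO ('2024-01-01 02:00');

-- A scheduled job creates the next partition and drops the oldest:
DROP TABLE samples_h00;  -- frees the space immediately, no VACUUM needed
```

A DROP TABLE on a partition also sidesteps the dead-tuple bloat that a large DELETE leaves behind for autovacuum to clean up.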
>> > * Autovacuum is on; I believe I need to keep it on,
>> > otherwise I do not free the disk space, right? If I switch
>> > it off, the deletion time reduces from the average 10s down
>> > to 4s.
>> >
>>
>> You may be running autovacuum too aggressively; it may be
>> interfering with I/O to the tables.
>
> Hm, so what should I change then? I wonder if it helps to
> run autovacuum less aggressively if there will not be a
> situation where the whole process is stopped for a while. But
> I'd like to understand what to change here.
I'd be very careful about this; I've seen performance problems more
often (and more dramatic) from not running it aggressively enough.
Monitor performance and bloat closely when you adjust this, and make
sure the data and load are modeling what you expect in production,
or you'll tune for the wrong environment and likely make matters
worse for the environment that really matters.
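If the competing I/O does turn out to be autovacuum, per-table storage parameters let you throttle or intensify it for just the FIFO table rather than changing the global settings. A sketch only; the table name is invented and the values are starting points to be tuned while monitoring, not recommendations:

```sql
-- Make autovacuum gentler on this one table (longer cost delay),
-- while still triggering it often enough to keep bloat in check.
ALTER TABLE samples SET (
    autovacuum_vacuum_cost_delay   = 10,   -- ms pause per cost batch
    autovacuum_vacuum_scale_factor = 0.05, -- vacuum after ~5% dead rows
    autovacuum_vacuum_threshold    = 1000
);

-- Watch dead-tuple counts and last-vacuum times while tuning:
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'samples';
```

Whatever you change, re-check n_dead_tup and table size over a full load cycle so you are tuning against the production pattern, not the test one.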
-Kevin