From: Ivan Sergio Borgonovo <mail(at)webthatworks(dot)it>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Are there plans to add data compression feature to postgresql?
Date: 2008-10-31 21:46:51
Message-ID: 20081031224651.03bed658@dawn.webthatworks.it
Lists: pgsql-general
On Fri, 31 Oct 2008 17:08:52 +0000
Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> >> Invisible under normal operation sure, but when something fails
> >> the consequences will surely be different and I can't see how
> >> you could make a compressed filesystem safe without a huge
> >> performance hit.
> >
> > Pardon my naiveness but I can't get why compression and data
> > integrity should be always considered clashing factors.
>
> Well the answer was in the next paragraph of my email, the one
> you've clipped out here.
Sorry, I didn't mean to hide your argument, just to cut down the
length of the email.
Maybe I haven't been clear enough either. I'd consider compression at
the fs level more "risky" than compression at the DB level, because
re-compression at the fs level is more likely to span several data
structures.
But sorry, I still can't see WHY compression as a whole and data
integrity should be mutually exclusive.
What I think is going to happen (not necessarily what really happens)
is:
- you make a change to the DB
- you ask the underlying fs to write that change to the disk (fsync)
- the fs may decide it has to re-compress more than one block, but I'd
  think it still has to honor the fsync and *start* putting them on
  permanent storage (see the sketch after this list).
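Something along these lines, just to be concrete (a rough POSIX C
sketch of the sequence above, not the actual PostgreSQL write path;
the file name and data are made up):

/* rough sketch: hand a change to the fs, then ask for permanent storage */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char change[] = "one small change to the DB\n";
    int fd = open("datafile", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* the fs may keep this in cache and (re)compress it together
     * with neighbouring blocks... */
    if (write(fd, change, strlen(change)) < 0) { perror("write"); return 1; }

    /* ...but fsync must not return before the data, compressed or
     * not, has actually been put on permanent storage */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}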
Now, on *average* the write operations should be faster, so the window
between the moment an fsync is requested and the moment it returns
(the time during which an asteroid can hit you) should be shorter.
If you're not fsyncing... you have no guarantee that your changes
reached permanent storage.
Unless, that is, compressed filesystems don't honor fsync the way I'd
expect them to.
Furthermore, you're starting from 3 assumptions that may not be true:
1) partially written compressed data is completely unrecoverable;
2) there are no concurrent physical writes to permanent storage;
3) the data that should have reached the DB would have survived if it
   had not been sent to the DB.
Compression changes the granularity of physical writes for a single
write. But once you consider concurrent physical writes and
unrecoverable transmission of data... higher throughput should
reduce data loss.
If I think of changes as trains of wagons, the chances a train gets
struck by an asteroid grow with the length of the train.
When you use compression, small changes to a data structure *may*
result in longer trains leaving the station, but on average you
*should* have shorter trains.
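To put some completely made-up numbers on the train analogy (the
figures below are assumptions chosen only to show the shape of the
argument, not measurements):

/* back-of-the-envelope: the window in which an asteroid can hit an
 * in-flight write grows with the amount of data written per fsync */
#include <stdio.h>

int main(void)
{
    double disk_mb_per_s = 80.0;   /* assumed throughput of the disk      */
    double plain_kb      = 64.0;   /* assumed average write, uncompressed */
    double compressed_kb = 32.0;   /* assumed average write, 2:1 ratio    */

    double plain_ms      = plain_kb      / 1024.0 / disk_mb_per_s * 1000.0;
    double compressed_ms = compressed_kb / 1024.0 / disk_mb_per_s * 1000.0;

    printf("exposure per fsync: %.2f ms plain, %.2f ms compressed\n",
           plain_ms, compressed_ms);
    return 0;
}

Halve the average train and you roughly halve the time it spends where
an asteroid can hit it.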
> > DB operation are supposed to be atomic if fsync actually does
> > what it is supposed to do.
> > So you'd have coherency assured by proper execution of "fsync"
> > going down to all HW levels before it reach permanent storage.
> fsync lets the application know when the data has reached disk.
> Once it returns you know the data on disk is coherent. What we're
> talking about is what to do if the power fails or the system
> crashes before that happens.
Yeah... actually a successful fsync gives you a higher level of
integrity than just "let as much data as possible reach the disk and
make sure it can be read back later".
But still, when you issue an fsync you're asking "put this data on
permanent storage". Until then the fs is free to keep managing it in
cache and to modify/compress it there.
The faster it reaches the disk, the lower the chances you'll
lose it.
Of course, under the assumption that once an asteroid hits a wagon the
whole train is lost, that's not ideal... but still, the average length
of the trains *should* be shorter, which reduces the *average* chances
they get hit.
This *may* still not be the case; it depends on the pattern with
which the data change.
If most of the time you're changing 1 bit followed by an fsync, and
that change requires rewriting 2 sectors, that's bad.
The chances of that happening are higher if compression takes place
at the fs level rather than at the DB level, since the DB should be
more aware of which data can be compressed efficiently and of what
the trade-off is, in terms of data loss, if something goes wrong in a
2-sector write where, without compression, you'd have written just one.
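Just to illustrate that worry with made-up sizes (both numbers below
are assumptions, not measurements):

/* a tiny logical change can push the re-compressed block across a
 * sector boundary, turning a 1-sector write into a 2-sector write */
#include <stdio.h>

#define SECTOR 512

static unsigned sectors(unsigned bytes)
{
    return (bytes + SECTOR - 1) / SECTOR;   /* round up to whole sectors */
}

int main(void)
{
    unsigned before = 500;   /* assumed compressed size before the change */
    unsigned after  = 520;   /* assumed size after flipping a single bit  */

    printf("sectors touched: %u before, %u after\n",
           sectors(before), sectors(after));
    return 0;
}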
But I think you could still take advantage of fs compression without
sacrificing integrity by choosing which tables reside on a compressed
fs and which don't, and in some circumstances fs compression may get
better results than TOAST alone,
e.g. if there are several columns that are frequently updated
together...
I'd say that compression could be one more tool for managing data
integrity, not something that will inevitably have a negative impact
on it (nor a positive one, if not managed correctly).
What am I still missing?
--
Ivan Sergio Borgonovo
http://www.webthatworks.it