From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Smith <laconical(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Why is my database so big?
Date: 2016-02-22 05:06:00
Message-ID: 1382.1456117560@sss.pgh.pa.us
Lists: pgsql-general
Andrew Smith <laconical(at)gmail(dot)com> writes:
> I am setting up a proof of concept database to store some historical data.
> Whilst I've used PostgreSQL a bit in the past this is the first time I've
> looked into disk usage due to the amount of data that could potentially be
> stored. I've done a quick test and I'm a little confused as to why it is
> occupying so much space on disk. Here is my table definition:
> CREATE TABLE "TestSize"
> (
> "Id" integer NOT NULL,
> "Time" timestamp without time zone NOT NULL,
> "Value" real NOT NULL,
> "Status" smallint NOT NULL,
> PRIMARY KEY ("Id", "Time")
> );
> CREATE INDEX test_index ON "TestSize" ("Id");
> With a completely empty table the database is 7 MB. After I insert 1
> million records into the table the database is 121 MB. My understanding is
> that each of the fields is sized as follows:
> integer - 4 bytes
> timestamp without time zone - 8 bytes
> real - 4 bytes
> smallint - 2 bytes
> So for 1 million records, it needs at least 18 million bytes, or ~17 MB to
> store the data. Now I'm sure there is extra space required for managing the
> primary key fields, the index and other misc overhead involved in getting
> this data into the internal storage format used by PostgreSQL. But even if
> I triple the number of bytes stored for each record, I only end up with 51
> MB or so. Am I missing something obvious?
It doesn't sound that far out of line. Postgres is not great with narrow
tables like this one :-(. The factors you're not accounting for include:
* Alignment padding. On a 64-bit machine those fields would occupy 24
bytes, not 18, because row widths are always going to be multiples of 8.
* Row header overhead, which is 28 bytes per row (24-byte row header plus
4-byte row pointer).
* That primary key index is going to need 16 data bytes per entry
(alignment again), plus an 8-byte index tuple header, plus a 4-byte row
pointer.
* The other index similarly requires 8+8+4 bytes per row.
* Indexes tend not to be packed completely full. If you load a PG btree
in exactly sequential order, the leaf pages should get packed to about
the index fillfactor (90% by default). If you load in random order,
ancient wisdom is that the steady-state load factor for a btree is
about 66%.
* There are some other inefficiencies from upper-level btree pages, page
  header overhead, the inability to split rows across pages, etc.; though
  these tend not to amount to much unless you have wide rows or wide index
  entries.
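For the curious, the per-row arithmetic behind the points above can be sketched out. This is a rough model only: it assumes 8-byte MAXALIGN on a 64-bit build and uses the per-tuple overheads quoted above, ignoring page headers and upper-level btree pages.

```python
def align(n, a):
    """Round n up to the next multiple of a."""
    return -(-n // a) * a

# Heap row: Id int4, Time timestamp (8 bytes, 8-byte aligned),
# Value real, Status int2 -- (size, alignment) pairs in column order.
off = 0
for size, alignment in [(4, 4), (8, 8), (4, 4), (2, 2)]:
    off = align(off, alignment) + size
data_bytes = align(off, 8)          # 18 declared bytes pad out to 24
heap_row = data_bytes + 24 + 4      # + 24-byte tuple header + 4-byte line pointer

# Index entries: aligned key data + 8-byte index tuple header + 4-byte item pointer
pk_entry = align(4 + 8, 8) + 8 + 4  # (Id, Time) composite key
id_entry = align(4, 8) + 8 + 4      # (Id) alone

print(heap_row, pk_entry, id_entry)  # 52 28 20
```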
Given the above considerations, the *minimum* size of this table plus
indexes is 100 bytes/row. Your observed result of 114 bytes/row
suggests you're getting an index load factor of around 80%, if I counted
on my fingers correctly. That's not awful, but I'd guess that at least
one of the indexes is getting loaded nonsequentially.
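Putting those numbers together gives a back-of-the-envelope check of the 80% figure, using the observed ~114 bytes/row (121 MB minus the 7 MB empty database, spread over one million rows):

```python
heap_row = 52   # 24 data bytes + 24-byte tuple header + 4-byte line pointer
pk_entry = 28   # 16 aligned key bytes + 8-byte tuple header + 4-byte pointer
id_entry = 20   # 8 + 8 + 4 for the single-column index
minimum = heap_row + pk_entry + id_entry    # bytes/row at 100% index fill

observed = 114  # (121 MB - 7 MB) over 1 million rows
# Only the index portion is diluted by the btree load factor:
implied_fill = (pk_entry + id_entry) / (observed - heap_row)
print(f"minimum {minimum} bytes/row, implied index fill ~{implied_fill:.0%}")
```

This prints a minimum of 100 bytes/row and an implied fill of about 77%, which is roughly consistent with the ~80% counted on fingers above.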
You could REINDEX the indexes (possibly after playing with their
fillfactor settings) to improve their density. But you're not going
to be able to move the needle by more than about 10% overall, so
personally I wouldn't bother.
The long and the short of it is that Postgres is more oriented to
OLTP-style applications where access to and update of individual
rows is the key performance metric. If you're dealing with historical
data and mainly want aggregated query results, it's possible you'd
get better performance and more-compact storage from a column-store
database.
There's ongoing investigation into extending Postgres to support
column-style storage for better support of applications like that;
but any such feature is probably several years away, and it will
not come without performance compromises of its own.
regards, tom lane