From: Lutz Fischer <l(dot)fischer(at)ed(dot)ac(dot)uk>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Using array instead of sub table (storage and speed)
Date: 2017-06-15 08:06:39
Message-ID: 35a68a4e-9567-dc48-5d76-078112e558b3@ed.ac.uk
Lists: pgsql-performance
Hi,
I have two tables:
CREATE TABLE s (
    id bigint NOT NULL PRIMARY KEY,
    ...
);

CREATE TABLE sp (
    id bigint PRIMARY KEY,
    sid bigint REFERENCES s (id),
    i numeric,
    m numeric,
    ...
);
For each entry in [s] there are on average around 120 entries in [sp],
and that table has become the largest table in my database
(8.81565 * 10^9 rows).
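For what it's worth, I am looking at the sizes with something like this
(table names as defined above):

SELECT pg_size_pretty(pg_total_relation_size('sp')) AS total,
       pg_size_pretty(pg_table_size('sp'))          AS heap,
       pg_size_pretty(pg_indexes_size('sp'))        AS indexes;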
Data in [sp] are never changed. I can probably reduce the size by
changing the datatypes from numeric to float, but I was wondering
whether it would be more efficient, primarily in terms of storage, to
change the structure so that [s] holds two arrays instead, e.g.:
CREATE TABLE s (
    id bigint NOT NULL PRIMARY KEY,
    i numeric[],
    m numeric[],
    ...
);
Combined with the change from numeric to single/double precision
floats, the final table would look like this:
CREATE TABLE s (
    id bigint NOT NULL PRIMARY KEY,
    i real[],
    m double precision[],
    ...
);
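If I go that way, my plan for populating the array columns from the
existing [sp] rows would be something like the following (the table
name s_arr is just a placeholder, and I am assuming that ordering by
sp.id is the order I want to keep):

INSERT INTO s_arr (id, i, m)
SELECT sid,
       array_agg(i::real ORDER BY id),
       array_agg(m::double precision ORDER BY id)
FROM sp
GROUP BY sid;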
I haven't really found anything yet about how much space (e.g. how many
bytes) an array takes up compared to the equivalent table rows in
PostgreSQL.
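I guess I could measure individual values with pg_column_size(),
something like this (the array result should include the array header
as well as the elements):

SELECT pg_column_size(1.0::numeric)                 AS one_numeric,
       pg_column_size(1.0::real)                    AS one_real,
       pg_column_size(ARRAY[1.0, 2.0, 3.0]::real[]) AS array_of_3_real;

But that still would not tell me the per-row overhead of keeping the
values in [sp] instead.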
Thanks
Lutz
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.