From: Josh Berkus <josh(at)agliodbs(dot)com>
To: kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how to handle a big table for data log
Date: 2010-07-27 21:13:42
Message-ID: 4C4F4C06.1050404@agliodbs.com
Lists: pgsql-performance
On 7/20/10 8:51 PM, kuopo wrote:
> Let me make my problem clearer. Here is a requirement to log data from a
> set of objects consistently. For example, the object may be a mobile
> phone and it will report its location every 30s. To record its
> historical trace, I create a table like
> CREATE TABLE log_table
> (
>   id integer NOT NULL,
>   data_type integer NOT NULL,
>   data_value double precision,
>   ts timestamp with time zone NOT NULL,
>   CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)
> );
> In my location log example, the field data_type could be longitude or
> latitude.
If what you have is longitude and latitude, why this brain-dead EAV
table structure? You're making the table twice as large and half as
useful for no particular reason.
Use the "point" datatype instead of anonymizing the data.
--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com