| From: | Dave Caughey <caugheyd(at)gmail(dot)com> |
|---|---|
| To: | "pgadmin-support lists(dot)postgresql(dot)org" <pgadmin-support(at)lists(dot)postgresql(dot)org> |
| Subject: | Default custom format for specific columns? |
| Date: | 2021-01-31 15:45:54 |
| Message-ID: | CAAj2gHzNu7NXRUy+Jxx0_8zy+QA+mEw8pp5+E-jCkMGu4mqu2A@mail.gmail.com |
| Lists: | pgadmin-support |
In my databases, timestamps are stored as longs (epoch milliseconds).
Consequently, when I run a query or view a table, I always end up with
values like 1612106244000 rather than a more readable format such as
"2021-01-31 15:17:24".
Yes, I realize I can do "to_timestamp(cast(mytimestamp/1000 as
bigint))::timestamp" in a SELECT to convert "mytimestamp" to a
human-readable form, but it means I have to hand-compose all my queries
rather than being able to use all the convenient "View/Edit Data..."
functions.
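For concreteness, here is the sort of hand-composed query I mean (the table
name and surrounding columns are illustrative; "mytimestamp" is a bigint
holding epoch milliseconds):

```sql
-- Hypothetical table: mytable(id bigint, mytimestamp bigint)
-- mytimestamp = 1612106244000 corresponds to 2021-01-31 15:17:24 UTC
SELECT id,
       to_timestamp(cast(mytimestamp / 1000 as bigint))::timestamp
           AS mytimestamp_readable
FROM mytable;
```

Wrapping this expression in a view would give me readable output, but then I
lose the editable grid, which is why a per-column formatter in pgAdmin itself
would be nicer.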
I'm wondering if there is a way to assign a conversion/formatting function
(like the one above) to specific table columns, e.g., by expanding the
table's columns and editing their properties, so that whenever you do a
View/Edit Data, pgAdmin automatically applies "to_timestamp(cast(mytimestamp/1000
as bigint))::timestamp" to the "mytimestamp" column?
If not, I'm happy to create a RM enhancement request.
Cheers,
Dave
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Aditya Toshniwal | 2021-02-01 04:16:17 | Re: Default custom format for specific columns? |
| Previous Message | Cadstructure Technology | 2021-01-30 09:37:57 | Unable to installed postgis extension using stack builder |