From: Gavan Schneider <pg-gts(at)snkmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Money casting too liberal?
Date: 2013-03-29 22:30:34
Message-ID: 25869-1364596239-151170@sneakemail.com
Lists: pgsql-general

On 29/3/13 at 3:32 AM, D'Arcy J.M. Cain wrote:

>On Fri, 29 Mar 2013 11:46:40 -0400 Tom Lane wrote:
>>Well, this has been discussed before, and the majority view every
>>time has been that MONEY is a legacy thing that most people would
>>rather rip out than sink a large amount of additional effort into.

The only reason I have tried to explore these ideas is that
the type is currently too quirky for most use cases. So I
must agree that remove/ignore is the least-work option. An
argument for making the type more useful can be made by
analogy to the geolocation add-on types: most never go
there, but those who need them seem to prefer the builtin
functionality over hand-coding the same behaviour with
columns of arrays that just happen to contain location data.

>>It has some use-cases but they are narrow, and it's not clear how
>>much wider the use-cases would be if we tried to generalize it.

A well designed and specific tool can be worth the effort.

The use cases include:

- Financial data, accounts in a single currency, i.e., the
  money column in a transaction.

- Multi-currency data, i.e., keeping track of transactions
  across several currencies. Specifically we are NOT doing
  conversions: what arrives/leaves as $ or ¥ stays that
  way. This implies the dB has tables for each area of
  operation, or columns for each currency.

One thing the type should not attempt or allow is any
implicit transforming of values. Mostly a currency change
is a transaction, and whenever it happens it has to be
recorded as such, e.g., so many ¥ leave their column, the
appropriate $ are added to their column, and the commission
$/¥ is added to its column; also included will be the
exchange rate, a reference time-stamp, a journal reference,
etc. A constraint could be constructed to ensure the
double-entry bookkeeping zero-sum convention has been
maintained across the whole transaction, as sketched below.
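
A minimal sketch of such a constraint, assuming a
hypothetical ledger_entry table in which the legs of one
accounting transaction share a txn_id. A plain CHECK only
sees one row, so a deferred constraint trigger does the
cross-row check:

    -- Hypothetical schema: one row per leg of a
    -- double-entry transaction.
    CREATE TABLE ledger_entry (
        txn_id   bigint  NOT NULL,  -- groups the legs of one transaction
        account  text    NOT NULL,
        currency char(3) NOT NULL,  -- ISO 4217, e.g. 'USD', 'JPY'
        amount   numeric NOT NULL   -- stand-in for the proposed MONEY
    );

    -- Legs must sum to zero within each currency.
    CREATE FUNCTION check_txn_balances() RETURNS trigger AS $$
    BEGIN
        IF EXISTS (SELECT 1 FROM ledger_entry
                   WHERE txn_id = NEW.txn_id
                   GROUP BY currency
                   HAVING sum(amount) <> 0) THEN
            RAISE EXCEPTION 'transaction % does not balance',
                NEW.txn_id;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    -- Deferred so the check runs once all legs are in.
    CREATE CONSTRAINT TRIGGER txn_zero_sum
        AFTER INSERT OR UPDATE ON ledger_entry
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE check_txn_balances();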

One time this might not be so detailed is for a VIEW where
something akin to total worth is being reported. In cases
like this the exchange rates would usually be in their own
table, and the business rules would dictate which one is to
be used to build the VIEW, e.g., an end-of-month report,
which might show all values in a single currency depending
on the company's HQ.
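
For illustration, a sketch of such a VIEW, reusing the
ledger_entry table above and assuming a hypothetical
exchange_rate table, the business rule here being
"everything in USD at the last month-end rate":

    -- Hypothetical rates table; rate_to_usd is USD per unit.
    CREATE TABLE exchange_rate (
        currency    char(3) NOT NULL,
        rate_date   date    NOT NULL,
        rate_to_usd numeric NOT NULL,
        PRIMARY KEY (currency, rate_date)
    );

    CREATE VIEW month_end_worth AS
    SELECT l.account,
           round(sum(l.amount * r.rate_to_usd), 2) AS value_usd
    FROM ledger_entry l
    JOIN exchange_rate r
      ON  r.currency  = l.currency
      -- last day of the previous month
      AND r.rate_date = date_trunc('month', now())::date - 1
    GROUP BY l.account;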

>I wonder if our vision isn't a little tunneled here. Using this type
>for money is, perhaps, a specialized use and the type should really be
>called something else and modified to remove all connotations of money
>from it. So...
>

>- Drop the currency symbol
>- Allow number of decimals to be defined once for the column
>- Don't use locale except to specify decimal separator (',' vs. '.')
>
Mostly this is cosmetic, and only relevant for parsing text
on data entry or default formatting with SELECT on the
command line. The power of the type is that none of this is
in the data other than as dB column flags; the values
themselves are integers. The type is meant to keep the
books moving right along.

>- Allow operations against numeric
>
Whatever else is done, this should happen.

>Not sure what to rename it to. Decimal would be good if it wasn't
>already in use. Maybe DecimalInt.
>
I don't think there is much use for another fixed-precision
integral type. NUMERIC does a good job when INTEGER isn't
suitable. If this exercise is worth anything then MONEY
should just do its job better, so people who track money
(and there are an awful lot of them) will find it useful.

>>My own experience with this sort of thing leads me to think that
>>real applications dealing with a variety of currencies will be
>>needing to store additional details, such as the exact exchange
>>rate that applied to a particular transaction. So while merely
>
>Seems like something that can be stored in a different column.
>
Exactly. We need to think this through as a real user would.

If the business is receiving money from multiple regions
then there will be rows which show the currency, the number
of units (a numeric type, since the column is not devoted
to a specific currency), transaction tracing data, an
exchange reference (another table),
amt_received::MONEY('USD','D2'),
amt_transaction_fee::MONEY('USD','D3'), etc.
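
In today's PostgreSQL, lacking the parameterised MONEY
type, such a table might be approximated with NUMERIC
columns of the appropriate scale. A sketch, with all names
hypothetical:

    -- The "another table" of exchange references.
    CREATE TABLE fx_journal (
        fx_id    bigserial   PRIMARY KEY,
        currency char(3)     NOT NULL,
        rate     numeric     NOT NULL,  -- USD per unit of currency
        rate_at  timestamptz NOT NULL
    );

    CREATE TABLE receipt (
        receipt_id   bigserial PRIMARY KEY,
        src_currency char(3) NOT NULL,  -- currency the payment arrived in
        src_units    numeric NOT NULL,  -- amount in that currency
        trace_ref    text,              -- transaction tracing data
        fx_ref       bigint REFERENCES fx_journal (fx_id),
        amt_received numeric(14,2) NOT NULL,  -- cf. MONEY('USD','D2')
        amt_fee      numeric(14,3) NOT NULL   -- cf. MONEY('USD','D3')
    );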

Within the accounts of the organisation the MONEY columns
are likely to be in a single currency, with movements
between ledgers in the time-honoured fashion of adding to
this while removing the same from other(s), so all money
entries add to zero across the row. Movements between
currencies are just another transaction, as detailed above.
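
Continuing the ledger_entry sketch above, such a currency
movement is just another set of legs, routed through
per-currency clearing accounts so that each currency
balances (figures invented for illustration):

    BEGIN;
    INSERT INTO ledger_entry (txn_id, account, currency, amount)
    VALUES
        (42, 'cash_jpy',        'JPY', -1000000), -- ¥ leave their column
        (42, 'fx_clearing_jpy', 'JPY',  1000000), -- ¥ side of the exchange
        (42, 'fx_clearing_usd', 'USD',   -10500), -- $ side of the exchange
        (42, 'cash_usd',        'USD',    10450), -- $ added to their column
        (42, 'commission_usd',  'USD',       50); -- commission in $
    COMMIT; -- the deferred zero-sum trigger fires here, per currency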

I have sketched something of a notation for MONEY columns along
these lines:

amt_received MONEY ( CURRENCY   -- e.g., 'USD' 'AUD' 'JPY' ...
                   [, SCALE     -- default as per currency, e.g. USD 2 decimals,
                                -- but could be used to see money in bigger units
                                -- such as '000s (e.g., that end-of-month view)
                   [, ROUND     -- need to allow for multiple rules here; sometimes
                                -- cents are just dropped, otherwise it can be
                                -- required that rounding is up or down
                   [, OTHER?
                   ]]] )

I have left the display characteristics out (they could be
there as a default), but column values are going to be
displayed however the application wants them, and this only
applies at the time of reporting. Each currency can carry
the conventional defaults, and the application should have
formatting tools to alter this during output.
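
As a point of reference, to_char() already covers much of
the output side today with locale-aware picture patterns,
for example:

    SELECT to_char(1234567.891, 'FML999G999G999D99');
    -- 'L' is the locale currency symbol, 'G' the group
    -- separator, 'D' the decimal point; 'FM' suppresses
    -- padding. Output follows the session's lc_monetary
    -- and lc_numeric settings.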

Inputting money values, i.e., casting text to MONEY, should
follow the conventions of the target currency. Specifically,
the input conversion routine should handle the symbol (or
no symbol) and all the usual conventions for negative
values, decimals and separators. It should throw an error
if asked to add a value to a USD column but it finds a yen
symbol in the text. (There is no such help for all of us
sharing the $ symbol. :) Also it should parse such things
as 123.456,00 (Europe) and 123,456.00 (Anglo) properly.
Errors need to be thrown when the input looks wrong, e.g.,
123,456.789.00, since this is likely to be corrupted data,
which finally gets me back to the issue raised by the OP. :)
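
To make that last point concrete, a rough sanity check
along these lines (a hypothetical helper, not an existing
builtin) could accept either convention while rejecting
mixed separators:

    CREATE FUNCTION looks_like_money(txt text)
    RETURNS boolean AS $$
        SELECT txt ~ '^-?\d{1,3}(,\d{3})*(\.\d+)?$' -- Anglo: 123,456.00
            OR txt ~ '^-?\d{1,3}(\.\d{3})*(,\d+)?$' -- European: 123.456,00
            OR txt ~ '^-?\d+([.,]\d+)?$';           -- no grouping at all
    $$ LANGUAGE sql IMMUTABLE;

    -- looks_like_money('123,456.00')     => true
    -- looks_like_money('123.456,00')     => true
    -- looks_like_money('123,456.789.00') => false (the OP's case)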

Hope this hasn't been too much of a ramble.

Regards, and happy (Western) Easter to all,
Gavan Schneider
