I came across a script at work and am a bit confused by how the Decimal data type works.
Here is my understanding:
- If a variable, say ldec_test, is declared as Decimal(7, 2), it can hold a value like 12345.67 (7 significant digits, 2 of them after the decimal point). Is that correct?
- If the variable is declared as Decimal(2), its values have the format ######.12 (only two digits after the decimal point). Is that correct?
- When a value of $123.456 is assigned to a variable declared as Decimal(2), does it end up as $123.45 or $123.46 by default? In other words, does Decimal truncate the extra digits, or round to the nearest penny?
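To make the truncate-vs-round distinction concrete, here is a quick sketch using Python's `decimal` module. This is a different Decimal implementation than the one in my script, so its default behavior may not match, but it shows both possible outcomes side by side:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

value = Decimal("123.456")

# Rounding to the nearest penny: 123.456 -> 123.46
rounded = value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)  # 123.46

# Truncating (dropping extra digits): 123.456 -> 123.45
truncated = value.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
print(truncated)  # 123.45
```

So the question is which of these two behaviors my script's Decimal(2) applies on assignment.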
I hope my question makes sense.