What you're describing is called fixed point arithmetic, a super cool technique I wish more programmers knew about.
Proper finance-related code should use it, but in my experience in that industry it doesn't seem very common unless you're running mainframes.
Funnily enough, I've seen a lot more fixed point arithmetic in software rasterizers than anywhere else. FreeType, GDI, WPF, WARP (D3D11 reference rasterizer) all use it heavily.
I have worked on firmware that has plenty of fixed point arithmetic. The firmware usually runs on processors without hardware floating point units. For example, certain Tesla ECUs use 32-bit integers divided into four bits of integer part and 28 bits of fractional part, so values are scaled by 2^28.
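For the curious, here is roughly what that Q4.28 layout looks like in code. This is just an illustrative sketch (the helper names and values are mine, not from any actual ECU firmware); the key detail is the 64-bit widening intermediate in the multiply:

    #include <cstdint>
    #include <cstdio>

    // Q4.28 fixed point: 4 integer bits, 28 fractional bits, scale = 2^28.
    constexpr int32_t kFracBits = 28;
    constexpr int32_t kOne = 1 << kFracBits;             // 1.0 in Q4.28

    constexpr int32_t to_q4_28(double x)    { return static_cast<int32_t>(x * kOne); }
    constexpr double  from_q4_28(int32_t q) { return static_cast<double>(q) / kOne; }

    // The product of two Q4.28 values carries 56 fractional bits, so widen to
    // 64 bits for the multiply and shift back down by 28.
    constexpr int32_t mul_q4_28(int32_t a, int32_t b) {
        return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> kFracBits);
    }

    int main() {
        int32_t x = to_q4_28(1.5);
        int32_t y = to_q4_28(2.25);
        std::printf("%f\n", from_q4_28(mul_q4_28(x, y)));  // prints 3.375000
    }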
>> The firmware usually runs on processors without hardware floating point units.
I'm working on control code on an ARM Cortex-M4F. I wrote it all in fixed point because I don't trust an FPU to be faster, and I also like to have a 32-bit accumulator instead of 24-bit. I recently converted it all to floating point since we have the M4F part (the F indicates an FPU), and it's a little slower now. I did get to remove some limit checking since I can rely on the calculations being inside the limits, but it's still a little slower than my fixed point implementation.
The other great thing about going fixed point is that it doesn't expose you to device-specific floating point bugs, making your embedded code way more portable and easier to test.
A 32-bit float on your embedded device doesn't necessarily match the 32-bit float running on your dev machine.
32-bit float can match your desktop. It really just takes a few compiler flags (like avoiding -funsafe-math-optimizations), setting rounding modes, and not using the 80-bit Intel x87 mode (largely disused after the 64-bit transition).
You aren't guaranteed that your microcontroller's float is going to match your desktop's. Microcontrollers are riddled with bugs. Unless you genuinely need floats and fixed point isn't fast enough, my recommendation is still to use fixed point if the application needs high reliability.
Especially if your code needs to be portable across ARM, RISC-V, etc.
Many microcontrollers today, including ARM, RISC-V, and Xtensa parts, have IEEE-compliant FPUs or libms available. Same numeric format, same rounding, same result.
Fixed point isn't bad at all, just often slower when a compliant FPU is available.
> IEEE compliant FPUs or libms available. Same numeric format, same rounding, same result.
IEEE only mandates results within ½ ULP (= best possible) for basic operations such as addition, subtraction, multiplication, division, and reciprocal.
For many other ones such as trigonometric functions, exponential and logarithms, results can (and do) vary between conforming implementations.
“The IEEE standard does not require transcendental functions to be exactly rounded because of the table maker's dilemma. To illustrate, suppose you are making a table of the exponential function to 4 places. Then exp(1.626) = 5.0835. Should this be rounded to 5.083 or 5.084? If exp(1.626) is computed more carefully, it becomes 5.08350. And then 5.083500. And then 5.0835000. Since exp is transcendental, this could go on arbitrarily long before distinguishing whether exp(1.626) is 5.083500...0ddd or 5.0834999...9ddd. Thus it is not practical to specify that the precision of transcendental functions be the same as if they were computed to infinite precision and then rounded. Another approach would be to specify transcendental functions algorithmically. But there does not appear to be a single algorithm that works well across all hardware architectures. Rational approximation, CORDIC, and large tables are three different techniques that are used for computing transcendentals on contemporary machines. Each is appropriate for a different class of hardware, and at present no single algorithm works acceptably over the wide range of current hardware.”
IEEE 754-2019 says for the transcendental functions (the ones in §9.2):
> A conforming operation shall return results correctly rounded for the applicable rounding direction for all operands in its domain.
so all of them are supposed to be correctly rounded. I think IEEE 754-2008 also requires correct rounding, but I don't have that spec in front of me right now.
In practice, they're not correctly rounded--the C specification explicitly disclaims the need for them to be (§F.3¶20), reserving the cr_ prefix for future mandatory correctly-rounded variants.
Even with that (and ignoring C's "we don't support that"), it can still be hard to write C code that provides identical results on all platforms. For example, I don't think much code uses float_t or double_t or checks FLT_EVAL_METHOD (https://en.cppreference.com/w/c/types/limits/FLT_EVAL_METHOD)
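If you want to see what your own toolchain is doing, FLT_EVAL_METHOD and float_t/double_t are easy to inspect. This is standard C/C++; the comments below describe the usual meanings of the values:

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        // 0: float/double expressions are evaluated at their own precision
        // 1: float expressions are evaluated as double
        // 2: everything is evaluated as long double (classic x87 behaviour)
        std::printf("FLT_EVAL_METHOD  = %d\n", FLT_EVAL_METHOD);
        std::printf("sizeof(float_t)  = %zu\n", sizeof(std::float_t));
        std::printf("sizeof(double_t) = %zu\n", sizeof(std::double_t));
    }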
So the things you mention aren't useful for getting consistent numerical results. You really have to start getting into obscure platforms like mainframes to find stuff where float and double aren't IEEE 754 single and double precision, respectively. FLT_EVAL_METHOD is largely only relevant if you're working on 32-bit x86 code, and even then, you can sidestep those problems if you're willing to require that hardware be newer than 20 years old or so.
The actual thing you need to do for consistency is to be extremely vigilant about the command line options you use, and to bring your own math library implementations rather than using the standard library. You also need vigilance in your dependencies, because somebody deciding to enable denormal flushing screws everybody in the same process.
Ah, I've had a slightly different task many times: porting a high-level algorithm from MATLAB, LabVIEW, or Keras to C.
As part of this I construct a series of test inputs, and confirm that they are bitwise equivalent to the high level language. It's usually as simple as aligning the rounding mode, disabling fused MAC, and a few other compiler flags that shouldn't be project defaults.
The other fun part is using the vector unit - for that we have to define IEEE arithmetic in the order the embedded device does it (usually 4x or 8x interleaved), port that back up, and verify.
Never did use a whole lot of transcendentals - maybe due to the domains I worked in.
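In case it helps anyone doing similar ports, the bit-exact comparison against the reference output could look something like this (a sketch; the function name and vector format are made up for illustration):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Compare the ported code's output against the MATLAB/reference output
    // bit for bit, rather than with an epsilon tolerance.
    bool bitwise_equal(const std::vector<float>& got,
                       const std::vector<float>& expected) {
        if (got.size() != expected.size()) return false;
        for (std::size_t i = 0; i < got.size(); ++i) {
            std::uint32_t a, b;
            std::memcpy(&a, &got[i], sizeof a);       // view floats as raw bits,
            std::memcpy(&b, &expected[i], sizeof b);  // so -0.0 vs 0.0 and NaNs differ
            if (a != b) {
                std::printf("mismatch at %zu: 0x%08x vs 0x%08x\n", i, a, b);
                return false;
            }
        }
        return true;
    }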
Just look at the instruction set for your particular CPU. Every CPU is different, but in most architectures I've seen, floating point operations are 2-3 times slower for the same word size.
Single float adds are usually 2 or 3 CPU cycles while single-word integer adds are usually 1 cycle.
Again, this is extremely dependent on the particular CPU you have. Some architectures do have single-cycle FPU operations, but it's not very common in microcontrollers as far as I can tell.
That would vary wildly with the ARM chip you are talking about. I would say figure out which ARM you’re interested in and go down the rabbit hole from there.
What do they use? Not float I hope.
Plus given that some currencies have different precisions...
Don't tell me it's rounding errors over trillion monies?! :o)
As I indicate in another post, I work in finance and I use binary floats, as do a lot of others in the industry. I sympathize with people who think that IEEE floating point is some weird or error-prone representation and that fixed point arithmetic solves every problem, but in my professional experience that isn't true. Systems that start by using fixed point arithmetic eventually end up building a half-assed, error-prone, and slow version of floating point arithmetic as soon as they need to handle more sophisticated use cases: multiple currencies, calculations involving percentages such as interest rates, etc.
The IEEE 754 floating point standard is a very well thought out standard that is suitable for representing money as-is. If you have requirements such as compliance/legal/regulatory needs that mandate a minimum precision, then you can either opt to use decimal floating point or use binary floating point where you adjust the decimal place up to whatever legally required precision you are required to handle.
For example the common complaint about binary floating point is that $1.10 can't be represented exactly so you should instead use a fixed integer representation in terms of cents and represent it as 110. But if your requirement is to be able to represent values exactly to the penny, then you can simply do the same thing but using a floating point to represent cents and represent $1.10 as the floating point 110.0. The fixed integer representation conveys almost no benefit over the floating point representation, and once you need to work with and mix currencies that are significantly out of proportion to one another, you begin to really appreciate the nuances and work that went into IEEE 754 for taking into account a great deal of corner cases that a fixed integer representation will absolutely and spectacularly fail to handle.
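To make the "cents as floating point" point concrete, here's a tiny demonstration (just an illustration of representability, not an endorsement of either approach):

    #include <cstdio>

    int main() {
        double dollars = 1.10;   // 1.10 has no exact binary representation
        double cents   = 110.0;  // 110 is an integer, represented exactly

        double sum_dollars = 0.0, sum_cents = 0.0;
        for (int i = 0; i < 1000; ++i) {
            sum_dollars += dollars;
            sum_cents   += cents;
        }
        std::printf("%.10f\n", sum_dollars);  // slightly off from 1100 (accumulated rounding)
        std::printf("%.10f\n", sum_cents);    // exactly 110000.0000000000
    }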
I build cash registers, and I avoid floats like the plague.
I think the difference is where you need an exact result. Auditors have forced me to go through a year's transactions to find a 1-cent error. They were right: at one point we weren't handling the fractional cents correctly. After finding it, the bug was fixed. Had we been using floating point, our answer would have been "shrug, if it's a problem choose another vendor".
You are working in finance, so I suspect a 0.00001% error doesn't matter to you. Usually it doesn't. But occasionally, proofs of correctness are important. They can demonstrate, for example, that one of your programmers isn't ripping you off by rounding (0, 0.5) to zero instead of (0, 0.5] and stealing the resulting cents. People have gone to jail for doing exactly that. Which is why a good auditor can get very picky about finding a 1-cent error. He doesn't care about the value of that 1c any more than you do. What he cares about greatly is that a machine whose job is to add up numbers reliably apparently can't get basic arithmetic right.
Programmers with battle scars from working in that environment are sick and tired of being told by others how much easier floats are to use 99.9999% of the time. Believe me, they know.
There are more problems with using floating-point for exact monetary quantities than just the inexact representations of certain quantities which are exact in base 10. For example, integers have all of the following advantages over floats:
Integer arithmetic will never return NaN or infinity.
Integer (a*b)*c will always equal a*(b*c).
Integer (a+b)%n will always equal (a%n+b%n)%n, i.e. low-order bits are always preserved.
IEEE 754 is not bad and shouldn't be feared, but it is not a universal solution to every problem.
It's also not hard to multiply by fractions in fixed-point. You do a widening multiplication by the numerator followed by a narrowing division by the denominator. For percentages and interest rates etc., you can represent them using percentage points, basis points, or even parts-per-million depending on the precision you need.
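Concretely, something like this (a sketch, with amounts in integer cents and 32-bit values so a 64-bit intermediate is wide enough):

    #include <cstdint>

    // Apply a fractional rate to an amount of cents: widening multiply by the
    // numerator, then a narrowing divide by the denominator, rounding half
    // away from zero. Example rate: 4.25% expressed as 425 / 10000 (basis points).
    int32_t apply_rate(int32_t cents, int32_t numerator, int32_t denominator) {
        int64_t wide = static_cast<int64_t>(cents) * numerator;   // widening multiply
        int64_t half = denominator / 2;
        int64_t rounded = (wide >= 0) ? (wide + half) : (wide - half);
        return static_cast<int32_t>(rounded / denominator);       // narrowing divide
    }

    // apply_rate(110, 425, 10000) == 5   (4.25% of $1.10, rounded to the nearest cent)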
>Integer arithmetic will never return NaN or infinity.
I use C++, and what integer arithmetic does in the situations where floating point returns NaN is undefined behavior.
I prefer the NaN over undefined behavior.
>Integer (a*b)*c will always equal a*(b*c).
In every situation where an integer will do that, a floating point will do that as well. Floating point numbers behave like integers for integer values; the only question is what you do for non-integer values. My argument is that in many if not most cases you can apply the same solution you would have applied using integers to floating points and get an even more robust, flexible, and still high performance solution.
>For percentages and interest rates etc., you can represent them using percentage points, basis points, or even parts-per-million depending on the precision you need.
And this is precisely when people end up reimplementing their own ad-hoc floating point representation. You end up deciding and hardcoding what degree of precision to use based on assumptions made beforehand, and switching between different fixed point representations, and it's just a matter of time before someone somewhere mixes two nearby fixed point representations and causes headaches.
With floating point values, I do hardcode a degree of precision I want to guarantee, which in my case is 6 decimal places, but in certain circumstances I might perform operations or work with data that needs more than 6 decimal places and using floating point values will still accommodate that to a very high degree whereas the fixed arithmetic solution will begin to fail catastrophically.
C++ is no excuse; it has value types and operator overloading. You can write your own types and define your own behavior, or use those already provided by others. Even if you insist on using raw ints (or just want a safety net), there are compiler flags to define that undefined behavior.
Putting everything into floats as integers defeats the purpose of using floats. Obviously you will want some fractions at some point and then you will have to deal with that issue, and the denominator of those fractions being a power of 2 and not a power of 10. Approximation is good enough for some things, but not others. Accounts and ledgers are definitely in the latter category, even if lots of other financial math isn't.
You need always be mindful of your operating precision and scale. Even double-precision floats have finite precision, though this won't be a huge issue until you've compounded the results of many operations. If you use fixed-point and have different denominators all over the place, then it's probably time to break out rational numbers or use the type system to your advantage. You will know the precision and scale of types called BasisPoints or PartsPerMillion or Fixed6 because it's in the name and is automatically handled as part of the operations between types.
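A sketch of what that looks like in practice: a hypothetical Fixed6 wrapper (the name comes from the comment above; everything else is illustrative, and the 128-bit intermediate is a GCC/Clang extension):

    #include <cstdint>

    // A hypothetical "Fixed6": a value stored as an integer number of millionths,
    // so the precision and scale are part of the type rather than a convention.
    struct Fixed6 {
        std::int64_t micros;  // value * 1'000'000

        static constexpr Fixed6 from_units(std::int64_t units) {
            return {units * 1'000'000};
        }
        constexpr Fixed6 operator+(Fixed6 o) const { return {micros + o.micros}; }
        constexpr Fixed6 operator-(Fixed6 o) const { return {micros - o.micros}; }

        // Multiplying two Fixed6 values widens, then rescales back down to 10^6.
        // (__int128 is a GCC/Clang extension; a fully portable build would need
        // its own 128-bit multiply.)
        constexpr Fixed6 operator*(Fixed6 o) const {
            return {static_cast<std::int64_t>(
                (static_cast<__int128>(micros) * o.micros) / 1'000'000)};
        }
    };

    // Mixing Fixed6 with a differently scaled type (BasisPoints, PartsPerMillion, ...)
    // simply fails to compile until an explicit conversion is written, which is the point.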
>I use C++ and what integer arithmetic will do in situations where floating point returns NaN is undefined behavior. I prefer the NaN over undefined behavior.
Really? IME it's much more difficult to debug where a NaN value came from, since it's irreversible and infectious. And although the standard defines which integer operations should have undefined behavior, usually the compiler just generates code that behaves reasonably. Like, you can take INT_MAX and then increment and decrement it and get INT_MAX back.
(That does mean that you're left with a broken program that works by accident, but hey, the program works.)
Integer division by zero will raise an exception in most modern languages.
Integer overflow is more problematic. While some languages in some situations will raise exceptions, most don't. While it's easier to detect overflow that has already occurred with floats (though you'll usually have lost low-order bits long before you get infinity), it's easier to avoid overflow in the first place with integers.
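For example, with integers it's cheap to refuse to overflow at all rather than detect it afterwards (__builtin_add_overflow is a GCC/Clang builtin; standard C++ would need a manual range check instead):

    #include <cstdint>
    #include <stdexcept>

    // Add two ledger amounts, refusing to silently wrap around.
    std::int64_t checked_add(std::int64_t a, std::int64_t b) {
        std::int64_t result;
        if (__builtin_add_overflow(a, b, &result))  // GCC/Clang builtin
            throw std::overflow_error("ledger amount overflow");
        return result;
    }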
It really depends on your need. In some countries, VAT calculations for example used to come with rounding requirements that were a pain to guarantee with floats. At one point I implemented our VAT calculations with the CFO of the time breathing down my neck, clutching a printout of the relevant regulations on rounding, because in theory he could end up a defendant in a court case if I got it wrong (in practice not so much, but it spooked him enough that it was the one time he paid attention to what I was up to). Many tax authorities are now more relaxed, as long as your results average out in their favour, but there's a reason for this advice.
> if your requirement is to be able to represent values exactly to the penny, then you can simply do the same thing but using a floating point to represent cents and represent $1.10 as the floating point 110.0.
Not if you need to represent more than about 170 kilodollars: a 32-bit float only represents integers exactly up to 2^24 = 16,777,216, i.e. about $167,772 when counting in cents.
The industry standard in finance is decimal floating point. C# for example has 'decimal', a 128-bit decimal type.
On occasion I've seen people who didn't know any better use floats. One time I had to fix errors of single satoshis in a customer's database because their developer used 1.0 to represent 1 BTC.
People are tired of Tesla articles so they more often flag them. Flags bring articles down. These articles do often lead to flames so it's not without merit.
I don't agree at all, Xen was my favorite part of Black Mesa and my least favorite part of the original Half-Life. I was blown away by how good they managed to make it look (with a 2004-era engine, albeit with updates), it's absolutely gorgeous. The gameplay and puzzles were far more fun for me too, and I greatly appreciate that they made this part of the game longer.
Why do people care so much about doxxing these days?
In most places, doxxing is not illegal. In the places where it is illegal it's usually only when there's stalking/harassment intent behind it. Doesn't appear to be against the HN rules either.
That only works if you happen to be in a city where an embassy or consulate of your country actually exists. Embassies are typically in the capital city of a country, and if you come from some countries there may be consulates in some other "main" city, but that's it.
You may need to "walk" hundreds of km to reach one of those.
Alaska. You can move out there by yourself, do whatever the hell you want (as long as it's by yourself and doesn't impact others), and no one will notice or care.
In re-reading this thread, I realize something: if you move to Alaska to avoid being forced to work by society, you better damn well believe that you will be forced to work by nature.
I think this says something about the very nature of work. In our modern times it is detached from its purpose (survival of self and species), but that is ultimately what drives the need to work.
This idea that a capable, working-age living thing should be able to free load off of others’ work to stay alive is not dignifying for that person, and unfairly burdens the one who is working to keep the other person alive.
Put another way, just because a farmer can make enough to feed a town doesn’t mean the farmer should only be allowed enough grain to feed his family, with everything else divided among those who don’t work. What’s the incentive to work?
And when you introduce government rules to enforce those things, the farmer is compelled by violence and force to give away his labor.
Just like missing the connection between work and survival, people miss the connection between government regulation and the promise of force behind it as a punishment for non-compliance.
The reason people pay taxes (which is the only way we have as a society to redistribute resources from those who work to those who don't) is that their money will be taken if it is not paid, or they will be put in jail if they lie about how much they owe.
So while the original comment complains about how you’re compelled to work, that’s actually not true. It is a societal norm to work and to have a good quality of life, but no one is forcing you. But at the same time they are asking to increase government benefits and force those who work to pay for their survival.
All this said - if the conversation is framed around how much work it takes to survive, I think as a society it would make sense for that number to drop as we’ve increased scale of production and automation.
This is only true if you define "work" as "tasks that have to be executed to survive". If you live in the wilderness you have to "work" a lot because you have to execute many tasks to survive. That is not what work is. I have to eat and defecate to survive, but you wouldn't call that "work", would you?
The definition of "work" used by every leftist movement in the world is different. They define it as "directly or indirectly selling your time to other humans in exchange for material benefits." Thus, if I hunt alone in the wilderness to get meat for myself I'm not "working". But if there is another person in the wilderness who forces me to hunt to get meat for us then I'm "working".
I hope you understand the difference. Scrubbing toilets is not per se "work". Scrubbing toilets so that someone will give you money to buy food is "work".
Ah yes, well known very complex computer engineering problems such as:
* Parsing JSON objects, summing a single field
* Matrix multiplication
* Parsing and evaluating integer basic arithmetic expressions
And you're telling me all you needed to do to get the best solution in the world to these problems was talk to an LLM?