I use integers to represent the numerator and denominator of a fraction and always reduce them using the greatest common divisor from the Euclidean algorithm. No matter how many bits I use, I will sooner or later get an overflow and have to fall back on approximations. Mixing different data types at run time would be worse than just using floats all the way, as I did before.
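Roughly what I have now, as a minimal sketch (assuming C++17 for `std::gcd`; the struct name and the `approx` flag are just placeholders):

```cpp
#include <cstdint>
#include <numeric>

struct Fraction {
    std::int64_t num;
    std::int64_t den;
    bool approx = false;  // set once the exact value had to be truncated

    // Reduce by the greatest common divisor (Euclidean algorithm via std::gcd).
    void reduce() {
        std::int64_t g = std::gcd(num, den);
        if (g > 1) { num /= g; den /= g; }
    }
};
```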
Is there a way to find the best number to truncate with, without having to brute-force test the error of every candidate against floating-point approximations?
For example, 62000000 / 30999998 should be truncated using 31000000 and become 2 / 1, marked as an approximation.
Just dividing by some fixed number does not work, since the code must handle both very large and very small numbers.
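To pin down what I mean, this is the kind of brute-force search I would like to avoid (the function name, the `maxDen` cap, and the use of `double` are all made up for illustration):

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <utility>

// Brute force: for every candidate denominator up to some cap, round the
// numerator and keep whichever candidate has the smallest floating-point error.
std::pair<std::int64_t, std::int64_t>
bruteForceTruncate(std::int64_t num, std::int64_t den, std::int64_t maxDen) {
    double target = static_cast<double>(num) / static_cast<double>(den);
    std::int64_t bestN = num, bestD = den;
    double bestErr = std::numeric_limits<double>::infinity();
    for (std::int64_t d = 1; d <= maxDen; ++d) {
        std::int64_t n = std::llround(target * static_cast<double>(d));
        double err = std::fabs(target - static_cast<double>(n) / static_cast<double>(d));
        if (err < bestErr) { bestErr = err; bestN = n; bestD = d; }
    }
    return {bestN, bestD};  // 62000000/30999998 with maxDen = 1000 comes out as 2/1
}
```

That loop gives the result I want for the example above, but it scans every candidate and leans on double precision, which is exactly what I am trying to get away from.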