When I set a float variable to, say, 3.1, why is printf printing it as 3.0999999?
✍: Guest
Most computers use base 2 for floating-point numbers as well as for integers, and just as for base 10, not all fractions are representable exactly in base 2. It's well-known that in base 10, a fraction like 1/3 = 0.333333... repeats infinitely. It turns out that in base 2, one tenth is also an infinitely-repeating fraction (0.0001100110011...), so exact decimal fractions such as 3.1 cannot be represented exactly in binary. Depending on how carefully your compiler's binary/decimal conversion routines (such as those used by printf) have been written, you may see discrepancies when numbers not exactly representable in base 2 are assigned or read in and then printed (i.e. converted from base 10 to base 2 and back again).
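You can see this for yourself with a minimal C program. The digits shown in the comments are what you typically get with IEEE 754 single and double precision; the exact output may vary slightly by platform and library:

#include <stdio.h>

int main(void)
{
    float f = 3.1f;   /* 3.1 has no exact binary representation */
    double d = 3.1;

    /* Printing with extra precision exposes the stored approximation. */
    printf("float : %.7f\n", f);    /* typically 3.0999999 */
    printf("double: %.16f\n", d);   /* typically 3.1000000000000001 */

    return 0;
}

With the default "%f" (six digits after the decimal point) the value usually rounds back to 3.100000, which is why the discrepancy only shows up when more digits are requested or when a less careful conversion routine is involved.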
2015-07-03