Numeric variable types allow us to store and manipulate numerical values in our programs.
In almost all of our programs we will need to manage and manipulate numbers. That makes sense: a program is, at heart, "a big calculator" that does things (and many of those things involve numbers).
A number can be any quantifiable amount or measure. For example, a person’s age, the number of occupants in a classroom, the temperature of a room, or the coordinates on a map.
If you remember your math classes, there are different kinds of numbers: natural, integer, real. In programming, generically, we distinguish between two:
- Integer numbers (without decimals)
- Numbers with decimals
Integer numbers
Integer numbers represent numerical values without decimals. They can be positive, negative, or zero.
Integer numbers are the simplest to understand and implement on a computer. They are the numbers that arise, for example, when counting cows in a meadow.
Programming languages offer different types of integer variables, which vary in size and range of values.
In general, the differences are:
- How large the number we can store is
- Whether it allows negative numbers or not
Internally, a computer stores numbers in binary format. We can store 1, 2, 3… until we run out of memory (or, more precisely, until we exceed the capacity of the variable that stores the number).
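For example, this is how a number looks in binary. A quick C# illustration, using the standard Convert class:

// the decimal number 10 is 1010 in base 2
Console.WriteLine(Convert.ToString(10, 2)); // prints "1010"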
When we try to store a number larger than the maximum the variable can hold, an "overflow" occurs.
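As a minimal C# sketch of what that looks like (the variable names are just illustrative):

// a byte holds values from 0 to 255
byte counter = 255;
counter++; // wraps around to 0 in C#'s default unchecked context
Console.WriteLine(counter); // prints "0"

// a checked context turns overflow into an exception instead
checked
{
    int big = int.MaxValue;
    big += 1; // throws System.OverflowException
}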
Numbers with decimals
The other major family of numbers considered in programming are numbers with decimals.
It should be noted that representing decimal numbers on a computer is not as straightforward as it may seem at first glance.
To do this, two mechanisms are commonly used:
- Floating point
- Fixed point
Floating-point is the most common representation. It uses a mantissa and an exponent to store decimal numbers. It provides a wide range of values and an efficient representation, but it can suffer from rounding errors.
Fixed-point numbers are stored as integers, using a convention to determine where the decimal point goes. This offers an exact representation, but the number of decimal digits is limited.
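For example, a classic fixed-point convention (a sketch; the names and the two-decimal convention are just assumptions for illustration) is to store an amount of money as a whole number of cents:

// fixed-point sketch: amounts stored as whole cents,
// with the convention "the last two digits are decimals"
long priceInCents = 1999; // represents 19.99
long taxInCents = 399;    // represents 3.99
long totalInCents = priceInCents + taxInCents; // exact integer addition

// the decimal point only appears when formatting for display
Console.WriteLine($"{totalInCents / 100}.{totalInCents % 100:D2}"); // prints "23.98"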
There are other more specific types of representation such as fractions or fixed-point integers. They are less commonly used but can be useful in some specific cases.
Examples of number types in different programming languages
As we mentioned, the representation of numbers varies between languages, especially between statically typed and dynamically typed languages.
For example, languages like C++, C# or Java define different types of numbers.
The differences between them are:
- Whether they allow positive and negative numbers
- Whether they allow numbers with decimals or not
The exact maximum sizes vary between languages and, ultimately, also depend on the operating system and the compiler we are using.
// unsigned integers (zero and positive values only)
byte smallUnsignedByte = 255;
ushort smallUnsignedShort = 5;
uint unsignedInteger = 10;
ulong unsignedLong = 1000000000;
// signed integers (both positive and negative values)
short smallInteger = 5;
int integer = 10;
long largeInteger = 1000000000;
// numbers with decimals
float floatingPoint = 3.14f;
double doublePrecision = 3.14159265359;
decimal highPrecisionDecimal = 3.1415926535897932384626433832m;
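If you want to check the actual limits on your platform, each C# numeric type exposes them as MinValue and MaxValue constants:

// each numeric type exposes its range as constants
Console.WriteLine(byte.MaxValue); // 255
Console.WriteLine(int.MinValue);  // -2147483648
Console.WriteLine(int.MaxValue);  // 2147483647
Console.WriteLine(long.MaxValue); // 9223372036854775807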
On the other hand, JavaScript only has the Number type.
Unlike other languages, JavaScript does not distinguish between integers and decimal numbers: every number is treated as a 64-bit floating-point value, following the IEEE 754 standard. In practice, this means integers are only represented exactly up to 2^53 − 1.
Thus, the previous example would look like this.
let integer = 10;
let largeInteger = 1000000000;
let smallInteger = 5;
let smallByte = 255;
let floatingPoint = 3.14;
let doublePrecision = 3.14159265359;
let highPrecisionDecimal = 3.1415926535897932384626433832; // silently rounded: only ~15-17 significant digits survive
Finally, if we look at the example of Python, it is likewise not necessary to specify the type of the variable we are going to create.
Internally, Python offers different numeric types, such as int, float, and complex. The size of numerical variables in Python may vary depending on the specific implementation and on the architecture of the machine it runs on.
Thus, the example would look like this.
integer = 10
largeInteger = 1000000000
smallInteger = 5
smallByte = 255
floatingPoint = 3.14
doublePrecision = 3.14159265359
highPrecisionDecimal = 3.1415926535897932384626433832  # silently rounded: Python floats are 64-bit
However, in the most common implementations, the int type can grow to store an integer of any size (for example, 2**100 is computed exactly, with all of its digits). Meanwhile, the float type is implemented using the IEEE 754 standard for 64-bit floating-point numbers.
Of course, there are different peculiarities and more specific cases in different programming languages. But as we see, they have more in common than differences.
Precision problems in floating-point numbers (advanced)
We have mentioned that the representation of floating-point numbers has limitations of precision due to the way they work.
In most cases, this does not pose a problem, but it is worth understanding properly, because "strange" or unintuitive situations sometimes arise when working with them.
For example, let’s consider this example in Python.
result = 0.1 + 0.2
print(result) # Result: 0.30000000000000004
Why does such a strange result appear? Why doesn't it yield exactly 0.3, as it should? This is the problem of working with the floating-point representation.
The issue is that computer systems have a finite number of bits to represent numbers, so it is not possible to store an infinite fraction with perfect precision. In binary, 0.1 is precisely such a number: a repeating fraction (0.0001100110011…) that has to be rounded to fit.
It is important to note that this precision problem is not specific to any particular programming language. The same would occur in C#, JavaScript, or any other language. It is an inherent problem of number representation.
I will not go into great detail about the internal implementation (if you wish, you can easily find plenty of information about it). But, in very brief terms, a floating-point number is represented with the following expression:

value = (-1)^s × 1.f × 2^(e − bias)

Where:
- s is the sign bit (0 for positive, 1 for negative)
- f is the fraction (mantissa) of the number, in binary
- e is the exponent, in binary
- bias is a constant value used to adjust the range of the exponent
As mentioned, I will not delve deeply into the mathematical part of the problem, but in summary: floating-point numbers are not continuous; we are counting in very tiny steps.
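We can even measure the size of those steps. A small C# sketch (Math.BitIncrement is available from .NET Core 3.0 onwards):

// BitIncrement returns the next representable double after its argument
double next = Math.BitIncrement(1.0);
Console.WriteLine(next - 1.0); // ~2.22E-16: the step size around 1.0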
For example, if we add 0.1 and 0.2 in a floating-point system, we might expect to get 0.3 as a result. However, due to the precision limitation, the actual result could be 0.30000000000000004.
It is a very small difference, but it can affect sensitive calculations. These precision issues should be considered when working with calculations that require high precision, such as financial or scientific operations.
To deal with this problem, it is recommended to take into account the precision of calculations and avoid direct comparison of floating-point numbers using operators like == (equality).
float myVariable = 0.3f; // note the f suffix: 0.3 alone is a double and would not compile here

// do not do this: exact equality on floating point is fragile
if (myVariable == 0.3f)
{
}

// better this way: compare against an acceptable margin of error
const float THRESHOLD = 0.0001f;
if (Math.Abs(myVariable - 0.3f) < THRESHOLD)
{
}
Instead, the usual techniques are to compare with an acceptable margin of error, as above, or to use a different numeric type that offers greater precision, as needed.
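For example, in C# the decimal type works in base 10 and avoids this particular rounding problem (at the cost of range and speed):

// decimal stores base-10 digits, so 0.1 and 0.2 are represented exactly
decimal a = 0.1m;
decimal b = 0.2m;
Console.WriteLine(a + b == 0.3m); // prints "True"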