As we mentioned in the introduction to the programming course, programming encompasses a wide spectrum of technological advances and intellectual contributions that have laid the foundations for the discipline as we know it today.
I wouldn’t feel right conducting an introductory programming course without dedicating at least one article to briefly discuss the history of the evolution of programming.
From the earliest machines to today’s modern systems, each milestone in this chronology has influenced how we understand and apply the principles of programming.
Pre-computer Stage
The origin of programming, in a broad sense, dates back to antiquity, when humans first sought ways to automate repetitive tasks. Long before computers existed, we find machines that harnessed energy to perform work automatically.
Automatic irrigation systems existed in Babylon around 2000 B.C. The Greeks are known to have employed hydraulic and pneumatic devices, and the Egyptians built remarkably ingenious mechanisms in their temples.
Many of these machines exhibited automatic behavior (they were automatons), and in a sense they were also programmed: in antiquity, the earliest form of programming was the physical setup of mechanical, pneumatic, and hydraulic machines.
During the Middle Ages, as in almost all fields of knowledge, progress largely stalled. It wasn't until the Renaissance and, later, the Industrial Revolution that advances in automation began to emerge again.
Industrial Revolution
In 1725, Basile Bouchon, son of a French organ manufacturer, adapted the concept of clockwork mechanisms used in music boxes to operate a loom via a perforated tape. This invention was perfected in 1728 by his assistant, Jean-Baptiste Falcon, who used a series of interconnected punched cards.
Shortly after, in 1745, Jacques de Vaucanson, a French engineer and builder of mechanical automatons, applied this idea to create the first automatic loom. His work on automated mechanisms laid important groundwork for later advances.
Based on these two inventions, in 1801 Joseph Marie Jacquard created a famous mechanical loom of punched cards. Each punched card represented a specific instruction, allowing for the creation of complex patterns without the need for constant human intervention.
As technology advanced, engineers and mathematicians explored ways to automate complex calculations. In the 1830s, Charles Babbage designed the "Analytical Engine," considered one of the first attempts at a programmable computer. Although the Analytical Engine was never fully constructed, it laid the groundwork for the first principles of programming.
In the latter half of the 19th century, punched cards became a popular medium in statistics and census processing: they allowed information to be stored in a structured way and processed by electromechanical machines. This technology played a crucial role in the evolution of programming, providing a method for executing sequences of instructions.
The First Computers
The true revolution in the history of programming came with the emergence of the first electronic computers in the 1940s. During World War II, Alan Turing helped design the electromechanical "Bombe" used to break the codes of the German Enigma cipher machine, while Tommy Flowers and his team at Bletchley Park built Colossus, an electronic machine used against the German Lorenz cipher.
Quickly, computers increased in power while decreasing in size. This was facilitated by the invention of new technologies such as transistors, which replaced vacuum tubes.
At that time, programmers had to write directly in machine code, which consists of numerical instructions directly understandable by the computer hardware. This approach was extremely labor-intensive and prone to errors.
To abstract away this complexity, assembly languages were developed in the late 1940s, allowing programmers to write instructions as more readable mnemonics. Although assembly language was still very close to machine language, it provided a layer of abstraction that made program development easier.
Shortly thereafter, in the early 1950s, high-level programming languages emerged, designed to be more accessible and easier to use than assembly language. These languages provided even greater abstraction, allowing programmers to express concepts and algorithms in a way closer to human language.
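The jump in abstraction that high-level languages brought can be illustrated with a short sketch (in Python, purely for illustration; the pioneering languages of that era were FORTRAN, LISP, and COBOL):

```python
# In a high-level language, an algorithm reads close to its plain-language
# description: "the average is the sum divided by the count." The same task
# in machine code would take dozens of numeric instructions juggling
# registers and memory addresses by hand.
def average(numbers):
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # → 4.0
```

The compiler or interpreter, not the programmer, takes care of translating this readable form into the machine instructions the hardware actually executes.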
In parallel, the theory behind many paradigms we know today was taking shape. For example, combinatory logic and the lambda calculus, the foundations of Functional Programming, trace their origins back to the 1930s.
Instead of focusing on objects and their interaction, functional programming is based on the concept of pure functions, which take an input and generate an output without side effects.
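A minimal sketch of this idea, using Python and hypothetical function names chosen for illustration:

```python
# A pure function: its output depends only on its input,
# and it produces no side effects.
def double_all(values):
    return [2 * v for v in values]

# An impure counterpart, for contrast: it mutates its argument,
# which is a side effect observable by the rest of the program.
def double_in_place(values):
    for i in range(len(values)):
        values[i] *= 2

data = [1, 2, 3]
print(double_all(data))  # → [2, 4, 6]
print(data)              # → [1, 2, 3]  (unchanged: the pure call never touched it)
```

Because `double_all` neither reads nor modifies anything outside itself, it always returns the same result for the same input, which makes it easy to reason about and to test in isolation.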
Later, in the 1960s, one of the most influential programming paradigms emerged with languages such as Simula: Object-Oriented Programming (OOP). OOP is based on the idea that programs should model real-world objects, with associated characteristics (attributes) and behaviors (methods).
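To make the attributes/methods distinction concrete, here is a minimal sketch in Python (the class and names are invented for illustration):

```python
# A class models a "real-world object": its attributes hold state,
# and its methods define the behavior that acts on that state.
class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner      # attribute: who the account belongs to
        self.balance = balance  # attribute: current amount of money

    def deposit(self, amount):  # method: a behavior of the object
        self.balance += amount

account = BankAccount("Ada", balance=100)
account.deposit(50)
print(account.balance)  # → 150
```

Each object bundles its data together with the operations allowed on that data, which is the core organizing principle the paradigm introduced.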
Modern Era
With the popularization of computers and their arrival in the home, programming theory continued to develop into what we know today.
An important milestone was, of course, the emergence of what we now call the Internet, made possible by increasingly sophisticated systems of communication between computers. With the advent of the World Wide Web, new needs arose in software development.
Software has grown increasingly complex: richer applications, mobile apps, virtualization, and cloud computing all raise the bar. At the same time, the need for richer and more usable user interfaces has taken on new importance.
This has given rise to new paradigms such as reactive programming and immutability, along with unit testing, frameworks, libraries, dependency management, DevOps, and many other tools aimed at managing project complexity.
Today, modern software development is a field that addresses many aspects beyond the "mathematical" component it traditionally had, including design, psychology, and project management, aspects we will also explore in this course.