
Though proto-computer science has its roots in Babylonian times, the general idea behind a working computer did not emerge until the 17th and 18th centuries, with mechanical calculators and a mechanized loom, respectively. After that, the first general-purpose computer was designed in the early 19th century but never built, and the idea remained very much a novelty. Some special-purpose computing devices came and went over the course of the next century, before the foundations of what would become theoretical computer science arrived in the 1930s.
Before that, the foundations of mathematics had made great strides in the 1880s, out of which digital logic emerged in symbolic form, building on work on binary numbers from roughly two hundred years earlier. From the 1880s until the start of the 1930s, digital logic was refined, and the first physical gates were produced. Only with binary logic could digital computers exist in the first place (we ignore the development of analog computers, as these follow completely different paradigms).
In the 30s, thought was invested into computational methods. It is at this time that we see the development of important ideas such as the Halting Problem, Lambda Calculus and Automata Theory, the last of which was the most logically complete description of a computer or computer program and what it is capable of. This would later have strong implications: different logic systems led to verification technology, which entered the scene in the 1980s, reached its theoretical heights in the 1990s, and was paramount to verifying the correct functioning of software and hardware systems. This was sorely necessary, given how complex these systems would become.
From the 30s to the eventual war in the 40s, technological progress within computer engineering was suppressed on the European continent, with the exception of cryptography, a discipline reaching back to Roman times. As a result, the computer industry took off in the United States, where research programs were dedicated to building computing machines (having less funding, the other world superpower, the Soviet Union, made a name for itself in mathematics). From these programs several influential computers were created in the 1950s using thermionic tubes. Though the first modern transistor was created in the late 40s, it was, like the thermionic tube, still expensive to produce and operate. Those tubes are what effectively caused computers to take up large rooms and consume enormous amounts of power.
Because of this, the study of reversible computing emerged to save money on electricity costs. It did not make a large impact. However, it did reveal some interesting properties about the nature of the universe on the quantum level, as well as about information theory. It is here that physics was first tied up with information theory, which had implications for quantum computers. Not much later, at the end of the 50s, the first MOSFETs were created using photolithographic processes, which allowed the computer to become smaller and cheaper, giving rise to computationally more powerful computers that are only now reaching their final frontiers with the given hardware. Moore’s Law emerged in the 60s as a response; its lesser-known sibling, Koomey’s Law, describing the efficiency side of the same trend, was only formulated decades later.
The increase in computing power allowed for probably the most important computing development in the 50s, what is now known as an assembler. An assembler is a piece of software that takes a collection of symbols and converts them into the instructions a device is supposed to execute. Other assembly languages followed shortly after, to accommodate different instruction set architectures, which soon gave rise to the study of programming languages. These really have only one purpose: to tell the computer, in shorter and shorter text, to do more and more, while both the computer and the writer of that text make fewer mistakes. Leading the charge into the fray were women, because men believed programming to be a trivial discipline.
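To make the idea concrete, here is a minimal sketch of what an assembler does, written in modern Python for a hypothetical machine whose mnemonics and opcodes I have invented purely for illustration (they are not taken from any historical architecture):

```python
# A minimal, hypothetical assembler: map textual mnemonics to numeric opcodes
# and pack operands into fixed-width machine words.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0xF}  # invented ISA

def assemble(source: str) -> list[int]:
    """Turn lines like 'LOAD 7' into 8-bit words: high nibble = opcode, low nibble = operand."""
    words = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()      # strip comments and whitespace
        if not line:
            continue
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        words.append((OPCODES[mnemonic] << 4) | (operand & 0xF))
    return words

program = """
LOAD 7    ; put 7 into the accumulator
ADD 5     ; add 5
STORE 2   ; write the result to memory cell 2
HALT
"""
print([hex(w) for w in assemble(program)])  # ['0x17', '0x25', '0x32', '0xf0']
```

Real assemblers add labels, macros and relocation on top of this, but the core job is exactly this translation from human-readable symbols to machine words.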
While developments in newer languages ran in parallel to all other developments until the present day, theoretical knowledge about computer science reached its zenith in the 60s, when many foundations of computing were developed, such as Algorithmic Complexity. Many today see this discipline as exemplifying computer science. Information Theory and a strange field known as Algorithmic Information Theory also enjoyed niches in the study of computer science, and turned out to have wide-ranging applications for the development of software and telecommunications. Chaos Theory emerged from the first simulations, with the “Lorenz attractor” being among the first (mathematical) chaotic systems to be discovered.
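The Lorenz system is small enough to sketch. The rough Python simulation below uses simple Euler integration and the classic parameters (sigma = 10, rho = 28, beta = 8/3) only to show the hallmark of chaos: two trajectories that start almost identically drift far apart.

```python
# Rough Euler-integration sketch of the Lorenz system, illustrating how two
# trajectories with nearly identical starting points diverge (chaos).
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # first trajectory
b = (1.0, 1.0, 1.0 + 1e-9)   # second trajectory, perturbed by a billionth
for step in range(5000):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"step {step}: separation = {dist:.3e}")
# The separation grows by many orders of magnitude despite the tiny perturbation.
```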
With the rising omnipresence of computers and their influence in academia, such as a first mathematical “proof by computer” in 1976, the sale of computers became a widespread business. With cheaper computing power in the 70s, organizations could afford their own computers and put them to use for bookkeeping and menial tasks. Since companies did not want to share hard-earned computer research, however, a multitude of businesses emerged that built their own software, and even hardware, causing each system to work very differently from the others. This meant that any time a computer scientist left one company, they had to learn new architectures virtually from scratch at another firm.
The main driver of such computing opaqueness was the operating system, which only became more commonplace over time, starting in the early 60s. This technology allowed computers to accommodate more than one program, and thus more than one user, per computer. Though this was a movement towards the inclusion of more users, the practical and theoretical discipline of computer programming and engineering was still largely led by a secretive priesthood of mathematicians and engineers, whose knowledge remained mostly private until the end of the 80s.
All the listed fields of study and commerce continued to enjoy significant research and investment, while hardware kept evolving. Semiconductor engineering was getting better, and so were telecommunications and signal processing (e.g. the Fast Fourier Transform, 1965). The late 60s and early 70s saw the first setups in which multiple computers were connected to one another, creating a proper “computer network”, so to speak. 1971 brought with it the first wireless network. Around this time, the first computer virus was detected on ARPANET, which signified the start of what would later be dealt with in cyber security, a subdiscipline of security studies.
With increasing developments across this multitude of subjects, computers were now cheap enough that people could, in theory, own individual computers, such as machines running Unix, which emerged in 1969. Coming with it was the introduction of what would later be known as the “grandfather of all modern programming languages”, C. It built on other languages that emerged before it, most of which are now historical curiosities. C is still used today, mainly because it has versatile management of memory and replacing it is difficult. Gaming consoles also emerged around this time.
With Unix, computers could now be used by individuals, which meant that software helping people automate things became more commonplace. Computer companies of this age, such as IBM or DEC, enjoyed plenty of success. With an ever better understanding of computer science and the automation of software, newer and newer machines could be built that would allow users to do more and more with less and less. The mid 70s saw the emergence of two of the most influential computer companies, Microsoft and Apple, both still operating and thriving today.
As the industrial 70s came to a close, computer networks now linked many universities with each other. Many other companies offering unlinked computer services were born. Despite these, compatible software was hard to come by. Organizations such as the University of California, Berkeley, with its Berkeley Software Distribution, allowed certain entities to use their software for educational purposes. Yet this wasn’t nearly enough to fully understand how operating systems or programs functioned at large.
In the early 80s, the now famous GNU project launched as a response to this, supported by a small cadre of programmers. The central tenet of the project was that software, like much written text, should be free. GNU’s purpose was to create all the software that a commercial computer might have at the time and make it 100% available to anyone: text processors, editors, compilers, its own file system, device drivers, porting tools, games, etc. After roughly eight years most of the pieces were in place, but the one missing component, the kernel, was supplied in 1991 by an independently written, freely licensed kernel; the resulting, completely free operating system was revolutionary, and it is the one we now know as (GNU/)Linux.
In parallel to its creation was the development of the internet for everyone: military, academics, businesses, students, working people. While the 70s laid the theoretical foundations of computer networks, the 80s implemented all that had been learned in massive, continent-spanning networks. The cheapest way to do this at the time was through telephone lines (i.e. dial-up internet), because they were already in place as signal-carrying media.
We must remember that throughout all this, computers did not stop increasing in their computational power. As they became even more capable, starting in the early 80s, computers could now be customized with color graphics. UI/UX was now of greater importance, as the computer industry fought over a global client pool. Apple made strides towards this goal, while Microsoft was more focused on the functional side of computing, generating arcane solutions for professionals.
Once the 90s began, a new era of computer history arrived: the Internet was becoming widespread. Middle-class households began to have computers in their homes. Much information was being digitized, and anyone who wanted an alternative operating system could simply use the free one built around the GNU tools. Many computer companies were going out of business in the face of the success of Microsoft and Apple, and the internet became ever more prevalent. The three primary operating system families emerged: (GNU) Linux, Windows and the Macintosh’s Mac OS (whose modern descendant is built on Darwin).
With a widespread understanding of computing machines, quantum computing emerged from the physicists’ corners of universities, and was studied hard enough to cause (mostly theoretical) concerns in cyber security: Shor’s algorithm, invented in 1994, would, if physically realized cheaply, be able to break RSA encryption, compromising the security of every computer. This is what is primarily fueling interest in quantum computing today (such as at IBM), though results have thus far been dubious.
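To see why factoring matters here, consider a toy RSA example with comically small, insecure numbers chosen purely for illustration (real keys use moduli thousands of bits long, and the snippet assumes Python 3.8+ for the modular inverse): whoever can factor the public modulus can reconstruct the private key, which is exactly the step Shor’s algorithm would perform efficiently on a large quantum computer.

```python
# Toy RSA with tiny primes, purely to illustrate why factoring breaks it.
p, q = 61, 53                    # secret primes (a real key uses a ~2048-bit n)
n = p * q                        # public modulus: 3233
e = 17                           # public exponent
phi = (p - 1) * (q - 1)          # Euler's totient of n
d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key

# An attacker who can factor n (the task Shor's algorithm speeds up) recovers
# p and q, recomputes phi and d, and reads the message:
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
print(pow(ciphertext, d_found, n))  # 65 -- the plaintext, recovered
```

The brute-force factoring loop above only works because n is tiny; for realistic key sizes, classical factoring is infeasible, which is exactly the assumption a practical quantum computer running Shor’s algorithm would undermine.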
With the internet all around, new software services emerged that interfaced with it to offer services that were previously impossible to deliver. The primary formats of information exchange on the internet emerged, such as HTML in 1991, which brought with it a staggering investment in online services. This led to the Dotcom bubble (which burst in early 2000), where people imagined how profitable the internet would become while overstating its actual value. Y2K, a potential catastrophe engineered unintentionally by decades of storing years as two digits to save memory, cast a shadow over the computer engineering industry as everything moved into the new millennium. Luckily for the computerized world, New Year’s Eve of 1999 passed by without much chaos.
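The root of the problem is easy to reproduce. Here is a rough Python illustration (the record format and function names are invented) of how two-digit years make arithmetic across the century boundary go wrong:

```python
# A rough illustration of the Y2K problem: years stored as two digits to save
# space, then used in arithmetic that implicitly assumes the 1900s.
def account_age_buggy(opened_yy: int, current_yy: int) -> int:
    """Pre-Y2K style: both years are two-digit values, implicitly 19xx."""
    return current_yy - opened_yy

def account_age_fixed(opened_yyyy: int, current_yyyy: int) -> int:
    """Post-remediation: four-digit years, no century ambiguity."""
    return current_yyyy - opened_yyyy

# An account opened in 1985, checked on New Year's Day 2000:
print(account_age_buggy(85, 0))        # -85 -> interest, billing and sorting all break
print(account_age_fixed(1985, 2000))   # 15  -> the intended answer
```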
Before jumping into the 21st century, we recall the emergence of search engines, Google being the powerhouse among them with its PageRank algorithm, which has since expanded into something far more complex, and its owner into a behemoth of technology services. Social Media also emerged around this time, connecting (computer) users via chat, and even sound and video if one bought the hardware. The way laymen communicated with each other over large distances began to change.
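At its core, the original PageRank idea is simple enough to sketch: a page is important if important pages link to it. Below is a rough, simplified power-iteration version on a made-up four-page graph; the damping factor 0.85 is the commonly cited value, and everything else is illustrative rather than Google’s actual, far more complex system.

```python
# Simplified PageRank by power iteration on a tiny made-up link graph.
links = {                      # page -> pages it links to (illustrative only)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}    # start with a uniform distribution

for _ in range(50):                            # iterate until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share          # each page passes rank to its links
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")              # C ranks highest: most pages link to it
```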
With the internet in every home, even more services emerged, which used even more computing power. Information exchange was still largely done on paper, though this would change by the end of the 00s. With companies upgrading their networks to handle more and more data transfer, it was only a matter of time before everything would be done on the internet. Building on the theoretical implications of reliable data transfer, the study of Distributed Systems emerged as a discipline in its own right. Edge computing is likely its best-known invention, with important implications for massive database management.
Because information existed in a multitude of places at once and the information exchanged wasn’t formalized, ontological engineering emerged as a way to rein in information and to understand its relations to other information. This included not only information on the internet, but also information present in larger databases, which helped informatics professionals homogenise their (information) stores. Though a promising discipline, it had built the most important ontologies by the early 00s and consequently faded slowly into obscurity. However, the end results are still being used by librarians and information scientists.
Out of nowhere, Apple decided to enter the mobile phone market in the mid 00s. Right after, Google did as well: Apple now fought Microsoft while competing with Google over a supposedly growing market for mobile phone computers. Any other competitors they had, or would have had, were wiped out when iOS (based on Darwin) and Android (based on Linux) blasted onto the scene as the victors of a battle few even knew was happening. “Apps” became commonplace.
From here, the Internet of Things was only a small conceptual jump away, where things that we didn’t previously imagine needed computing would come installed with wireless microcontrollers. With ubiquitous internet access via WiFi, putting computers into everyday devices seemed sensible: one can read out the data they have been collecting, and program them from afar to do things that would otherwise be extremely tiresome.
Throughout these decades, the ever so quiet, frustrating and sacrilegious field of artificial intelligence (AI) finally reared its head from a silent war that had been waged between symbolic (winning) and statistical AI (losing) adherents since the 1950s. As hardware had become computationally strong enough to handle the computing power needed for stochastic processes and artificial neural nets, which had been known of since the 1940s, it seems that the statistical AI camp may have finally ended this conflict. Useful AI was finally in reach and led to an industry growth that has yet to reach its peak. The 4th industrial revolution was on the horizon.
We make a callback here to the creation of GPUs in the 90s, which allowed fully fleshed-out 3D worlds and models to be created efficiently. These devices turned out to be existentially important for neural nets, their original intended use notwithstanding. This is why GPUs, needed for high-end gaming, are expensive: they are also being used for AI.
Either way, the internet had become prevalent enough that a website saw varying degrees of traffic, creating “peak” problems similar to those of the electrical grid. A single computer running at company headquarters wasn’t enough to handle sudden increases in requests. Because buying many computers for the sole purpose of handling “request peaks” was infeasible, computers could instead be rented on demand. One would manage these computers via containers, which emerged in the early 10s to accommodate development in computing centers. This was the beginning of the cloud.
Just before the start of this new decade, a few ominous internet trends were first being noticed and monitored. Humanity was still in glee at this new type of Hyperobject, and creatives were pumping out software for phones and websites, along with niche pieces of software that, with the slew of new sensors in the phone, could be modified to do almost anything imaginable. Anyone could, and to a certain degree would, publish anything they wanted online, including a new form of currency that used the knowledge gained from the study of distributed systems, cryptography (e.g. the blockchain) and edge computing.
The 2010s were, for reasons of misinformation, a darker period of computer history. The internet became the focal point of all discussions on computers. Cyber attacks, and security in response, became much more prevalent, seeing that people had access to (much) information at all times. This decade also brought explicit awareness of governments spying across private homes and national borders, which had huge implications for surveillance, privacy and international relations. Propaganda has never been easier to spread, in particular with the use of AI or, at the least, smart software.
Throughout the 2010s, computers became ever more powerful, as did phones. While online gaming found its first peak in the mid 00s, new heights were reached with phones and the emergence of Esports. With many freely available games mining their users for information about them, restrictions were clearly necessary to curb the loss of privacy. Other services did the same, which led to GDPR and similar initiatives.
In the latter half of the decade, the internet became the place of companies instead of people, where the actions of natural persons were mined for profit and advertising. Bots abounded, and information/data became the modern-day gold. Mining acquired a new meaning with cryptocurrencies and this new paradigm of “money”. Internet memes, initiated alongside the propagation of the internet in the 90s, adopted the idiosyncratic Impact font into the 2010s. Memes in general reached a strange pinnacle in the “2016 meme war”.
This “war” was fought with memes displaying political allegiances relating to the U.S. election of the time. With Donald Trump’s election as President of the U.S. came a broader awareness of the dangers of Social Media, though sociologists had already known of them in the 2000s. This form of media is still regarded as damaging, though few of us can resist its pull. With data being processed from computerized phones (everyone owns one now), software can know one’s behaviors better than one knows oneself, and outsmart people for reasons of profit. Echo chambers on the internet are turning even normal people against one another.
In the 2020s, COVID showed us the strengths of the internet, and accelerated the move towards the decentralization of work and living arrangements that journalists had suspected more than half a century before. With a huge number of people suddenly confined to their homes, attention turned to things people had previously only vaguely heard about somewhere: online financial behavior turned into naive obsessions, which worsened wealth inequality. However, not all was unsuccessful: self-organizing entities had existed prior to this, but now emerged for the first time as a movement that could change financial systems, as an uncoordinated yet successful stunt of short squeezing a particular stock showcased.
As I write, I am thinking of the developments that some of my colleagues are working on. Robotics (such as the work done at Boston Dynamics) and drone development are currently quite popular, as are the mathematics of a potential quantum internet and highly technical neural nets that can generate “fake” data indistinguishable from “real” data. The line between what has been created by humans and what by machines is vanishing, so that it is unclear how many artifacts of users on the internet are even real: AI-generated faces, voices, text, etc. are here, and it’s nearly impossible to know who are the people, and which are the machines.
It is still possible to disconnect from the internet, but it’s becoming harder with each passing year. With advances in biotechnology, rejecting technology might become more and more infeasible, though it will be interesting to see whether non-invasive procedures to augment human bodies with computing devices become popular, beyond mere wearables. With computerized watches, the trend towards computerization of all that was previously understood to be human has never been stronger than now.
With this, I close my laptop, leave my phone, and go outside. I don’t wear watches of any kind. It is perhaps only a couple more decades that we can sit outside and enjoy the sun before the computer era comes crashing down on us, one way or another, and a new age of humanity begins once again. We’ll see what happens.