The History and Evolution of Artificial Intelligence

Artificial Intelligence has become one of the most transformative technologies of our time. Yet humanity's relationship with the idea of intelligent machines stretches back much further than most people realize.

In this article, we will explore the history and development of Artificial Intelligence, from humanity's earliest fascination with thinking machines to the systems we use today.

Programming computers in the 1930s and 1940s was far from easy; government research institutes, businesses, and universities had to invest heavily in specialized software and equipment. Along the way, we will trace how the ideas behind Artificial Intelligence developed, from ancient myths to today's systems, including both successes and setbacks.

The Ancient Foundations of AI

Some of the earliest stories humans told were about artificial beings that simulated human life. These myths reflect humanity's early curiosity about creating machines, and they set the stage for later philosophical and scientific investigations into the nature of intelligence.

The notion of artificial beings with their own intelligence is found in the myths and legends of many cultures:

For example, in ancient Chinese lore, an engineer named Yan Shi is said to have built a human-like automaton.

In ancient Greek mythology, Talos was a giant automaton created by the god Hephaestus to guard the island of Crete.

Defining Artificial Intelligence

Artificial intelligence is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making inferences, and identifying patterns. It is a broad term that covers many different technologies, including machine learning, deep learning, and natural language processing.

Humans have speculated about the nature of mind for millennia, but in the 17th and 18th centuries thinkers began to theorize more systematically about consciousness and intelligence.

Thinkers like Thomas Hobbes argued that reasoning could be understood as a form of calculation.

René Descartes suggested that animals are essentially automata, mechanical entities, while humans possess souls and reason.

Artificial Intelligence enables computers and systems to perform tasks that traditionally required human cognition. Today, AI is used constantly and in every field, from early cancer detection and better treatment planning to automation, new revenue streams for businesses of all sizes, and the smooth functioning of social media platforms.

The Birth of Computing and AI

The 20th century saw the rise of formal computing, led by pioneers such as Alan Turing and John von Neumann. Alan Turing's 1936 paper 'On Computable Numbers, with an Application to the Entscheidungsproblem' introduced the concept of the Turing machine, an abstract device that could simulate the computation of any algorithm.
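
To make the idea concrete, here is a minimal sketch of a Turing machine simulator in Python. The transition table is a made-up example (it simply flips every bit on the tape), not anything from Turing's paper, but it shows the core mechanism: a machine that reads a symbol, consults a rule table, writes, and moves.

```python
# A minimal, illustrative Turing machine simulator (not Turing's original notation).
# The transition table maps (state, symbol) -> (next state, symbol to write, head move).

def run_turing_machine(input_tape, transitions, start_state="start", halt_state="halt", blank="_"):
    """Run a one-tape Turing machine and return the final tape contents."""
    tape = dict(enumerate(input_tape))   # sparse tape: position -> symbol
    state, head = start_state, 0
    while state != halt_state:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A made-up rule table: flip every bit, then halt when a blank cell is reached.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", flip_bits))  # prints 01001
```

Any algorithm can, in principle, be expressed as such a rule table, which is what makes the Turing machine a universal model of computation.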

In 1950, Turing proposed a benchmark, now known as the Turing test, for judging whether a machine could exhibit intelligence indistinguishable from a human's. Earlier, during World War II, his work on breaking the Enigma code had already demonstrated the practical power of computational machines.

  • 1952: Research into artificial intelligence began to take concrete shape, as visionary thinkers and pioneers started building programs that could learn.

It was also during this year that Arthur Samuel began work on his Checkers-Playing Program, widely regarded as the world's first self-learning game-playing program.

  • 1955: Herbert A. Simon and Allen Newell created the Logic Theorist, often described as the first artificial intelligence program. It eventually proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.
  • 1956: The term 'Artificial Intelligence' was coined by American computer scientist John McCarthy at the Dartmouth Conference, where AI first emerged as an academic field.

In 1958, John McCarthy developed the Lisp programming language, which was quickly adopted by AI researchers and became a popular choice among developers.

  • 1959: Oliver Selfridge published 'Pandemonium: A Paradigm for Learning', a major contribution to machine learning that described a model able to adaptively recognize patterns in incoming data.

In the same year, Arthur Samuel coined the term 'machine learning' in a seminal paper, demonstrating that a computer could be programmed to play checkers better than the person who programmed it.

  • 1964: Daniel Bobrow, a doctoral candidate at MIT, developed STUDENT, an early natural language processing program for solving algebra word problems.
  • 1965: Edward Feigenbaum, Joshua Lederberg, Bruce G. Buchanan, and Carl Djerassi began work on DENDRAL, an early expert system that helped organic chemists identify unknown organic molecules.

Enthusiasm for AI ran high in this era, when high-level languages such as Fortran, Lisp, and COBOL were invented.

Successes and Challenges of AI

The 1960s and 1970s saw notable breakthroughs in AI research, such as Shakey, a robot capable of navigating its environment, and ELIZA, an early chatbot. Many programs demonstrated impressive behaviour, but they were limited to specific domains and often lacked robustness outside controlled settings.

The late 1970s and early 1980s brought the first AI winter, marked by unfulfilled expectations and a lack of funding. Researchers were left short of the computational power needed to process complex data, and the limitations of existing approaches became clear.

The Second AI Winter: 1987-1993
The AAAI had warned that an AI winter was coming. The term refers to a period in which enthusiasm for AI fades across the public and private sectors, slowing research and investment. Both government and private investors lost interest in AI and cut funding because of high costs and low returns, which is considered a major reason many expert systems and machine learning projects of the era failed.

1987: The market for specialized Lisp machines collapsed in the face of cheaper, general-purpose hardware from companies such as Apple and IBM that could run Lisp software. With capable alternatives readily available, many specialized Lisp machine vendors failed.

1988: Computer scientist Rollo Carpenter created the chatbot Jabberwacky, which he programmed to hold interesting and entertaining conversations with humans.

Machine Learning

1990s: This decade marked a turning point with the rise of machine learning, a subset of AI focused on algorithms that allow machines to learn from data. Increasing computational power combined with statistical methods enabled systems to tackle more complex tasks.
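
As a minimal illustration of what 'learning from data' means, the sketch below fits a straight line to a handful of made-up points using gradient descent. It is purely illustrative and does not correspond to any specific historical system; the data, learning rate, and iteration count are arbitrary choices.

```python
# A minimal sketch of "learning from data": fit y = w*x + b to a few points
# by gradient descent on the mean squared error. Illustrative only.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # made-up (x, y) pairs

w, b = 0.0, 0.0          # model parameters, learned from the data
learning_rate = 0.01

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned model: y = {w:.2f} * x + {b:.2f}")  # roughly y = 2x + 1
```

The program is never told the rule behind the data; it discovers an approximation of it by repeatedly adjusting its parameters to reduce error, which is the essence of machine learning.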

1997: IBM's Deep Blue defeated world chess champion Garry Kasparov. This period also saw the emergence of early speech recognition and natural language processing (NLP) systems.

Deep Learning

2010s: This period saw a new wave of Artificial Intelligence driven by deep learning, a set of machine learning techniques based on artificial neural networks loosely inspired by the structure of the human brain. Deep learning models produced breakthroughs in image recognition, game playing, and language translation.
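
The sketch below illustrates the basic building block of such networks: a layer of simple units that weight their inputs, sum them, and apply a nonlinearity. The weights here are hand-picked, hypothetical values; in real deep learning they are learned from data, and modern models contain millions or billions of them.

```python
# A minimal sketch of the core idea behind a neural network: layers of simple
# units that apply weights, a sum, and a nonlinearity. Illustrative only.
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-total)))   # sigmoid nonlinearity
    return outputs

# Hand-picked (hypothetical) weights: a 2-input, 2-hidden-unit, 1-output network.
hidden = layer([0.5, 0.8], weights=[[0.9, -0.4], [0.3, 0.7]], biases=[0.1, -0.2])
output = layer(hidden, weights=[[1.2, -0.6]], biases=[0.05])
print(output[0])  # a single value between 0 and 1
```

Stacking many such layers, and training their weights on large datasets, is what gives deep learning its name and its power.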

Virtual assistants such as Siri, Alexa, and Google Assistant became household names, demonstrating practical applications of Artificial Intelligence. In 2016, Google DeepMind's AlphaGo defeated Go champion Lee Sedol, mastering a game long thought to require human intuition.

Artificial General Intelligence

2017: Facebook researchers trained two AI chatbots to negotiate and hold conversations with each other. As training progressed, the bots drifted away from standard English and began using their own shorthand to communicate.

2018: Alibaba, the Chinese technology group, announced that its language-processing AI had outscored humans on a Stanford University reading and comprehension test.

2019: Google DeepMind's AlphaStar reached Grandmaster level in the video game StarCraft 2, outperforming all but the top 0.2 percent of human players.

2020: OpenAI began beta testing GPT-3, a model that uses deep learning to generate code, poetry, and other writing that is nearly indistinguishable from content created by humans.

2021: OpenAI introduced DALL-E, a model that can understand and generate images from natural language descriptions, making the capabilities of AI far more tangible to the general public.

2022: The release of ChatGPT brought generative AI into the mainstream. Large language models, or LLMs, are deep learning models pre-trained on vast amounts of text, and their rapid growth is transforming how enterprises create value with AI.

2024: Multimodal AI models are delivering richer, more capable experiences by combining natural language processing, speech recognition, and computer vision in a single system.

Present and Future of AI

Today, AI is ubiquitous, powering technologies from recommendation systems to autonomous vehicles across nearly every sector, from earth to space. Innovations in generative AI, such as image synthesis tools and GPT-style models, are pushing the boundaries of creativity and automation.

Looking ahead, Artificial Intelligence is set to revolutionise sectors such as defence, healthcare, education, and climate science. Ethical considerations remain paramount, and ensuring accountability, transparency, and fairness in AI systems will be crucial as the technology develops.

We have now traced the history and evolution of AI, but its future impact on work remains uncertain. Industrial-scale AI may displace some jobs even as it creates new ones, and while no one can predict the future, we can make educated guesses. We can expect AI to become important in nearly every industry, with AI-based businesses widely adopted, more automation, shifts in the workforce as new roles emerge, and continued advances in robotics and autonomous vehicles.

Conclusion

The history and evolution of Artificial Intelligence reflect humanity's constant quest to understand and replicate intelligence. From ancient myths and early theorists to cutting-edge technologies, AI has come a long way, and its journey is far from over. As we continue to think and experiment, we must balance progress with responsibility, keeping in mind that artificial intelligence should benefit the entire world, not harm it.
