The Vast Tree of Artificial Intelligence

Siddhant Raj | June 3, 2024
AI and the Future of Knowledge | Features

Artwork by Isaac Muelas Cerezo, age 15, Spain

A new era of code-driven systems and algorithm-based machinery has captured the limelight in the realm of technology.

Today, humanity has advanced to smartphones and iPads, displacing pagers and snail mail, which are now considered relics of ages past. Amid this wilderness of rapid technological development, a new light has emerged and is being implemented globally. Powered by machine learning and algorithm-based methods, artificial intelligence (AI) is playing a key role in amplifying human efficiency in everything from transportation to household equipment. In layman's terms, artificial intelligence enables machines to learn from experience and perform human-like tasks. Moreover, AI aids in discerning important data, such as poverty rates, identifying corruption, and predicting consumer behavior.
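To picture what "learning from experience" means in practice, here is a minimal sketch in Python (the scikit-learn library and the invented study-hours data are illustrative assumptions, not part of any real system): a model is trained on a handful of labeled examples and then makes a prediction about a case it has never seen.

```python
# A minimal sketch of "learning from experience": a decision tree is
# trained on a few labeled examples and then predicts the outcome for
# an unseen case. The data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is [hours studied, hours slept]; labels: 1 = passed, 0 = failed.
examples = [[8, 7], [7, 8], [2, 4], [1, 6], [9, 6], [3, 5]]
labels = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(examples, labels)

# The model has never seen this student, yet it generalizes
# from the examples it was trained on.
print(model.predict([[6, 7]]))  # likely prints [1]
```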

Hence, AI may be considered a vast tree that branches into divergent fields and spheres. Today, it has grown to the extent that it aids in making decisions and discerning important data, thereby solving problems, taking action, and making an impact. Sundar Pichai, the Chief Executive Officer of Google, once said at the World Economic Forum, "AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire." However, while it is a linchpin of global development, it might also be a major peril to human autonomy. But before we take a deep dive into the future of AI, we must take a peek at its origins.

The Inception of Artificial Intelligence

It is commonly believed that artificial intelligence is a 21st-century phenomenon, but the truth is that the inception of AI dates back to the 1940s. In 1943, Walter Pitts and Warren McCulloch proposed a model of artificial neurons, which served as the seed for the colossal tree of artificial intelligence. In 1950, Alan Turing introduced the "Turing test," a method of determining whether a machine could display human intelligence by conversing with a human without being identified as a machine. In 1955, John McCarthy, prominently known as the "Father of AI," coined the term "artificial intelligence." In 1961, American inventor George Devol's creation Unimate, the world's first industrial robot, went to work: a hydraulic manipulator arm used by car makers at the time. Following that, Shakey, the first general-purpose mobile robot, was developed by Charles Rosen, Nils Nilsson, and Peter Hart between 1966 and 1972. In 1997, AI took the spotlight in the sports world after Deep Blue, a chess-playing computer from IBM, beat world chess champion Garry Kasparov. In 1998, the realm of AI expanded its horizon to imitating human-like emotions, when American scientist and entrepreneur Cynthia Breazeal invented Kismet, a robot with the ability to recognize and respond to human emotions.

The Future of AI

Eminent physicist and cosmologist Stephen Hawking once said, "AI is likely to be either the best or the worst thing to happen to humanity." As AI rapidly spreads and becomes etched into our day-to-day lives, shouldn't we be aware of where artificial intelligence stems from? How does it continue to develop? And how will it affect humanity in the coming years? Will it give rise to crises, or will it aid socioeconomic growth? This vast family tree of AI was planted with the invention of modern computers in the mid-20th century, and computers are an integral part of every household today. However, the journey of innovation did not stop there.

In the past decades, the world has seen a quantum leap in the digital realm. In 1966, the first chatbot, named ELIZA, was created by Joseph Weizenbaum at MIT as his way of exploring communication between humans and machines. In 1999, Sony introduced AIBO, a series of robotic dogs. In 2016, Hanson Robotics developed Sophia, a humanoid designed to display human-like emotions and interact with people. These are just a few of humanity's myriad achievements in its endeavors to build algorithm-based technology, all of which are like leaves on the tremendous tree of artificial intelligence.

AI Bias

Today, we use artificial intelligence as a wide-ranging tool to simplify our everyday lives. It is most often tasked with performing human-like activities, such as playing chess, detecting objects, and surveying environments. While it has the potential to propel decades, or perhaps centuries, of scientific research and discovery, couldn't it also be a threat to humanity and its overall development? In a 2018 Pew Research Center canvassing of 979 experts, 63% of respondents predicted that the rise of artificial intelligence would leave most people better off, whereas the remaining 37% feared that it would not, especially in terms of shrinking human autonomy and agency. Thus, while artificial intelligence paves the path to future technological advancements and innovations, it also comes with a myriad of biases and conjectures, which can give rise to misrepresented information or invalid data. However, technology pioneers, entrepreneurs, and innovators claim to be burning the midnight oil to resolve the issue of AI bias, in part by pursuing artificial general intelligence (AGI), which aims to generate more personalized information and data.

To date, there have been multitudinous high-profile incidents in which organizations have faced AI bias. In the healthcare industry, computer-aided diagnosis systems have been found to return lower accuracy for Black patients than for white patients. A similar example of AI conjecture arose when Amazon's automated recruitment system was reported to have discriminated against women by rating male applicants higher than female applicants. In 2016, Microsoft launched a chatbot named Tay, developed to engage in playful conversations with Twitter users. However, within 24 hours, it began posting racist and transphobic tweets based on its interactions with users. These are merely some of the many problems that businesses have encountered while relying on artificial intelligence.
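One common root of such bias is skewed training data: a model that sees many examples from one group and few from another tends to perform far worse on the underrepresented group. The Python sketch below illustrates this with entirely synthetic data (the group labels, sample counts, and patterns are assumptions made up for illustration, not drawn from the real incidents above).

```python
# A toy illustration of dataset bias: the model sees many examples
# from group A and few from group B, whose pattern happens to differ,
# so it ends up far less accurate for group B. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A (900 samples): label is 1 when the feature is positive.
x_a = rng.normal(size=(900, 1))
y_a = (x_a[:, 0] > 0).astype(int)

# Group B (100 samples): the relationship is reversed.
x_b = rng.normal(size=(100, 1))
y_b = (x_b[:, 0] < 0).astype(int)

# Train a single model on the combined, skewed dataset.
model = LogisticRegression().fit(
    np.vstack([x_a, x_b]), np.concatenate([y_a, y_b])
)

# The model learns the majority group's pattern and fails the minority.
print("accuracy on group A:", model.score(x_a, y_a))  # high
print("accuracy on group B:", model.score(x_b, y_b))  # low
```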

The tree of AI has grown rapidly over the years. In 2002, iRobot introduced Roomba, a robotic vacuum cleaner. In 2011, Apple introduced Siri. In 2014, a chatbot named Eugene Goostman was claimed to have passed the Turing test, and Amazon launched its virtual assistant Alexa the same year. Moreover, in 2017, Google's AlphaGo defeated champion player Ke Jie in all three games of Go it played against him.

In my opinion, the impact of AI depends on how we use it. If we misuse our digital resources and become blindly dependent on them, then AI will surely bring great turmoil to global development. However, using these resources wisely will expand our effectiveness while preserving our autonomy. Movies like Eagle Eye, Terminator, and Avengers: Age of Ultron highlight the dangers that unchecked AI can give rise to. Therefore, as digital citizens, it is imperative that we practice strong cybersecurity and use AI mindfully, because it is soon going to be carved into our lifestyles.

From the Turing test to ChatGPT, the rise of AI in recent years has reached a plethora of fields, such as customer service, retail, and marketing, among many others. But every big innovation comes with a big cost. Today, AI laws are being implemented, a sign that the reign of AI is inevitable for the human race. Only time will tell whether humanity will succumb to a world driven by algorithms and code.

Sources:

https://www.javatpoint.com/history-of-artificial-intelligence

https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profound-than-fire-electricity/

https://aibusiness.com/responsible-ai/stephen-hawking-ai-could-be-human-history-s-greatest-disaster-but-there-is-an-alternative

https://www.simplilearn.com/advantages-and-disadvantages-of-artificial-intelligence-article

https://www.ibm.com/topics/ai-bias

https://www.prolific.com/resources/shocking-ai-bias

Siddhant Raj is an 11-year-old from a multicultural family in India. He loves to call himself a bibliophile and aspires to be a published author someday! Riverside School is the space where he gives shape to his thoughts and dreams. Writing stories and poems is Siddhant’s superpower.