OUTSMARTING THE HUMAN RACE

Eva Guțu, XI A

After reading an article in The Financial Times regarding the AI race, I have reached the following conclusions.


It is no surprise whatsoever that Artificial Intelligence and computerized machine learning systems have grown hastily in capacity and overall ability; thus the human race is confronted with the unsettling growth of such machines at an ominous pace. Replicating someone’s voice or even face is nothing new and poses no difficulty for AI, leading to mass misinformation and much more. Consequently, society should be wary of such probable future menaces.



Firstly, in order to comprehend our current situation, one should look back and fathom the nature of this particular issue. Throughout recent times, we can distinguish with great ease three main eras of machine learning. The slowest one, called the “pre-learning era”, consisted mainly of slow growth and was followed by the “deep-learning era”. This one is more recent and markedly more intense, ranging from 2010 to 2015, and it concluded with the “large-scale era”, lasting from 2016 to the present day. In the latter, AI has grown both in potential and in investment. Yet somehow, although aware of its potentially pernicious and catastrophic outcomes, individuals keep spending on this particular “investment of the future”.


Taking into consideration the aspects mentioned above, one can observe a rapid change in rhythm. Industry leaders of the biggest AI-focused companies, such as OpenAI and DeepMind, do not seem so concerned, preferring to believe in an ultimate success benefiting humanity and posterity. They remark that their ultimate aim is the creation of a system vastly more intelligent than any human, one able to solve society’s biggest problems, such as compelling health issues, poverty or climate change. Such utopian perspectives do sound good, but will AI stop there after we feed it all the information in the world? The fundamental principle behind these projects is that humans possess a limited mental capacity, whilst Artificial Intelligence provides unlimited storage space. Consequently, it would become a superhuman system that some regard as “God-like AI”.



Another main aspect of the aforementioned article is whether governments should step in. On the one hand, some regard intervention as support for further oversight and for maintaining some measure of control, while the other half believes this to be undesirable, wishing to retain authority themselves. Such debatable topics are nonetheless a “double-edged sword”, being settled on the “majority wins” principle. It is therefore tremendously difficult to convince divided groups driven by power and money. This constant competition amongst industry leaders is of utmost concern, since they are participating in a seemingly never-ending race for money and influence. These motives have driven many individuals throughout history, but we as a society should ask ourselves whether this is still worth it in the long run, and take action.


Nevertheless, the article tackled another main problem, the one concerning the high risk of misuse. There are a plethora of opportunities for individuals of all social classes and of varying influence to use such weapons for their personal gain. Each one of us sees stability and peace in different ways. Therefore, we do not regard the aggressive use of such machines as something plainly immoral, but rather as something open to interpretation.



Another aspect with which I wholeheartedly agree is the dire need for scrutiny, or as the author of the article called it, AI alignment. Heavy research should be conducted to ensure the safety of such systems rather than their cosmetic appeal. In my view, until society fully apprehends the capacities of the operating systems of such machines, we cannot demonstrate, or even imagine, the perils and probable outcomes in advance.

A tremendously beneficial idea expressed in the article was to keep such AI under observation and development until it is proven safe for commercialising on a large scale. This is seemingly one of the most adequate preventive measures, but we still face reluctance and uncertainty. Perhaps it is because we are scared, or because we feel threatened by the unknown itself, but either way, Artificial Intelligence is not something one should dismiss with ease.


To conclude, I believe that AI is and will continue to be in an ever-growing state; however, I maintain that strict limits and thorough control over these machines should be implemented as fast as possible, in order to slow down such developments or at least achieve some balance.
