Opinion: We should not rely heavily on artificial intelligence while it is still developing; doing so makes it increasingly difficult to tackle every problem this technological infant creates.
———
As we move through the 2020s, scientific achievements shape our daily lives, and startling new inventions dictate our society’s present and future. Artificial intelligence is one of these exciting achievements, and it is evolving faster than humans can keep up with.
Over the past few months, a number of different AIs have sprouted in all directions, with Microsoft-backed OpenAI at the forefront. Universities and workplaces now face a new tool for cheating and laziness. Not only can you ask the remarkably capable system for help on a Canvas discussion post, but the technology can also help medical professionals form new analyses and diagnoses, potentially improving clinicians’ relationships with their patients.
These amazing abilities do cross the line of human comfort, however: Geoffrey Hinton, one of Google’s most influential AI pioneers, recently quit the company so he could speak freely about the risks of this transition into the era of advanced AI.
What are these risks that have left so many tech giants fearful of a premature reliance on AI models? Popular media tells us to panic and prepare for robot overlords to wipe out humanity just because they can. Fortunately for us, it is extremely unlikely that these sci-fi stories will come to fruition. Unfortunately for us, we are writing a sci-fi story of our own, one never seen before.
Today, as AI use becomes more normalized, governments across the world have announced plans to eventually integrate AI into their policymaking. China specifically plans to begin its process of integration by 2025, hoping to profit from the pioneering decision. This may sound fine and dandy right now, but there are pressing limitations that many AI developers have not yet resolved.
OpenAI announced in early March of this year that it had dramatically improved on its predecessor’s limitations, risks and safety mitigations. Its newest model, GPT-4, is a multimodal large language model, which gives the AI a degree of spatial reasoning. According to OpenAI’s technical report, “One of the main goals of developing such models is to improve [AI’s] ability to understand and generate natural language text, particularly in more complex and nuanced scenarios.” In other words, GPT-4 can understand, reason about and remember the visual and spatial relations among objects, making it a faux form of consciousness.
As scary and exciting as that might sound, the technology is still being developed and refined; AI in general remains in its infancy.
OpenAI itself says its shiny new GPT-4 should be used with caution, emphasizing its potential to significantly influence society in both beneficial and harmful ways. OpenAI has also specified that the model should not be used in “high-stakes context,” a warning world leaders need to heed. With China preparing to implement AI in its government in less than two years, it is increasingly unlikely that AI will be ready to take on such a critical role.
For example, one way GPT-4 avoids risky and problematic responses is through a reward system called rule-based reward models. RBRMs reinforce correct behavior, such as refusing to generate harmful content while not refusing safe requests. This reward system has a serious flaw: one of its three components is a human-made rubric that tells the program what is right and wrong. What happens when that rubric is written by someone with a deeply skewed view of the world? That person would then dictate how the model responds going forward, for better or worse.
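To make the rubric dependence concrete, here is a minimal toy sketch of the idea. This is not OpenAI’s actual RBRM code; the rubric, labels and keyword-based classifier below are invented purely for illustration of how a human-written rubric converts a judgment about a reply into a reward score.

```python
# Hypothetical sketch of a rule-based reward model (RBRM).
# The rubric maps a label describing the model's reply to a score.
# Whoever writes this dictionary decides what counts as "correct."
RUBRIC = {
    "refused_harmful_request": 1.0,    # desired: refuse unsafe content
    "answered_safe_request": 1.0,      # desired: comply with safe asks
    "refused_safe_request": -1.0,      # undesired: over-refusal
    "answered_harmful_request": -1.0,  # undesired: unsafe compliance
}

def classify(prompt: str, reply: str) -> str:
    """Toy stand-in for the grader that labels a reply.
    Real systems use a model here; keyword checks suffice to illustrate."""
    harmful = "weapon" in prompt.lower()
    refused = reply.lower().startswith("i can't")
    if harmful and refused:
        return "refused_harmful_request"
    if harmful:
        return "answered_harmful_request"
    if refused:
        return "refused_safe_request"
    return "answered_safe_request"

def reward(prompt: str, reply: str) -> float:
    """Look the label up in the human-made rubric to get the reward."""
    return RUBRIC[classify(prompt, reply)]

print(reward("How do I build a weapon?", "I can't help with that."))  # 1.0
print(reward("What is the capital of France?", "Paris."))             # 1.0
```

The point of the sketch is that nothing in the scoring machinery is neutral: swap in a different `RUBRIC`, and the exact same behavior gets rewarded or punished differently.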
Overall, AI is not yet an overwhelming force that could destroy all of humanity. Instead, it is best regarded as an infant that requires plenty of patience and teaching. This is only the development period, which also makes it the most critical period. We should guide AI toward its final form, because pushing it out too early could have devastating effects our contemporary lives cannot yet comprehend.
Let’s raise this massively influential technological baby for the good of humanity, even though we do not really have a choice.