Nowadays, AI is a constant topic of conversation. In a world where big tech companies are scrambling to include AI in their products and developers are improving it at a breakneck pace, many people wonder what this means for the future. Some argue AI will help humanity innovate and move forward, while others argue the opposite. Or will AI go Terminator mode and take over the world, just like Hollywood warned us? Well, the reality is that it's a double-edged sword.
AI has indeed helped humanity. It has the potential to assist medical professionals during surgeries. Automated machines now handle many of the most dangerous manufacturing jobs in modern factories, keeping workers out of harm's way. AI can even perform calculations and write code, all much faster than a human can.
The AI company OpenAI released its chatbot ChatGPT in November 2022 and has been developing it ever since. You type in a prompt, and ChatGPT spits out an answer. For instance, a person could ask ChatGPT to write a program to organize emails, and ChatGPT would produce one in seconds. Need help with a math problem? It solves that just as fast. You can ask ChatGPT almost anything and it will answer.
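To picture what that looks like, here is a hypothetical sketch of the kind of email-organizing script a chatbot might generate on request. The sample inbox, sender addresses, and sorting rule (grouping by sender's domain) are all invented for illustration:

```python
# Hypothetical example of a simple AI-generated email organizer.
# The sample emails and the group-by-domain rule are made up for
# illustration, not taken from any real product.

def organize_emails(emails):
    """Sort emails into folders named after the sender's domain."""
    folders = {}
    for email in emails:
        # Use the part of the address after '@' as the folder name.
        domain = email["from"].split("@")[-1]
        folders.setdefault(domain, []).append(email["subject"])
    return folders

inbox = [
    {"from": "boss@work.com", "subject": "Quarterly report"},
    {"from": "deals@shop.com", "subject": "50% off sale"},
    {"from": "hr@work.com", "subject": "Benefits update"},
]

print(organize_emails(inbox))
# → {'work.com': ['Quarterly report', 'Benefits update'], 'shop.com': ['50% off sale']}
```

A chatbot can churn out a snippet like this almost instantly, which is exactly what makes it so appealing, and, as the next sections argue, so easy to over-trust.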
AI is not without its problems, however. ChatGPT and similar AI programs can't innovate the way humans can, because at its core, AI builds on existing sources. AI is like a Lego builder following instructions: given examples of how sets are assembled, it can build something similar, but hand it a pile of loose bricks and it can't come up with something 100% original the way a human could.
AI is also prone to giving incorrect outputs. If AI writes code, some of the lines could be broken. If it's told to write a history essay, it can make grammatical errors or state false historical information. The issue is that many people trust AI to be correct 100% of the time, and it simply isn't. A prime example: when Google Gemini was asked to generate images of historical figures, it produced blatantly inaccurate ones. Or when ChatGPT was used to draft a legal filing for an injury claim, it fabricated citations and research that didn't exist.
In addition, there's the glaring issue of education. Many students use AI to write entire essays and complete whole problem sets, not just a single paragraph or problem. When students use AI to do their assignments, they aren't learning the material. It's no different than copying off an answer key while doing homework: they take the easy way out, and then they know nothing come test day.
Deepfakes have also attracted a lot of attention. The fact that AI can replicate a person's movements and voice at nearly a 1:1 scale is honestly frightening. The main controversy surrounding deepfakes is that people can use them with malicious intent, for scams and for spreading fake news.
AI is a fragile thing. It has the power to reshape innovation as we know it and bring humanity places we've never been. It can advance medicine and industrial technology further than humans could alone. But AI could also set us back if people abuse it for cheating and political manipulation. AI must be regulated to ensure that it moves us forward into a promising future, not a terrifying dystopia.