YouTube Needs to Fix Its Algorithm
Opinion
On Feb. 28, the mother of a young child found a video explaining how to commit suicide on the YouTube Kids app. This is only one of many instances in which inappropriate videos have been found on YouTube Kids, the “child-safe” alternative to YouTube. Even before this incident, the app had been criticized for using an algorithm to determine which videos are appropriate for young audiences instead of fully reviewing them. This inevitably lets through a plethora of sexual, dark, and downright weird content, exposing little kids without many parents’ knowledge. While YouTube is trying, it ultimately fails to protect its young viewers because of its slow, robotic responses and a lack of parental education about what children are watching.
The whole debate started when Free Hess, the aforementioned mother, found harmful messages hidden inside innocent-looking cartoons on the app. She had already been aware of these videos since another mother voiced her concerns a few months prior, but after seeing one for herself, she decided to contact YouTube. She also conducted her own investigation, registering on the platform as a child, and within minutes she received predatory comments. Hess, an ER pediatrician, says she has seen kids as young as seven in the ER after suicide attempts. While YouTube did eventually take down the videos, it took the company almost a week and a half to respond, which is far too slow for content that spreads so quickly.
While Hess monitored what her own child watched, a lot of parents unfortunately don’t do the same. However, it isn’t entirely their fault. Many parents aren’t aware of the real risks the internet poses, and so they mindlessly give their children access. Nowadays, children as young as one and two are head-deep in cellphones and tablets while unconcerned parents sit nearby. It is understandable that YouTube Kids provides a source of entertainment for kids whose parents may not always have time to play with them. But parents need to be aware of the risks, and of what early access to technology may do to their kids psychologically.
The responsibility of keeping children safe extends to YouTube itself as well. YouTube has repeatedly stated that it manually reviews videos every 24 hours, but its actions don’t match its words. Bot channels that publish multiple videos a day have been discovered on the site, using clickbait images of famous characters such as Spider-Man or Elsa from Frozen to draw kids in. These channels post disturbing content, such as Elsa giving birth or Dora committing suicide, exposing little kids to unsuitable topics without their parents’ knowledge. Yet these channels have millions of subscribers and views, showing that YouTube’s “manual reviews” are not working.
Instead of continuing with a process that clearly has its flaws, YouTube must revamp its censorship process. While the company may not have the resources to check every single video on the site, it should not rely on a computer algorithm alone to guarantee children’s safety. It should instead monitor popular videos on the platform for recurring patterns and use those findings to take action. A good example is YouTube’s recent ban on comments under videos featuring young children, meant to stop predatory comments. Its responses also need to be quicker, because with how fast videos spread on YouTube, there’s no telling how many kids could see an inappropriate video before it’s deleted.
However, this revamp in security does not mean that all YouTubers should be censored or prevented from making the content they want. Many creators are already suffering under recent policies, with channels unfairly demonetized for content such as cuss words or for raising awareness of suicide and depression. If the process is done correctly, YouTubers will not need to censor themselves out of fear that a child might see or hear something inappropriate. With the right approach, a balance can be struck between safety and freedom of expression without sacrificing YouTubers’ income.
We are only just beginning to see the negative impacts of our phones, and our protection methods need to change with them. Processes that may have worked a few years ago don’t work now, and the so-called “safer methods” aren’t safe anymore. It is now up to parents and companies, working together, to keep children safe from the negative influences of the world. A heightened awareness of what kids are watching, and of how it may affect them, is the first step toward better safeguards. With that awareness, hopefully we can make sure no child ever sees these kinds of disturbing videos again.
Daniel Chen • Mar 21, 2019 at 6:25 pm
Don’t take this the wrong way. I wholeheartedly agree with the intentions of this article. However, I’m just a bit confused as to what you are trying to say and I don’t want to misunderstand you.
Is the article a general call for awareness about this issue, as the last paragraph states, which I would totally agree with, or is there something more specific that I am missing? The title suggests that the algorithm should be fixed, but then in the third-to-last paragraph you say that a computer algorithm is not enough. If a computer algorithm isn’t enough, what exactly would be enough? It’s okay if you don’t have a clear solution; neither do I. If that is the case, perhaps that should be said in the article.
The issues you outlined should obviously be tackled, but some more specificity with regard to the solution would be helpful. Maybe I am not quite understanding you. Take from what I said whatever is useful to you, and throw away the rest.