For all their intelligence, human beings carry some questionable tendencies. These tendencies cover a wide expanse in their own right, but none matches our habit of putting our own interests above everything else, even when it means abandoning the moral compass altogether. With such a dynamic in play, things can go wrong in no time, so as a preventive measure, the world has set up dedicated regulatory bodies across the board. Responsible for establishing order, these bodies monitor every action that can leave an impact on the public domain. The idea has worked like a charm, but we must also shed light on the challenges it has faced, the biggest of which has come on the back of technology’s success. You see, technology is structured in a way that makes tracking its entire realm outright impossible. Nonetheless, some recent events suggest a major breakthrough in this regard. As regulators slowly learn how the big tech companies are exploiting people, the room for mishaps, deliberate or not, keeps getting smaller. In fact, TikTok’s latest measures offer yet another example of this shift.
TikTok has officially revealed plans to strengthen its regulatory efforts around dangerous content. According to certain reports, the new measures will take into account various elements, including harmful hoaxes, hateful content, and anything that promotes serious harms such as eating disorders. To get these measures actively running at the ground level, TikTok will rely on creators to keep their followers informed. Although the company’s announcement made no mention of it, the move seems largely inspired by an incident in which a viral hoax warned TikTok users of impending real-world violence in schools. The hoax gained so much traction that schools across the country had to shut down. Beyond TikTok, many other social platforms have faced similar issues, inviting serious scrutiny from regulators. With the pressure piling up, these platforms now have no option but to genuinely go out and create a safer environment for their users.
“Our policies are designed to foster an experience that prioritizes safety, inclusion, and authenticity,” said Cormac Keenan, head of trust and safety at TikTok, in a press release announcing the new policy. “They take into account emerging trends or threats observed across the internet and on our platform.”
As part of the safety update, TikTok is also introducing a new policy category called “dangerous acts and challenges”. Living up to its billing, the category is focused on encouraging a more mindful viewing experience. It achieves this by asking viewers to stop a challenge video, consider whether it is harmful, and make a decision; if they find the content potentially dangerous, they are further asked to report it. Rounding out the update with a broadened approach to moderating content related to disordered eating, TikTok will hope that the new measures steer the platform in a more responsible direction.