They might be the smartest species to ever walk the earth, but human beings still have a pretty devastating record when it comes to making mistakes. The same has been proven time and time again throughout our history, with each misstep practically forcing us to look for some semblance of a defensive cover. We did, on our part, find the best possible answer to our conundrum once we brought dedicated regulatory bodies into the fold. Having a well-defined authority across every sphere was a game-changer, as it instantly compensated for a lot of our shortcomings and, consequently, gave us a shot at possibilities we couldn’t have imagined otherwise. However, the utopia that emerged from it wouldn’t last very long, and if we are being honest, it was all technology’s fault. You see, the moment technology and its layered nature were allowed to take over the scene, people gained an unprecedented chance to exploit others for their own benefit. If that wasn’t bad enough in itself, this exploitation started to materialize on such a massive scale that it expectedly overwhelmed our governing forces and sent them back to square one. After spending a long time in the wilderness, though, the regulatory contingent seems ready to make a comeback. That much has become increasingly evident over the recent past, and it should only grow stronger on the back of newly-introduced legislation.
California Governor Gavin Newsom has officially signed off on a bill called AB 587, which is designed to make web platforms monitor hate speech, extremism, harassment, and other objectionable behaviors. According to certain reports, AB 587 will make it mandatory for social media companies to post their terms of service online, while also sharing a special report with the state attorney general twice a year. Talking about the report, it will need to include a complete lowdown on whether the platform defines and moderates several categories of content, including “hate speech or racism,” “extremism or radicalization,” “disinformation or misinformation,” harassment, and “foreign political interference.” Apart from that, it must also carry details about automated content moderation, how many times people viewed content that was flagged for removal, and how the flagged content was handled.
“California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” Newsom said in a statement. “Californians deserve to know how these platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day.”
Interestingly enough, this isn’t the first time we have witnessed California trying to bolster scrutiny of social media companies. Not long ago, the state introduced AB 2273, which was geared towards tightening regulations around children’s social media use, but like many such efforts, this one also drew opposition from certain parties. The most notable argument against it was that it could potentially violate First Amendment speech protections. Nevertheless, despite all the dissent, the bill is now very much a law.