Human beings may well be the smartest species ever to walk the earth, but our record at avoiding mistakes remains dismal. History has reinforced this dynamic time and again, and each reminder has pushed us to look for some form of defensive cover. Our most fitting answer was to bring dedicated regulatory bodies into the fold: a well-defined authority over each area gave us an instant safety cushion against our many shortcomings. The near-utopia you would expect from such a development did arrive, but it failed to stick around for long, and technology sits at the heart of what killed it. The moment technology's layered nature took over the scene, it gave people an unprecedented opportunity to exploit others for their own benefit. Worse, the practice soon materialized on such a massive scale that it overwhelmed our governing forces and sent them back to square one. After a lengthy spell in the wilderness, though, the regulatory contingent finally seems ready to make a comeback. That shift has only grown more evident over the recent past, and one new lawsuit could do a lot to keep the trend alive and kicking.
Programmer-turned-lawyer Matthew Butterick, together with the San Francisco-based Joseph Saveri Law Firm, has officially filed a class-action lawsuit against Microsoft, GitHub, and OpenAI, claiming that the companies' AI-powered coding assistant, GitHub Copilot, is built on "software piracy on an unprecedented scale." Released in June 2021, GitHub Copilot was designed to streamline software development by providing developers with relevant AI-generated code suggestions as they type. To do that job, the tool was trained on public code repositories scraped from the web. Many of these repositories, however, were published under licenses that require anyone reusing the code to credit its creators. As you might have guessed by now, Copilot provided no such credit.
“In June 2022, Copilot had 1,200,000 users. If only 1% of users have ever received Output based on Licensed Materials and only once each, Defendants have ‘only’ breached Plaintiffs’ and the Class’s Licenses 12,000 times,” the lawsuit stated. “However, each time Copilot outputs Licensed Materials without attribution, the copyright notice, or the License Terms it violates the DMCA three times. Thus, even using this extreme underestimate, Copilot has ‘only’ violated the DMCA 36,000 times.”
With those violations applied across all of Copilot's estimated 1.2 million users, the lawsuit seeks damages worth a whopping $9 billion.
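The figures in the complaint follow a simple chain of arithmetic. A minimal sketch of that math, assuming the $9 billion total is built on the DMCA's statutory minimum of $2,500 per violation (17 U.S.C. § 1203(c)(3)) applied to every user — an assumption about the filing's basis, not something stated in this article:

```python
# Reproducing the complaint's arithmetic (user and violation counts are
# quoted from the suit; the $2,500-per-violation floor is an assumed
# statutory minimum under the DMCA, not stated in the article itself).

users = 1_200_000            # Copilot users as of June 2022, per the suit
violations_per_output = 3    # missing attribution, copyright notice, license terms

# "Extreme underestimate" quoted in the complaint: 1% of users, one output each
conservative_outputs = int(users * 0.01)                     # breached licenses
conservative_violations = conservative_outputs * violations_per_output

# Damages sought: every user, assumed statutory minimum per violation
statutory_minimum_usd = 2_500
total_damages_usd = users * violations_per_output * statutory_minimum_usd

print(conservative_outputs)     # 12000
print(conservative_violations)  # 36000
print(total_damages_usd)        # 9000000000
```

Run as-is, the sketch reproduces the 12,000 breaches, 36,000 DMCA violations, and $9 billion figures cited above.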
Although focused on the coding space, this lawsuit highlights a much bigger problem: creators across many industries lack adequate protection from AI systems that may be using their work without any attribution at all. The sheer injustice of the situation has even pushed companies like Getty Images and Shutterstock to ban generative AI content on their platforms, but that hasn't been enough to solve the problem on a wider scale.
“This is the first step in what will be a long journey. As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable,” said Butterick.