A large chunk of human life is spent acquiring knowledge and applying it across different frontiers. The incentive for doing so, of course, is to widen our perspective over time and, consequently, become better individuals. If we take a moment to weigh our success in this regard, we’ll see that humans have done a great job so far, but a deeper look also reveals some glaring inconsistencies. Despite our best attempts to tap every possible source of knowledge, we haven’t quite managed it; a wide expanse remains beyond our reach. That dynamic doesn’t look like a major problem until you consider the risk in play: our lack of knowledge can easily land us in hugely detrimental situations. Hence, the world has established dedicated regulatory bodies across the spectrum. These bodies make sure that the interests of the general public are protected under all circumstances, and if they observe anything different, they can also dish out punishments. To save itself from a similar fate, Crisis Text Line has now announced a timely decision.
Crisis Text Line has officially announced that it will stop sharing conversation data with the AI firm Loris.ai. The decision seems to have been triggered by a recent Politico report, which revealed that Crisis Text Line (CTL) was sharing personal data without properly warning the individuals involved. The report also exposed CTL for firing a volunteer who raised concerns about its handling of data. Once the report was published, CTL put out a statement in its defence, saying, “[t]he only for-profit partner that we have shared fully scrubbed and anonymized data with is Loris.ai.” CTL went on to explain the rationale behind sharing data with Loris.ai, saying the company uses it to help other organizations de-escalate “some of their most notoriously stressful and painful moments between customer service representatives and customers.”
Furthermore, CTL even brought out the big guns when it claimed that its practices are endorsed by watchdogs like the Electronic Privacy Information Center (EPIC), but that quickly backfired after EPIC accused the organization of stripping the statement of necessary context. The watchdog, in its own statement, asserted that CTL and Loris.ai are trying to “extract commercial value out of the most sensitive, intimate, and vulnerable moments in the lives (of) those individuals seeking mental health assistance and of the hard-working volunteer responders… No data scrubbing technique or statement in a terms of service can resolve that ethical violation.”
Fortunately, the whole controversy has now been put to bed by CTL’s latest decision. Notably, Loris.ai will also delete all the data it has received from CTL thus far.