ARC has officially announced the launch of KeyGuard HE (KHE), a secure, enterprise-controlled AI solution.
According to the company, the solution lets businesses reap the benefits of AI without risking their intellectual property being used for training or inference by Large Language Models (LLMs) behind public cloud AI services like ChatGPT, Gemini, and Anthropic, among others.
As for KHE's use cases, they begin in the financial sector, where institutions can leverage the solution to perform real-time analysis of encrypted data, applying AI models for trend analysis, fraud detection, and customer insights, all without exposing sensitive financial records or violating compliance regulations.
Next up, hospitals and other healthcare facilities can rely on the solution to securely share and analyze encrypted patient data across organizations, enabling AI-driven medical research and patient diagnostics. Here, KHE can be expected to support compliance with strict privacy laws like HIPAA. Beyond that, enterprises can also use KHE to track and audit their supply chain activities through smart contracts.
Having covered the use cases, we can now look at KeyGuard HE's features in more depth. For starters, the technology employs homomorphic encryption, specifically the CKKS scheme, which enables computations to be performed directly on encrypted data, ensuring privacy throughout the process.
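To make the "computation on encrypted data" idea concrete, here is a minimal sketch using the classic Paillier cryptosystem, which is additively homomorphic. This is purely illustrative: it is not KeyGuard HE's actual implementation, and CKKS (a lattice-based scheme for approximate arithmetic on real numbers) works very differently; the toy primes below are far too small for real use.

```python
# Toy additively homomorphic encryption (Paillier scheme).
# Illustrative only — NOT KeyGuard HE's CKKS implementation, and the
# primes are demo-sized; real deployments use ~2048-bit moduli.
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Key generation
p, q = 293, 433          # small demo primes
n = p * q
n2 = n * n
g = n + 1                # standard simple choice of generator
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Multiplying two ciphertexts adds the underlying plaintexts —
# the server computes on data it can never read.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42
```

The key point the sketch demonstrates is that whoever performs the computation (here, the ciphertext multiplication) never needs the decryption key, which is exactly the property KHE relies on to keep enterprise data private during AI processing.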
Next, there is the solution's LLM integration, which goes a long way toward letting users plug and play any AI model while encrypting it, along with its responses, in a matter of seconds. Then there is a public encrypted facility, which provides trustless user control over key management and lets users validate key deletion or modification requests.
“With KeyGuard HE, corporate users can experience the power of AI with confidence that they hold complete control over the keys to security, and that no untrusted vendor, or anyone else can ever access or view that data without permission,” said TJ Dunham, Founder & CEO of ARC. “The model can’t leak your data, as it is encrypted and customers can revoke key access at any time. KeyGuard HE empowers users to manage their private keys with full transparency and trust by using blockchain-based smart contracts to provide verifiable proof-of-actions.”
Another detail worth mentioning is the solution's promise of significantly enhanced transparency: every user action is meant to be traceable, ensuring full integrity and trust in the system.
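The traceability idea can be illustrated with a minimal tamper-evident hash chain, the basic building block behind blockchain-style audit logs. This is a sketch of the general technique, not ARC's smart-contract implementation; the action names are hypothetical.

```python
# Minimal tamper-evident action log (hash chain) — a sketch of the kind
# of verifiable audit trail described above; not ARC's actual code.
import hashlib
import json

GENESIS = "0" * 64

def _entry_hash(action: str, prev: str) -> str:
    payload = json.dumps({"action": action, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, action: str) -> None:
    """Append an action, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"action": action, "prev": prev,
                "hash": _entry_hash(action, prev)})

def verify(log: list) -> bool:
    """Recompute the chain; any edit to a past entry breaks it."""
    prev = GENESIS
    for e in log:
        if e["prev"] != prev or e["hash"] != _entry_hash(e["action"], prev):
            return False
        prev = e["hash"]
    return True

log = []
append(log, "key_created")           # hypothetical action names
append(log, "key_access_revoked")
assert verify(log)

log[0]["action"] = "tampered"        # rewriting history...
assert not verify(log)               # ...is immediately detectable
```

Because each entry commits to the hash of its predecessor, altering any past action invalidates every later hash, which is what makes user actions verifiably traceable.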
Rounding off the highlights is the potential for data privacy at scale: the combination of HE and LLMs allows large-scale computations to be performed securely, without compromising privacy or security.
The development follows a growing number of warnings from experts about sharing confidential or personal information with AI chatbots. With that in mind, many security-conscious companies have even banned their employees from using public AI tools for fear of losing the all-important intellectual property they have created.
“You should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT. And that can be everything from computer code to trade secrets,” Mike Wooldridge, a professor of AI at Oxford University, told The Guardian last year.