Confident Security, ‘the Signal for AI,’ comes out of stealth with $4.2M
As consumers, businesses, and governments flock to the promise of cheap, fast, and seemingly magical AI tools, one question keeps getting in the way: How do I keep my data private?
Tech giants like OpenAI, Anthropic, xAI, Google, and others are quietly scooping up and retaining user data to improve their models or monitor for safety and security, even in some enterprise contexts where companies assume their information is off limits. For highly regulated industries or companies building on the frontier, that gray area could be a dealbreaker. Fears about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government.
Enter San Francisco-based startup Confident Security, which aims to be “the Signal for AI.” The company’s product, CONFSEC, is an end-to-end encryption tool that wraps around foundational models, guaranteeing that prompts and metadata can’t be stored, seen, or used for AI training, even by the model provider or any third party.
“The second that you give up your data to someone else, you’ve essentially reduced your privacy,” Jonathan Mortensen, founder and CEO of Confident Security, told TechCrunch. “And our product’s goal is to remove that trade-off.”
Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, TechCrunch has exclusively learned. The company wants to serve as an intermediary vendor between AI vendors and their customers, such as hyperscalers, governments, and enterprises.
Even AI companies may see the value in offering Confident Security’s tool to enterprise clients as a way to unlock that market, said Mortensen. He added that CONFSEC is also well-suited for the new AI browsers hitting the market, like Perplexity’s recently launched Comet, to give customers guarantees that their sensitive data isn’t being stored on a server somewhere that the company or bad actors could access, or that their work-related prompts aren’t being used to “train AI to do your job.”
CONFSEC is modeled after Apple’s Private Cloud Compute (PCC) architecture, which Mortensen says “is 10x better than anything out there in terms of guaranteeing that Apple cannot see your data” when it runs certain AI tasks securely in the cloud.
Like Apple’s PCC, Confident Security’s system works by first anonymizing data by encrypting and routing it through services like Cloudflare or Fastly, so servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.
“So you can say you’re only allowed to decrypt this if you are not going to log the data, and you’re not going to use it for training, and you’re not going to let anybody see it,” Mortensen said.
Finally, the software running the AI inference is publicly logged and open to review so that experts can verify its guarantees.
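To make the idea of conditional decryption concrete, here is a minimal, illustrative sketch in Python of “policy-bound” key release: a prompt is encrypted on the client, and the decryption key is handed over only to a server whose attested configuration promises no logging, no training, and no human review. This is not CONFSEC’s actual protocol; the names, policy fields, and checks are invented for illustration, and a real deployment would rely on signed hardware attestations rather than a plain dictionary.

```python
# Illustrative sketch only: policy-bound decryption, not CONFSEC's real protocol.
# Requires: pip install cryptography
from dataclasses import dataclass
from cryptography.fernet import Fernet

# The conditions Mortensen describes: decrypt only if none of these happen.
REQUIRED_POLICY = {"logging": False, "train_on_data": False, "human_review": False}

@dataclass
class Attestation:
    # In a real system this would be a signed measurement of the server's
    # software stack (as in Apple's PCC); here it is just a dict of claims.
    claims: dict

def encrypt_prompt(prompt: str) -> tuple[bytes, bytes]:
    """Client side: encrypt the prompt under a fresh symmetric key."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(prompt.encode())
    return ciphertext, key

def release_key(key: bytes, attestation: Attestation) -> bytes:
    """Key broker: release the key only if every policy claim is satisfied."""
    for claim, required in REQUIRED_POLICY.items():
        if attestation.claims.get(claim) is not required:
            raise PermissionError(f"policy violated: {claim}")
    return key

# A compliant server gets the key and can decrypt.
ciphertext, key = encrypt_prompt("summarize my medical records")
good = Attestation({"logging": False, "train_on_data": False, "human_review": False})
print(Fernet(release_key(key, good)).decrypt(ciphertext).decode())

# A server that logs requests is refused the key entirely.
bad = Attestation({"logging": True, "train_on_data": False, "human_review": False})
try:
    release_key(key, bad)
except PermissionError as err:
    print("decryption refused:", err)
```

The design point the sketch captures is that the privacy guarantee is enforced before decryption ever happens: a non-compliant server never holds the key, so there is no moment at which it could log, train on, or expose the data.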
“Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself,” Jess Leão, partner at Decibel, said in a statement. “Without solutions like this, many enterprises simply can’t move forward with AI.”
It’s still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential clients, to add CONFSEC to their infrastructure stacks.
“You bring the AI, we bring the privacy,” said Mortensen.