Rom C is a serial entrepreneur and angel investor with over two decades of experience at Google, Amazon, and Amazon Web Services, and a founder in Health-Tech, Artificial Intelligence and De...
Rom C @RomC
Most teams are rushing to use AI, but very few are asking a simple question:
Should raw customer data ever reach an LLM?
In many workflows today, sensitive data (financial info, personal identifiers, internal docs) is sent directly to AI tools. That creates unnecessary risk, even with trusted providers.
At Questa AI, we’re exploring a different approach:
→ A privacy-first LLM layer
→ Using a PII anonymizer to redact sensitive data before processing
→ Enabling Secure AI without fully self-hosting infrastructure
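The PII-anonymizer step above can be sketched in a few lines. This is a minimal illustration, not Questa AI's actual design: the patterns and placeholder labels are assumptions, and a production redactor would use a proper NER-based detector rather than three regexes.

```python
import re

# Hypothetical sketch: redact common identifiers before a prompt
# ever reaches an external LLM. Patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder, keeping context."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (555-867-5309) disputed a charge."
print(redact(prompt))
```

The redacted prompt still carries the intent ("disputed a charge"), so the LLM can work on context without ever seeing the identity.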
The idea is simple:
AI should work on context, not identity.
Curious how others are thinking about this —
Is privacy best solved through infrastructure (self-hosting), or through smarter data handling?