ZeroTrusted.ai offers robust LLM Firewall solutions, ensuring your data and identity are protected in the AI space. Our service secures your data, keeps prompts anonymous, and prevents unauthorized access. Perfect for users who value security.
I am extremely excited and happy about ZeroTrusted's launch today on Product Hunt! 🚀 Being part of this journey and witnessing our idea transform into a great product for data privacy is beyond rewarding. A huge shoutout to everyone who made this dream a reality. ZeroTrusted is here to revolutionize how we protect our digital conversations and data with intelligence and ease. Can't wait to dive into the discussions and see how ZeroTrusted empowers each of you. Let's make the digital world a safer place, together!
@sidraref congrats on the launch! Seems like something set up as a compliance tool for companies that interact with LLMs. Are you thinking of it that way?
@frank_denbow, first of all, thank you for your review. I believe we need to do a better job of highlighting all our value propositions in the hero section.
You are right; our product assists companies with their compliance needs.
To address some of the questions you raised:
1) ZeroTrusted.ai acts as middleware between users and Large Language Models (LLMs) through our secure chat or API.
2) There's no need for you to have a separate account or key for each LLM. Instead, we provide our own keys, allowing access without revealing your identity to the LLMs.
3) Your point about the potential exposure of scrubbed data in the event of a breach at a third-party LLM's network is partially correct. However, your identity will not be linked to this data. This scenario is particularly critical for both individual users and businesses.
4) We offer features that maintain context when sanitizing sensitive data.
Example #1
Suppose a medical expert needs to process the following input:
"Create a summary of this patient's diagnosis: Patient Name: Paul Smith Date of Birth: 12/08/1987 SSN: 666-555-5555 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 434444444"
Before we pass it on to any LLM, we'll alter the sensitive details (which would otherwise be PHI compliance violations) and transform the prompt into:
"Create a summary of this patient's diagnosis: Patient Name: John Doe Date of Birth: 01/01/1970 SSN: 123-45-6789 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 987654321"
This way, we preserve the context while ensuring that your PHI compliance is met and sensitive data isn't exposed to the LLM.
After getting a response from the LLM, we revert the swapped values back to your originals while preserving the context of the result.
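The swap-and-restore flow described above can be sketched in a few lines. This is a toy illustration only, assuming a simple regex-based SSN detector and a fixed placeholder; ZeroTrusted.ai's actual detection pipeline is not public, and the names `sanitize`/`restore` are our own for this example.

```python
import re

# Fixed placeholder value substituted for any detected SSN (assumption).
FAKE_SSN = "123-45-6789"
# Toy pattern covering both 3-2-4 and 3-3-4 digit groupings.
SSN_RE = re.compile(r"\b\d{3}-\d{2,3}-\d{4}\b")

def sanitize(prompt):
    """Swap SSNs for the placeholder and remember the originals."""
    originals = SSN_RE.findall(prompt)
    return SSN_RE.sub(FAKE_SSN, prompt), originals

def restore(response, originals):
    """Put the original SSNs back into the LLM's response."""
    for original in originals:
        response = response.replace(FAKE_SSN, original, 1)
    return response
```

Only the sanitized prompt ever leaves the middleware; the `originals` list stays on our side and is used to rehydrate the response.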
We will be adding features that will allow RLHF to learn and adjust to customer preferences.
Example #2:
If a customer (individual or corporate) submits legal content containing sensitive information, our "Fictionalize" feature replaces this data with fictional values. This preserves the context while protecting the actual sensitive information. The original data is restored once a response is received.
Hope this helps. Please let us know if there are any more questions you would like us to address.
Once again, we appreciate your feedback, and we hope you can benefit from the privacy and security solutions we provide.
Glad to have come across an LLM tool with a focus on privacy. But I have one question in mind, and I hope @femitfash will share more: how does the platform keep data secure and accurate?
Hello @adams_parker. Unlike other LLM platforms: 1) we don't store any history
2) we don’t track your data
3) we also don't divulge your username/email to any LLM
4) we use advanced encryption techniques (in addition to TLS) to ensure your searches are not exposed to gateway servers.
@adams_parker In addition to Femi's point 3), we filter out your personal, healthcare, and financial information before sending anything to an LLM.
For example, if you pass in the prompt: "Hi ChatGPT, my name is Adams Parker. My credit card number is 1234-567-8911. Give me a payment integration code to add to my site."
We'll first sanitize the critical information: "Hi ChatGPT, my name is John Doe. My credit card number is 1111-111-0000. Give me a payment integration code to add to my site."
This way, the sanitized prompt prevents your real data from being passed to the LLM, and once we receive a response we return it to you with your PII swapped back in.
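End to end, the middleware round trip above amounts to: sanitize, forward, restore. The sketch below assumes a stub `call_llm` in place of the real API call, and a pattern that only matches the sample card format; both are illustrative assumptions, not ZeroTrusted.ai's implementation.

```python
import re

# Toy pattern matching the 4-3-4 sample card format above (assumption).
CARD_RE = re.compile(r"\b\d{4}-\d{3}-\d{4}\b")
FAKE_CARD = "1111-111-0000"

def call_llm(prompt: str) -> str:
    # Stand-in for the real LLM API call; echoes the prompt it received.
    return f"Echo: {prompt}"

def protected_query(prompt: str) -> str:
    originals = CARD_RE.findall(prompt)
    safe_prompt = CARD_RE.sub(FAKE_CARD, prompt)
    response = call_llm(safe_prompt)   # only sanitized text leaves
    for original in originals:         # swap real values back in
        response = response.replace(FAKE_CARD, original, 1)
    return response
```

The caller sees their own card number in the response, but the LLM only ever saw the placeholder.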
Congratulations on your launch. It seems like a great solution for data privacy, but how do you ensure compliance with regulations like PCI, GDPR, and NIST with ZeroTrusted?
@amirsohail9 Terrific question. Envision handling data that must adhere to strict compliance standards, necessitating the protection of all sensitive information from Large Language Models (LLMs). This is precisely where our sanitization module steps in, offering capabilities to either mask the actual sensitive details or replace them with fictional data. Our model recognizes a wide range of sensitive data types covered by GDPR, PHI, PII, PCI, and more, benefiting from ongoing training with compliance-related data from NIST.
By utilizing our module, you effectively meet your compliance obligations while streamlining the process. Beyond this, our platform includes a suite of additional features designed to enhance prompt optimization, reduce inaccuracies, and facilitate data injection via our innovative LLM ensemble approach, among others. Please note, access to these advanced functionalities is restricted to subscribers of our business plan.
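The two behaviors described above, masking versus fictionalizing, can be sketched as a small configurable sanitizer. The field names, patterns, and fictional values below are illustrative assumptions for this example, not the module's real configuration.

```python
import re

# Illustrative detectors; a real module would cover many more data types.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}
# Plausible stand-in values used in "fictionalize" mode (assumption).
FICTIONAL = {"ssn": "123-45-6789", "email": "jane.doe@example.com"}

def sanitize(text: str, mode: str = "mask") -> str:
    """Mask sensitive fields, or swap them for fictional values."""
    for field, pattern in PATTERNS.items():
        if mode == "mask":
            text = pattern.sub("[REDACTED]", text)
        elif mode == "fictionalize":
            text = pattern.sub(FICTIONAL[field], text)
    return text
```

Masking is simpler and safer; fictionalizing preserves more context for the LLM, which matters when the response depends on the shape of the data.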