In recent years, Generative AI has proliferated at an unprecedented pace. While this powerful technology creates remarkable opportunities for businesses and individuals alike, like any technology it also carries risks. Because adoption has been so rapid, many vendors have simply not yet addressed the vulnerabilities still inherent in their platforms. At Vendict, we use an AI solution built on our own proprietary NLP technology, augmented with generative capabilities developed by OpenAI. With decades of collective experience in the AI field, the Vendict team is fully aware of the risks associated with these tools and has taken deliberate steps to mitigate them as far as possible:
We access OpenAI's models through Microsoft Azure, known for its high standard of private data protection and compliance. Azure does not have access to any customer data, even for training or debugging purposes.
We completely anonymize any data transmitted to Azure, including company names, product names, locations, and personal names.
We minimize the transmitted data.
To generate an answer to a question, Vendict transmits only the question itself and the few most relevant library entries.
When rephrasing a sentence, Vendict transmits only the question and the current answer.
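The anonymization and minimization steps above can be sketched roughly as follows. This is an illustrative sketch only: the entity list, placeholder scheme, and payload shape are assumptions for the example, not Vendict's actual implementation.

```python
# Illustrative sketch of pre-transmission anonymization and data
# minimization. Entity names, placeholders, and payload shape are
# hypothetical, not Vendict's production pipeline.

# Hypothetical mapping of sensitive terms to neutral placeholders.
SENSITIVE_TERMS = {
    "Acme Corp": "[COMPANY]",    # company name
    "AcmeShield": "[PRODUCT]",   # product name
    "Tel Aviv": "[LOCATION]",    # location
    "Jane Doe": "[PERSON]",      # personal name
}

def anonymize(text: str) -> str:
    """Replace known sensitive terms with neutral placeholders."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(term, placeholder)
    return text

def build_payload(question: str, library_entries: list[str], top_k: int = 3) -> dict:
    """Transmit only the question and the few most relevant entries."""
    return {
        "question": anonymize(question),
        "context": [anonymize(e) for e in library_entries[:top_k]],
    }

payload = build_payload(
    "Does Acme Corp encrypt data at rest?",
    ["AcmeShield encrypts all data at rest with AES-256.",
     "Jane Doe is the security lead in Tel Aviv."],
)
# payload["question"] → "Does [COMPANY] encrypt data at rest?"
```

In practice a real pipeline would use named-entity recognition rather than a fixed term list, but the principle is the same: nothing identifying leaves the platform, and only the minimum context needed to answer the question is sent.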
AI hallucinations, errors that superficially appear to be correct information, are one of the biggest challenges facing the GenAI industry. The problem becomes even more severe in the GRC space, where accuracy is crucial. Vendict's generative features create hyperlinked citations for the answers they produce, ensuring that every response is transparently backed by hard data.
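Citation-backed answers of this kind can be sketched in miniature as below. The data structure, URL, and link format here are illustrative assumptions, not Vendict's actual schema; the point is simply that each generated answer carries a hyperlink back to the library entry it was drawn from.

```python
# Hypothetical sketch of attaching hyperlinked citations to a
# generated answer, so every claim traces back to a source entry.

from dataclasses import dataclass

@dataclass
class LibraryEntry:
    entry_id: str  # identifier of the source answer in the library
    url: str       # deep link to that entry

def cite(answer: str, sources: list[LibraryEntry]) -> str:
    """Append numbered markdown hyperlinks for each source entry."""
    links = " ".join(f"[{i + 1}]({s.url})" for i, s in enumerate(sources))
    return f"{answer} {links}"

entry = LibraryEntry("soc2-enc", "https://example.com/library/soc2-enc")
cited = cite("Data at rest is encrypted.", [entry])
# → "Data at rest is encrypted. [1](https://example.com/library/soc2-enc)"
```

A reviewer can then click through any citation and verify the claim against the underlying library entry before approving the answer.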
While we believe Generative AI is a huge asset for our clients, using Vendict effectively does not depend on this feature. All Vendict users can opt out of using Generative AI for their compliance work.
Our GenAI saves security and sales teams 20 hours every month. Wanna see?