
Enkrypt AI Unveils LLM Safety Leaderboard to Enable Enterprises to Adopt Generative AI Safely and Responsibly

  • Revolutionizing AI Security: Enkrypt AI debuts the ground-breaking LLM Safety Leaderboard at the RSA Conference, setting the benchmark for transparency and security in AI technology.
  • Smart Choices, Safer Tech, Faster Adoption: With the LLM Safety Leaderboard, enterprises can swiftly identify the safest and most reliable AI models for their needs by understanding each model’s vulnerabilities and trustworthiness.
  • Ethics and Compliance Front and Center: Enkrypt AI’s latest innovation allows AI engineers to make informed decisions to uphold the highest ethical and regulatory standards, building a future where AI is safe for all.
Boston, Massachusetts – May 6, 2024 – The rapid adoption of Generative AI, including in regulated settings, has made the security and safety of Large Language Models (LLMs) a key concern among cybersecurity professionals. Policy-makers and security professionals around the world continue to seek new technology to help mitigate the risks of Generative AI. For example, just days ago the US Department of Homeland Security appointed a board to advise on the role of artificial intelligence in critical infrastructure.
“LLMs are increasingly seen as potential back-office powerhouses for enterprises, processing data and enabling faster front-office decision-making. Consider a fintech firm where an LLM-powered application plays a key role in rejecting a loan application from a person of color without clear explanation. This raises concerns about implicit biases, as LLMs often reflect societal inequities present in their internet-sourced training data. Moreover, cases like Google’s LLM appearing ‘woke’ highlight the risks of overcorrecting these biases. How safe is Anthropic’s Claude 3 model? Is Cohere’s Command R+ LLM really ready for enterprise use? These scenarios underscore the urgent need for careful checks on these models to prevent exacerbating societal inequities and causing harm.”
At the highly anticipated RSA conference, Enkrypt AI, the leader in securing Generative AI technologies, will introduce its latest innovation, the LLM Safety Leaderboard. This product is part of Enkrypt AI’s comprehensive Sentry suite, designed to empower enterprises to deploy LLMs with heightened security and peace of mind.
The LLM Safety Leaderboard will provide essential insights into the vulnerabilities and hallucination risks of various LLMs, enabling technology teams to make informed decisions about which models best suit their specific needs. The tool aims to educate and raise awareness about the relative strengths and potential weaknesses of different LLMs, so that AI engineers can weigh each model’s trade-offs before deployment.
Highlights of the LLM Safety Leaderboard include:

  • Comprehensive Vulnerability Insights: detailed evaluations of potential security risks, including data leakage, privacy breaches, and susceptibility to cyber-attacks.
  • Ethical and Compliance Risk Assessment: tests for biases, toxicity, and compliance with ethical standards and regulatory requirements, ensuring models align with enterprise and brand values.
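To make the idea concrete, here is a minimal, hypothetical sketch in Python of how per-category risk scores might be rolled up into a single ranking of models. The model names, risk categories, scores, and weights below are illustrative assumptions, not Enkrypt AI’s actual data or scoring methodology.

```python
# Hypothetical sketch: combining per-category risk scores into one ranking.
# All model names, scores, and weights are illustrative assumptions --
# not Enkrypt AI's actual leaderboard data or scoring method.

# Lower score = lower risk, on a 0-1 scale per category.
RISK_SCORES = {
    "model-a": {"jailbreak": 0.12, "bias": 0.08, "toxicity": 0.05, "malware": 0.10},
    "model-b": {"jailbreak": 0.30, "bias": 0.15, "toxicity": 0.09, "malware": 0.22},
    "model-c": {"jailbreak": 0.05, "bias": 0.11, "toxicity": 0.04, "malware": 0.07},
}

# Example weights reflecting how much each risk matters to one deployment.
WEIGHTS = {"jailbreak": 0.4, "bias": 0.3, "toxicity": 0.2, "malware": 0.1}

def composite_risk(scores: dict[str, float]) -> float:
    """Weighted average of per-category risk scores."""
    return sum(WEIGHTS[cat] * score for cat, score in scores.items())

if __name__ == "__main__":
    # Rank models from safest (lowest composite risk) to riskiest.
    for model, scores in sorted(RISK_SCORES.items(), key=lambda kv: composite_risk(kv[1])):
        print(f"{model}: composite risk = {composite_risk(scores):.3f}")
```

A real leaderboard would derive such scores from red-team test batteries rather than fixed numbers, but the ranking step itself reduces to a weighted comparison like this.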
The LLM Safety Leaderboard is a new component of Enkrypt’s Sentry suite, which includes Sentry Red Team, Sentry Guardrails, and Sentry Compliance. This suite offers a holistic approach to managing and securing LLMs, aligning with the strictest standards for privacy, security, and compliance within the enterprise environment.
The announcement comes as a new preprint paper by Enkrypt AI, “Increased LLM Vulnerabilities from Fine-tuning and Quantization”, found that two practices commonly used to deploy LLMs in business settings – fine-tuning and quantization – increase the models’ susceptibility to jailbreaking. Implementing an external guardrails platform such as Enkrypt’s Sentry Guardrails, however, successfully mitigated these vulnerabilities: on one model, Sentry Guardrails delivered a 9x reduction in vulnerability to jailbreaking attacks.
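As a back-of-the-envelope illustration of what a “9x reduction” means in practice, the sketch below computes a jailbreak attack success rate (ASR) with and without a guardrail. The attempt and success counts are invented for illustration; they are not the figures from the paper.

```python
# Hypothetical sketch of measuring a guardrail's effect on jailbreak risk.
# Attack success rate (ASR) = successful jailbreaks / total attempts.
# The counts below are made up for illustration, not the paper's data.

def attack_success_rate(successes: int, attempts: int) -> float:
    return successes / attempts

baseline_asr = attack_success_rate(successes=270, attempts=1000)  # no guardrails
guarded_asr = attack_success_rate(successes=30, attempts=1000)    # with guardrails

reduction = baseline_asr / guarded_asr
print(f"baseline ASR: {baseline_asr:.1%}")  # 27.0%
print(f"guarded ASR:  {guarded_asr:.1%}")   # 3.0%
print(f"reduction:    {reduction:.0f}x")    # 9x, in the spirit of the quoted result
```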
Sahil Agarwal, CEO of Enkrypt AI, said: “With the launch of the LLM Safety Leaderboard, we are enhancing our commitment to enabling the safe, secure, and responsible use of generative AI in the enterprise. This tool will serve as a critical resource for organizations aiming to navigate the complexities of AI implementation with full confidence in their security posture.”
Prashanth Harshangi, CTO of Enkrypt AI, added: “In the last two quarters, our team has been solely focused on generative AI safety, making rapid progress with our Sentry Suite, which comprises three key components: Sentry Red Team, Sentry Guardrails, and Sentry Compliance. With the LLM Safety Leaderboard, we are proud to offer a product that not only identifies potential risks but also empowers businesses to proactively manage and mitigate these challenges, enabling informed and faster decision-making.”
Ends 
Notes to the editor
For further information please contact the Enkrypt AI press office: Bilal Mahmood on b.mahmood@stockwoodstrategy.com or +44 (0) 771 400 7257
About Enkrypt AI
Enkrypt AI, co-founded by Yale PhDs Sahil Agarwal and Prashanth Harshangi, is pioneering the safe adoption of Generative AI within enterprises. With an innovative all-in-one platform, Enkrypt AI is revolutionizing how Large Language Models (LLMs) are integrated and managed, addressing critical needs for reliability, security, data privacy, and compliance in a unified solution.
Used by mid- to large-sized enterprises in industries including finance and life sciences, Enkrypt AI’s guardrails offer a proactive approach to AI security, fostering trust and efficiency in AI implementations from chatbots to automated reporting. Enkrypt AI sits between users and AI models, offering a variety of safety and security layers.
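A minimal sketch of this sits-between-users-and-models pattern follows, assuming hypothetical check functions and a stubbed model call; this is not Enkrypt AI’s actual API, only the general shape of an input/output guardrail layer.

```python
# Minimal sketch of a guardrail layer between users and an LLM.
# The check functions and `call_model` stub are hypothetical placeholders,
# not Enkrypt AI's actual implementation.

BLOCKED_PATTERNS = ["ignore previous instructions", "system prompt"]

def prompt_is_safe(prompt: str) -> bool:
    """Screen the inbound prompt before it reaches the model."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def response_is_safe(response: str) -> bool:
    """Screen the model's output before it reaches the user."""
    return "BEGIN LEAKED DATA" not in response  # stand-in for a real detector

def call_model(prompt: str) -> str:
    """Stub for the underlying LLM call."""
    return f"(model response to: {prompt})"

def guarded_completion(prompt: str) -> str:
    if not prompt_is_safe(prompt):
        return "Request blocked by input guardrail."
    response = call_model(prompt)
    if not response_is_safe(response):
        return "Response withheld by output guardrail."
    return response

if __name__ == "__main__":
    print(guarded_completion("Summarize this quarter's loan approvals."))
```

Production guardrails use trained classifiers and policy engines rather than string matching, but the interception points, before the model sees the prompt and before the user sees the output, are the same.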
Enkrypt AI stands apart by merging threat detection, privacy, and compliance into a comprehensive toolkit, poised to become the definitive Enterprise Generative AI platform for an evolving regulatory landscape. For more information please visit https://www.enkryptai.com/ or follow via LinkedIn, X, Instagram, or YouTube.
