At the 2023 Defcon hacker conference in Las Vegas, AI companies joined with algorithmic integrity and transparency groups to set thousands of attendees loose on generative AI platforms in search of weaknesses in these critical systems. This “red-teaming” exercise, which also had the support of the US government, was a step toward opening these increasingly influential yet opaque systems to outside scrutiny.

Building on the Defcon exercise, the ethical AI and algorithmic assessment organization Humane Intelligence is taking the concept further. The group, together with the US National Institute of Standards and Technology (NIST), recently announced a call for participation inviting any US resident to take part in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software. The qualifying round will be held online and is open to both developers and the general public as part of NIST’s AI challenges program, Assessing Risks and Impacts of AI (ARIA).

The broader aim is to expand the capacity for rigorous testing of the security, resilience, and ethics of generative AI technologies. Theo Skeadas, chief of staff at Humane Intelligence, emphasized the importance of giving individuals the ability to judge whether an AI model is fit for their purposes. By democratizing evaluation, the organization wants users to be able to make informed decisions about the AI systems they interact with every day.

The final stage of the red-teaming event, held at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia, will divide participants into a red team tasked with attacking the AI systems and a blue team working to defend them. Using the AI 600-1 profile, part of NIST’s AI Risk Management Framework, participants will assess whether the red team can produce outcomes that deviate from the systems’ expected behavior. The structured format is meant to bring a more scientific approach to evaluating generative AI technologies.

Rumman Chowdhury, founder of Humane Intelligence and a contractor at NIST’s Office of Emerging Technologies, highlighted the collaborative nature of the ARIA program. Drawing on user feedback and sociotechnical test and evaluation expertise, the initiative aims to move the field toward rigorous scientific evaluation of generative AI. Chowdhury and Skeadas said additional red-teaming collaborations with government agencies, both domestic and international, as well as non-governmental organizations are planned for the near future.

The overarching goal of these red-teaming efforts is to foster a culture of transparency and accountability within the AI industry. By encouraging companies to participate in initiatives like “bias bounty challenges,” where individuals are incentivized to find flaws and inequities in AI models, Humane Intelligence aims to broaden the community involved in testing and evaluating these systems. Policymakers, journalists, civil society members, and people without technical backgrounds are all seen as important stakeholders in ensuring AI technologies are developed and deployed responsibly.
