At the 2023 Defcon hacker conference in Las Vegas, prominent AI companies partnered with algorithmic integrity and transparency groups to run a red-teaming exercise against generative AI platforms. The exercise was backed by the US government as part of an effort to bring transparency and scrutiny to these critical systems.

Building on that exercise, the ethical AI and algorithmic assessment nonprofit Humane Intelligence has announced a call for participation with the US National Institute of Standards and Technology (NIST). The initiative invites US residents to take part in a red-teaming effort to evaluate AI office productivity software, beginning with a qualifying round run through NIST's ARIA (Assessing Risks and Impacts of AI) program.

The goal of the red-teaming effort is to democratize the ability to evaluate the effectiveness and ethics of generative AI technologies. According to Theo Skeadas, chief of staff at Humane Intelligence, the average person lacks the knowledge to determine whether an AI model is fit for purpose. By involving developers and the general public in the evaluation process, the initiative aims to empower users to assess for themselves whether AI models meet their needs.

Participants who pass the qualifying round will advance to an in-person red-teaming event at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. There, participants will be split into a red team that attempts to attack the AI systems and a blue team that focuses on defense. The NIST AI 600-1 profile, part of NIST's AI Risk Management Framework, will serve as the rubric for measuring outcomes that violate the expected behavior of the systems.

Rumman Chowdhury, founder of Humane Intelligence, highlights the importance of the NIST partnership in advancing the field of AI evaluation. The ARIA team draws on structured user feedback and expertise in sociotechnical test and evaluation to promote rigorous, scientific evaluation of generative AI. Beyond the NIST partnership, Humane Intelligence plans to announce further collaborations with US government agencies, international governments, and NGOs to encourage transparency and accountability in AI development.

Skeadas emphasizes the importance of involving a broader community in the testing and evaluation of AI systems: not only programmers, but also policymakers, journalists, civil society members, and non-technical people should play a role in identifying problems and biases in AI models. "Bias bounty challenges," which reward individuals for finding problems and inequities in AI systems, are one way to incentivize that participation and promote greater transparency and accountability in the development process.

Red-teaming exercises like the one conducted at Defcon and the upcoming effort with NIST play a crucial role in evaluating the security, resilience, and ethics of generative AI technologies. Involving a diverse range of participants and organizations in the evaluation process helps foster transparency and accountability in the development of AI systems.
