OpenAI creates Safety and Security Committee for board

The OpenAI Board has established a Safety and Security Committee to make recommendations on safety and security matters across all OpenAI projects. The committee's first task is to evaluate and further develop OpenAI's existing processes and safeguards over the next 90 days. At the end of that period, it will present its recommendations to the full Board for review, after which OpenAI will publicly share any process changes it adopts.

Security leaders share their insights

Stephen Kowski, Field CTO at SlashNext Email Security+:

“The establishment of a new AI safety committee by OpenAI and the commencement of training for their next major AI model come as no surprise. This aligns with the recent global commitment to responsible AI development made in Seoul. With governments worldwide focusing primarily on AI governance, OpenAI’s partners, like Microsoft, have also joined international AI safety initiatives. Hence, it is imperative for OpenAI and its counterparts to implement such controls and oversight to continue operating and innovating in the current landscape. By taking proactive steps, OpenAI can influence the development of these controls, making it a compelling move for them to initiate this form of governance independently.”

Narayana Pappu, CEO at Zendata:

“The announcement of a new AI model by OpenAI elevates them to the level of institutions like Google, which established a safety and security board back in 2019. While AI is a nascent field, other industries have similar governing bodies, such as institutional review boards overseeing medical research involving human subjects, which hold significant importance. The inclusion of non-technical and external personnel in their structures is pertinent to AI security and safety and should be considered by OpenAI in their future endeavors.”

John Bambenek, President at Bambenek Consulting:

“One concerning aspect is the apparent lack of external involvement indicated in this announcement. The committee seems to be composed solely of OpenAI staff or executives. This could potentially lead to an echo chamber effect that may overlook risks associated with more advanced models.”

Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace:

“As AI advancements continue at a rapid pace, it is vital to see similar commitments towards data science and data integrity. Ensuring data integrity, testing, evaluation, and verification, along with accuracy benchmarks, are critical aspects in the appropriate and efficient utilization of AI. Promoting diversity of thought within AI teams is also crucial in combating bias and preventing harmful training and output. Above all, AI must be used responsibly, safely, and securely. The risks posed by AI often lie in its application.”
