Businesses in Singapore will now be able to tap a governance testing framework and toolkit to demonstrate their "objective and verifiable" use of artificial intelligence (AI). The move is part of the government's efforts to drive transparency in AI deployments through technical and process checks.
Named A.I. Verify, the new toolkit was developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which administers the country's Personal Data Protection Act.
The government agencies underscored the need for consumers to know that AI systems were "fair, explainable, and safe", as more products and services were embedded with AI to deliver more personalised user experiences or to make decisions without human intervention. Consumers also needed assurance that the organisations deploying such offerings were accountable and transparent.
Singapore already has published voluntary AI governance frameworks and guidelines, with its Model AI Governance Framework currently in its second iteration.
A.I. Verify will now allow market players to demonstrate to relevant stakeholders their deployment of responsible AI through standardised tests. The toolkit is currently available as a minimum viable product, which offers "just enough" features for early adopters to test and provide feedback for further product development.
Specifically, it delivers technical testing against three principles: