AI ethics should be hard-coded, like security by design

Companies need to think about ethics from the outset, when they first begin conceptualizing and developing artificial intelligence (AI) products. This helps ensure that AI tools can be implemented in a responsible and unbiased manner.

The same approach is already considered essential for cybersecurity products, where a “security by design” development principle requires risks to be assessed and security to be hard-coded from the outset, avoiding piecemeal fixes and costly retrofitting at a later stage.

This mindset should now be applied to AI product development, said Kathy Baxter, chief architect of the AI ethics practice at Salesforce.com, underscoring the need for organizations to adhere to fundamental development standards for AI ethics.

She noted that there are many lessons to be learned from the cybersecurity industry, which has evolved over the decades since the first malware emerged in the 1980s. For an industry that didn’t even exist before then, cybersecurity has since changed the way companies protect their systems, with an emphasis on identifying risks from the outset and developing core standards and regulations for organizations to implement.

As a result, most organizations today have put in place basic security standards that all stakeholders, including employees, must adhere to, Baxter said in an interview with ZDNet. For example, all new employees at Salesforce.com must undergo an orientation process in which the company outlines what is expected of them in terms of cybersecurity practices, such as adopting strong passwords and using a VPN.

The same was true for ethics, she said, adding that there was an internal team driving this within the company.

There were also resources to help employees assess whether a task or service should be performed based on the company’s ethical guidelines, and to understand where the red lines were, Baxter said. Salesforce.com’s AI-powered Einstein Vision, for example, can never be used for facial recognition, so any member of the sales team who is unaware of this and tries to sell the product for such an implementation would be in violation of company policy.

And just as cybersecurity practices were regularly reviewed and revised to keep pace with the changing threat landscape, the same should be applied to policies related to AI ethics, she said.

This was critical because societies and cultures change over time, and values that were considered relevant ten years ago may no longer be attuned to the views a country’s population holds today, she noted. AI products had to reflect this.

Data a key barrier to tackling AI bias

While policies could reduce some risks of AI bias, other challenges remained, most notably access to data. A lack of volume or variety could lead to an inaccurate representation of an industry or segment of the population.

This has been a major challenge in healthcare, especially in countries such as the US where there was no socialized medicine or government-run healthcare system, Baxter said. When AI models were trained on limited datasets based on a narrow subset of a population, it could affect the delivery of health services and the ability to detect disease for certain groups of people.
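As a rough illustration of the kind of representation gap Baxter describes, the sketch below checks whether any demographic group is badly under-represented in a training set relative to the population a model will serve. The records, field names, and population shares are hypothetical placeholders invented for this example, not real healthcare data or any Salesforce tooling.

```python
# Minimal sketch: audit how well each demographic group is represented
# in a training set before a model is fit. All values are hypothetical.
from collections import Counter

training_records = [
    {"age_band": "18-34", "diagnosis": 1},
    {"age_band": "18-34", "diagnosis": 0},
    {"age_band": "18-34", "diagnosis": 0},
    {"age_band": "35-64", "diagnosis": 1},
    {"age_band": "65+",   "diagnosis": 0},
]

# Assumed share of each group in the population the model will serve.
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

counts = Counter(r["age_band"] for r in training_records)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population -> {flag}")
```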

Salesforce.com, which cannot access or use its customers’ data to train its own AI models, fills the gaps by purchasing data from third-party sources, such as the linguistic data used to train its chatbots, and by tapping synthetic data.
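One common way to fill such gaps with synthetic data, shown here only as a generic sketch and not as a description of Salesforce.com’s actual pipeline, is to oversample an under-represented group by perturbing existing records. The field names below are made up for illustration.

```python
# Minimal sketch: pad an under-represented group with synthetic records
# by jittering numeric fields of existing examples. Purely illustrative.
import random

minority_records = [
    {"group": "65+", "systolic_bp": 142.0, "heart_rate": 78.0},
    {"group": "65+", "systolic_bp": 150.0, "heart_rate": 81.0},
]

def synthesize(records, n_new, noise=0.03):
    """Create n_new synthetic records by adding small relative noise to numeric fields."""
    synthetic = []
    for _ in range(n_new):
        base = random.choice(records)
        synthetic.append({
            key: value * (1 + random.uniform(-noise, noise)) if isinstance(value, float) else value
            for key, value in base.items()
        })
    return synthetic

augmented = minority_records + synthesize(minority_records, n_new=4)
print(len(augmented), "records after augmentation")
```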

Asked about the role regulators play in driving AI ethics, Baxter said that mandating the use of specific metrics could be harmful, as there were still many questions about how “explainable AI” should be defined and implemented.

The Salesforce.com executive is a member of Singapore’s Advisory Council on the Ethical Use of AI and Data, which advises the government on policy and governance related to the use of data-driven technologies in the private sector.

Referring to her experience on the council, Baxter said its members quickly realized that even defining “fairness” was complicated, with more than 200 statistical definitions in circulation. Moreover, what was fair to one group would sometimes inevitably be less fair to another, she noted.
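To make the point concrete, the sketch below computes two of the many statistical definitions Baxter alludes to, demographic parity and equal opportunity, on a fabricated set of predictions where satisfying one definition still violates the other.

```python
# Minimal sketch: two common statistical fairness definitions can disagree
# on the very same predictions. All numbers here are fabricated.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1,   1,   0,   0,   1,   0,   0,   0]   # actual outcomes
y_pred = [1,   1,   0,   0,   0,   1,   1,   0]   # model decisions

def positive_rate(group):
    """Demographic parity compares this rate across groups."""
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(y_pred[i] for i in idx) / len(idx)

def true_positive_rate(group):
    """Equal opportunity compares this rate across groups."""
    idx = [i for i, g in enumerate(groups) if g == group and y_true[i] == 1]
    return sum(y_pred[i] for i in idx) / len(idx)

# Both groups receive positive decisions at the same rate (demographic parity holds)...
print("Positive rate:", positive_rate("A"), positive_rate("B"))          # 0.5 and 0.5

# ...yet qualified members of group B are never approved (equal opportunity is violated).
print("True positive rate:", true_positive_rate("A"), true_positive_rate("B"))  # 1.0 and 0.0
```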

Defining “explainability” was also complex, with even machine learning experts liable to misinterpret how a model worked based on predefined explanations, she said. Established policies or regulations should be easily understood by anyone using AI-powered tools across all industries, including field agents or social workers.

Realizing that such issues were complex, Baxter said the Singapore council had determined it would be more effective to establish a framework and guidelines, including toolkits, to help AI users understand and be transparent about their use of AI.

Singapore last month released a toolkit called AI Verify, which allows companies to demonstrate their use of AI in an “objective and verifiable” manner. The move was part of the government’s efforts to increase the transparency of AI implementations through technical and process controls.

Baxter stressed the need to dispel the misconception that AI systems are fair by default simply because they are machines and therefore free from bias. Organizations and governments must invest in efforts to ensure that the benefits of AI are shared equally and that its application meets certain criteria for responsible AI, she said.
