By 2023, 75 per cent of large organisations will have hired artificial intelligence (AI) behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk, Gartner Inc. predicted on Thursday.
Bias based on race, gender, age or location, as well as bias rooted in the specific structure of the training data, have long been risks in training AI models.
“New tools and skills are needed to help organisations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk. More and more data and analytics leaders and chief data officers (CDOs) are hiring ML (machine learning) forensic and ethics investigators,” Jim Hare, Research Vice President at Gartner, said in a statement.
Sectors such as finance and technology are increasingly deploying AI governance and risk-management tools and techniques to manage reputation and security risks.
In addition, organisations such as Facebook, Google, Bank of America, MassMutual and NASA are hiring, or have already appointed, AI behavior forensic specialists who primarily focus on uncovering undesired bias in AI models before they are deployed.
“While the number of organisations hiring ML forensic and ethics investigators remains small today, that number will accelerate in the next five years,” added Hare.