Washington: Scientists have developed a new artificial intelligence (AI) tool for detecting unfair discrimination, such as on the basis of race or gender. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity has been a long-standing concern of civilised societies. However, detecting such discrimination in decisions, whether made by human decision makers or by automated AI systems, can be extremely challenging.

"Artificial intelligence systems — such as those involved in selecting candidates for a job or for admission to a university — are trained on large amounts of data," said Vasant Honavar, a professor at Pennsylvania State University (Penn State) in the US. "But if these data are biased, they can affect the recommendations of AI systems," Honavar said.

He said that if a company has historically never hired a woman for a particular type of job, then an AI system trained on that historical data will not recommend a woman for a new job.

"There's nothing wrong with the machine learning algorithm itself," said Honavar. "It's doing what it's supposed to do, which is to identify good job candidates based on certain desirable characteristics. But since it was trained on historical, biased data, it has the potential to make unfair recommendations," he said.

The team created an AI tool for detecting discrimination with respect to a protected attribute, such as race or gender, by human decision makers or AI systems. "We can minimise gender-based discrimination in salary if we ensure that similar men and women receive similar salaries," said Aria Khademi, a graduate student at Penn State.

The researchers tested their method on various publicly available data, such as income data from the US Census Bureau, to determine whether there is gender-based discrimination in salaries.
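The principle Khademi describes, that similar individuals who differ only in a protected attribute should receive similar outcomes, can be illustrated with a simple matched-pair check. The sketch below is only a toy illustration of that idea, not the team's actual method; the records, field names and `pairwise_gaps` helper are all hypothetical.

```python
# Toy matched-pair check: for records that agree on every non-protected
# attribute, compare average outcomes across the protected attribute.
# Data and field names are illustrative, not the researchers' tool.

records = [
    {"education": "BS", "experience": 5, "gender": "M", "salary": 62000},
    {"education": "BS", "experience": 5, "gender": "F", "salary": 51000},
    {"education": "MS", "experience": 8, "gender": "M", "salary": 80000},
    {"education": "MS", "experience": 8, "gender": "F", "salary": 79000},
]

def pairwise_gaps(records, protected="gender", outcome="salary"):
    """Group records that match on all non-protected attributes and
    report the mean-outcome gap between the two protected-attribute values."""
    groups = {}
    for r in records:
        key = tuple(sorted((k, v) for k, v in r.items()
                           if k not in (protected, outcome)))
        groups.setdefault(key, []).append(r)

    gaps = []
    for key, members in groups.items():
        by_value = {}
        for m in members:
            by_value.setdefault(m[protected], []).append(m[outcome])
        if len(by_value) == 2:  # both protected-attribute values present
            (a, xs), (b, ys) = sorted(by_value.items())
            gap = sum(xs) / len(xs) - sum(ys) / len(ys)
            gaps.append((dict(key), a, b, gap))
    return gaps

for key, a, b, gap in pairwise_gaps(records):
    print(key, f"mean({a}) - mean({b}) = {gap:+.0f}")
```

A persistent negative gap for otherwise-identical pairs is the kind of signal such a tool would flag for further investigation.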
They also tested their method using the New York City Police Department's stop-and-frisk programme data to determine whether there is discrimination against people of colour in arrests made after stops.

"We analysed an adult income dataset containing salary, demographic and employment-related information for close to 50,000 individuals," said Honavar. "We found evidence of gender-based discrimination in salary. Specifically, we found that the odds of a woman having a salary greater than USD 50,000 per year are only one-third of those for a man. This would suggest that employers should look for and correct, when appropriate, gender bias in salaries," he said.

Although the team's analysis of the New York stop-and-frisk dataset, which contains demographic and other information about pedestrians stopped by the New York City police force, revealed evidence of possible racial bias against Hispanic and African American individuals, it found no evidence of discrimination against them on average as a group.

"You cannot correct for a problem if you don't know that the problem exists," said Honavar. "To avoid discrimination on the basis of race, gender or other attributes, you need effective tools for detecting discrimination. Our tool can help with that," he said.
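The salary finding quoted above is an odds ratio: each group's odds are p/(1 - p), where p is the proportion earning over USD 50,000, and the ratio compares women to men. The counts below are made-up round numbers chosen only so the arithmetic yields a ratio near one-third; they are not the actual Census figures.

```python
# Hypothetical counts (not the real Census data) showing how an
# odds ratio of about one-third arises.
women_over_50k, women_total = 1000, 10000   # p_w = 0.10
men_over_50k,   men_total   = 2500, 10000   # p_m = 0.25

def odds(successes, total):
    """Odds = p / (1 - p) for a group's success proportion p."""
    p = successes / total
    return p / (1 - p)

ratio = odds(women_over_50k, women_total) / odds(men_over_50k, men_total)
print(f"odds ratio (women vs men): {ratio:.2f}")  # → 0.33
```

Here the women's odds are 0.1/0.9 ≈ 0.11 and the men's are 0.25/0.75 ≈ 0.33, so the ratio is one-third, matching the magnitude Honavar describes.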