iAV - Antelope Valley Digital Magazine
CONTINUATION FROM PREVIOUS PAGE
that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.
“A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language,” said Arvind Narayanan, a computer scientist at Princeton University and the paper’s senior author.
The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it.

Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in a way that a dictionary definition would be incapable of.
For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
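To make that idea concrete, here is a minimal, purely illustrative sketch of how “closeness” is usually measured. The three-dimensional vectors below are invented for the example; real embeddings such as GloVe or word2vec assign each word hundreds of dimensions learned from how words co-occur across large text collections.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: close to 1.0 when two vectors point the same way.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hand-made toy vectors, purely to illustrate the geometry involved.
vectors = {
    "rose":     np.array([0.9, 0.1, 0.2]),
    "tulip":    np.array([0.8, 0.2, 0.1]),
    "maggot":   np.array([0.1, 0.9, 0.3]),
    "pleasant": np.array([0.9, 0.2, 0.1]),
    "awful":    np.array([0.2, 0.8, 0.4]),
}

for word in ("rose", "tulip", "maggot"):
    print(word,
          "-> pleasant:", round(cosine(vectors[word], vectors["pleasant"]), 2),
          "awful:",       round(cosine(vectors[word], vectors["awful"]), 2))
```

In real embeddings the same arithmetic, applied to vectors learned from billions of words, produces the flower and insect clustering described above.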
The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.
And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
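The measurement behind findings like these can be sketched, roughly, as a differential association score: how much closer, on average, one set of target words sits to one attribute set than to another. The sketch below is a deliberately simplified stand-in for the paper’s test and omits its effect-size normalisation and significance checks; the random vectors merely take the place of real embeddings.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, attrs_a, attrs_b):
    # How much closer word vector w is to attribute set A than to set B.
    return (np.mean([cosine(w, a) for a in attrs_a])
            - np.mean([cosine(w, b) for b in attrs_b]))

def differential_association(targets_x, targets_y, attrs_a, attrs_b):
    # Positive: X-words lean toward A and Y-words toward B; near zero: no bias.
    return (sum(association(x, attrs_a, attrs_b) for x in targets_x)
            - sum(association(y, attrs_a, attrs_b) for y in targets_y))

# Illustrative stand-ins: random vectors, so the score should hover near zero.
rng = np.random.default_rng(0)
group_x    = [rng.normal(size=50) for _ in range(4)]  # e.g. one set of names
group_y    = [rng.normal(size=50) for _ in range(4)]  # e.g. another set of names
pleasant   = [rng.normal(size=50) for _ in range(4)]
unpleasant = [rng.normal(size=50) for _ in range(4)]

print(differential_association(group_x, group_y, pleasant, unpleasant))
```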
These biases can have a profound impact on human behavior. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.
The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.
Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”
Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.
“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”
However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.
“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”