Why China’s AI Leadership Could Result In Discrimination Against ‘US’?
Artificial Intelligence has huge potential economic benefits and equally significant societal impacts. Some estimate that the annual GDP (Gross Domestic Product) contribution from AI could be as high as $130bn a year by 2030. Similarly, some suggest that between 30% and 40% of all jobs could be at least 'augmented' by AI and, particularly where those jobs are repetitive, replaced entirely by the technology.
There are a number of potentially important considerations as AI projects and solutions roll out. Bias in AI systems is just one. Bias is an 'unfair' output produced as a result of the calculations performed by an AI system. There are many examples of 'bias' in AI, but one memorable exemplar has been provided by Joy Buolamwini, a researcher at MIT who turned racial bias in AI systems into the subject of her PhD research. For her purposes, she defined bias as 'having practical differences in gender classification error rates between groups.'
Essentially, what Buolamwini's research produced was clear evidence that AI systems performing facial recognition systematically got it wrong for some ethnic groups.
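To make that definition concrete, here is a minimal Python sketch of how bias in Buolamwini's sense can be measured: the gap in classification error rates between demographic groups. The records below are entirely hypothetical and exist only to illustrate the calculation.

```python
# A minimal, hypothetical sketch of Buolamwini's bias measure:
# the difference in classification error rates between groups.
from collections import defaultdict

# Each record: (demographic group, true label, label the system predicted).
# The data here is invented purely to illustrate the calculation.
predictions = [
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("darker-skinned women", "female", "male"),    # misclassified
    ("darker-skinned women", "female", "female"),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, predicted in predictions:
    counts[group][1] += 1
    if predicted != truth:
        counts[group][0] += 1

for group, (errors, total) in counts.items():
    print(f"{group}: error rate {errors / total:.0%}")

# The system is 'biased' in this sense when the per-group error rates
# differ materially, even if its overall accuracy looks respectable.
```

The same style of per-group evaluation applies to any classifier, not just facial recognition.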
Algorithmic bias comes from the most fundamental aspects of AI projects. AI systems are designed to meet the priorities, and therefore sometimes the prejudices, even the subconscious prejudices, of those who program them.
Imagine, for example, an AI system trained to assign jail time to people convicted of crimes. The algorithm is 'trained' on the history of all crimes in a particular state and the corresponding sentences their perpetrators received.
In general, as a result of human bias, women receive half the jail time of men who committed the same crime. Put simply, there is systematic bias against men in the court system, and the data this AI system is trained on reflects that imbalance. Any AI algorithm trained on that data would be equally biased against men, assigning women roughly half the jail time of men whose circumstances it was asked to adjudicate.
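As a sketch of how that replication happens, the hypothetical Python example below 'trains' a trivially simple sentencing model: it just learns the average historical sentence for each combination of crime and gender. The case records are invented; the point is that a model fitted to biased history reproduces the bias when asked to sentence new, otherwise identical, defendants.

```python
# Hypothetical sketch: a 'sentencing model' that learns the average
# sentence per (crime, gender) pair from historical records. Because the
# history encodes a bias (women receiving ~half the jail time of men for
# the same crime), the trained model reproduces that bias.

historical_cases = [
    {"crime": "burglary", "gender": "male",   "sentence_months": 24},
    {"crime": "burglary", "gender": "male",   "sentence_months": 26},
    {"crime": "burglary", "gender": "female", "sentence_months": 12},
    {"crime": "burglary", "gender": "female", "sentence_months": 13},
]

def train(cases):
    """Learn the average sentence for each (crime, gender) from history."""
    totals = {}
    for case in cases:
        key = (case["crime"], case["gender"])
        months, count = totals.get(key, (0, 0))
        totals[key] = (months + case["sentence_months"], count + 1)
    return {key: months / count for key, (months, count) in totals.items()}

model = train(historical_cases)

# Two new defendants, identical circumstances, different gender:
print(model[("burglary", "male")])    # 25.0 months
print(model[("burglary", "female")])  # 12.5 months -- the bias is replicated
```

More sophisticated models do not escape this on their own: if gender, or a proxy for it, is present in the training data, the historical disparity is exactly what they learn to predict.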
The problem comes as we explore the ubiquity of AI solutions in our world. AI is already used, for example, by most Fortune 500 companies to filter applicants before they are seen by a human. AI is being trialled at airports in new schemes to avoid the need for passports, using facial and 'gait' recognition technology to identify people without the need for paperwork.
As things stand, the economic benefits of AI could be off limits to some subsets of our society.
Chinese Leadership in Artificial Intelligence
China is winning the Cold War over the development of Artificial Intelligence. Last year it produced more research papers than the US, invested more in AI R&D than the US and trained more citizens in AI skills than the US.
China's more authoritarian approach to government and economic growth, its huge population (which generates reams of data from billions of smartphones every hour) and a clear, top-down strategy with a stated goal of being number one in the field are just some of the reasons China has streaked ahead in AI in recent years, and it seems set to ultimately achieve its goal of dominating this world-changing technology.
What could bias in Chinese AI look like?
It's hard to tell exactly what systematic bias might look like in Chinese AI systems. Indeed, if you asked the Chinese software engineers, psychologists and mathematicians who design those systems, they might well say that the systems were not biased, because they are not aware of their own prejudices.
China is, however, different in two notable ways from what we might describe as the Western world. First, it has a more authoritarian government, with a clearer direction and a preparedness to put the wishes of its citizens second to its own requirements.
Secondly, and relatedly, along with other Eastern cultures shaped by traditions such as Confucianism and Buddhism, it favors the interests of collectives and groups over the wishes of the individual.
A Chinese AI designed to write books, for example, would in all probability not write the kind of story a John Wayne film told: a bold, independent lone figure doing what he thought was right in the Wild West.
How could systematic bias affect our lives?
Artificial Intelligence is being rolled out in every industry and every component of business. AI systems now review job applicants' social feeds before they come in for interview; some actually perform first-line interviews with job hunters, examining applicants' facial 'micro movements' and comparing them with those of the best performers in the organization to see if there is a match. AI predicts where crimes are going to take place and chooses the most relevant TV shows, music and podcasts to show you on your internet services. Bias, where it exists, will affect lives materially.
At the very least, the situation calls for governance and for standards of proof that systems are not biased. Governments should instigate this now, given the clear discriminatory bias already evident in the examinations of AI systems carried out so far.
If there is light at the end of the tunnel on the subject, it is that, this time, unusually, it is white males who are being discriminated against. It is possible that outrage at the effects of discrimination, since it is directed at a group which does not usually experience it, will receive a fairer and more immediate hearing. But, of course, it is my own bias that makes me think that.
Image credit: Canva