AI NEWS: AI Policy, history, ethics & law

Discover our latest curated articles on the policies, ethics and history surrounding AI.

AI Can Write Disinformation Now—and Dupe Human Readers

3rd July 2021

Wired

Joy Xu

At Georgetown University, researchers have been able to use OpenAI’s famed natural language model GPT-3 to create fake news, false narratives and tweets that seem eerily convincing. The model proved especially effective at short social media posts, one of the fastest ways misinformation can spread and go viral on the Internet. For example, one tweet reads: “I don't think it's a coincidence that climate change is the new global warming. They can't talk about temperature increases because they're no longer happening.”


More worrying is that humans are duped by these messages: experiments showed that readers' opinions changed significantly after reading the AI-generated tweets. The implications are heavy: if AI were to play a larger role in disinformation campaigns, it could churn out false content at an alarming rate, bringing with it issues in safety, ethics and law. Bots, deepfakes and tweet-generating models are just some of the recent breakthroughs which, in the wrong hands, can make it harder and harder to discern lies from reality.

When predictive policing backfires

16th June 2021

The Verge

Joy Xu

Predictive policing is the practice of finding patterns in criminal data to identify potential offenders and stop crimes before they happen. Many departments rely on artificial intelligence to derive meaning from huge amounts of data, yet this also highlights the many issues the technology still has: most police departments are clueless as to what exactly the algorithm looks for in potential criminals, and heavily biased data only exacerbates the racial and socioeconomic inequalities that already plague many of the areas where predictive policing is being implemented.


Robert McDaniel lived in one of the most dangerous areas of Chicago, yet had no violent criminal record of his own. In 2013, police officers from the City of Chicago arrived at his house to announce that he was on the “heat list”: a database of people judged likely to be involved in a future shooting. They did not know, however, whether he would be the shooter or the one shot. Treated as both a potential perpetrator and a potential victim, he was watched closely by officials day and night, and his friends and acquaintances grew suspicious of him, wondering whether he was collaborating with or reporting to the police in some way. After years of struggling to clear his name, McDaniel did eventually become involved in a shooting, though not in the way the police expected: former friends shot him precisely because he was on the heat list and constantly trailed by police, which they felt put everyone around him at risk. Predictive policing had backfired, setting off a chain of events that might never have happened had the police not visited McDaniel’s house in the first place.


This illustrates the crucial question of whether relying solely on data, and following its recommendations to the letter, is a good idea at all. Especially when the quality of the available data is subpar, trying to predict criminals' next moves can do more harm than good, sowing mistrust and doubt in communities in ways that only encourage violence. It also perpetuates the idea that law enforcement has no empathy for each person's individual circumstances; the use of artificial intelligence with such heavy implications should be closely monitored, to say the least.

Facial recognition has bled into civil rights 

1st April 2021

MIT Technology Review

Joy Xu

Detroit, January 9th, 2020. Robert Williams was wrongfully arrested for stealing watches because facial recognition software had misidentified him as the culprit. Now, more than a year later, the American Civil Liberties Union and the University of Michigan Law School’s Civil Rights Litigation Initiative have filed a lawsuit on behalf of Williams, claiming that the arrest infringed on his civil rights. “By employing technology that is empirically proven to misidentify Black people at rates far higher than other groups of people,” the lawsuit states, “the DPD denied Mr. Williams the full and equal enjoyment of the Detroit Police Department’s services, privileges, and advantages because of his race or color.”


Using AI-powered technology in criminal justice has thus also become an issue of discrimination: bias easily infiltrates systems that are not robust enough, and over-reliance on software alone, without any human judgment, could widen the chasm between law enforcement and the minorities it targets. Between a rushed investigation, technological errors and the low-quality data fed into the algorithm, pushing AI facial recognition into use before it is ready could have damaging consequences for the livelihoods of many.

This is how we lost control of our faces

14th March 2021

MIT Technology Review

Joy Xu

The applications of facial recognition are becoming increasingly ubiquitous: from telesurveillance to improving search suggestions, ever-larger datasets are being assembled to draw insights from our faces. In recent years, however, a growing issue has been the quality of those datasets: as researchers scramble for more images, obtaining consent and regulating data collection have become tedious tasks that more and more of them gloss over. The result is messier, non-consensual and possibly erroneous data, which has drastic repercussions on a model’s output in practice, such as labelling images with racist stereotypes.

The history of facial recognition is fascinating: what started out as matching images to people and painstakingly verifying the outputs by hand has turned into automatic labelling and attempts to infer ethnicity, gender and other characteristics from a single image.


Read more about the study that presents those findings here: https://arxiv.org/pdf/2102.00813.pdf

Men wear suits, women wear bikinis?

16th February 2021

MIT Technology Review

Joy Xu

It is well known that natural language models can perpetuate racism and bias because of the text they are trained on. Now, researchers at Carnegie Mellon University have found that image-generating algorithms pre-trained in an unsupervised manner (i.e. without humans labelling images) also contain human-like biases. These models learn from images scraped from the internet, which is often full of harmful stereotypes that overrepresent those biases. For instance, images of women, including US Representative Alexandria Ocasio-Cortez, were autocompleted by the algorithm with low-cut tops or bikinis 53% of the time: a startling and eye-opening proportion.


The issue also transcends the methodological differences between models: for both OpenAI’s i-GPT (an image version of GPT-2) and Google’s SimCLR, photos of men appear more closely related to ties and suits, whereas photos of women are placed farther from such attire. The implications are important: anything from video-based candidate-assessment algorithms to facial recognition could perpetuate these biased associations, especially when there is no human in the loop to correct them.
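How "related" two photos are is typically measured by comparing their embedding vectors. The snippet below is a minimal sketch of that idea using cosine similarity; the vector values and variable names are invented for illustration, not real outputs of i-GPT or SimCLR, which produce much higher-dimensional embeddings.

    # Toy illustration of embedding "relatedness" via cosine similarity.
    # The vectors below are made up; in the study, embeddings come from
    # pretrained image models such as i-GPT or SimCLR.
    import numpy as np

    def cosine_similarity(a, b):
        """Close to 1.0 means the two embeddings point in the same direction."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical low-dimensional embeddings (real ones have hundreds of dimensions).
    emb_photo_man = np.array([0.9, 0.1, 0.3])
    emb_photo_woman = np.array([0.2, 0.8, 0.4])
    emb_photo_suit = np.array([0.8, 0.2, 0.35])

    print("man vs. suit:  ", cosine_similarity(emb_photo_man, emb_photo_suit))
    print("woman vs. suit:", cosine_similarity(emb_photo_woman, emb_photo_suit))
    # A consistently higher similarity for one group is the kind of association
    # that bias tests of this sort are designed to surface.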


Research paper: https://arxiv.org/pdf/2010.15052.pdf

How to make AI a greater force for good in 2021?

30th January 2021

MIT Technology Review

Joy Xu

January is almost over, and we are all slowly getting used to the new year. Undoubtedly, there has been massive progress in AI and machine learning over the past 12 months, from OpenAI’s GPT-3 to applications in healthcare and clinical research, and the field's influence will keep growing this year. But how do we ensure that it is not used maliciously, and that it does not cross ethical boundaries that are yet to be defined? With rising concern around data protection and privacy, the regulation of this relatively new field is becoming increasingly urgent.


Karen Hao, a senior AI reporter at MIT Technology Review, delves into five ways she hopes the field can improve this year: reducing corporate influence in research, refocusing on common-sense understanding, empowering marginalized researchers, centering the perspectives of impacted communities, and codifying guard rails into regulation.

AI is wrestling with a replication crisis

27th November 2020

MIT Technology Review

Joy Xu

As AI takes over the world, so does the scientific research surrounding it. This is exciting, but it comes with a whole new set of challenges: artificial intelligence is still a relatively young field, so there are not yet strict standards or frameworks in place to structure its research. The result is a lack of transparency: papers typically reveal very little of their code or methods, a practice that is often criticised because it undermines reproducibility.

As Joelle Pineau, a computer scientist at Facebook AI Research, explains: “It used to be theoretical, but more and more we are running experiments. And our dedication to sound methodology is lagging behind the ambition of our experiments.”

This is why Pineau, along with several other scientists, wrote a scathing critique in Nature of Google Health’s paper on breast cancer detection, one that points to a deeper trend of journals publishing AI research with little concrete evidence, preventing replication and hindering progress.

Serve and Replace: Covid's impact on AI automation

21st October 2020

MIT Technology Review

The Innovation Issue

Joy Xu

Supermarkets staffed with robots restocking shelves might be your idea of the future of helper robots, but the Covid-19 pandemic has brought to light just how far-reaching their uses could be. From spraying disinfectant to walking dogs, their presence has grown tremendously. More importantly, they can perform riskier tasks around infected patients or deliver lab samples, freeing up nurses for more essential work.

On the flip side, the same robots are a threat to the workforce, stealing away potential jobs from millions who are already in financial difficulty due to the crisis. And now, workers are potential carriers of the virus, accelerating the shift to an automated world. As Hayasaki explains, “(b)efore covid-19 hit, many companies—not just in logistics or medicine—were looking at using robots to cut costs while protecting humans from dangerous tasks. Today humans are the danger, potentially infecting others with the coronavirus. Now the challenge is that a minimum-wage laborer might actually be a carrier”.

The future of AI in robotics is not black or white, and this gripping article shows just how much the pandemic is going to change the technology landscape.

Bonus: Page 71 features a fascinating story written by an AI bot!