AI NEWS: AI Policy, history, ethics & law

Discover our latest curated articles on the policies, ethics and history surrounding AI.

This is how we lost control of our faces

14th March 2021

MIT Technology Review

Joy Xu

The applications of facial recognition are becoming increasingly ubiquitous: from telesurveillance to improving search suggestions, larger and larger datasets are being generated to draw insights from our faces. In recent years, however, a growing issue has been the quality of these datasets: as researchers scramble for more images, obtaining consent and collecting data in a regulated way have become tedious tasks that more and more of them gloss over. The result is messier, non-consensual and possibly erroneous data, which has drastic repercussions for models' outputs in practice, such as labels that echo racist stereotypes.

The history of facial recognition is fascinating: what started out as matching images to people and painstakingly verifying the outputs by hand has turned into automated labelling and attempts to infer ethnicity, gender and other characteristics from a single image.


Read more about the study that presents those findings here: https://arxiv.org/pdf/2102.00813.pdf

Men wear suits, women wear bikinis?

16th February 2021

MIT Technology Review

Joy Xu

It is a well-known fact that natural language models can perpetuate racism and bias because of the text they were trained on. Now, researchers at Carnegie Mellon University have found that image-generating algorithms pre-trained in an unsupervised manner (i.e. without humans labelling images) also contain human-like biases. These models base their knowledge and training on the internet - which is rife with harmful stereotypes that reinforce those biases. For instance, images of women, including US Representative Alexandria Ocasio-Cortez, were autocompleted by the algorithm with low-cut tops or bikinis 53% of the time - a startling and eye-opening proportion.


This issue also transcends the methodological differences between the models: for both OpenAI’s i-GPT (an image version of GPT-2) and Google’s SimCLR, photos of men sit closer to photos of ties and suits in embedding space, whereas photos of women sit farther away from them. The implications are important: it suggests that anything from video-based candidate-assessment algorithms to facial recognition could perpetuate those biased views, especially when there is no human in the loop to correct them.


Research paper: https://arxiv.org/pdf/2010.15052.pdf
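To make "appear more related" concrete: bias of this kind is typically quantified by comparing similarities between image embeddings. Below is a minimal, hypothetical Python sketch of such an embedding association test; the embed function and the image lists are placeholders, and the sketch simplifies rather than reproduces the paper's actual methodology.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_score(target_embs, attr_a_embs, attr_b_embs):
    # Average difference in similarity of the target images to two attribute
    # sets: positive values mean the targets sit closer to attribute set A
    # than to attribute set B in the model's embedding space.
    def mean_sim(t, attrs):
        return float(np.mean([cosine(t, a) for a in attrs]))
    return float(np.mean([mean_sim(t, attr_a_embs) - mean_sim(t, attr_b_embs)
                          for t in target_embs]))

# Hypothetical usage: `embed` stands in for whichever pre-trained image
# encoder is being probed, and the image lists are placeholders for a
# curated stimulus set.
# men_embs    = [embed(img) for img in images_of_men]
# women_embs  = [embed(img) for img in images_of_women]
# formal_embs = [embed(img) for img in images_of_suits_and_ties]
# casual_embs = [embed(img) for img in images_of_casual_clothing]
# print(association_score(men_embs, formal_embs, casual_embs))
# print(association_score(women_embs, formal_embs, casual_embs))
```

A gap between the two scores would be one sign of the gendered association described above.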

How to make AI a greater force for good in 2021?

30th January 2021

MIT Technology Review

Joy Xu

January is almost over, and we are all slowly getting used to the new year. Undoubtedly, there has been massive progress in AI and machine learning in the past 12 months, from OpenAI’s GPT-3 to applications in healthcare and clinical research, and its influence will continue to grow this year. But how do we ensure that it is not used maliciously and does not cross ethical boundaries that are yet to be defined? With rising concern around data protection and privacy, the regulation of this relatively new field is becoming increasingly urgent.


Karen Hao, a senior AI reporter at MIT Technology Review, delves into five ways she hopes AI will improve this year: reducing corporate influence in research, refocusing on common-sense understanding, empowering marginalized researchers, centering the perspectives of impacted communities, and codifying guard rails into regulation.

AI is wrestling with a replication crisis

27th November 2020

MIT Technology Review

Joy Xu

As AI takes over the world, so does the scientific research surrounding it. This is exciting, but it comes with a whole new set of challenges: artificial intelligence is a relatively young field, and strict regulations and frameworks to structure its research have not yet been implemented. The result is a lack of transparency - papers often reveal very little and code is rarely published - which is frequently criticised because it undermines reproducibility.

As Joelle Pineau, a computer scientist at Facebook AI Research, explains, “It used to be theoretical, but more and more we are running experiments. And our dedication to sound methodology is lagging behind the ambition of our experiments.”

This is why Pineau, along with several other scientists, wrote a scathing critique in Nature of Google Health’s paper on breast cancer detection, pointing to a deeper trend of journals publishing AI research with little concrete evidence, which prevents replication and slows progress.

Serve and Replace: Covid's impact on AI automation

21st October 2020

MIT Technology Review

The Innovation Issue

Joy Xu


Supermarkets staffed with robots reshelving goods might be your idea of the future of helper robots, but the covid-19 pandemic has brought to light just how far-reaching their uses could be. From spraying disinfectant to walking dogs, their presence has grown tremendously. More importantly, they can take on riskier duties, such as working around infected patients or delivering lab samples, freeing up nurses for more essential tasks.

On the flip side, the same robots are a threat to the workforce, stealing away potential jobs from millions who are already in financial difficulty due to the crisis. And now, workers are potential carriers of the virus, accelerating the shift to an automated world. As Hayasaki explains, “(b)efore covid-19 hit, many companies—not just in logistics or medicine—were looking at using robots to cut costs while protecting humans from dangerous tasks. Today humans are the danger, potentially infecting others with the coronavirus. Now the challenge is that a minimum-wage laborer might actually be a carrier”.

The future of AI in robotics is not black or white, and this gripping article shows just how much the pandemic is going to change the technology landscape.

Bonus: Page 71 features a fascinating story written by an AI bot!
