Monthly highlights

Noteworthy news and interesting breakthroughs, curated for you every month.

MARCH 2021

By: Rishi Kalra

Algorithm detects deepfakes by analyzing reflections in eyes

Research 

New Atlas

Researchers at the University at Buffalo have developed a tool that distinguishes real portraits from deepfakes with 94% accuracy. The idea is to train the algorithm to analyze the reflections in the eyes of the person in the image: because both corneas see the same light sources, the reflective patterns in the two eyes should be very similar in shape and intensity if the image is genuine. The high accuracy of this method is promising, but it is not without drawbacks. Deepfake creators can simply apply some further manual editing to make the eyes consistent, after which the detection algorithm struggles to distinguish real from fake. Though not perfect, this method does make it slightly harder for the average person to produce a convincing deepfake and reduces the potential for deepfakes created with malicious intent.
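The core check, comparing the specular highlights in the two eyes, can be illustrated with a minimal sketch. This is not the authors' algorithm; it assumes you already have cropped grayscale patches of each eye, binarizes the brightest pixels as a rough proxy for the corneal reflection, and compares the two masks by intersection-over-union. All function names and thresholds here are illustrative.

```python
import numpy as np

def highlight_mask(eye_patch, thresh=0.8):
    """Binarize the brightest pixels: a crude proxy for the corneal reflection."""
    p = eye_patch.astype(float)
    p = (p - p.min()) / (np.ptp(p) + 1e-8)  # normalize intensities to [0, 1]
    return p >= thresh

def reflection_similarity(left_eye, right_eye):
    """IoU of the two eyes' highlight masks; near 1.0 suggests a consistent
    light source across both corneas, near 0.0 suggests a mismatch."""
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def looks_like_deepfake(left_eye, right_eye, cutoff=0.5):
    """Flag the portrait as suspect when the reflections disagree too much."""
    return reflection_similarity(left_eye, right_eye) < cutoff
```

As the article notes, this kind of check is easy to defeat with manual retouching of the eye regions, which is exactly the drawback the researchers acknowledge.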

Brain-computer interface for generating personally attractive images

Research

 

Using GANs and brain-computer interfaces, University of Helsinki researchers determined which generated faces people found attractive, and then created new faces that matched their preferences. Test subjects were shown a set of faces while the activity in their brains was monitored using EEG (electroencephalography), a way of measuring the electrical signals produced when brain cells signal to each other. These brain-activity measurements were fed into a neuroadaptive computing loop that refined a "best guess" of what each person finds attractive and generated a new face embodying the characteristics that the test subject personally seemed to like. In a double-blind trial where subjects were shown a mix of real and generated faces, the subjects selected the AI-generated face 87% of the time.
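The "best guess" step can be pictured as a relevance-weighted average in the generator's latent space: faces that evoked a stronger brain response pull the estimate toward their latent codes. The sketch below is a loose illustration, not the authors' pipeline; the latent size, the stand-in for EEG decoding, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16  # hypothetical latent size; real face GANs use far larger codes

def estimate_preference(latents, relevance):
    """Relevance-weighted average of the latent codes of the shown faces:
    a 'best guess' of the preferred region of latent space."""
    w = np.asarray(relevance, dtype=float)
    w = w / w.sum()
    return w @ latents

# One round of the loop: show candidate faces, score the (simulated) brain
# response to each, and update the preference estimate.
candidates = rng.normal(size=(8, LATENT_DIM))  # latent codes of shown faces
true_pref = rng.normal(size=LATENT_DIM)        # stand-in for the subject's taste
# Stand-in for EEG decoding: stronger response for faces nearer the preference.
scores = np.exp(-np.linalg.norm(candidates - true_pref, axis=1))
guess = estimate_preference(candidates, scores)
```

In the study itself, the updated estimate is fed back to the generator to synthesize the next face, closing the loop between the brain signal and the GAN.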

Facebook announces a project to use AI on public user videos

Commercial

MUO

Facebook announced the launch of a project called "Learning from Videos", which aims to understand the audio, textual and visual content of public user videos on the social network using Generalized Data Transformations, a self-supervised system. Facebook emphasises the project's great potential, including the accuracy improvements made possible by access to such a huge dataset. It also says that training on user videos will help researchers "break away from the reliance on labelled data", a known problem in the field, as many applications fail to replicate their performance when applied in the real world. Nevertheless, privacy concerns remain: there is no specific option for users to opt out; instead, consent is assumed simply because a user uses a Facebook-owned app.

Google’s firing of AI ethics researchers causes backlash

Ethics

Wired

Following the firing of one AI ethics researcher over disagreements about the content of a paper, and of another for allegedly sharing "confidential" information, Google has faced further backlash from academics, who are boycotting conferences and refusing funding from the tech giant. This brings the central questions around ethics and artificial intelligence back into the spotlight, along with Google's role in the development of ethical AI.
