Monthly highlights

Noteworthy news and interesting breakthroughs, curated for you every month.

MAY 2021

By: Rishi Kalra

Game theory as an engine for large-scale data analysis

Research 

DeepMind

DeepMind has recently proposed a novel approach to some fundamental problems in machine learning - for instance, principal component analysis (PCA). PCA transforms a large set of variables into a smaller one to reduce the dimensionality of a large data set, making the data easier to explore and visualize. DeepMind has reformulated this problem, which is traditionally a type of eigenvalue problem, as ‘a competitive multi-agent game’ named EigenGame. They also discovered a surprising similarity to how neurons adapt when learning; more can be read in the DeepMind blog post, which also links to the relevant paper for additional technical details.
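
For context, classical PCA boils down to an eigenvalue problem on the data's covariance matrix - the formulation that EigenGame recasts as a game between agents, one per eigenvector. Below is a minimal NumPy sketch of the classical eigendecomposition approach, purely illustrative and not DeepMind's method:

import numpy as np

def pca(X, k):
    # Classical PCA: project X onto the top-k eigenvectors of its covariance.
    Xc = X - X.mean(axis=0)                 # centre the data
    cov = (Xc.T @ Xc) / (len(X) - 1)        # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k principal directions
    return Xc @ top_k                       # reduced-dimensional representation

# Toy usage: reduce 100 ten-dimensional points to 2 dimensions.
X = np.random.default_rng(0).standard_normal((100, 10))
print(pca(X, 2).shape)  # (100, 2)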

High-performance speech recognition with no supervision at all

Research

Facebook AI

Speech recognition has become a ubiquitous aspect of our lives, yet it is only available for the most common languages because of a lack of sufficient transcribed data. To tackle this, Facebook AI has developed a model, named wav2vec Unsupervised (wav2vec-U), which allows users to build speech recognition systems with no transcribed data at all, reportedly rivalling the performance of the best supervised models of a few years ago, which were trained on nearly 1,000 hours of transcribed speech. wav2vec-U has also been tested on languages that traditionally lack labelled data, such as Swahili and Tatar. The overarching aim for Facebook AI is to develop algorithms that can learn from brief observation, perhaps moving towards more generalised and practical artificial intelligence - a goal wav2vec-U seems to advance.

Research paper: https://ai.facebook.com/research/publications/unsupervised-speech-recognition

Code: https://github.com/pytorch/fairseq/tree/master/examples/wav2vec/unsupervised

Machine learning, ethics and open source licensing

Ethics

The Gradient

What is the cost of allowing AI to dictate more and more aspects of our lives, from security and privacy to consumer behaviour? This two-part essay covers some important topics at the intersection of artificial intelligence and ethics. The author begins with specific cases where the fundamental weaknesses of artificial intelligence have had severe consequences, for instance racially biased software in use by police today. Further topics in the first part include the incentives surrounding AI and corporations and the problem of moral relativism. The second part picks up with an exploration of how AI research and software can be used - and, more importantly, who can use it and within what limits - leading to a discussion of open-source licensing for software that has the potential to change lives and, ultimately, society as we know it.

Part 1: https://thegradient.pub/machine-learning-ethics-and-open-source-licensing/

Part 2: https://thegradient.pub/machine-learning-ethics-and-open-source-licensing-2/

AI predicts effective drug combinations to fight complex diseases faster

Research 

Facebook AI

Facebook AI and the Helmholtz Zentrum München have developed a new way to predict the effects of different combinations of drugs, dosages, timings and more. They did so by building the first single AI model capable of this, called the Compositional Perturbation Autoencoder (CPA). The model has been open-sourced and published along with a paper containing more detail (both linked below). Previously, determining the effects of multiple combinations of drugs would have taken years and numerous experiments; with CPA, researchers can go through all the possible combinations within hours. Details on how the model works can be found in the Facebook AI blog post.

Research paper: https://www.biorxiv.org/content/10.1101/2021.04.14.439903v1 

Code: https://github.com/facebookresearch/CPA 

Nvidia GTC 2021 Keynote

Commercial

Nvidia

Watch here: https://youtu.be/eAn_oiZwUXA

In this approximately 1.5-hour video, Nvidia’s CEO Jensen Huang covers democratizing high-performance computing, the metaverse, and the development of new CPUs and GPUs for giant-scale AI computing, among other topics. Topics are time-stamped in the video so you can skip to whatever interests you most.

Europe attempts to take a leading role in regulating uses of AI

Ethics

The Financial Times

Towards the end of April, Brussels announced that the EU will be working towards regulating artificial intelligence, particularly its uses within EU member countries. The apparent goal is to bring the EU closer to other major powers such as the US and China in terms of promoting innovation in AI, while simultaneously maintaining trust with consumers - which would allegedly create a “competitive advantage” if European systems are seen as high-quality and trustworthy as a result.

Nevertheless, there have been significant responses from Big Tech companies and others, who claim the regulations would stifle innovation, particularly for smaller startups. The regulations are reportedly still being debated and could be put into law by 2023 at the earliest.

Note: UCL has a Financial Times subscription, which you may use to access the article for free.

APRIL 2021

By: Rishi Kalra

MARCH 2021

By: Rishi Kalra

Algorithm detects deepfakes by analyzing reflections in eyes

Research 

New Atlas

Researchers at the University at Buffalo developed a tool able to distinguish between real portraits and deepfakes with 94% effectiveness. The idea involves training the algorithm to analyze the reflections in the eyes of the person in the image. Specifically, the reflective patterns in the cornea have particular shapes and intensities, which should be very similar in both eyes if the image is real and not a deepfake. The high accuracy shown by this method is promising; however, it is not without drawbacks. Deepfake creators can simply do some further manual editing on the image to make it more realistic, meaning the detection algorithm struggles to accurately distinguish between what is real and fake. Though not perfect, this method does make it slightly harder for the average person to make a convincing deepfake and reduces the potential for deepfakes created with malicious intent.
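
To illustrate the consistency check, here is a minimal, hypothetical NumPy sketch (not the researchers' code) that compares the bright specular highlights of the two corneas using intersection-over-union; the brightness threshold and the 0.5 decision cut-off are assumptions made purely for illustration:

import numpy as np

def highlight_iou(left_eye, right_eye, thresh=230):
    # Binarise each grayscale cornea crop: bright pixels = reflected highlight.
    l_mask = left_eye >= thresh
    r_mask = right_eye >= thresh
    inter = np.logical_and(l_mask, r_mask).sum()
    union = np.logical_or(l_mask, r_mask).sum()
    return inter / union if union else 0.0  # similarity of the two reflections

# Toy usage with identical crops; a real pipeline would first detect the
# face, locate both corneas and align the crops to a common size.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (32, 32))
right = left.copy()  # matching highlights, as expected in a genuine photo
score = highlight_iou(left, right)
print("real-looking" if score > 0.5 else "possible deepfake")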

Brain-computer interface for generating personally attractive images

Research

By using GANs and brain-computer interfaces, University of Helsinki researchers managed to determine which generated faces people found attractive, and then created new faces matching those preferences. Test subjects were shown a set of faces while the activity in their brains was monitored using electroencephalography (EEG), a way of measuring the electrical signals produced when brain cells communicate with each other. These measurements of brain activity were combined with neuroadaptive computing to form a “best guess” of what the person finds attractive and to generate a new face embodying the characteristics that the test subject personally seemed to like. In a double-blind trial in which subjects were shown a set of faces, some real and some generated, the subjects selected the AI-generated face 87% of the time.
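
As a rough sketch of the feedback loop described above - assuming, purely for illustration, that preference is estimated by averaging the GAN latent vectors of the faces whose EEG responses were classified as positive - the functions classify_eeg_response and gan_generate below are hypothetical stand-ins:

import numpy as np

LATENT_DIM = 512  # assumed size of the GAN's latent space

def gan_generate(z):
    # Hypothetical stand-in: decode a latent vector into a face image.
    return np.zeros((128, 128, 3))

def classify_eeg_response(eeg):
    # Hypothetical stand-in: True if the EEG epoch indicates attraction.
    return bool(eeg.mean() > 0)

def neuroadaptive_best_guess(latents, eeg_epochs):
    # Keep the latents of "liked" faces, average them into a best guess of
    # the subject's preference, then generate a new face from that guess.
    liked = [z for z, eeg in zip(latents, eeg_epochs) if classify_eeg_response(eeg)]
    if not liked:
        return None
    return gan_generate(np.mean(liked, axis=0))

# Toy run: eight shown faces with simulated EEG recordings.
rng = np.random.default_rng(1)
zs = [rng.standard_normal(LATENT_DIM) for _ in range(8)]
eegs = [rng.standard_normal(256) for _ in range(8)]
new_face = neuroadaptive_best_guess(zs, eegs)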

Facebook announces a project to use AI on public user videos

Commercial

MUO

Facebook announced the launch of a project called “Learning from Videos”, which aims to understand the audio, textual and visual parts of public user videos on the social network using Generalized Data Transformations, a self-supervised system. Facebook emphasises the great potential of the project, including the accuracy improvements made possible by access to such a huge dataset. It also says that training on user videos will help researchers “break away from the reliance on labelled data”, a known problem in the field, as many applications fail to replicate their performance when applied in the real world. Nevertheless, there are still privacy concerns, as there is no specific way for users to opt out; instead, consent is assumed by a user simply using a Facebook-owned app.

Google’s firing of AI ethics researchers causes backlash

Ethics

Wired

Following the firing of one AI ethics researcher over a disagreement about the content of a paper, and of another for allegedly sharing “confidential” information, Google has faced further backlash from academics who are boycotting conferences and refusing funding from the tech giant. This brings major questions about ethics and artificial intelligence back into the spotlight, along with questions about Google and its role in the development of ethical AI.