AI NEWS: physics, biotech & medicine

Discover our latest curated articles on physics, biotech and medicine.

The entire universe is a machine learning algorithm?

26th May 2021

The Next Web

Joy Xu

A bold statement, to say the least. Researchers and theoretical physicists, in collaboration with Microsoft, have released a new preprint that details how they believe the universe essentially learns on its own - an “Autodidactic Universe”, as the preprint title suggests. 

 

A quote from the preprint: “For instance, when we see structures that resemble deep learning architectures emerge in simple autodidactic systems might we imagine that the operative matrix architecture in which our universe evolves laws, itself evolved from an autodidactic system that arose from the most minimal possible starting conditions?”

 

The main takeaway is that the universe may adapt its own laws of physics, much like a self-learning neural network. These laws, such as conservation of energy, would therefore not be fixed fundamentals but would instead evolve continually in response to the current state of the universe. This stands in stark contrast to most modern physics theories, which hold that the fundamental laws are set in stone - and it also implies that unifying physics might be impossible if the universe is in a perpetual cycle of self-improvement.

 

Read the preprint here: https://arxiv.org/pdf/2104.03902.pdf 

DeepONet, another step towards lightning-fast solutions in physics

22nd May 2021

Quanta Magazine

Joy Xu

Traditionally, neural networks map values between finite-dimensional spaces (in image classification, for example, the pixel values of an image are mapped onto a number between 0 and 9 to represent ten classes). Now researchers have come up with a new concept: mapping one infinite-dimensional space onto another. Their deep neural network, dubbed DeepONet, works with operators rather than functions - that is, mappings from one function to another, such as taking a derivative. Its special feature is its bifurcated architecture, which processes data in two parallel networks, a “branch” and a “trunk.”

An immediate application is solving partial differential equations (PDEs): these equations model almost any process we can think of, from fluid dynamics to climate change, yet they are notoriously difficult to solve and usually require extremely long computations - at least, they did before deep neural networks were applied to them.
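
To make the branch/trunk idea concrete, here is a minimal DeepONet-style model in PyTorch. This is an illustrative sketch rather than the authors' implementation: the sensor count, layer sizes and the toy antiderivative task are assumptions made for the example.

```python
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    """Minimal DeepONet-style operator network (illustrative only).

    branch: encodes the input function u, sampled at m fixed "sensor" points
    trunk:  encodes the query location y where the output function is evaluated
    output: G(u)(y) ~= <branch(u), trunk(y)>
    """
    def __init__(self, num_sensors: int = 100, width: int = 64, p: int = 32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(num_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u_sensors: (batch, num_sensors), y: (batch, 1)
        b = self.branch(u_sensors)          # (batch, p)
        t = self.trunk(y)                   # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Toy usage: learn the antiderivative operator G(u)(y) = integral of u from 0 to y
# on random input functions (purely illustrative data).
if __name__ == "__main__":
    m = 100
    u = torch.rand(256, m)                      # input functions sampled at m sensors
    y = torch.rand(256, 1)                      # random query points in [0, 1)
    cumint = torch.cumsum(u, dim=1) / m         # crude numerical antiderivative
    idx = (y.squeeze(1) * (m - 1)).long()
    target = cumint[torch.arange(256), idx].unsqueeze(1)

    model = DeepONetSketch(num_sensors=m)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(u, y), target)
        loss.backward()
        opt.step()
```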


Solving partial differential equations with neural networks was already demonstrated last year with the Fourier neural operator (FNO). This network also maps functions to functions, from one infinite-dimensional space to another, and solves PDEs with remarkable speed. Both methods look promising and represent dramatically new approaches to solving PDEs compared with the old, computationally taxing ones. What’s more, neither appears to suffer from the curse of dimensionality, where too many features (or dimensions) in a training set cause the model to underperform (The Curse of Dimensionality: Why High Dimensional Data Can Be So… | by Tony Yiu).

Can we have true artificial intelligence without first understanding the brain?

1st April 2021

MIT Technology Review

Joy Xu

Jeff Hawkins is one of the most successful computer architects in Silicon Valley - and what differentiates him from most others in the field is his dedication to understanding how neuroscience and artificial intelligence are linked, not just whether AI can replicate a human mind given large enough models. After working as a software engineer, he pursued a PhD in neuroscience at Berkeley to better understand the big picture: “What is intelligence and how does it work?”. He then ventured into entrepreneurship and founded several highly regarded companies, notably Palm Computing and Numenta, a neuroscience research company. Journalists from the MIT Technology Review recently interviewed him on how alike biological and artificial intelligence really are - or should be.

Dark matter and astrophysics: how AI can help you see the unobservable

27th March 2021

TNW

Joy Xu

In astronomy, gravitational lenses show up in images as distant galaxies that appear bent or stretched into arcs and rings: as the light emitted by these galaxies passes by massive objects in the universe, their gravity can distort or “pull” the light towards them. This phenomenon, called “gravitational lensing”, also occurs when light passes close to large concentrations of dark matter - an invisible constituent that makes up most of the matter in our universe and has fascinated astrophysicists for years.

 

However, finding these gravitational lenses in observatory images is extremely tedious and difficult. Recently, researchers from several universities joined forces and designed an AI model based on deep residual neural networks, trained on survey images of both lenses and non-lenses.

 

They then used this model to search for additional gravitational lenses in the DESI Legacy Imaging Surveys, enormous datasets of images of the observable universe. Impressively, the model flagged more than 1,200 new candidate lenses, in stark contrast to the roughly 300 already known when the project first started.
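
As a rough illustration of the approach (not the authors' actual architecture or training pipeline, which are detailed in the preprint), a deep residual network for this task boils down to a binary lens / non-lens classifier over small survey cutouts:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)   # skip connection

class LensClassifier(nn.Module):
    """Toy residual classifier: survey cutout -> P(gravitational lens)."""
    def __init__(self, in_channels: int = 3, channels: int = 32, blocks: int = 4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_channels, channels, 3, padding=1),
                                  nn.ReLU())
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 1))

    def forward(self, x):
        return torch.sigmoid(self.head(self.body(self.stem(x))))

# Usage with a fake batch of 64x64 cutouts (real training would use labelled
# lens / non-lens images from the survey):
cutouts = torch.randn(8, 3, 64, 64)
probs = LensClassifier()(cutouts)   # shape (8, 1), values in (0, 1)
```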

 

Read the preprint of the study here: https://arxiv.org/pdf/2005.04730.pdf

A computer chip ... that works on light

21st March 2021

Wired

Joy Xu

AI runs on computers, and for a computer, hardware and software go hand-in-hand: even the most sophisticated algorithms cannot perform well if they are running on insufficient computing power during training and testing. That is why more and more people are attempting to find innovative ways to reinvent computers, in order to keep up with the fast pace of AI progress.

 

Traditionally, modern computers are built on the flow of electrons through semiconductors, which lets them perform Boolean operations, the building blocks of software and logic. Now a new concept has been introduced by Lightmatter, a startup founded at MIT: light-based computer chips.

 

These chips can be faster than conventional ones for some types of AI calculations, because information is encoded in different wavelengths of light, which consumes far less power than controlling electrons. Furthermore, they can be made directly compatible with most AI software and data centres. A notable use for these chips is deep learning, although there are limitations: the calculations are analogue rather than digital, which reduces precision, and companies may be reluctant to adopt the design at scale before it has been proven clearly superior. Even so, the approach shows great potential, leveraging light and physics to unlock new ways of doing AI. As Aydogan Ozcan, a professor at UCLA, explains: “We might see major advances in computing speed, power and parallelism, which will further feed into and accelerate the success of AI.”
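
The operation such chips accelerate is the matrix-vector multiplication at the heart of deep learning, carried out in the analogue domain. The toy simulation below (with entirely made-up noise figures, not Lightmatter's specifications) hints at the precision trade-off mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, relative_noise=0.01):
    """Simulate an analogue optical matrix-vector product.

    The ideal result W @ x is perturbed by multiplicative noise to mimic the
    limited precision of analogue computation (the noise level is invented).
    """
    ideal = W @ x
    noise = rng.normal(1.0, relative_noise, size=ideal.shape)
    return ideal * noise

W = rng.normal(size=(256, 256))
x = rng.normal(size=256)

exact = W @ x
approx = analog_matvec(W, x)

# The relative error stays small but non-zero - acceptable for many
# neural-network layers, problematic for exact digital arithmetic.
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.4f}")
```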

Beyond qubits: Next big step to scale up quantum computing

28th February 2021

Nature & ScienceDaily

Joy Xu

You may be familiar with the concept of a “bit”: the smallest binary unit of information stored in a computer, represented by a 1 or a 0. You may also be familiar with quantum physics, where one of the fundamental principles is superposition: particles can exist in multiple states at once. But what happens when you combine the two concepts? You get a quantum computer, where a bit becomes a “quantum bit”, or “qubit”, which can exist in a superposition of 0 and 1 rather than holding just one value at a time.
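
As a small illustration of what superposition means for a single qubit, here is a minimal state-vector example in plain numpy (not tied to any particular quantum-computing framework):

```python
import numpy as np

# Computational basis states |0> and |1> as two-component state vectors.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                      # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2    # Born rule: |amplitude|^2

print(state)          # [0.707+0j  0.707+0j]
print(probabilities)  # [0.5  0.5] - measuring gives 0 or 1 with equal chance
```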

Quantum computers are sometimes considered the “supercomputers of tomorrow”: they have the potential to run certain calculations exponentially faster than today’s most powerful processors, and although the technology is still in its infancy, tech giants like IBM and Google are racing to make it stable and scalable.

 

One challenge in building quantum computers is that the qubits are difficult to coordinate and stabilize. Current machines are bulky and impractical, requiring hundreds of connections and cables. Now, scientists and engineers at the University of Sydney and Microsoft Corporation have invented a single chip that can control thousands of qubits at once. Microsoft Senior Hardware Engineer Dr Kushal Das, a joint inventor of the chip, says: "Our device does away with all those cables. With just two wires carrying information as input, it can generate control signals for thousands of qubits.”


Nature paper: http://dx.doi.org/10.1038/s41928-020-00528-y

Reducing bias in healthcare outcomes with AI

11th February 2021

MIT Technology Review

Joy Xu

We often hear about machine learning models perpetuating bias in practical, real-life settings, whether racial, social or economic. But in healthcare, machines might not be the only ones at fault: for instance, the US National Institutes of Health found that Black patients with osteoarthritis are more likely to report higher levels of pain than their white counterparts, even when they have the same KLG score. The KLG (Kellgren-Lawrence grade) is a measure of osteoarthritis severity assigned by radiologists from X-rays, and doctors often use it, rather than patients' self-reported pain, to decide on treatment. This prompted researchers to ask why: are Black patients exaggerating their pain, do they feel pain differently, or is the KLG score simply unsuitable, calibrated to the pain levels of white patients?

 

After running experiments with a deep-learning algorithm, the researchers found that the model predicted levels of self-reported pain from patient X-rays much more accurately than the KLG did - regardless of ethnicity - reducing “racial disparity at each pain level by nearly half”. This startlingly suggests that standard ways of assessing pain may be flawed and tailored to certain populations (much like overfitting in machine learning), and could therefore be an area where more objective algorithms can help.

Natural language predicts COVID viral escape

3rd February 2021

MIT Technology Review

Joy Xu

In this day and age, COVID-19 is at the centre of current research. This is no different in the field of AI and ML: Bonnie Berger, a computational biologist, and her colleagues have released a new paper, leveraging the disciplines of biology and computer science to explain how natural-language processing (NLP) algorithms can “generate protein sequences and predict virus mutations, including key changes that help the coronavirus evade the immune system.”

Interestingly, mutations and genetic characteristics of a virus can be interpreted through grammar and semantics - for example, an unfit virus will be “grammatically incorrect”.

Using a type of neural network called an LSTM (long short-term memory network), they were able to identify possible mutations of the virus that would remain genetically viable. This is extremely valuable knowledge for healthcare researchers and authorities: knowing in advance which variants are likely to emerge helps them plan ahead and prepare their defences. A fascinating read.
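
To give a flavour of the idea - a simplified sketch, not the model described in the Science paper - one can train an LSTM language model over amino-acid sequences and use its log-likelihood as a "grammaticality" score for a proposed mutation:

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TOKEN = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

class ProteinLSTM(nn.Module):
    """Tiny LSTM language model over amino-acid tokens (illustrative only)."""
    def __init__(self, vocab=20, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):                  # tokens: (batch, length)
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                      # logits: (batch, length, vocab)

def grammaticality(model, sequence: str) -> float:
    """Average log-likelihood of each residue given the preceding ones.

    Higher scores mean the sequence looks more like the proteins the model
    was trained on - the "grammatically correct", i.e. viable, sequences.
    """
    toks = torch.tensor([[TOKEN[aa] for aa in sequence]])
    with torch.no_grad():
        logits = model(toks[:, :-1])
        logp = torch.log_softmax(logits, dim=-1)
        # log-probability assigned to each actual next residue
        ll = logp.gather(-1, toks[:, 1:].unsqueeze(-1)).squeeze(-1)
    return ll.mean().item()

# After training on real viral protein sequences (not shown), candidate
# mutations could be ranked by how much they change this score.
model = ProteinLSTM()
print(grammaticality(model, "MFVFLVLLPLVSSQCVNL"))  # untrained: near log(1/20)
```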

Journal paper: https://science.sciencemag.org/content/371/6526/284

Schrödinger's equation, solved?

23rd January 2021

SciTech Daily

Joy Xu

In quantum chemistry, solving Schrödinger’s equation is essential in order to “predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space”. However, like most partial differential equations, it remains a challenge to solve and usually requires huge amounts of brute-force computing power. Recently, researchers at Freie Universität Berlin have created an AI model that combines accuracy with computational efficiency, making this kind of calculation far cheaper.

At the heart of Schrödinger’s equation is the wave function, a mathematical object that determines how the electrons are arranged in a molecule. The standard approach to expressing the wave function for a particular molecule has been to build mathematical approximations that map the behaviour of each individual atom, but the complexity of combining those predictions, especially given the enormous number of dimensions the wave function lives in, makes this nearly impossible for larger molecules.

 

The team leader, Dr Frank Noé, explains: “[W]e designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei”. Although the model is not yet ready for industrial use, it opens up promising new opportunities in reducing “the need for resource-intensive and time-consuming laboratory experiments”.
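
For a sense of the underlying variational idea (a deliberately tiny classical example, not the Berlin group's deep-learning ansatz), here is a variational Monte Carlo estimate of the hydrogen atom's ground-state energy using a one-parameter trial wave function:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(alpha: float, n_samples: int = 200_000) -> float:
    """Variational energy of hydrogen (atomic units) for psi(r) = exp(-alpha*r).

    For this trial wave function the local energy has a closed form,
        E_L(r) = -alpha**2 / 2 + (alpha - 1) / r,
    and |psi|^2 corresponds to a radial density p(r) ~ r^2 * exp(-2*alpha*r),
    i.e. a Gamma(shape=3, scale=1/(2*alpha)) distribution we can sample directly.
    """
    r = rng.gamma(shape=3.0, scale=1.0 / (2.0 * alpha), size=n_samples)
    local_energy = -0.5 * alpha**2 + (alpha - 1.0) / r
    return local_energy.mean()

# Scanning the variational parameter: the minimum sits at alpha = 1 with
# E = -0.5 Hartree, the exact ground-state energy of hydrogen. A neural
# network plays the role of a vastly more flexible trial wave function.
for alpha in (0.6, 0.8, 1.0, 1.2):
    print(f"alpha = {alpha:.1f}  ->  E ~ {energy(alpha):+.4f} Ha")
```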

Check out the original paper here: https://doi.org/10.1038/s41557-020-0544-y 

AI has cracked a key mathematical puzzle for understanding our world

17th November 2020

MIT Technology Review

Joy Xu

You might remember - or may currently be studying - partial differential equations. These equations are extremely useful for modelling real-world situations, such as fluid motion or planetary orbits. The only problem? They are extremely hard to solve, and researchers often rely on supercomputers, with their enormous processing power, to find solutions.

Now, however, AI has entered the stage: researchers at Caltech have developed a generalizable deep-learning technique capable of solving entire families of PDEs - such as the Navier-Stokes equation for any type of fluid - without retraining, and orders of magnitude faster than traditional numerical solvers. This makes sense: fundamentally, the job of an AI algorithm is to find a function that gives the correct output for a given input, such as predicting “cat” from an image of a cat, and this sort of function approximation is closely related to what it means to solve a partial differential equation.
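
The Caltech method is the Fourier neural operator mentioned in an earlier item; its key ingredient is a spectral layer that transforms the input to Fourier space, applies learned weights to the lowest-frequency modes, and transforms back. Below is a stripped-down 1-D sketch of such a layer (channel and mode counts are arbitrary choices, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Toy 1-D spectral convolution: FFT -> weight the low modes -> inverse FFT."""
    def __init__(self, channels: int = 16, modes: int = 12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # (batch, channels, grid//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        # mix channels for the first `modes` Fourier coefficients only
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))

# One operator layer = spectral convolution + pointwise linear path + nonlinearity,
# applied to a function sampled on a grid (the values here are stand-ins).
batch, channels, grid = 4, 16, 128
x = torch.randn(batch, channels, grid)
layer = SpectralConv1d(channels)
skip = nn.Conv1d(channels, channels, kernel_size=1)
y = torch.relu(layer(x) + skip(x))             # same shape: (4, 16, 128)
```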

 

These new findings could have wide-ranging implications - from modelling weather patterns rapidly and accurately for climate change response to predicting air turbulence patterns. As Caltech professor Anandkumar puts it, “the sky’s the limit”.