Google Maps Street View images can be used to detect signs of social, economic, environmental and health inequality in major UK cities

  • Data was collected from 156,581 different London postcodes
  • More than half a million images were used, and the model was then applied to Leeds, Birmingham and Manchester
  • It was best at spotting differences in quality of the living environment and mean income
  • It performed worst at predicting levels of crime and self-reported health

Spotting inequality can now be done by a computer using a pre-existing, vast and easily available database of images – Google Maps Street View.

More than half a million pictures from this catalogue of ‘on-the-ground’ photos were fed into a deep-learning algorithm, which unpicked signs of inequality in London.

Data was collected from 156,581 different London postcodes, and the trained algorithm was then applied to Leeds, Birmingham and Manchester.

Overview of the street images and outcome data used in the analysis (pictured). Visible signs such as disrepair, cars or local shops were processed by the algorithm, became indicators for various levels of austerity, and were used to predict inequality in an area

Esra Suel and colleagues from Imperial College London used deep learning to train a computer programme designed to detect signs of austerity.

It was trained using 525,860 Google street view images of London taken across 156,581 different postcodes in the capital. 

These were then processed and compared with official data for different areas of London to improve the algorithm’s accuracy.

The academics then further adapted the algorithm, feeding it one per cent more data from the West Midlands, Greater Manchester and West Yorkshire.
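
The study describes this pipeline only in outline, but the basic idea can be sketched in code: a convolutional network looks at a street image, predicts an area-level outcome such as mean income, is trained against official statistics, and is then fine-tuned on a small sample of images from another city. The snippet below is a hypothetical illustration using PyTorch and torchvision; the ResNet-18 backbone, the placeholder data loaders and the hyperparameters are assumptions made for this sketch, not the authors’ actual implementation.

```python
# Hypothetical sketch of the kind of pipeline the article describes.
# Assumptions: PyTorch/torchvision, a pretrained ResNet-18 backbone and
# placeholder data loaders; this is not the authors' actual code.
import torch
import torch.nn as nn
from torchvision import models


def build_model() -> nn.Module:
    """A pretrained CNN with its classifier head swapped for a single
    regression output (for example, mean household income for a postcode)."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    return backbone


def train(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Standard supervised training: street-view image in, outcome value out.
    Target values would come from official statistics for each postcode."""
    model.to(device).train()
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, outcomes in loader:
            images, outcomes = images.to(device), outcomes.to(device)
            optimiser.zero_grad()
            loss = loss_fn(model(images).squeeze(1), outcomes)
            loss.backward()
            optimiser.step()
    return model


def fine_tune_on_new_city(model, small_loader, device="cpu"):
    """Adapt a London-trained model using a small labelled sample (the article
    mentions one per cent of data) from another city or region."""
    return train(model, small_loader, epochs=3, lr=1e-5, device=device)
```

In the study the same approach was applied to several outcomes, including income, the living environment, crime and self-reported health, with the model’s predictions compared against official data; the sketch above shows only the single-outcome case.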

It was most successful at recognising differences in quality of the living environment and mean income (data pictured for observed, left, and predicted, right)

Visible signs were found to correlate with some of the metrics studied by the researchers and the deep-learning algorithm performed worst for crime levels (pictured, observed crime levels from official statistics, left, and the predicted crime levels, right)

This allowed the computer programme to detect various forms of inequality in Birmingham, Manchester and Leeds just as well as it had done for London.

The approach, described in a study published in the journal Scientific Reports, was more effective for some gauges of inequality than others.

It was most successful at recognising differences in quality of the living environment and mean income but performed worst for crime and self-reported health. 

Visible signs were found to correlate with some of the metrics studied by the researchers. 

Features such as disrepair, cars or local shops were processed by the algorithm and became indicators for various levels of austerity, linked to the quality of housing and the living environment.

The authors write in the study: ‘Our findings demonstrate that street imagery has the potential… to measure inequalities and monitor the impacts of policies that aim to address them.’

WHAT IS DEEP LEARNING?  

Deep learning is a form of machine learning that uses layered algorithms with a wide range of applications.

It is a field which was inspired by the human brain and focuses on building artificial neural networks.

It originally grew out of brain simulations, with the aim of making learning algorithms better and easier to use.

Processing vast amounts of complex data then becomes much easier, and researchers can trust the algorithms to draw accurate conclusions based on the parameters they have set.

Existing task-specific algorithms are better suited to narrow tasks and goals, but deep learning allows for a wider scope of data collection.
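
As a purely illustrative example of what an artificial neural network looks like in practice, the toy snippet below stacks two simple layers in PyTorch (a framework assumed here for illustration; the study does not specify one). Stacking many more such layers is what puts the ‘deep’ in deep learning.

```python
# Purely illustrative: a tiny two-layer neural network in PyTorch.
# Stacking many such layers is what makes a network "deep".
import torch
import torch.nn as nn

tiny_net = nn.Sequential(
    nn.Linear(4, 8),   # first layer: 4 input features -> 8 hidden units
    nn.ReLU(),         # non-linearity, loosely inspired by neurons firing
    nn.Linear(8, 1),   # second layer: 8 hidden units -> 1 output value
)

x = torch.randn(3, 4)        # a batch of 3 examples with 4 features each
print(tiny_net(x).shape)     # torch.Size([3, 1])
```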




