The Growing Ubiquity of Algorithms in Society

Machine Learning and Artificial Intelligence (AI), powered by algorithms, play an ever-increasing part in decision-making within our society. This was the context for “The growing ubiquity of algorithms in society: implications, impacts, and innovations” – a discussion meeting held by the Royal Society on 30-31 October 2017.

Speakers from various backgrounds, ranging from law to computer science, economics to education, and graphic design to statistics, delivered a series of thought-provoking talks, followed by engaging discussions generated by questions from the audience.

In this blog I will highlight some pertinent points that emerged, but of course, many more interesting topics were discussed. For a detailed overview of all the speakers and their abstracts, see the event page on the Royal Society’s website.

Day 1 focused on the relationship between algorithms and the law, transparency, and regulation, touching upon the legal and regulatory implications of the use of algorithms in society.

Many conversations on this day concerned potential issues surrounding the black-box nature of (some) algorithms. Because of the growing complexity of machine learning and AI, it is often challenging, if not impossible, to trace the underlying reasoning that led to a decision an algorithm has made. How, then, can such a decision be scrutinised or tested against the law? The implications are substantial – consider, for example, a self-driving car involved in a fatal accident. Liability is hard to assign if it is unclear why the car drove the way it did. One speaker captured this aptly: “if something is not testable, can it be contestable?”

Day 2 started on a positive note, focusing on what society can gain from algorithms and demonstrating use cases with societal impact, before moving on to the implications and potential of algorithms applied to the study of human health.

We learned how UN Global Pulse is developing machine learning methods to obtain high-quality information from humanitarian disaster areas – areas that are hard to reach in the immediate aftermath of a disaster and where, historically, information quality has been notoriously low. Exciting applications included using satellite imagery to identify human-built structures and communities, and applying speech recognition and natural language processing tools to radio broadcasts to identify where events are occurring.
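
To make the radio-mining idea concrete, here is a minimal, purely illustrative sketch in Python: it scans hypothetical transcribed radio snippets for event keywords and tallies mentions by location. The transcripts, keyword list, and gazetteer are all invented for illustration; UN Global Pulse’s actual pipeline uses trained speech recognition and language models, not a keyword list.

```python
import re
from collections import Counter

# Hypothetical transcribed radio snippets (invented for illustration).
TRANSCRIPTS = [
    "flooding reported near the river crossing in Kasese",
    "road to Kasese blocked by landslide after heavy rain",
    "market in Gulu reopened as supplies arrive",
]
EVENT_WORDS = {"flooding", "landslide", "blocked", "damage"}
LOCATIONS = {"Kasese", "Gulu"}  # hypothetical gazetteer of place names

# Tally how often each known location co-occurs with an event keyword.
mentions = Counter()
for text in TRANSCRIPTS:
    tokens = set(re.findall(r"\w+", text))
    if tokens & EVENT_WORDS:
        for loc in LOCATIONS & tokens:
            mentions[loc] += 1

print(mentions.most_common())  # [('Kasese', 2)]
```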

A key discussion on this day revolved around the use of data to study human health, and the implications for privacy. Training good algorithms requires a lot of data; in health, this will often involve sensitive personal data, such as one’s genome. Despite anonymisation efforts, it has repeatedly been shown that, with the right tools, individuals can often be re-identified from analysis outcomes. Much innovation is happening to alleviate such privacy concerns while still giving data scientists access to the data they need. Among the promising methods, speakers discussed techniques such as differential privacy, distorting data (k-anonymity), and methods where analysts never access the underlying microdata (e.g. DataSHIELD).
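
As an illustration of the first of these techniques, here is a minimal sketch of the Laplace mechanism, the classic way of achieving ε-differential privacy: noise drawn from a Laplace distribution, scaled to the query’s sensitivity divided by the privacy budget ε, is added to a query result before release. The example query and numbers below are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: release the number of study participants carrying
# a given genetic marker. A counting query has sensitivity 1, because
# adding or removing one person changes the count by at most 1.
true_count = 42      # hypothetical query result
epsilon = 0.5        # privacy budget: smaller means stronger privacy
print(f"Released count: {laplace_mechanism(true_count, 1, epsilon):.1f}")
```

Note that each released answer consumes some of the privacy budget, so repeated queries against the same data weaken the guarantee – one reason deployed systems track cumulative ε.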

Overall, perhaps the strongest point of this meeting was the broad variety of disciplines represented among the delegates and speakers, which allowed the complexity of the topic to be debated from many different angles. It underlined that machine learning, artificial intelligence, and their implications for society are not a challenge that any single discipline can tackle alone.

For further reading on this topic, the Royal Society has published some interesting materials on machine learning.


Blog post by Bobby Stuijfzand, Data Science Specialist at the Jean Golding Institute. Follow us on Twitter @JGIBristol @BobbyGlennS