BCS Turing Talk Belfast 2019


26th February 2019


This article is a summary of an article hosted elsewhere. Read the original article here.


Summary

Event Details

Organiser: BCS
Date: February 21, 2019


Following on from the last Turing Talk the ACM Student Chapter attended, this talk focused on bias in Machine Learning, which can go so far as to produce false or highly unreliable results.

Insight Talk by Austin Tanney

To start, Austin Tanney, Head of Artificial Intelligence at Kainos among other AI-related roles, gave a brief history of Artificial Intelligence, dispelling misconceptions the audience might have had.

He pointed out that AI is not synonymous with robots, which are machines that perform the task they have been programmed to do, and gave insight into how everyone who owns a smartphone is most likely accessing some form of Artificial Intelligence.

From speech recognition, as in Alexa, Siri or Google Assistant, through chatbots to expert systems and combinations thereof, Artificial Intelligence is part of most of our daily lives.

Turing Talk by Dr. Krishna Gummadi

As Artificial Intelligence is already part of everyday life, it comes as no surprise that Machine Learning and Artificial Intelligence are used in more ways than we anticipate.

It is used in crime prevention, for example, offering insight into where police forces should be strengthened next, among other areas.

And for good reason, too. Studies have shown that judges can be strongly influenced by fatigue and hunger; human bias is a serious problem that cannot be ignored.

But can we trust our machine-learned agents to be as unbiased as we hope?

Can we just set out on our way without considering the bias that ML-agents could bring?

Clearly, once these questions are raised, we cannot.

Thinking about the issue, it should come as no surprise either that ML agents trained on biased data can only be biased themselves. How could they be different? How could they be beyond human error when it is humans who design them?

To combat this, not only do we need to make sure that our data is as clean and unbiased as possible; we also need to make sure the learned solution is not optimised for one group alone, but is instead marginally less optimal overall while treating all groups fairly.
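As a minimal sketch of how such bias can be measured in practice (not part of the talk itself; the group names and predictions below are purely illustrative), one common fairness check compares a classifier's positive-prediction rate across groups, a criterion known as demographic parity:

```python
# A minimal sketch of a demographic parity check: compare the rate of
# positive predictions a model makes for each group. A large gap between
# groups is one signal that the model may have learned a bias.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical predictions from a model, split by group.
preds = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 75% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 25% positive
}

gap = demographic_parity_gap(preds)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 for this data
```

A gap of zero would mean both groups receive positive predictions at the same rate; a constrained training procedure can trade a small amount of overall accuracy for a smaller gap.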


Janina Editor

Master's graduate in Creative Digital Media, Tutor in the IT Learning Centre and Advisor for the ACM Student Chapter at Dundalk IT.
