Artificial intelligence is the topic of the hour. But why is everyone talking about machine learning, and will AI become the next big innovation in industry and business?

In an interview with Oliver Kramer, Professor of Computational Intelligence at the University of Oldenburg, we discover what lies behind the hype. We discuss technological progress in Germany and the wider world, the role of humans and moral questions. Of course, we also want to get to the bottom of ‘end times’ predictions some people have already started to make – are robots really positioned to rebel against and threaten humanity?

 

Hello, Oliver. Thank you very much for making time for this interview. To begin with, perhaps you could just introduce yourself.

Yes, sure. My name is Oliver Kramer. I hold the professorship for Computational Intelligence at the University of Oldenburg and, at the same time, I’m the director of the Department of Computer Science. I am now 40 years old and took up a junior professorship in the same field six years ago. I’m interested in everything that is generally understood by the term ‘AI’, or artificial intelligence, including neural networks, machine learning and optimisation with genetic algorithms – this last point is a particular specialisation of mine.

We have a lot of fields of AI application in Oldenburg, such as energy, health and autonomous driving. I look after a small group of five doctoral students, who conduct research with me.

The whole topic is surrounded by hype and is becoming more and more popular in many industries and in business – not just in research – which we’re very pleased to see.

 

So are there a lot of people working in AI?

Currently, the community is growing as the importance of the topic increases. But the research community has always been very, very big. In general, every faculty of computer science has had one or two people working in AI and now there is even more demand.

There are diverse digitalisation strategies. For example, Hamburg already has one and Lower Saxony is currently developing a new one. New professorships will soon be created in the areas of big data, data science, machine learning and deep learning – these are all, ultimately, overlapping areas.

 

Simply speaking, what exactly is artificial intelligence?

It is in fact a paradigm that distinguishes itself from the conventional approach of computer science. Normally, when a programmer encounters a problem, they break it down into sub-problems to solve. AI, on the other hand, holds many examples of ready-made solutions and uses them to learn how to solve unknown problems. Furthermore, you can equip a machine with the capacity to independently generate new solutions.

“In computer science, when a programmer encounters a problem, they break it down into sub-problems to solve. AI, on the other hand, holds many examples of ready-made solutions and uses them to learn how to solve unknown problems.”

The whole thing works through a well-established setting known as ‘supervised learning’. You train the machine by showing it what a solution looks like. Its neural networks then get to work: they have to learn the task from the data they are presented with in order to solve the problem.
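(Editor’s note: for readers who would like to see what this looks like in practice, here is a minimal, hedged sketch of supervised learning in Python using scikit-learn – an illustrative toy example, not code from the interviewee’s research. A small neural network is shown labelled examples of handwritten digits and then has to assign the right label to digits it has never seen.)

```python
# Supervised learning in miniature: show the machine labelled examples,
# let a small neural network learn from them, then test it on unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # images of digits and their labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                    # "training by showing it the solutions"

print("accuracy on unseen digits:", model.score(X_test, y_test))
```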

 

How much technological progress has been made in this area so far?

There are several stages of progress that can be differentiated from each other. Supervised learning is the area currently used most successfully in industry. I would say that about 90 per cent of the AI currently making money out there is in the area of supervised learning.

“Supervised learning is the area currently used most successfully in industry. About 90 per cent of the AI currently making money out there is in the area of supervised learning.”

But there are other levels. So, in supervised learning, you show the machine examples in the form of data and, from these examples, it learns to construct a solution. But it can then also generalise and interpolate i.e. offer reasonable solutions to new tasks. The next step – or perhaps the one after that – is for the machine to think independently about things and make proposals for solutions itself.

Progressing logically, the next step is transfer learning. Here, a machine takes what it has learned in one area – for example, face recognition – and applies it in a completely new one, such as medical image processing (perhaps for the purpose of early tumour detection). So, it takes existing networks from face recognition and, with only minor adjustments, enters a new domain.
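(Editor’s note: a hedged sketch of the transfer-learning idea described above, in Python with Keras. A network pretrained on one image domain – ImageNet here, standing in for face recognition – is reused for a new task, with only a small new ‘head’ trained on top; the dataset names at the end are hypothetical placeholders.)

```python
import tensorflow as tf

# Reuse a network pretrained on one domain and freeze its learned features.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

# Add a small new head for the new domain, e.g. tumour / no tumour.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(new_domain_images, new_domain_labels, epochs=5)   # hypothetical data
```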

Then there is the stage of reinforcement learning, which means learning from reward. And that goes further in the direction of a machine really considering things independently through a system of rewards and punishments. Such a machine could then produce actual intelligence. But we’re still quite far from that.
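(Editor’s note: reinforcement learning – ‘learning from reward’ – can be illustrated with a tiny, self-contained toy. In this hedged Python sketch, an agent starts at one end of a short corridor and receives a reward only for reaching the other end; tabular Q-learning lets the reward signal alone teach it to walk in the right direction. Again, this is an illustration, not the interviewee’s code.)

```python
import random

N_STATES, ACTIONS = 6, [-1, +1]          # corridor cells; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update: nudge the value estimate towards reward + future value.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# The learned policy: the best action in every cell should be +1 (walk right).
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```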

Currently, we’re at a stage that is often summed up in AI research like this: anything that a normal person, with normal cognitive abilities, could solve in about a second, a machine can also solve. For example, the recognition of images: ‘What am I seeing? Is this a dangerous situation? Do I see a car in front of me? Should I turn right or left? Can I see a tumour on this medical image?’ These are all things that both experts and ordinary people can process in a second – and this is precisely the cognitive performance machines deliver today.

With this ability, you can already solve a great quantity of problems. And this is the reason it is used in industry… and in research.

 

Since you’ve just talked about levels of development, what do you think about how our lives might change over the next 50 years? Or, might they not change as much as we ‘fear’?

Firms will change a lot, because data can already be used to expand into completely new fields of business. So, data is in fact the new gold. A lot will happen in the next few years.

This is similar to the revolution of the internet. There, too, completely new fields of business opened up. Companies that were previously retailers did not just set up a web shop: they changed their whole structure. Look at the example of Amazon, which created a whole new business model that went further than classic online retail. Something similar will happen with data and AI.

And in 50 years? Machines will accompany us much more in everyday life and hopefully make our lives simpler.

 

Unfortunately, alongside the possibility of machines simplifying our lives, there’s also talk of AI’s potential danger and the ‘end’ of humanity. What do you think of these predictions?

I hope, first and foremost, that the good aspects of AI come into play; for example, that my eating habits are optimised. Or that diseases are detected early, i.e. that genetic risk factors for developing diseases are uncovered earlier. Or that my car will be safer because I’m accompanied while driving, or because driving is completely autonomous. These are perhaps not the most exciting or dramatic visions but, when a child or a deer runs out in front of a car and it brakes automatically, that’s ‘AI for good’, so to speak. Beyond that, I can imagine a huge amount of further potential.

If we’re talking about threats coming from AI, that’s still a distant scenario.

“If we’re talking about threats coming from AI, that’s still a distant scenario.”

But, of course, there are also moral obligations – including in research – to investigate those aspects of AI that concern morality. Because, at the moment, the main factor is actually the human being and the question, ‘What do people choose to use AI for?’ If someone uses AI to determine sexual orientation or political leaning, then there are moral implications. Ultimately, though, it comes down to the human. Whatever might be found within data, data is neutral until the point of collection. It is the person who does something good or bad with it. The AI is not ‘evil’; the person who uses it as a tool might be.

“Whatever might be found within data, data is neutral until the point of collection. It is the person who does something good or bad with it.”

The idea that we might suddenly find ourselves in a ‘Terminator’ scenario in which AI independently evolves and rises up against humanity – that, I find unrealistic.

 

Nevertheless, leading figures (notably Elon Musk and Bill Gates) have also discussed ethical questions in forums and organisations such as OpenAI. Are moral issues the greatest concerns – and the greatest challenges – we need to face?

Yes, exactly. It’s about questions like, how do you teach morals to a machine? How do you make sure the programmer is sensitised to such questions? How do you create the awareness that morality plays a major role and that you have to deal with data sensitively and responsibly?

Overall, the best way to ensure that AI is used for good causes is to make it a common asset. You have to make AI public; even the algorithms should be publicly available to researchers, developers and others. But, on the whole, you have to let AI become common property and inform regular citizens about what is possible and what algorithmically lies behind AI.

What is a neural network? This could in fact be learnt in primary school or at least in secondary school. It should become part of the basics, so that the public discourse is led away from the impression that crazy scientists are sitting in ivory towers hacking machines that will ultimately threaten humanity.

“What is a neural network? This could in fact be learnt in primary school or at least in secondary school.”

 

And this is to some extent the current situation. At least I suspect that many feel that a handful of smart scientists are developing something…

Platforms and organisations such as OpenAI – and even more general forums and magazines such as Spiegel Online – are bringing AI topics into focus. The media is increasingly picking up on current topics and issues and bringing them to the attention of society. I think that’s a very important process.

I’m noticing again and again how such themes are being recognised by society, because people stop me and say, “Hey! You’re doing something in AI. I have a question. What does it look like? How does it work?” The field seems to be entering the public’s awareness as it’s presented more often in the media.

Of course, there are some famous scientists and others in the public eye, such as Elon Musk and Putin, who have made sceptical statements about AI – World War Three will be decided by AI, or some such (editor’s note: a quote from Putin). These are daring predictions. But, again, they relate to the question of what a person does with this data. What do they choose to use AI for? After all, it’s people who develop AI.

 

What role, then, does Germany play as a research and development location in the fields of AI and deep learning?

That’s a good question! So the US, or specifically Silicon Valley, is a pioneer with companies such as Google, Facebook, Uber and more. These companies create a great environment for researchers. They not only provide a creative work environment, but also pay high salaries and actively recruit people, sometimes directly from universities. That’s why so much is happening in Silicon Valley.

Germany has a lot of potential in this area. We have a fantastic education system and great computer-science faculties. Current developments also indicate that policymakers have recognised that digitalisation is an important topic. I don’t think we’ve missed the boat, nor do I think we have to miss it. We’re a country with a high standard of education, and access to knowledge is open to everyone – it has never been as easy to gain as it is today.

“Germany has a lot of potential in this area. We have a fantastic education system and great computer-science faculties.”

It isn’t the case that Google, Facebook and co. never publish or disclose anything. In fact, they sometimes even publish their algorithms, or we (researchers) develop algorithms in collaboration with the companies.

I’m travelling to Berkeley again next week, so I’ll be in the San Francisco Bay Area – in Silicon Valley. There, I can see what is currently happening and what my colleagues are up to. And the great thing is, the people working there are still just human beings. They are like you and me, so to speak. They’re gifted, of course, but in the end they are people with strengths and weaknesses. You can say the same about people in Germany.

“The people in Silicon Valley are still just human beings. They are like you and me, so to speak.”
