The career of Big Data World Frankfurt speaker Bruno Kramm has spanned gaming, music and politics. Ahead of his appearance at Messe Frankfurt on November 14, Techerati spoke to Bruno about his new organisation Digitalgrid, which focuses on the cultural and social effects of machine learning, artificial intelligence, and the comprehensive automation and digitisation of our society.
Do you remember your first encounter with AI?
My first contact with AI was in my childhood – very romantically, I have to admit – in the form of science fiction authors like Stanislaw Lem, Isaac Asimov's "Three Laws of Robotics" and, not to forget, the immense impression HAL 9000 made on me in 2001: A Space Odyssey.
My first computer was a Commodore PET 3016, which my father – a renowned professor of mathematics – brought home from the university when I was 13. He was deeply engaged in game theory and chaos theory, which had a significant impact on early AI research, and he conveyed that to me rather spectacularly. Later I attended his whole lecture series on chaos theory and its influence on game theory.
But as an artist who specialises in electronic music, I was intrigued early on by Google's TensorFlow technology and the interesting possibilities artificial intelligence creates for music composition and new forms of sound synthesis (for example, NSynth algorithms).
What are the biggest challenges AI brings to the table for the economy and society?
In this regard, the economy and society are often diametrically opposed. AI should be a common good per se, because machine learning, the basic prerequisite of AI, depends on our data to complete a learning process that we humans define in line with our ethical understanding.
But more often than not, the economy treats the proprietary possession of data and knowledge according to the traditional paradigm of production: products are made from raw material. This contradicts not only elementary transparency and traceability but also the rights over data and the desire for privacy protection. In Europe especially, GDPR has set some new thresholds.
Furthermore, in the hands of a few economic organisations, AI may be misused. Control requires transparency and distribution across a broad base of economic players, from mid-market firms to large industry. Self-commitment through responsible-AI practices, as at Google, or joint ethics committees involving Amazon, Google, IBM, Microsoft and Apple, are a start.
Civil society too often associates images of fear with AI, which stem far more from Hollywood dystopias than from startup labs. But the industry also needs the acceptance of society, which can only be achieved through knowledge, in order to apply the new AI-based technologies in the interest of the whole of mankind. Besides the ethical aspects, it is also about mitigating future social tensions caused by blazing-fast automation.
Concepts like "lifelong learning", open educational resources (OER) and the unconditional basic income (UBI) have to be promoted in a broad social discourse. Fundamentally, users and creators must redefine their responsibilities: the more responsibility we transfer to an AI that operates globally in public space, the clearer our idea of the implications on both sides must be. AI can only be as good as its developers and its users.
What are the most exciting opportunities opened up by AI?
Long-term, in space exploration. Mid-term, in traffic, healthcare and education – especially the latter, as there is a permanent lack of personnel. Even today, physicians often rely on diagnoses from machine-learning algorithms, e.g. for early detection of skin cancer, initial diagnoses, and evaluation of disease recurrence. In combination with mobile solutions such as smart scanning systems, many people can live symptom-free for much longer. Diagnostic errors, the third most common cause of death, can verifiably be avoided through the consultative application of AI-based systems.
In education, we will be facing much more dramatic disruptions. The vertical and horizontal permeability of educational systems as a basis for lifelong learning needs intelligent online systems to share knowledge and skills. People want to learn and they want it simple, modern and tailored to their needs.
A monopoly on education – however it may look – is no longer up to date and makes it harder for society to walk the road to digitisation. Knowledge is common property.
The reorganisation of infrastructure projects and traffic, as well as the disruptive shift in the energy sector towards a decentralised supply infrastructure, cannot be tackled without AI: the incoming data is much too complex and the contractual situation too fragmented and volatile. A possible approach might be some kind of blockchain-based AI, in which the authorisations for data usage are recorded in quasi-contractual constructs in an almost forgery-proof way.
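The idea of near-forgery-proof, quasi-contractual records of data-usage authorisations can be illustrated with a minimal sketch: a hash-linked chain of consent records, where each entry embeds the hash of its predecessor, so any later tampering is detectable. All names and fields here (grantor, grantee, purpose) are hypothetical illustrations, not part of any system Kramm describes.

```python
import hashlib
import json
import time

def make_record(prev_hash, grantor, grantee, purpose):
    """Create a tamper-evident data-usage authorisation record.

    Each record embeds the hash of the previous one, so altering any
    past entry invalidates every later link in the chain. Field names
    are purely illustrative.
    """
    body = {
        "prev_hash": prev_hash,
        "grantor": grantor,
        "grantee": grantee,
        "purpose": purpose,
        "timestamp": time.time(),
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check the prev_hash links."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Build a small chain of consent records
chain = [make_record("0" * 64, "patient-42", "clinic-ml", "skin-cancer screening")]
chain.append(make_record(chain[-1]["hash"], "patient-42", "research-lab", "model training"))
assert verify_chain(chain)
```

A real deployment would of course need distributed consensus and key-based signatures on top; this only shows the tamper-evidence property that makes such records "almost forgery-proof".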
We have only just become aware of AI's many implications – and the speed of development will accelerate massively in the coming years, because we often underestimate the effects arising from synergies. AI, AutoML, robotics, facial recognition and speech synthesis, all the way up to deepfakes, together create possibilities that are sometimes dystopian but also fantastic. Data collection, data processing and the resulting insights, as one of the essential pillars of the next phase of digitisation, will hardly get by without machine-learning-based processes.
Do business people fully understand the implications of AI?
They often ignore how much companies have to change internally, how rigorously they have to test their own business models for future viability, and whether those models might even become obsolete. Public agencies are still much too focused on returns. The great social potential of combining AI, IoT and blockchain is only rarely presented sensibly. What is lacking is not only a clear vision but also preparation of clients and society alike, to ensure that initial enthusiasm does not turn into complete rejection.
This possible rejection of AI is not discussed enough today. The recent past shows how quickly rejection can, via social media and echo chambers, turn into a massive wave of protest with the potential for rebellion. Industry, ministries and cultural institutions have to develop a long-term strategy to prevent early enthusiasm for technology from turning into fundamental rejection.
What is the role of technical and non-technical people in the debate?
There are two key aspects. Firstly, you can only reach widespread acceptance of AI through broad social discourse. That discourse is just the first step in preparing society for the coming paradigm, in which we say goodbye to conventional models of work and education.
Secondly, there is the synergy between AI and human culture. We can only expect to achieve a humanly reasonable AI in the distant future if we let machine-learning processes take part in all aspects of society. This is the only way to overcome the bias, blind spots and gaps of AI and, above all, to realise the emotional and psychological aspects of a gentle but intelligent AI.
The Google project PAIR (People + AI Research) gets to the heart of this. For example: "A one-sidedly conditioned lab rat will complete all the tasks it was trained for but will still be a snappy and atrophied being – in contrast to a household pet, which grows into a socially interacting friend and helper within the family." This might be a bit melodramatic, but at its core it shows exactly what Google is currently realising with its open AI API structure.
Anyone can participate in and use TensorFlow: a classic win-win situation. Human expertise, implementation, contextualisation, cultural sensitivity, transparency and traceability are key aspects that can only be realised in an AI by bringing in extensive social expertise and individual experience. Institutional software development is slowly declining, whereas open-source-based prosumer models are growing in importance. The era of the solitary coder is over.
Bruno Kramm is chairing a debate on ethical AI at Big Data World Frankfurt on November 14, at Messe Frankfurt. To see Bruno and many more AI and data analytics experts speak, claim your free ticket to this year’s show now.