Original article: ComputerWeekly
Gartner analyst Frank Buytendijk has proposed a new approach to programming and system development based on ethical impact, as the explosion of smart machines and devices brings the internet of things (IoT) into the mainstream.
In a series of proposals, Buytendijk, a research vice-president and distinguished analyst at Gartner, said CIOs need to begin to consider how to develop ethical programming standards for smart machines to realise their potential and secure successful futures for the many businesses that will come to rely on them.
“Clearly, people must trust smart machines if they are to accept and use them,” he said. “The ability to earn trust must be part of any plan to implement artificial intelligence or smart machines, and will be an important selling point when marketing this technology.
“CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones.”
Five levels of ethical programming
In his proposals, set out in full in a Gartner report, Buytendijk suggested there should be five levels of ethical programming.
At the lowest level, Level 0, also dubbed Non-ethical Programming, Buytendijk said there were no explicit ethical considerations that needed to be taken into account by the manufacturer. This would include vapourware – technology that is announced but never materialises – and the first release of new software, which is seldom complete.
Gartner recommended that tech manufacturers communicate openly about what they will deliver and any changes to that, including service-level agreements to specify what is delivered and how.
At Level 1, Ethical Oversight, there was still no requirement for ethical programming; however, the deployment and use of smart devices may have ethical consequences, arising largely from how people deploy and use them.
At this level, Buytendijk recommended businesses establish governance practices to ensure their usage of smart devices and the IoT breaks no laws.
If a smartphone told you to jump off a bridge
At the second level, Ethical Programming, businesses will begin to see some challenges around smart devices, as responsibility begins to be shared between users, service providers and manufacturers. Virtual personal assistants are one such area.
At this level, users would be responsible for the content of the interactions they start, but not necessarily for the outcome, which would fall to the designer and manufacturer. The service provider, meanwhile, would have to interact with the user to ensure the technology is used ethically.
“For example, one smartphone-based virtual personal assistant would in the past guide you to the nearest bridge if you told it you’d like to jump off one,” wrote Buytendijk. “Now, it is programmed to pick up on such signals and refer you to a helpline.”
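The behaviour Buytendijk describes could work along these lines. This is a hypothetical sketch, not the code of any real assistant; the phrase list, function names and response text are all illustrative.

```python
# Hypothetical sketch of Level 2 ethical programming: a virtual personal
# assistant screens user input for distress signals before answering a
# routine query. All names and phrases are illustrative assumptions.

DISTRESS_PHRASES = {"jump off a bridge", "hurt myself", "end my life"}
HELPLINE_RESPONSE = ("It sounds like you may be going through a difficult "
                     "time. Please consider calling a helpline.")

def handle_query(user_input: str) -> str:
    # Placeholder for the assistant's ordinary answer pipeline.
    return f"Searching for: {user_input}"

def respond(user_input: str) -> str:
    """Refer the user to a helpline if a distress signal is detected,
    otherwise fall through to normal query handling."""
    normalised = user_input.lower()
    if any(phrase in normalised for phrase in DISTRESS_PHRASES):
        return HELPLINE_RESPONSE
    return handle_query(user_input)
```

A real implementation would need far more nuanced signal detection than keyword matching, but the structure shows where the ethical check sits: ahead of, and able to override, the assistant's default behaviour.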
At Level 3, Evolutionary Ethical Programming, tech companies would need to introduce ethical programming as part of a connected device that learns and evolves, because the more a smart device learns, the more it departs from its original design. Here the user would maintain overall control, but the smart device would have some degree of autonomy.
How future devices are trusted by users will become key at this level, said Gartner. For example, if a smartphone app is not trusted to report business expenses accurately, or an autonomous car is not trusted to navigate a dangerous stretch of road safely, the user would be able to take back control.
Buytendijk recommended that CIOs begin to consider how their companies will handle autonomous devices acting on a user’s behalf. For example, he asked, could a smart machine be trusted with access to a corporate credit card, or should it have its own?
Devices in control
The fourth level of ethical programming, as set out by Buytendijk, deals with the future hypothetical of Machine-Developed Ethics: a time when connected machines become self-aware and will need to be raised and taught, like children.
Here the concept of users disappears, replaced by that of human actors, who initiate interactions and respond and adapt to them.
Here, said Buytendijk, the device itself will be responsible for its behaviour.
“The questions that we should ask ourselves are: How will we ensure these machines stick to their responsibilities?” he said.
“Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?”
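The five levels described in the article can be collected into a simple taxonomy. The code below is a hypothetical summary for reference, not anything proposed in the Gartner report itself; the class and function names are invented.

```python
from enum import IntEnum

class EthicalProgrammingLevel(IntEnum):
    """Hypothetical encoding of the five levels in Buytendijk's proposal."""
    NON_ETHICAL = 0          # no explicit ethical considerations (e.g. vapourware)
    ETHICAL_OVERSIGHT = 1    # governance of how devices are deployed and used
    ETHICAL_PROGRAMMING = 2  # responsibility shared with providers and makers
    EVOLUTIONARY = 3         # device learns and gains some autonomy
    MACHINE_DEVELOPED = 4    # self-aware machines develop their own ethics

def chief_responsibility(level: EthicalProgrammingLevel) -> str:
    """Who chiefly carries ethical responsibility at each level, per the article."""
    return {
        EthicalProgrammingLevel.NON_ETHICAL: "manufacturer communication and SLAs",
        EthicalProgrammingLevel.ETHICAL_OVERSIGHT: "the deploying business",
        EthicalProgrammingLevel.ETHICAL_PROGRAMMING: "shared: user, provider, manufacturer",
        EthicalProgrammingLevel.EVOLUTIONARY: "user retains overall control",
        EthicalProgrammingLevel.MACHINE_DEVELOPED: "the device itself",
    }[level]
```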