“Don't be evil.” In 2000, this Google mantra made us smile, smirk, and print a T-shirt or two.

These were headier days, when our love of tech was still in the early bloom of romance.

At the time, the three-word maxim was so important to Google that it entered the company's official code of conduct (and was even, rumour has it, the wifi password for shuttles ferrying staff to its Silicon Valley HQ).


However, in May this year “Don't be evil” was quietly removed from Google's official messaging. The zeitgeist had altered the significance of these three simple words.

This could, of course, have been a product of an evolving corporate culture. Perhaps more likely, however, it was a reaction to the global narrative of growing mistrust in technology's behemoths. At the same time as “Don't be evil” was scrapped, the EU's General Data Protection Regulation was coming into force, and a chief executive from another tech giant was being hauled in front of Congress and accused of disrupting democracy.

Such a shift in attitude towards technology was echoed this month by Stanford University (alma mater of the founders of LinkedIn, Google, Netflix, and others), when an internal review officially recommended a new educational focus: “ethics, society, and technology”.

Stanford's undertaking is certainly commendable. It's a wise move that will cause a timely ripple effect. But it raises the question: why has it taken our industry so long to fully adopt a stance like this?

At Hyper Island, the creative business school where I work, we have spent 20 years preparing students for the modern, digitally immersed economy.

In all discussions on innovation, collaboration, and transformation, ethics is a central component of the learning process. Our students are required to think critically about the ethics of their work from the beginning – after all, ethical judgment has to be trained like a muscle. As time goes by, the feedback we receive only confirms that ethics must be a priority in teaching.

Ethics needs to be the starting-point when training people for the digital environment.

The ethical implications that accompany digital transformation are endless – technologies are evolving rapidly into previously unimaginable fields. We need core ethical principles – such as care for humans and the value of consent – to be the space within which the likes of artificial intelligence can grow.

To maintain control, designers and developers need constant sense-checks in place, and must consider ethics from the outset to be as important as the technology itself, rather than treating it as an afterthought.

Take Mark Zuckerberg's case, for example. Very few people believe that the Facebook founder advocated the spread of false information via his platform. Instead, we have a man who lost control of his product and failed to see problems looming in the distance. Had Facebook been built on a bedrock of ethics from the start, this might have been a different story.

Failing to equip the next generation of visionaries (and consumers, for that matter) with the tools to make balanced ethical decisions means the technology industry could lose control of itself. This situation is in nobody's interests.

If the tech industry cannot demonstrate a foundational focus on ethics – more than just picking or dropping a slogan or publishing an internal review – it will not be trusted to police itself. Without this trust, the industry could well face the prospect of overregulation. If this happened, Google couldn't be evil, even if it wanted to.

So if tech firms – from startups to behemoths – want to keep control, they need to show they are getting the ethics right, before it's too late.

