Now, the company's boss, Sundar Pichai, has shared Google's principles on AI, saying the tech behemoth felt "a deep responsibility to get this right". He has pledged that AI at the company will be built and tested for safety, be accountable to people, and avoid creating – or reinforcing – unfair bias.
Pichai also said Google will not design or use AI for weapons or for other tech whose principal purpose is to cause injury to people, and won't use it in tech that gathers information for surveillance "violating internationally accepted norms".
In a blog post published yesterday evening, Pichai wrote: "At its heart, AI is computer programming that learns and adapts. It can't solve every problem, but its potential to improve our lives is profound."
Google will assess AI applications against a series of objectives including that they "be socially beneficial" and "incorporate privacy design principles".
|The seven Google AI principles|
1. Be socially beneficial
2. Avoid creating or reinforcing unfair bias
3. Be built and tested for safety
4. Be accountable to people
5. Incorporate privacy design principles
6. Uphold high standards of scientific excellence
7. Be made available for uses that accord with these principles
Pichai said Google uses AI to make its products more useful, from spam-free email to a digital assistant you can speak to.
Beyond that, the company is also using it "to help people tackle urgent problems", citing examples ranging from AI-powered sensors that could predict the risk of wildfires, to farmers using it to monitor the health of their herds.
Today we're sharing our AI principles and practices. How AI is developed and used will have a significant impact on society for many years to come. We feel a deep responsibility to get this right. https://t.co/TCatoYHN2m
— Sundar Pichai (@sundarpichai) June 7, 2018
|Where Google has pledged not to pursue AI|