It has its critics – including a certain Elon Musk – but artificial intelligence is being explored by many of the world's biggest tech giants as a key focus for the future. Including Google.

Now, the company's boss, Sundar Pichai, has shared Google's principles on AI, saying the tech behemoth felt "a deep responsibility to get this right". He has pledged that AI at the company will be built and tested for safety, be accountable to people, and avoid creating – or reinforcing – unfair bias.

Pichai also said Google will not design or use AI for weapons or other tech whose principal purpose is to cause injury to people, and won't use it in tech that gathers information for surveillance "violating internationally accepted norms".

Read more: Google backtracks on controversial military drone project Maven

In a blog posted yesterday evening, Pichai wrote: "At its heart, AI is computer programming that learns and adapts. It can't solve every problem, but its potential to improve our lives is profound."

Google will assess AI applications against a series of objectives including that they "be socially beneficial" and "incorporate privacy design principles".

The seven Google AI principles
1. Be socially beneficial

2. Avoid creating or reinforcing unfair bias

3. Be built and tested for safety

4. Be accountable to people

5. Incorporate privacy design principles

6. Uphold high standards of scientific excellence

7. Be made available for uses that accord with these principles

Pichai said Google uses AI to make products more useful, from spam-free email to a digital assistant you can speak to.

Beyond that, the company is also using it "to help people tackle urgent problems", citing examples ranging from AI-powered sensors that could predict the risk of wildfires to farmers using it to monitor the health of their herds.

Today we're sharing our AI principles and practices. How AI is developed and used will have a significant impact on society for many years to come. We feel a deep responsibility to get this right.

— Sundar Pichai (@sundarpichai) June 7, 2018

Where Google has pledged not to pursue AI
  1. Technologies that cause or are likely to cause overall harm

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people

  3. Technologies that gather or use information for surveillance violating internationally accepted norms

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights

Read more: Dating site Match launches Lara, an AI chatbot inside Google Assistant
