The UK's first public inquiry into the development and use of artificial intelligence (AI) has said the law governing its effects "is not currently clear".
The House of Lords Select Committee on Artificial Intelligence has asked the Law Commission to investigate whether UK law is "sufficient" when systems malfunction or cause harm to users.
The recommendation comes as part of a report by the 13-member committee on the "economic, ethical and social implications of advances in artificial intelligence".
The report proposes a national and international "AI code" based on five principles to ensure the country becomes "a world leader" in the industry's application of machine learning. The principles are focused on fairness, education and avoidance of wrongdoing.
Committee chairman Lord Clement-Jones said: "AI is not without its risks… an ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse."
Members of the committee visited DeepMind, Google's artificial intelligence arm, at its headquarters in King's Cross in September to better understand how businesses approach AI. Last year, the Royal Free NHS Trust illegally handed over 1.6 million patients' personal data to the company.
Despite this, the report recommends that the NHS "should capitalise on" advances in AI for healthcare, which it labels "impressive".
Some witnesses giving evidence to the committee raised concerns about systems reflecting "historical patterns of prejudice", such as an algorithm trained to sort job applicants learning the biases of previous human decisions.
Representatives of the civil liberties campaign group Big Brother Watch wrote to the committee to say that Durham Police's plans to use AI for custodial decisions are a "very worrying trend, particularly when the technology is being trialled when its abilities are far from accurate".
The committee noted that, nearly three years after the problem was first identified, Google has still not fixed its visual identification algorithms, which could not distinguish between gorillas and black people.
The report also calls for the Competition and Markets Authority to investigate "the monopolisation of data" by big technology companies operating in the UK, and advocates teaching children the "ethical design and use of AI" as "an integral part" of the national curriculum.