Google and Microsoft workers call for AI regulation

AI Now has published its new report, the AI Now Report, which outlines 10 recommendations for ethical AI use and warns against AI being used for "affect recognition"

7 December

AI Now, a group of technology researchers that includes employees from Google and Microsoft, has called for the regulation of AI and facial recognition technology to protect the public interest.

The group's concerns were outlined in the AI Now Report, which made 10 recommendations to ensure that the future of machine learning is transparent, ethical and fail-safe.

The report warned about the dangers of the technology being used in fields such as policing, finance and education, and about the unintended consequences of its use. AI Now suggested that independent bodies should audit the AI services that governments use, and called for government vendors to waive claims to trade secrecy.

In particular, the AI Now Report outlined concerns around government bodies and social services using AI technologies that claim to achieve "affect recognition", that is, AI that can read people's emotions and mental state. "These tools are very suspect and based on faulty science," said Kate Crawford, a co-founder of the group and a Microsoft Research employee. "You cannot have black box systems in core social services."

AI Now's 10 recommendations:

  • Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  • Facial recognition and affect recognition need stringent regulation to protect the public interest.
  • The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  • AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  • Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistle-blowers.
  • Consumer protection agencies should apply "truth-in-advertising" laws to AI products and services.
  • Technology companies must go beyond the "pipeline model" and commit to addressing the practices of exclusion and discrimination in their workplaces.
  • Fairness, accountability, and transparency in AI require a detailed account of the "full stack supply chain".
  • More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  • University AI programs should expand beyond computer science and engineering disciplines.

Read next:

How IT departments should evolve to deal with cyber-attacks
