AI Weekly: Defense Department proposes new guidelines for developing AI technologies


This week, the Defense Innovation Unit (DIU), the division of the United States Department of Defense (DoD) that awards contracts for prototypes of emerging technologies, released the first draft of a white paper outlining “responsible guidelines” that establish processes intended to “avoid consequences” in AI systems. The document, which includes worksheets for system planning, development, and deployment, is based on the DoD’s ethical principles adopted by the Secretary of Defense and was written in collaboration with researchers at Carnegie Mellon University’s Software Engineering Institute, according to the DIU.

“Unlike most ethical guidelines, [the guidelines] are very prescriptive and grounded in action,” a spokesperson for the DIU told VentureBeat via email. “Given DIU’s relationship with private sector companies, ethics will help shape the behavior of private companies and provide food for thought.”

Launched in March 2020, the DIU effort comes as corporate defense contracts, especially those involving AI technologies, come under increased scrutiny. When news emerged in 2018 that Google had contributed to Project Maven, a military AI project to develop surveillance systems, thousands of company employees protested.

For some AI and data analytics companies, like Oculus co-founder Palmer Luckey’s Anduril and Peter Thiel’s Palantir, military contracts have become the primary source of revenue. In October, Palantir won the bulk of an $823 million contract to supply data and analytics software to the U.S. military. And in July, Anduril said it was awarded a contract worth up to $99 million to supply the U.S. military with drones to counter hostile or unauthorized drones.

Machine learning, computer vision, and facial recognition providers, including TrueFace, Clearview AI, TwoSense, and AI.Reverie, also have contracts with various branches of the U.S. military. And in the case of Maven, Microsoft and Amazon, among others, have taken Google’s place.

AI Development Guide

The DIU guidelines recommend that companies begin by “appropriately” defining tasks, measures of success, and baselines, identifying stakeholders, and modeling potential harms. They also require developers to address the effects of flawed data, establish system audit plans, and “confirm that new data does not degrade system performance,” primarily through harms assessments and quality control steps designed to mitigate negative impacts.

The guidelines are unlikely to satisfy critics who argue that any guidance offered by the DoD is paradoxical. As MIT Tech Review points out, the guidelines say nothing about the use of autonomous weapons, which some ethicists and researchers, as well as regulators in countries like Belgium and Germany, have opposed.

But Bryce Goodman of the DIU, who co-authored the white paper, told MIT Tech Review that the guidelines aren’t meant to be a panacea. For example, they may not offer universally reliable ways to “fix” pitfalls such as biased data or ill-chosen algorithms, and they may not apply to systems proposed for national security use cases that have no path to responsible deployment.

Studies show that bias mitigation practices like those recommended in the white paper are not a panacea when it comes to ensuring accurate predictions from AI models. Bias in AI doesn’t come from datasets alone, either. Problem formulation, or how researchers adapt tasks to AI techniques, can also contribute, as can other human-led steps along the AI deployment pipeline, like the selection and preparation of datasets and architectural differences between models.

Either way, the work could change how the government develops AI if the DoD’s guidelines are adopted by other departments. NATO recently released an AI strategy, and the U.S. National Institute of Standards and Technology is working with universities and the private sector to develop AI standards. Meanwhile, Goodman told MIT Tech Review that he and his colleagues have already delivered the white paper to the National Oceanic and Atmospheric Administration, the Department of Transportation, and ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service.

The DIU says it is already rolling out the guidelines on a range of projects covering applications such as predictive health, underwater autonomy, predictive maintenance and supply chain analysis. “There is no other guideline, either within the DoD or, frankly, the United States government, that goes into that level of detail,” Goodman told MIT Tech Review.

For AI coverage, send news tips to Kyle Wiggers – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

