View of the Earth from space at night with lights and connections of cities. (World map courtesy of NASA: https://visibleearth.nasa.gov/view.php?id=55167)
Contracting officials and agency leaders are key to deploying ethical AI processes.
More than 50 years ago, in the sci-fi movie 2001: A Space Odyssey, the crew of the spacecraft included the HAL 9000 computer, which became a malicious artificial intelligence (AI) system motivated by its own survival, intent on killing the human crew members who mistrusted it.
This cautionary tale exemplifies the potential for unintended, albeit extreme, AI behavior when deploying AI solutions to augment human capabilities. How can governments prevent AI from acting contrary to human intentions and expectations? How can procurement processes and agreements be used to mitigate potentially undesirable behavior?
Government procurement processes are beginning to reduce this risk through new, specialized principles and methods that encourage responsible use of AI. There is much to learn and do on this mission-critical journey.
Academia and the private sector have led the way in developing AI technologies. Widespread adoption of AI by government will enable agencies to pursue better outcomes at scale.
Using AI to improve human performance could also enable efficiencies at levels once unattainable and orders of magnitude better than the status quo.
But harnessing the power of private sector AI technology for good governance can create challenges for public procurement, as Cary Coglianese and Erik Lampman have noted in their work on contracting for AI governance. They point out that the risks of AI must be weighed against its potentially groundbreaking benefits.
Coglianese, in an article with Alicia Lai, goes on to make the point that there is no perfect, unbiased system against which to compare AI. Human designers, decision-makers, and government officials bring lifelong, often unexamined cumulative biases into their work. These prejudices will not be eliminated by AI so long as humans remain in the process – as they will and should. Beyond this remaining human discretion, human biases can be hidden from view when algorithmic bias is introduced through choices in training data and embedded in algorithms through repetitive training.
For the government to reap the benefits of AI without causing harm, the application of the technology must be improved. There are thorny issues of cybersecurity, equity, diversity, inclusion, and adaptation to a life-altering climate crisis. But innovation in applying the technology to such important use cases will be discouraged if even more regulations are imposed on overstretched government contractors and entrepreneurs.
The federal acquisition system is the expression of a vast library of rules and regulations interpreted by each agency and its warranted professional contracting officers. The Federal Acquisition Regulation (FAR) and its derivatives are the sheet music for a bureaucratic orchestra and choir in which the contracting officer is the conductor. The high degree of complexity in public procurement regulation is designed to meet the need for public confidence in the fairness of the system, and it has largely met that objective – with hidden and hard-to-measure opportunity costs in lost performance.
Part 1 of the FAR states that contracting officers should exercise good business judgment on behalf of taxpayers. In practice, however, the discretion granted by Part 1 is often overwhelmed by cultural norms of adherence to the voluminous regulations.
Many contracting professionals recognize that these libraries of regulations create barriers for companies at the forefront of technology invention and adoption seeking to do business with government. This recognition has sparked a wave of procurement innovation among contracting professionals who are capitalizing on new market opportunities in the changing landscape.
The Office of Federal Procurement Policy, through the Federal Acquisition Institute, worked with my ACT-IAC team of volunteers to create an easy-to-use knowledge base of such innovations – the Periodic Table of Acquisition Innovations – promoting the adoption of successful acquisition techniques in government and industry. One of those innovations is the Pilot IRS program. It cleverly links the authorities of FAR Parts 12 and 13 to enable the Internal Revenue Service (IRS) to buy like a venture capitalist. The current limit on such contracts is $7.5 million, which the IRS is seeking to raise.
The United States Congress, sensing a similar need, expanded the 60-year-old other transaction authority (OTA), designed to set aside FAR rules and encourage experimentation with new technologies such as AI. The use of OTAs has risen in recent years, concentrated in the U.S. Department of Defense.
These authorities have been instrumental in advancing the art of AI acquisition by the Department of Defense's Joint Artificial Intelligence Center (JAIC). Exercising these authorities demands greater expertise from contracting officers, who must selectively apply the business acumen – but not necessarily the rules – embedded in the FAR as they craft OTAs.
To its credit, the JAIC has deliberately created a "golf course" of AI contracting called Tradewind, where the tees, pins, traps, and fairways combine the business acumen of the FAR with the relative freedom of OTAs. Tradewind is available for use by the entire federal government to enable better and faster AI acquisition.
Responsible AI (RAI) comprises a set of new, AI-specific principles from the JAIC's enterprise AI initiative. The Defense Department's RAI effort begins with top department leadership.
The new Chief Digital and AI Office (CDAO) is the hub for executing the Defense Department's AI strategy. RAI principles guide the development and accelerate the adoption of AI through innovative acquisition approaches on a new acquisition pathway built on OTAs, related authorities, and a contract vehicle infrastructure – such as the test and evaluation support described by the Defense Innovation Unit. Contracts based on challenge statements can be executed in 30 to 60 days to quickly shape and take advantage of emerging techniques.
Many essential civilian agency missions, by contrast, are fundamentally about resource allocation.
For example, missions of the United States Department of Health and Human Services must guard against unintended socioeconomic biases that are unlawful. In guiding procurement teams to avoid such biases, the National Institute of Standards and Technology (NIST) is remarkably ambitious in addressing data, testing and evaluation, and human factors. NIST's analysis of future standards encompasses the legal, business, and technical barriers to preventing and uncovering socioeconomic biases in deploying AI solutions.
Echoing the JAIC, NIST argues that traditional testing approaches embedded in contracts are not enough to eliminate socioeconomic biases from AI solutions. It recognizes that the explainability challenge posed by powerful yet opaque machine learning techniques should be addressed in contracts.
Reducing bias requires deep and transparent insight into the data used to train the solutions. NIST presents an approach to testing sociotechnical AI solutions that procurement teams should consider. NIST's work is a first systematic map of this uncharted territory. At stake is public confidence in AI.
Procurement officials are blazing new trails in the brave new world of acquiring AI for government use. They simultaneously drive industry engagement, ensure AI solutions are accountable and free from unwanted bias, and augment human effort in executing missions. Ethical use of AI technology starts with procurement. Contracting officers conduct the symphony of evolving federal AI procurement.
This essay is part of a nine-part series entitled Artificial Intelligence and Procurement.