Members of the public sector, private sector and academia gathered for the second AI Policy Forum Symposium last month to examine critical directions and questions posed by artificial intelligence in our economies and societies.
The virtual event, organized by the AI Policy Forum (AIPF) — a commitment by the MIT Schwarzman College of Computing to bridge the principles of high-level AI policy with the practices and trade-offs of governance — brought together a range of leading panelists to delve into four cross-cutting topics: law, auditing, healthcare, and mobility.
Over the past year, there have been substantial changes in the regulatory and policy landscape around AI in several countries, most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which took effect in January 2021, provides a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China has recently advanced several new regulations of its own.
Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties, as opposed to voluntary guidelines?
Jonathan Zittrain, a professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet & Society, says the self-regulatory approach taken during the expansion of the internet had its limitations, as companies struggled to balance their own interests with those of their industry and the public.
“One lesson could be that it’s a good idea to let the representative government play an active role early on,” he says. “It’s just that they are challenged by the fact that there seem to be two phases in this environment of regulation. One, too early to say, and two, too late to do anything about it. In AI, I think a lot of people would say we’re still in the ‘too early to tell’ stage, but since there’s no middle zone before it’s too late, it may still require some regulation.”
One theme that came up repeatedly during the first panel on AI laws — a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum — was the notion of trust. “If you consistently told me the truth, I would say you are an honest person. If AI could offer something similar, something that I can say is consistent and the same, then I would say it is trusted AI,” said Bitange Ndemo, a professor of entrepreneurship at the University of Nairobi and a former permanent secretary of the Kenyan Ministry of Information and Communications.
Eva Kaili, Vice-President of the European Parliament, adds: “In Europe you know that when you use something, such as medicines, it has been monitored. You know you can rely on it. You know the controls are there. We also need to achieve that with AI.” Kaili further emphasizes that building trust in AI systems will not only lead to people using more applications in a secure manner, but that AI itself will reap the benefits, since greater use will generate greater amounts of data.
The rapidly increasing applicability of AI across fields has created a need to address both the opportunities and the challenges of emerging technologies, including their impact on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In healthcare, for example, new machine learning techniques have shown tremendous promise for improving quality and efficiency, but questions of equity, data access and privacy, security and reliability, and immunology and global health surveillance remain open.
Marzyeh Ghassemi of MIT, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, worked with Ziad Obermeyer, an associate professor of health policy and management at the University of California, Berkeley School of Public Health, to host AIPF Health Wide Reach, a series of sessions on issues of data sharing and privacy in clinical AI. The organizers assembled experts in AI, policy, and health from around the world with the aim of understanding what can be done to reduce barriers to accessing high-quality health data, in order to promote more innovative, robust, and inclusive research results while respecting patient privacy.
Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these extensive conversations, participants unveiled their findings at the symposium, covering success stories from nonprofits and government, and limited-access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations, which are summarized in a report to be released shortly.
One of the findings calls for more data to be made available for research purposes. Recommendations stemming from this finding include updating regulations to promote data sharing in order to enable easier access to safe harbors, such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, as well as expanding funding for private health institutions to curate data sets, among others. Another finding, on removing barriers to data for researchers, supports a recommendation to reduce obstacles to research and development on federally created health data. “If this is data that needs to be accessible because it’s being funded by a federal entity, we need to easily identify the steps that will be part of getting that data access so that it’s a more inclusive and equitable set of research opportunities for everyone,” Ghassemi says. The group also recommends looking closely at the ethical principles that govern data sharing. While many such principles have already been proposed, Ghassemi says that “obviously you can’t satisfy all the levers or buttons at once, but we think this is a trade-off that is very important to think through intelligently.”
In addition to legislation and healthcare, other facets of AI policy explored at the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility, along with the range of technical, business, and policy challenges facing autonomous vehicles.
The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared goal of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to engage with one another in order to truly make an impact in the AI policy space.
“The dream here is that we can all come together — researchers, industry, policy makers and other stakeholders — and really talk to each other, understand each other’s concerns and think about solutions together,” Madry said. “This is the mission of the AI Policy Forum and this is what we want to make possible.”