Companies applying AI need to take compliance seriously.
The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released on October 30, 2023, signaled a new regulatory regime for the fast-growing field of AI.
The order establishes guidelines for AI privacy, governance, and compliance, among other areas, along with the promise of further regulation to come. This could include the creation of new government institutions to oversee the technology and the companies that use it. It’s the latest in a series of regulatory moves the government has undertaken in recent months, including the announcement of a national Blueprint for an AI Bill of Rights and the establishment of the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
Bruce Reed, White House deputy chief of staff, called the executive order “the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”1
As the Brookings Institution notes, the executive order and the related draft memorandum that followed from the Office of Management and Budget (OMB) signal “hard accountability” for any company using artificial intelligence. The guidance “requires companies developing next-generation AI models to report to the federal government on an ongoing basis to ensure that they meet certain safety, evaluation, and reporting procedures.”2
Answering critical questions
It comes at a critical time for AI: According to an Expert Market Research report, the global AI market is expected to reach a value of $6 trillion by 2026.3 The order clarifies some of the questions that any company looking to add AI to its workflow will have to address in the months and years to come.
“If an organization is going to partner with an AI company, they’re going to want to partner with a company that has a robust compliance program,” says Adam Rivera, Chief Legal Officer at Inbenta. “Does the AI company have a team to support compliance? Do they have a privacy program? Are they thinking about AI governance as their product teams innovate?”
Questions like these have become critical to companies looking to enter the AI space. This is particularly true for companies that deal with sensitive data, such as those in industries like healthcare and finance. In these and related industries, the use of AI, and the trust in vendors that provide it, depends on a clear and robust compliance program: not only to avoid running afoul of government rules, but to future-proof operations at a time when regulations are still in flux.
Coming as AI technology has captured the public imagination, the latest executive order is a roadmap for the future of AI development and implementation. As the regulatory guidance evolves, companies with a proactive approach to compliance are likely to benefit. “Ultimately, it’s about standing up for consumers to ensure the technology is being used safely and securely,” says Rivera.
Interested in learning about the benefits of AI and how to adapt it to your business? Schedule a demo and discover its full potential here.
- The Financial Times, “Joe Biden moves to compel tech groups to share AI safety test results,” Oct. 30, 2023. Reference link: https://www.ft.com/content/3c6fb9ef-4185-4157-943c-5142ab4dc2f7
- Brookings Institution, “How the AI Executive Order and OMB memo introduce accountability for artificial intelligence,” Nov. 16, 2023. Reference link: https://www.brookings.edu/articles/how-the-ai-executive-order-and-omb-memo-introduce-accountability-for-artificial-intelligence/
- Expert Market Research, “Global Artificial Intelligence Market to Reach USD 6 Trillion by 2026, Bolstered by the Rising Digitisation of Data,” 2023. Reference link: https://www.expertmarketresearch.com/pressrelease/global-artificial-intelligence-market