White House Releases 10 AI Principles for Agencies to Follow


The White House’s Office of Science and Technology Policy has released 10 principles that federal agencies must meet when drafting AI regulations. The list was met with mixed reception. (GETTY IMAGES)

By AI Trends Staff

The White House’s Office of Science and Technology Policy (OSTP) this week released what it has described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations. The principles were met with less than universal approval, with some experts suggesting they represent a “hands-off” approach at a time when some regulation may be needed.

The announcement supplements efforts within the federal government over the past year to define ethical AI use. The defense and national security communities have mapped out their own ethical considerations for AI, according to an account in the Federal News Network. The US last spring signed off on a common set of international AI principles with more than 40 other countries.

“The U.S. AI regulatory principles provide official guidance and reduce uncertainty for innovators about how the federal government is approaching the regulation of artificial intelligence technologies,” said US Chief Technology Officer Michael Kratsios. “By providing this regulatory clarity, our intent is to remove impediments to private-sector AI innovation and growth. Removing obstacles to the development of AI means delivering the promise of this technology for all Americans, from advancements in health care, transportation, communication—innovations we haven’t even thought of yet.”

Michael Kratsios, US Chief Technology Officer

The public will have 60 days to comment on the White House’s draft guidance. Following those 60 days, the White House will issue a final memorandum to federal agencies and instruct agencies to submit implementation plans.

Deputy U.S. Chief Technology Officer Lynne Parker said these agency implementation plans will cover a wide range of policy issues and will help avoid a “one-size-fits-all” approach to regulating AI.

“While there are ongoing policy discussions about the use of AI by the government, this action in particular though addresses the use of AI in the private sector,” Parker said. “It’s also important to note that these principles are intentionally high-level. Federal agencies will implement the guidance in accordance with their sector-specific needs. We purposefully want to avoid top-down, one-size-fits-all blanket regulation, as AI-powered technologies reach across vastly different industries.”

Here is a summary of the OSTP’s 10 AI principles:

  1. Public trust in AI: The government’s regulatory and non-regulatory approaches to AI must promote reliable, robust, and trustworthy AI applications.
  2. Public participation: Agencies should provide ample opportunities for the public to participate in all stages of the rulemaking process.
  3. Scientific integrity and information quality: Agencies should develop technical information about AI through an open and objective pursuit of verifiable evidence that both informs policy decisions and fosters public trust in AI.
  4. Risk assessment and management: A risk-based approach should be used to determine which risks are acceptable, and which risks present the possibility of unacceptable harm or harm that has expected costs greater than expected benefits.
  5. Benefits and costs: Agencies should carefully consider the full societal costs, benefits, and distributional effects before considering regulations.
  6. Flexibility: Regulations should adapt to rapid changes and updates to AI applications.
  7. Fairness and non-discrimination: Agencies should consider issues of fairness and non-discrimination “with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency: “Transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security: Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of information processed, stored, and transmitted by AI systems.
  10. Interagency coordination: Agencies should coordinate with each other to share experiences and ensure the consistency and predictability of AI-related policies that advance American innovation and growth in AI.

Some See Regulation of AI As Needed

Some experts believe regulation of AI is needed as the technology advances rapidly into diagnosing medical conditions, driving cars, judging credit risk, and recognizing individual faces in video footage. The inability of AI systems at times to explain how they arrived at a recommendation or prediction leads to questions of how far to trust AI and when to keep humans in the loop.

Terah Lyons, Executive Director of the nonprofit Partnership on AI, which advocates for responsible AI and has backing from major tech firms and philanthropies, said in an account from the Associated Press that the White House principles will not have sweeping or immediate effects. But she was encouraged that they detailed a U.S. approach centered on values such as trustworthiness and fairness.

Terah Lyons, Executive Director, the Partnership on AI

“The AI developer community may see that as a positive step in the right direction,” said Lyons, who worked for the White House OSTP during the Obama administration. However, she noted no clear mechanisms are suggested for holding AI systems accountable. “It’s a little bit hard to see what the actual impact will be,” she stated.

AI Now Having a Geopolitical Impact

The US has so far rejected working with other G7 nations on a project known as the Global Partnership on AI, which seeks to establish shared principles and regulations.

The White House has suggested that the G7 plan would stifle innovation with bureaucratic meddling. In an interview with Wired, Michael Kratsios, the Chief Technology Officer for the United States, said he hopes other nations will follow America’s lead when developing their own regulations for AI. “The best way to counter authoritarian uses of AI is to make sure America and its international partners remain global hubs of innovation, advancing technology in manners consistent with our values,” Kratsios stated.

Some observers question the strategy of going it alone and how effective the principles will be. “There’s a downside to us going down a different path to other nations,” stated Martijn Rasser, a senior fellow at the Center for New American Security and the author of a recent report that calls for greater government investment in AI. Regarding the AI principles, Rasser stated, “A lot of this is open to interpretation to each individual agency. Anything that an agency produces could be shot down, given the vagueness.”

Martijn Rasser, Senior Fellow, Center for New American Security

In examples of the US effort to shape AI policy, the US Commerce Department last year blacklisted several Chinese AI firms after the Trump administration said they were implicated in the repression of Muslims in the country’s Xinjiang region. On Monday, citing national security concerns, the agency set limits on exporting AI software used to analyze satellite imagery.

The controls are meant to limit the ability of rivals such as China to use US software to develop military drones and satellites. However, regulating the export of software is notoriously difficult, especially when many key algorithms and data sets are open source.

Also last fall, the administration placed Chinese companies responsible for developing super computing technology on the blacklist. In May, the Chinese tech giant Huawei was put on the blacklist, along with 70 affiliates, due to concerns over its ties to the government and the potential for its 5G wireless technology to be used for cyber espionage.

Matt Sanchez, CTO and founder, CognitiveScale

Matt Sanchez, CTO of CognitiveScale, which offers a platform for building explainable and trustworthy AI systems, said in a comment for AI Trends, “These 10 AI principles are a good first step to drive more transparency and trust in the AI industry. They address the elements of trust that are critical for safeguarding public data—privacy, explainability, bias, fairness and compliance. This is a great framework, but in order to be truly successful, implementation cannot be driven by the government alone. It must be an interdisciplinary execution with deep involvement of the technology industry, academia, and the American public. In addition, this cannot be seen as proprietary, so ensuring it has a non-proprietary open standard is terribly important to success.”

Read the source articles in the Federal News Network, from the Associated Press, in Wired, and from CognitiveScale.

Source: AI Trends

US Patent and Trademark Office Seeking Comment on Impact of AI on Creative Works


A notice in the Federal Register invited readers to comment on whether creative works produced by AI can be granted patent or copyright protection.

By AI Trends Staff

The US Patent and Trademark Office (USPTO) is getting more involved in AI. One effort is an AI project that aims to speed patent examinations. The office receives approximately 2,500 patent applications per day.

The project took some nine months to develop and makes a “really compelling case” for the use of AI, stated Tom Beach, Chief Data Strategist and Portfolio Manager at USPTO, in an account in MeriTalk. Beach was speaking at a recent Veritas Public Sector Vision Day event.

The project calls for extracting technical data from patent applications and using that to enhance Cooperative Patent Classification (CPC) data, which is reviewed by USPTO patent examiners to evaluate patent applications. The aim is to speed the overall evaluation process. “That’s the ROI for this project,” Beach stated.
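The USPTO has not published details of how its system works, but the idea of matching technical terms extracted from an application against classification data can be sketched in miniature. The following is purely illustrative, assuming a hypothetical keyword index; the real Cooperative Patent Classification scheme contains roughly 250,000 entries and the actual project presumably uses far more sophisticated extraction:

```python
# Illustrative sketch only (not USPTO's actual system): rank candidate CPC
# codes by how many of their associated technical keywords appear in the
# text of a patent application.
from collections import Counter
import re

# Hypothetical keyword-to-CPC index for three example classes.
CPC_KEYWORDS = {
    "G06N": {"neural", "network", "machine", "learning", "inference"},  # computing/AI
    "H04L": {"packet", "protocol", "transmission", "network"},          # digital comms
    "A61B": {"diagnostic", "patient", "imaging", "sensor"},             # medical devices
}

def suggest_cpc_codes(application_text, top_n=2):
    """Return the top_n CPC codes with the largest keyword overlap."""
    tokens = set(re.findall(r"[a-z]+", application_text.lower()))
    scores = Counter()
    for code, keywords in CPC_KEYWORDS.items():
        overlap = len(keywords & tokens)
        if overlap:
            scores[code] = overlap
    return [code for code, _ in scores.most_common(top_n)]

text = "A machine learning system uses a neural network for medical imaging of a patient."
print(suggest_cpc_codes(text))  # G06N scores highest, then A61B
```

A ranked shortlist of this kind would narrow the classifications an examiner has to review, which is where the time savings Beach describes would come from.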

The USPTO is also actively seeking comments on the impact of AI on creative works. The office published a notice in the Federal Register in August 2019 seeking comments. It sought comment on the interplay between patent law and AI. In October, the USPTO expanded the inquiry to include copyright, trademark and other IP rights, according to an account in Patently-O. Comments are now being accepted until Jan. 10, 2020.

(Anyone can respond; interested AI Trends readers are encouraged to do so.)

The questions have no concrete answers in US law, experts suggest. “I think what’s protectable is conscious steps made by a person to be involved in authorship,” stated Zvi S. Rosen, lecturer at the George Washington University School of Law, in an account in The Verge. A person executing a single click might not be so recognized. “My opinion is if it’s really a push button thing, and you get a result, I don’t think there’s any copyright in that,” Rosen stated.

This push-button creativity discussion gets murkier when considering the deal Warner Music reached with AI startup Endel in March 2019. Endel used its algorithm to create 600 short tracks across 20 albums that were then put on streaming services, under a 50/50 royalty split with Endel, The Verge reported.

Rosen encouraged people to respond. “If a musician has worked with AI and can attest to a particular experience or grievance, that’s helpful,” he stated.

For those interested, here are the questions:

  1. Should a work produced by an AI algorithm or process, without the involvement of a natural person contributing expression to the resulting work, qualify as a work of authorship protectable under U.S. copyright law? Why or why not?
  2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person (i) designed the AI algorithm or process that created the work; (ii) contributed to the design of the algorithm or process; (iii) chose data used by the algorithm for training or otherwise; (iv) caused the AI algorithm or process to be used to yield the work; or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?
  3. To the extent an AI algorithm or process learns its function(s) by ingesting large volumes of copyrighted material, does the existing statutory language (e.g., the fair use doctrine) and related case law adequately address the legality of making such use? Should authors be recognized for this type of use of their works? If so, how?
  4. Are current laws for assigning liability for copyright infringement adequate to address a situation in which an AI process creates a work that infringes a copyrighted work?
  5. Should an entity or entities other than a natural person, or company to which a natural person assigns a copyrighted work, be able to own the copyright on the AI work? For example: Should a company who trains the artificial intelligence process that creates the work be able to be an owner?
  6. Are there other copyright issues that need to be addressed to promote the goals of copyright law in connection with the use of AI?
  7. Would the use of AI in trademark searching impact the registrability of trademarks? If so, how?
  8. How, if at all, does AI impact trademark law? Is the existing statutory language in the Lanham Act adequate to address the use of AI in the marketplace?
  9. How, if at all, does AI impact the need to protect databases and data sets? Are existing laws adequate to protect such data?
  10. How, if at all, does AI impact trade secret law? Is the Defend Trade Secrets Act (DTSA), 18 U.S.C. 1836 et seq., adequate to address the use of AI in the marketplace?
  11. Do any laws, policies, or practices need to change in order to ensure an appropriate balance between maintaining trade secrets on the one hand and obtaining patents, copyrights, or other forms of intellectual property protection related to AI on the other?
  12. Are there any other AI-related issues pertinent to intellectual property rights (other than those related to patent rights) that the USPTO should examine?
  13. Are there any relevant policies or practices from intellectual property agencies or legal systems in other countries that may help inform USPTO’s policies and practices regarding intellectual property rights (other than those related to patent rights)?

Read the source articles in MeriTalk, Patently-O, and The Verge.

Send your comments to AIPartnership@uspto.gov.

Source: AI Trends