Effort to Fund National Research Cloud for AI Advances


A national research cloud to support US efforts to maintain and grow a lead in AI is at the proposal stage in Congress. (GETTY IMAGES)

By AI Trends Staff

A bipartisan group of legislators in the US House and Senate proposed a bill in the first week of June that would direct the federal government to develop a national cloud computing infrastructure for AI research.

This idea originated with a proposal from Stanford University in 2019.

The legislation, introduced by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-N.M., is called the National Cloud Computing Task Force Act. It would convene a mix of technical experts across academia, industry, and government to plan how the US should build, deploy, govern, and maintain a national research cloud for AI.

“With China focused on toppling the United States’ leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed,” stated Portman, according to an account in Nextgov. “By democratizing access to computing power we ensure that any American with computer science talent can pursue their good ideas.”

“Artificial Intelligence is likely to be one of the most transformative technologies of all time. If we defer its development to other nations, important ethical, safety, and privacy principles will be at risk, which not only harms the United States, but also the international community as a whole,” stated Sen. Heinrich.

A companion bill was introduced the same week in the House, filed by Reps. Anna Eshoo, D-Calif., and Anthony Gonzalez, R-Ohio.

Original Suggestion for National Research Cloud From Stanford

A project to support a National Research Cloud was suggested by John Etchemendy, co-director of the Stanford Institute for Human-Centered AI (HAI), and Fei-Fei Li, also a co-director of HAI and a computer science professor at Stanford. Etchemendy retired in 2017 as provost of Stanford, a position he held for 17 years. Li was the director of Stanford’s AI Lab from 2013 to 2018; she served as VP and Chief Scientist of AI/ML at Google Cloud during a sabbatical from January 2017 to September 2018.

In an update published on the HAI blog at Stanford University in March, the authors outlined how the advance of AI by US companies is a direct outgrowth of federally funded university research, furthered by exceptional R&D in the private sector. Then a warning: “Today, the research prowess that’s powered decades of growth and prosperity is at risk.”

The Stanford Institute for Human-Centered Artificial Intelligence (HAI), founded in March 2019, aims to advance AI practice to improve the human condition.

The authors cite two primary reasons: university researchers lack access to compute power, and meaningful datasets are scarce. These two resources are “prerequisites for advanced AI research,” they stated.

Today’s AI, the researchers note, requires massive amounts of compute power, huge volumes of data, and deep expertise to train the gigantic machine learning models underlying the most advanced research. “There is a wide gulf between the few companies that can afford these resources and everyone else,” the authors stated. In one example, Google itself spent $1.5 million in compute cycles to train the Meena chatbot announced earlier this year. “Such costs for a single research project are out of reach for most corporations, let alone for academic researchers,” the authors stated.

Meanwhile, the large datasets required to train AI algorithms are mostly controlled by industry or government, hobbling academic researchers, who are important partners in the American research enterprise.

Here is the call: “It is for this reason that we are calling for the creation of a US Government-led task force from academia, government, and industry to establish a National Research Cloud. Support from Congress and the President could have a meaningful impact on American innovation through the creation of such a task force. Indeed, we believe that this could be one of the most strategic research investments the federal government has ever made.”

After HAI launched the initiative last year, the presidents and provosts of 22 universities nationwide signed a joint letter to the President and Congress in support of the effort. But HAI held off on issuing the letter until now, with the nation absorbed in responding to the pandemic.

The co-directors of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are John Etchemendy and Fei-Fei Li. He retired as provost in 2017; she is a computer science professor at Stanford. (Drew Kelly for Stanford Institute for Human-Centered Artificial Intelligence)

Eric Schmidt Suggests Building on the NSF’s CloudBank

Former Google CEO Eric Schmidt put his support behind a national cloud effort at a hearing of the House Science, Space and Technology committee in late January. The hearing was to consider actions the US could take to maintain and extend its technological leadership in the world.

Now the chair of the Defense Innovation Board and the National Security Commission on Artificial Intelligence, Schmidt spoke about the CloudBank program, launched last year by the National Science Foundation to provide public cloud allocations and associated training to support projects.

The committee considered the importance of collaboration between government, industry, and academia in the effort to sustain and grow US competitiveness. Schmidt suggested that CloudBank “could expand into a nation-wide National Research Cloud,” according to a recent account in Meritalk.

“Congress should also explore tax incentives for companies to share data and provide computing capabilities to research institutions, and accelerate efforts to make government datasets more widely available,” Schmidt stated.

Schmidt offered the following recommendations to boost US tech competitiveness:

  • More Federal research and development funding. “For AI, the scale of investment should be multiple times current levels,” he stated, adding, “Simply put, we need to place big bets.”
  • Federal investment in nationwide infrastructure. That should include a secure alternative to 5G network equipment made by China-based Huawei, investing in high-performance computing, and emulating previous national models like the National Nanotechnology Initiative.
  • Boosting public confidence in advanced technology. “If we do not earn the public’s trust in the benefits of new technologies, especially AI, doubts will hold us back,” Schmidt stated.

The committee seemed to support the concept of public-private-academic partnerships to achieve the recommended outcomes.

Read the source articles at Nextgov, on the HAI blog at Stanford University and in Meritalk.



Federal Government Moving to Implement AI More Widely

As the federal government moves to a wider implementation of AI, we highlight selected implementations at the VA and the IRS. (GETTY IMAGES)
By AI Trends Staff
The VA is planning to expand its use of AI; the IRS is moving to employ more AI for help with tax compliance. …



White House, Hospitals, Private Companies Exploring AI to Fight Coronavirus

The White House has issued a “call to action” for AI researchers to fight the coronavirus spread, and private industry races to discover effective drugs.
By AI Trends Staff
The White House has issued a “call to action” to AI researchers to help fight the coronavirus spread; hospitals are pursuing …



Vatican, DoD Weigh in on Ethical AI Principles in Same Week


St. Peter’s Basilica in Vatican City, where the Pope last week issued ethical principles to guide AI developers. (GETTY IMAGES)

By AI Trends Staff

The Vatican and the Department of Defense both took stances on AI ethics last week.

The Department of Defense on Monday held a press conference to announce its principles of AI ethics to guide development of new systems. The Vatican on Friday received support from IBM and Microsoft for its guidance for developers of AI rooted in Catholic social teaching.

The Rome Call for AI Ethics was drafted by the Pontifical Academy for Life, an advisory body to Pope Francis. It outlines six principles to define the ethical use of AI, to ensure that AI is developed and used to serve and protect people and the environment. Microsoft and IBM announced support for the charter, reported WSJPro on Feb. 28.

IBM Executive VP John Kelly and Microsoft President Brad Smith were scheduled to travel to the Vatican to sign the document. IBM and Microsoft provided feedback to the creators of the document as it was being developed.

There is precedent for the Vatican to put out position papers and calls for guidelines on, for example, the environment and the planet. “But this is the first one that I’m aware of where they really put out a definite document or set of guidelines around a technology,” stated Kelly. “And also the first I’m aware of where they invited a big tech company—and follow-on tech companies—to sign on.”

Signing the document shows a sincerity and seriousness of purpose, said Smith of Microsoft. “Our signature affirms our commitment to develop and deploy artificial intelligence with a clear focus on ethical issues,” he stated.

The Rome Call for AI Ethics identifies six principles: transparency, in that AI systems must be explainable; inclusion, so that the needs of all people are considered, all benefit from the technology, and those who design and deploy it do so with caution; impartiality, so that developers build systems without bias and safeguard fairness and human dignity; reliability, so that AI systems are dependable; and security and privacy, so that systems are safeguarded and respect the privacy of users.

None of the principles suggested by the Vatican are new ideas, suggested an account in Vox with the headline, “The Pope’s Plan to Battle Evil AI.” They echo some of the nonbinding AI guidelines issued by the European Union last year and the Trump Administration in January.

Technology company leaders have been frequenting the Vatican in recent years. In addition to the Pontifical Academy for Life, the pope has hosted the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences, to address questions raised by robotics and AI. Attendees have included DeepMind CEO Demis Hassabis, Facebook computer scientist Yann LeCun, and LinkedIn founder Reid Hoffman.

The Vatican’s vision for AI so far mirrors what the tech giants are saying, suggested Vox, namely: “regulate our new technology, but don’t ban it outright.”

DoD Adopts Five Principles of Ethical Use of AI

Meanwhile in Washington, DC, at a press conference on Feb. 24, the US Department of Defense officially adopted five principles for the ethical use of AI, with a focus on ensuring the military can retain full control and understanding over how machines make decisions, according to an account in fedscoop.

“We believe the nation that successfully implements AI principles will lead in AI for many years,” stated Lt. Gen. Jack Shanahan, the director of the Joint AI Center.

Lt. Gen. Jack Shanahan, director, Joint AI Center, DoD

The final DoD principles map closely to recommendations submitted by the Defense Innovation Board to Secretary of Defense Mark Esper in October.

The five DoD principles for the ethical use of AI are: to be responsible, exercising appropriate levels of judgment; equitable, taking steps to minimize unintended bias; traceable, with capabilities developed and deployed so as to be transparent and auditable; reliable, with safety, security, and effectiveness subject to testing; and governable, with AI capabilities designed to fulfill their intended functions, avoid unintended consequences, and allow deactivation of deployed systems that demonstrate unintended behavior.

The DoD’s Joint AI Center will take the lead in deploying the ethical AI principles across the agency. “Ethics remain at the forefront of everything the department does with AI technology, and our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DoD,” stated CIO Dana Deasy, according to an account in MeriTalk.

Lt. Gen. Shanahan was quoted as saying that DoD will “design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences,” in an account in Space News. The general said the intelligence community is likely to embrace similar guidelines, and that discussions among agencies and international allies have been going on for months.

Read the source articles in WSJPro, Vox, fedscoop, MeriTalk and Space News.


GSA Unit Launches AI Community of Practice to Boost Agency Adoption


A GSA unit has formed an AI community of practice that aims to accelerate AI adoption across the federal government by compiling helpful use cases. (GETTY IMAGES)

By AI Trends Staff

The General Services Administration’s Technology Transformation Services (TTS) unit has launched an AI community of practice (AI CoP) to capture advances in AI and accelerate adoption across the federal government. The founding was announced in November via a blog post written by Steve Babitch, head of the AI portfolio for TTS.

The action is a follow-up to an Executive Order signed by President Trump in February on Maintaining American Leadership in AI. “The initiative implements a government-wide strategy in collaboration and engagement with the private sector, academia, the public, and like-minded international partners,” Babitch stated in the blog post.

Steve Babitch, head of the AI portfolio for the TTS unit of the GSA

He outlined these six areas where the AI CoP will support and coordinate the use of AI technologies in federal agencies:

  • Machine Learning and deep learning
  • Robotic Process Automation
  • Human-computer interactions
  • Natural Language Processing
  • Rule-based automation
  • Robotics

The executive sponsors of the AI CoP are the Federal Chief Information Officer, Suzette Kent, and the Director of GSA’s Technology Transformation Services, Anil Cheriyan. The CoP will be administered out of the Technology Transformation Services (TTS) Solutions division, led by Babitch, who coordinates with the CIO Council’s Innovation Committee.

Library of AI Use Cases in Government Being Compiled

At a GSA event in January, Babitch described an effort to develop a library of AI use cases that agencies can reference as they start to invest in new AI technology, according to an account in fedscoop. The library could lead to other practice areas being added to the list.

“The harder we start to build that repository of use cases and build in a searchable database, if you will, that can sort of blossom into other facets as well—different themes or aspects of use cases,” Babitch stated. “Maybe there’s actually a component around culture and mindset change or people development.”

Practice areas mentioned include acquisition, ethics, governance, tools and techniques, and workforce readiness. Early use cases across agencies have touched on customer experience, human resources, advanced cybersecurity, and business processes.

In an example, the Economic Indicators Division (EID) of the Census Bureau developed a machine learning model for automating data coding.

“It’s the perfect machine learning project,” stated Rebecca Hutchinson, big data leader at EID. “If you can automate that coding, you can speed up and code more of the data. And if you can code more of the data, we can improve our data quality and increase the number of data products we’re putting out for our data users.”

She reported that the model is performing with about 80% accuracy, leaving only 20% still needing to be manually coded.
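
The article does not describe the internals of the EID model. As a rough, hypothetical sketch of the general approach (a text classifier trained on manually coded records, with low-confidence predictions routed back to human coders), something along these lines could apply; the category labels and examples below are invented for illustration:

    # Hypothetical sketch: auto-coding free-text survey responses with a
    # text classifier, in the spirit of the EID project described above.
    # The codes and training examples are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Free-text descriptions paired with manually assigned codes.
    texts = [
        "new single-family home construction",
        "retail sales of building materials",
        "wholesale distribution of lumber",
        "general contractor, residential remodeling",
    ]
    codes = ["CONSTRUCTION", "RETAIL", "WHOLESALE", "CONSTRUCTION"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, codes)

    # Auto-code only confident predictions; send the rest to human coders.
    for text in ["remodeling of a retail storefront"]:
        confidence = model.predict_proba([text]).max()
        label = model.predict([text])[0]
        print(label if confidence >= 0.6 else "NEEDS MANUAL CODING", round(confidence, 2))

In a setup like this, the confidence threshold controls the trade-off between how many records are auto-coded and how many still require manual work.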

One-Third of Census Bureau Staff Enrolled in AI Training

The Census Bureau has been offering AI training to interested workers, many of whom are taking advantage of the opportunity.

Interested staff can apply to learn Python, ArcGIS, and Tableau through a Coursera course. Hutchinson reported that one-third of the bureau’s staff has completed training or is currently enrolled, coming away with ML and web scraping skills.

“Once you start training your staff with the skills, they are coming up with solutions,” Hutchinson stated. “It was our staff that came up with the idea to do machine learning of construction data, and we’re just seeing that more and more.”

Read the source articles on the GSA Blog and in fedscoop.



Federal Government Adoption of AI and RPA Spreading; Bots Coming to GSA


Government agencies are adopting RPA along with IPA to wrap legacy government systems in automation layers incorporating AI. (GETTY IMAGES)

By AI Trends Staff

Surrounding legacy platforms with a wrapper of automated processes, produced with a combination of AI and RPA, in part to avoid the expense of replacing the legacy platform, is an approach being widely adopted in the federal government today.

RPA is a form of business process automation that “employs” software robots, increasingly imbued with more AI, to do work. The government is also now pursuing Intelligent Process Automation, the application of AI and other technologies — such as computer vision, cognitive automation and machine learning — to RPA.

“Many civilian and federal agencies’ use of information processing, process improvement, and intelligent character recognition have led to the use of AI in robotic process automation (RPA),” stated Anil Cheriyan, Director/Deputy Commissioner of Technology Transformation Services for the US federal government, in a recent account in the Enterprisers Project. “We see a significant opportunity to use AI and RPA to automate processes around antiquated systems without the expense associated with replacing the legacy platforms.”

Anil Cheriyan, Director/Deputy Commissioner, Technology Transformation Services, US Federal Government

Cheriyan has experience with AI, having led the digital transformation of SunTrust Banks as CIO. He pursued APIs, robotics, data lakes and cloud computing to help make the bank more efficient. He also worked for IBM Global Business Services, mainly for clients in financial services. Cheriyan earned his Master of Science and Master of Philosophy degrees in Management as well as a Bachelor of Science in Electronic and Electrical Engineering from Imperial College in London, UK.

“AI is a capability that the country needs. Not only is it instrumental in improving the experience and effectiveness of citizens’ engagement with federal services, it also enables core capabilities that strengthen our national security and defense,” he stated.

The GSA has a three-phase framework for moving to RPA and IPA. The first is the evaluation phase, an examination of the end-to-end processes in an agency to determine where “pain points” exist. The RPA/IPA team considers whether each process needs to be automated or eliminated. Second, process automation tools are employed to implement bots. In the third phase, the bots are monitored and iterated on using the automation tools.

“The benefit of using an RPA approach is that you’re not completely replacing the legacy platform, you’re building a layer on top of it to enable automation across those legacy platforms,” stated Cheriyan. “That’s what’s attractive about RPA: Rather than spending five plus years replacing a legacy platform, you can build process automation across legacy platforms using RPA techniques in just a few months.”
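
Neither GSA nor Cheriyan has published bot code, but a minimal sketch can illustrate the “layer on top” idea: a bot reads records from a legacy system’s nightly file export and pushes them into a modern API, leaving the legacy platform itself untouched. The file name, endpoint, and field names below are placeholders, not any agency’s actual systems:

    # Illustrative sketch of an RPA-style wrapper: move data from a legacy
    # system's file export into a modern API without modifying the legacy
    # platform. All names here are hypothetical.
    import csv
    import logging

    import requests  # third-party HTTP client

    logging.basicConfig(level=logging.INFO)
    LEGACY_EXPORT = "legacy_export.csv"                 # nightly dump from the old system
    MODERN_API = "https://api.example.gov/v1/records"   # placeholder endpoint

    def run_bot():
        with open(LEGACY_EXPORT, newline="") as f:
            for row in csv.DictReader(f):
                payload = {"id": row["record_id"], "status": row["status"].strip().upper()}
                resp = requests.post(MODERN_API, json=payload, timeout=10)
                if resp.ok:
                    logging.info("Synced record %s", payload["id"])
                else:
                    # Monitoring phase: log failures for review and iteration
                    # rather than dropping them silently.
                    logging.error("Record %s failed: HTTP %s", payload["id"], resp.status_code)

    if __name__ == "__main__":
        run_bot()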

Robotic Process Automation Community of Interest in DC

A Robotic Process Automation Community of Interest holds periodic meetings in Washington to share experiences and challenges. A meeting last fall was hosted by teams from the IRS, the General Services Administration, and the Office of Personnel Management.

The IRS counts six distinct use cases for RPA, IRS Deputy Chief Procurement Officer Harrison Smith told reporters after the event, according to an account in Nextgov. Smith plans to apply RPA in projects as part of the Pilot IRS program. He expects that automation efforts will not be uniform across the government.

Harrison Smith, Deputy Chief Procurement Officer, IRS

“They’re not all going to look the same,” he stated. “You have to make sure that if it’s an automation solution for another environment, that you have the technology [people] and you have the systems integrators who are able to talk to the people who are actually performing the work.”

He encouraged other federal technology managers to engage in a dialogue with RPA and IPA tool and solution providers, to advocate for their own agency’s needs, both current and future. Smith noted that project spending on automation tools is expected to triple in the next two to three years, and that the current requirements of the IRS are likely to differ from longer-term requirements. “We need to keep those lines of conversation open and moving ahead—making sure everybody is on a similar sheet of music,” he stated.

Antworks CEO Issues Cautions

Government customers of Antworks are using the company’s intelligent automation platform to pursue projects including call center optimization, passport verification and management, records management and vendor onboarding, said Asheesh Mehra, CEO and Co-Founder of Antworks, in response to a query from AI Trends.

He added, “But with AI’s great power comes great responsibility. And government must work alongside business to enable Ethical AI by ensuring people use AI engines as intended and not for fraudulent or malicious purposes. I believe government should create and enforce rules for AI at the application level – defining which applications of AI are acceptable and which are not.

Asheesh Mehra, CEO and Co-Founder Antworks

“Also, government is a major employer. So, it needs to prepare for AI’s impact on its workforce. Government agencies can do that by giving their workers the opportunity not just to upskill, but also to reskill, so they can undertake higher value roles as well as entirely new jobs.”

Strong Growth for RPA Seen by New Forrester Study

A new study from Forrester, commissioned by RPA software supplier UiPath to gauge business interest in RPA, projects strong growth.

Among the report highlights:

  • Investment in automation will rise: 66% of companies stated they plan to increase RPA software spend by at least 5% over the next 12 months
  • Automation will affect roles in different ways: By 2030, some jobs will be cannibalized, some will be created, others will be transformed – but only a few will remain untouched
  • The digital skills gap is a concern for all employees: 41% of respondents say their employees are concerned that their existing digital skills may not match what their job will require in the future
  • Automation education in the workplace will boost career prospects: Training employees, providing them vocational courses, or encouraging them to pursue digital qualifications allows them to overcome fears around automation and embrace it as a productivity-boosting asset.

(To accompany the study, UiPath will host a webinar on Thursday, February 6 at 9 a.m. ET/2 p.m. GMT that will go in depth about the report findings.)

Read the source articles in the Enterprisers Project and in Nextgov. Learn more at UiPath and Antworks.



As in Business, Cloud Strategy for Government Agencies Must Be a Fit


Public sector tech executives looking to modernize their IT infrastructure are likely to pursue a multi-cloud strategy. (GETTY IMAGES)

By AI Trends Staff

Public sector technology executives know they need to modernize, while accounting for cybersecurity and trying to reduce costs. This often results in a push to pursue a cloud strategy, and in doing so, the same principles apply for a federal government agency as for a private business: find the best fit.

The General Services Administration (GSA) is working through a cloud transformation. GSA CIO Dave Shive, in an account from Meritalk, said a cloud strategy needs to fit the business needs and take agency resources into account. He offered lessons on setting up a transition based on his agency’s experience.

“There’s no one size fits all. Cloud does and will continue to look different for every agency,” Shive stated at Cloudera’s recent Data Cloud Summit. “Agencies should take time to resource, scale, cloud adoption—both time and resources—to best meet their respective missions.”

David Shive, CIO, GSA

The GSA ended up closing all 121 of its data centers as part of improving its infrastructure strategy.

The workforce was reallocated to focus on higher-value outcomes. If the target strategy is to adopt cloud, “Just get started,” Shive recommended.

He made the following suggestions for making a cloud move successful:

  • Increase talent level through training;
  • Have strong leadership support;
  • Determine what kind of data there is and the sensitivity of the data that will move to cloud;
  • Be smart when acquiring cloud technology because “not all cloud is created equal;” and
  • Ensure that stakeholders understand the values of IT investments.

Emphasize Outcomes Over Technology

Be careful not to emphasize technology over outcomes, advised Mark Forman, Vice President, Digital Government for Unisys Federal, in a recent account in Nextgov. Getting desired results from the tidal shift to cloud computing will require a new standard of management. Moving applications to the cloud will not guarantee benefits.

Mark Forman, Vice President, Digital Government for Unisys Federal

Savvy IT shops are looking toward an “ecosystem operating model” focused on outcomes, as opposed to the “IT silo” approach of managing unique software and hardware for each application. The new operating model encompasses:

  • Secure hybrid cloud;
  • Cloud management platform that enables the CIO as a trusted broker;
  • Transparency of IT service costs and performance;
  • Deploying web services and microservices to replace inflexible application components;
  • IT process automation; and
  • DevSecOps for process digitization instead of automating “cow paths.”

CIOs should move away from the capital expenditure model and toward an IT service catalog. Document performance, security, and cost in order to compare with legacy approaches. A cloud management platform should be used to compare options, order services, and track data on usage, performance, and costs.

Employ agile development infused with security, and engage line workers. This aligns with DevOps, a set of practices that combines software development and IT operations, with the aim of shortening the systems development life cycle and providing continuous, high-quality delivery.

Be prepared for system owners who need to overcome fears about loss of control. Having an outcome-based system for showing cost and performance advantages is a good idea. Given the shifting dynamics of cloud offerings, CIOs must continually adjust their IT service catalogs to encourage adoption while accepting a certain level of fixed costs.

Research and advisory firm Gartner recommends organizations keep working on their cloud-first strategy. “If you have not developed a cloud-first strategy yet, you are likely falling behind your competitors,” stated Elias Khnaser, VP Analyst at Gartner, in a recent article on planning a cloud strategy. “IT organizations have moved past asking whether applications can be deployed or migrated to the public cloud. Instead, they are commonly accepting the pace and innovation of cloud providers as foundational to their business.”

This means the cloud-first strategy needs to be embraced by the whole organization, not just IT. For some organizations, moving all applications out of the data centers might be the way to go. For others, moving a subset of applications to the public cloud might be the best approach.

Practice workload placement analysis: reassess workloads on a regular basis, always evaluating whether to stick with the current execution venue or move to an alternative that offers higher value without adding significant risk.

A multi-cloud strategy offers more options and is more challenging to manage. Organizations need visibility into the cost of the cloud services being consumed. They must govern consumption of cloud services by provider, and consumption across cloud providers, in order to effectively manage the environment.

This requires a cloud management tooling strategy to minimize the number of tools needed in order to fulfill the management objectives. “The best strategy is a combination of solutions, based on the required degrees of cross-platform consistency and platform-specific functionality,” stated Khnaser. “In all cases, organizations should prioritize the use of the cloud platform’s native toolset, augmenting that where needed with third-party cloud management platforms, cloud management point tools, DIY solutions and outsourcing.”
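
As a toy illustration of the consumption-governance piece of that tooling (not the API of any particular cloud management platform), monthly spend records can be aggregated by provider and checked against budgets; the providers and figures below are invented:

    # Illustrative only: aggregate multi-cloud spend by provider and flag
    # any provider exceeding its governance budget. All data is invented.
    from collections import defaultdict

    # (provider, service, monthly_cost_usd) records, e.g. from billing exports
    usage = [
        ("aws", "compute", 42_000.0),
        ("aws", "storage", 9_500.0),
        ("azure", "compute", 18_250.0),
        ("gcp", "analytics", 27_800.0),
    ]
    budgets = {"aws": 45_000.0, "azure": 25_000.0, "gcp": 20_000.0}

    spend = defaultdict(float)
    for provider, _service, cost in usage:
        spend[provider] += cost

    for provider, total in sorted(spend.items()):
        status = "OVER BUDGET" if total > budgets[provider] else "ok"
        print(f"{provider}: ${total:,.2f} ({status})")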

Read the source articles in Meritalk, Nextgov and at Gartner.



White House Releases 10 AI Principles for Agencies to Follow


The White House’s Office of Science and Technology Policy has released 10 principles that federal agencies must meet when drafting AI regulations. The list was met with a mixed reception. (GETTY IMAGES)

By AI Trends Staff

The White House’s Office of Science and Technology Policy (OSTP) this week released what it has described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations. The principles were met with less than universal approval, with some experts suggesting they represent a “hands-off” approach at a time when some regulation may be needed.

The announcement supplements efforts within the federal government over the past year to define ethical AI use. The defense and national security communities have mapped out their own ethical considerations for AI, according to an account in the Federal News Network. The US last spring signed off on a common set of international AI principles with more than 40 other countries.

“The U.S. AI regulatory principles provide official guidance and reduce uncertainty for innovators about how the federal government is approaching the regulation of artificial intelligence technologies,” said US Chief Technology Officer Michael Kratsios. “By providing this regulatory clarity, our intent is to remove impediments to private-sector AI innovation and growth. Removing obstacles to the development of AI means delivering the promise of this technology for all Americans, from advancements in health care, transportation, communication—innovations we haven’t even thought of yet.”

Michael Kratsios, US Chief Technology Officer

The public will have 60 days to comment on the White House’s draft guidance. Following those 60 days, the White House will issue a final memorandum to federal agencies and instruct agencies to submit implementation plans.

Deputy U.S. Chief Technology Officer Lynne Parker said these agency implementation plans will cover a wide range of policy issues and will help avoid a “one-size-fits-all” approach to regulating AI.

“While there are ongoing policy discussions about the use of AI by the government, this action in particular though addresses the use of AI in the private sector,” Parker said. “It’s also important to note that these principles are intentionally high-level. Federal agencies will implement the guidance in accordance with their sector-specific needs. We purposefully want to avoid top-down, one-size-fits-all blanket regulation, as AI-powered technologies reach across vastly different industries.”

Here is a summary of the OSTP’s 10 AI principles:

  1. Public trust in AI: The government’s regulatory and non-regulatory approaches to AI must promote reliable, robust, and trustworthy AI applications.
  2. Public participation: Agencies should provide ample opportunities for the public to participate in all stages of the rulemaking process.
  3. Scientific integrity and information quality: Agencies should develop technical information about AI through an open and objective pursuit of verifiable evidence that both informs policy decisions and fosters public trust in AI.
  4. Risk assessment and management: A risk-based approach should be used to determine which risks are acceptable, and which present the possibility of unacceptable harm or harm with expected costs greater than expected benefits.
  5. Benefits and costs: Agencies should carefully consider the full societal benefits, costs, and distributional effects before considering regulations.
  6. Flexibility: Regulations should adapt to rapid changes and updates to AI applications.
  7. Fairness and non-discrimination: Agencies should consider issues of fairness and non-discrimination “with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency: “Transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security: Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of information processed, stored, and transmitted by AI systems.
  10. Interagency coordination: Agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI.

Some See Regulation of AI As Needed

Some experts believe regulation of AI is needed as the technology advances rapidly into diagnosing medical conditions, driving cars, judging credit risk, and recognizing individual faces in video footage. The inability of an AI system at times to convey how it arrived at a recommendation or prediction leads to questions of how far to trust AI and when to keep humans in the loop.

Terah Lyons, Executive Director of the nonprofit Partnership on AI, which advocates for responsible AI and has backing from major tech firms and philanthropies, said in an account from the Associated Press that the White House principles will not have sweeping or immediate effects. But she was encouraged that they detailed a U.S. approach centered on values such as trustworthiness and fairness.

Terah Lyons, Executive Director, the Partnership on AI

“The AI developer community may see that as a positive step in the right direction,” said Lyons, who worked for the White House OSTP during the Obama administration. However, she noted no clear mechanisms are suggested for holding AI systems accountable. “It’s a little bit hard to see what the actual impact will be,” she stated.

AI Now Having a Geopolitical Impact

The US has so far rejected working with other G7 nations on a project known as the Global Partnership on AI, which seeks to establish shared principles and regulations.

The White House has suggested that the G7 plan would stifle innovation with bureaucratic meddling. In an interview with Wired, Michael Kratsios, the Chief Technology Officer for the United States, said he hopes other nations will follow America’s lead when developing their own regulations for AI. “The best way to counter authoritarian uses of AI is to make sure America and its international partners remain global hubs of innovation, advancing technology and manners consistent with our values,” Kratsios stated.

Some observers question the strategy of going it alone and how effective the principles will be. “There’s a downside to us going down a different path to other nations,” stated Martijn Rasser, a senior fellow at the Center for New American Security and the author of a recent report that calls for greater government investment in AI. Regarding the AI principles, Rasser stated, “A lot of this is open to interpretation to each individual agency. Anything that an agency produces could be shot down, given the vagueness.”

Martijn Rasser, Senior Fellow, Center for New American Security

In examples of the US effort to shape AI policy, the US Commerce Department last year blacklisted several Chinese AI firms after the Trump administration said they were implicated in the repression of Muslims in the country’s Xinjiang region. On Monday, citing national security concerns, the agency set limits on exporting AI software used to analyze satellite imagery.

The controls are meant to limit the ability of rivals such as China to use US software to develop military drones and satellites. However, regulating the export of software is notoriously difficult, especially when many key algorithms and data sets are open source.

Also last fall, the administration placed Chinese companies responsible for developing supercomputing technology on the blacklist. In May, the Chinese tech giant Huawei was put on the blacklist, along with 70 affiliates, due to concerns over its ties to the government and the potential for its 5G wireless technology to be used for cyber espionage.

Matt Sanchez, CTO and founder, CognitiveScale

Matt Sanchez, CTO of CognitiveScale, which offers a platform for building explainable and trustworthy AI systems, said in a comment for AI Trends, “These 10 AI principles are a good first step to drive more transparency and trust in the AI industry. They address the elements of trust that are critical for safeguarding public data—privacy, explainability, bias, fairness and compliance. This is a great framework, but in order to be truly successful, implementation cannot be driven by the government alone. It must be an interdisciplinary execution with deep involvement of the technology industry, academia, and the American public. In addition, this cannot be seen as proprietary, so ensuring it has a non-proprietary open standard is terribly important to success.”

Read the source articles in the Federal News Network, from the Associated Press, in Wired and from CognitiveScale.



US Patent and Trademark Office Seeking Comment on Impact of AI on Creative Works


A notice in the Federal Register invited readers to comment on whether creative content produced by AI can be granted a patent.

By AI Trends Staff

The US Patent and Trademark Office (USPTO) is getting more involved in AI. One effort is an AI project that aims to speed patent examinations. The office receives approximately 2,500 patent applications per day.

The project took some nine months to develop and makes a “really compelling case” for the use of AI, stated Tom Beach, Chief Data Strategist and Portfolio Manager at USPTO, in an account in MeriTalk. Beach was speaking at a recent Veritas Public Sector Vision Day event.

The project calls for extracting technical data from patent applications and using that to enhance Cooperative Patent Classification (CPC) data, which is reviewed by USPTO patent examiners to evaluate patent applications. The aim is to speed the overall evaluation process. “That’s the ROI for this project,” Beach stated.
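
The account does not detail how the extracted data feeds the CPC review, but a hypothetical sketch can show the general shape of such a classifier: suggest a CPC section from application text, with the suggestion routed to an examiner rather than applied automatically. The USPTO’s actual system is not public, and the abstracts and labels below are invented:

    # Hypothetical sketch of a classifier suggesting CPC sections from patent
    # text; not the USPTO's actual system. Data is invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    abstracts = [
        "a neural network for classifying images of manufactured parts",
        "a chemical composition for corrosion-resistant coatings",
        "a transmission assembly for an electric vehicle drivetrain",
        "a method for training a machine learning model on encrypted data",
    ]
    cpc_sections = ["G", "C", "B", "G"]  # G: physics/computing, C: chemistry, B: operations

    clf = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
    clf.fit(abstracts, cpc_sections)

    # A suggested section would go to an examiner for review, not be applied blindly.
    print(clf.predict(["an apparatus for mixing polymer coatings"]))  # likely ['C']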

The USPTO is also actively seeking comments on the impact of AI on creative works. The office published a notice in the Federal Register in August 2019 seeking comment on the interplay between patent law and AI. In October, the USPTO expanded the inquiry to include copyright, trademark, and other IP rights, according to an account in Patently-O. Comments are now being accepted until Jan. 10, 2020.

(Anyone can respond; interested AI Trends readers are encouraged to do so.)

The questions have no concrete answers in US law, experts suggest. “I think what’s protectable is conscious steps made by a person to be involved in authorship,” stated Zvi S. Rosen, lecturer at the George Washington University School of Law, in an account in The Verge. A person executing a single click might not be so recognized. “My opinion is if it’s really a push button thing, and you get a result, I don’t think there’s any copyright in that,” Rosen stated.

This push-button creativity discussion gets murkier when considering the deal Warner Music reached with AI startup Endel in March 2019. Endel used its algorithm to create 600 short tracks on 20 albums that were then put on streaming services, with a 50/50 royalty split going to Endel, The Verge reported.

Rosen encouraged people to respond. “If a musician has worked with AI and can attest to a particular experience or grievance, that’s helpful,” he stated.

For those interested, here are the questions:

  1. Should a work produced by an AI algorithm or process, without the involvement of a natural person contributing expression to the resulting work, qualify as a work of authorship protectable under U.S. copyright law? Why or why not?
  2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person (i) designed the AI algorithm or process that created the work; (ii) contributed to the design of the algorithm or process; (iii) chose data used by the algorithm for training or otherwise; (iv) caused the AI algorithm or process to be used to yield the work; or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?
  3. To the extent an AI algorithm or process learns its function(s) by ingesting large volumes of copyrighted material, does the existing statutory language (e.g., the fair use doctrine) and related case law adequately address the legality of making such use? Should authors be recognized for this type of use of their works? If so, how?
  4. Are current laws for assigning liability for copyright infringement adequate to address a situation in which an AI process creates a work that infringes a copyrighted work?
  5. Should an entity or entities other than a natural person, or company to which a natural person assigns a copyrighted work, be able to own the copyright on the AI work? For example: Should a company who trains the artificial intelligence process that creates the work be able to be an owner?
  6. Are there other copyright issues that need to be addressed to promote the goals of copyright law in connection with the use of AI?
  7. Would the use of AI in trademark searching impact the registrability of trademarks? If so, how?
  8. How, if at all, does AI impact trademark law? Is the existing statutory language in the Lanham Act adequate to address the use of AI in the marketplace?
  9. How, if at all, does AI impact the need to protect databases and data sets? Are existing laws adequate to protect such data?
  10. How, if at all, does AI impact trade secret law? Is the Defend Trade Secrets Act (DTSA), 18 U.S.C. 1836 et seq., adequate to address the use of AI in the marketplace?
  11. Do any laws, policies, or practices need to change in order to ensure an appropriate balance between maintaining trade secrets on the one hand and obtaining patents, copyrights, or other forms of intellectual property protection related to AI on the other?
  12. Are there any other AI-related issues pertinent to intellectual property rights (other than those related to patent rights) that the USPTO should examine?
  13. Are there any relevant policies or practices from intellectual property agencies or legal systems in other countries that may help inform USPTO’s policies and practices regarding intellectual property rights (other than those related to patent rights)?

Read the source articles in MeriTalk, Patently-O and The Verge.

Send your comments to AIPartnership@uspto.gov.
