Effort to Fund National Research Cloud for AI Advances


A national research cloud to support US efforts to maintain and grow a lead in AI is at the proposal stage in Congress. (GETTY IMAGES)

By AI Trends Staff

A bipartisan group of legislators in the US House and Senate proposed a bill in the first week of June that would direct the federal government to develop a national cloud computing infrastructure for AI research.

This idea originated with a proposal from Stanford University in 2019.

The legislation, introduced by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-N.M., is called the National Cloud Computing Task Force Act. It would convene a mix of technical experts across academia, industry, and government to plan how the US should build, deploy, govern, and maintain a national research cloud for AI.

“With China focused on toppling the United States’ leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed,” stated Portman, according to an account in Nextgov. “By democratizing access to computing power we ensure that any American with computer science talent can pursue their good ideas.”

“Artificial Intelligence is likely to be one of the most transformative technologies of all time. If we defer its development to other nations, important ethical, safety, and privacy principles will be at risk, which not only harms the United States, but also the international community as a whole,” stated Sen. Heinrich.

A companion bill was introduced the same week in the House, filed by Reps. Anna Eshoo, D-Calif., and Anthony Gonzalez, R-Ohio.

Original Suggestion for National Research Cloud From Stanford

A project to support a National Research Cloud was suggested by John Etchemendy, co-director of the Stanford Institute for Human-Centered AI (HAI), and Fei-Fei Li, also a co-director of HAI and a computer science professor at Stanford. Etchemendy retired as provost of Stanford in 2017, a position he held for 17 years. Li was the director of Stanford’s AI Lab from 2013 to 2018; she served as VP and Chief Scientist of AI/ML at Google Cloud during a sabbatical from January 2017 to September 2018.

In an update published on the HAI blog at Stanford University in March, the authors outlined how the advance of AI by US companies is a direct outgrowth of federally funded university research, furthered by exceptional R&D in the private sector. Then a warning: “Today, the research prowess that’s powered decades of growth and prosperity is at risk.”

The Stanford Institute for Human-Centered Artificial Intelligence (HAI), founded in March 2019, aims to advance AI practice to improve the human condition.

The authors cite two primary reasons: university researchers lack access to compute power, and meaningful datasets are scarce. These two resources are “prerequisites for advanced AI research,” the authors stated.

Today’s AI, the researchers note, requires massive amounts of compute power, huge volumes of data, and high expertise to train the gigantic machine learning models underlying the most advanced research. “There is a wide gulf between the few companies that can afford these resources and everyone else,” the authors stated. For example, Google said the company itself spent $1.5 million in compute cycles to train the Meena chatbot announced earlier this year. “Such costs for a single research project are out of reach for most corporations, let alone for academic researchers,” the authors stated.

Meanwhile, the large data sets required to train AI algorithms are mostly controlled by industry or government, hobbling academic researchers, who are important partners in the American research enterprise.

Here is the call: “It is for this reason that we are calling for the creation of a US Government-led task force from academia, government, and industry to establish a National Research Cloud. Support from Congress and the President could have a meaningful impact on American innovation through the creation of such a task force. Indeed, we believe that this could be one of the most strategic research investments the federal government has ever made.”

After HAI launched the initiative last year, the presidents and provosts of 22 universities nationwide signed a joint letter to the President and Congress in support of the effort. But HAI held off on issuing the letter until now, with the nation absorbed in responding to the pandemic.

The co-directors of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are John Etchemendy and Fei-Fei Li. He retired as provost in 2017; she is a computer science professor at Stanford. (Drew Kelly for Stanford Institute for Human-Centered Artificial Intelligence)

Eric Schmidt Suggests Building on CloudBank of the NSF

Former Google CEO Eric Schmidt put his support behind a national cloud effort at a hearing of the House Science, Space and Technology committee in late January. The hearing was to consider actions the US could take to maintain and extend its technological leadership in the world.

Now the chair of the Defense Innovation Board and the National Security Commission on Artificial Intelligence, Schmidt spoke about the CloudBank program launched last year by the National Science Foundation to provide public cloud allocations and associated training to support projects.

The committee considered the importance of collaboration between government, industry, and academia in the effort to sustain and grow US competitiveness. Schmidt suggested that CloudBank “could expand into a nation-wide National Research Cloud,” according to a recent account in MeriTalk.

“Congress should also explore tax incentives for companies to share data and provide computing capabilities to research institutions, and accelerate efforts to make government datasets more widely available,” Schmidt stated.

Schmidt offered the following recommendations to boost US tech competitiveness:

  • More Federal research and development funding. “For AI, the scale of investment should be multiple times current levels,” he stated, adding, “Simply put, we need to place big bets.”
  • Federal investment in nationwide infrastructure. That should include a secure alternative to 5G network equipment made by China-based Huawei, investing in high-performance computing, and emulating previous national models like the National Nanotechnology Initiative.
  • Boosting public confidence in advanced technology. “If we do not earn the public’s trust in the benefits of new technologies, especially AI, doubts will hold us back,” Schmidt stated.

The committee seemed to support the concept of public-private-academic partnerships to achieve the recommended outcomes.

Read the source articles at Nextgov, the HAI blog at Stanford University, and in MeriTalk.



Federal Government Moving to Implement AI More Widely

As the federal government moves to a wider implementation of AI, we highlight selected implementations at the VA and the IRS. (GETTY IMAGES)
By AI Trends Staff
The VA is planning to expand its use of AI; the IRS is moving to employ more AI for help with tax compliance. …



Quantum Computing Research Gets Boost from Federal Government


The federal government is directing millions of research dollars into quantum computing; AI is expected to speed development.

By AI Trends Staff

The US federal government is investing heavily in research on quantum computing, and AI is helping to boost the development.

The White House is pushing to add an additional billion dollars for AI research, which would bring AI R&D funding to nearly $2 billion and quantum computing research funding to about $860 million over the next two years, according to an account in TechCrunch on Feb. 7.

This is in addition to the $625 million investment in National Quantum Information Science Research Centers announced by the Department of Energy’s (DoE) Office of Science in January, following from the National Quantum Initiative Act, according to an account in MeriTalk.

“The purpose of these centers will be to push the current state-of-the-art science and technology toward realizing the full potential of quantum-based applications, from computing, to communication, to sensing,” the announcement stated.

The centers are expected to work across multiple technical areas of interest, including quantum communication, computing, devices, applications, and foundries. They are also expected to collaborate, maintain science and technology innovation chains, and have an effective management structure and the needed facilities.

The department expects awards to range from $10 million to $25 million per year for each center. The goal is to accelerate the research and development of quantum computing. The department is looking for at least two multi-institutional and multi-disciplinary teams to engage in the five-year project. Applications are being accepted through April 10.

Russian Researchers Searching for Quantum Advantage

In other quantum computing developments, Russian researchers are being credited with finding a way to use AI to mimic the work of quantum walk experts, who search for advantages quantum computing might have over classical computing. By replacing the experts with AI, the researchers can identify whether a given network will deliver a quantum advantage; if so, it is a good candidate for building a quantum computer, according to an account in SciTechDaily based on findings reported in the New Journal of Physics.

The researchers are from the Moscow Institute of Physics and Technology (MIPT), the Valiev Institute of Physics and Technology, and ITMO University.

Problems in modern science that are solved through quantum mechanical calculations are expected to be better suited to quantum computing. Examples include research into chemical reactions and the search for stable molecular structures for medicine and pharmaceutics. The Russian researchers used a neural network geared toward image recognition to predict whether the classical or the quantum walk between identified nodes would be faster.
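The speed difference the researchers screen for can be illustrated with a toy simulation. This sketch is not the MIPT group’s code; it only demonstrates, on the simplest possible network (a line of nodes), why quantum walks can beat classical ones: a discrete-time quantum walk spreads ballistically (distance grows linearly with the number of steps), while a classical random walk spreads only diffusively (with the square root of the number of steps).

```python
import numpy as np

def classical_spread(steps, trials=2000, rng=None):
    # Standard deviation of a symmetric classical random walk after `steps` steps,
    # estimated by Monte Carlo; analytically it is sqrt(steps).
    rng = rng or np.random.default_rng(0)
    positions = rng.choice([-1, 1], size=(trials, steps)).sum(axis=1)
    return positions.std()

def quantum_spread(steps):
    # Discrete-time Hadamard quantum walk on a line. The state holds complex
    # amplitudes over (position, coin) pairs; positions are indexed 0..2*steps.
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, 0] = 1 / np.sqrt(2)            # symmetric initial coin state
    amp[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin flip"
    for _ in range(steps):
        amp = amp @ H.T                        # toss the coin
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]           # coin state 0 moves right
        shifted[:-1, 1] = amp[1:, 1]           # coin state 1 moves left
        amp = shifted
    probs = (np.abs(amp) ** 2).sum(axis=1)     # measurement distribution
    x = np.arange(n) - steps
    mean = (probs * x).sum()
    return np.sqrt((probs * (x - mean) ** 2).sum())
```

After 50 steps the quantum walk’s spread is several times the classical one, and the gap widens with more steps. On general graphs this advantage depends on the network’s structure, which is exactly what the researchers trained their network to predict.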

“It was not obvious this approach would work, but it did. We have been quite successful in training the computer to make autonomous predictions of whether a complex network has a quantum advantage,” stated Associate Professor Leonid Fedichkin of the theoretical physics department at MIPT.

Leonid Fedichkin, Associate Professor, Department of Theoretical Physics, MIPT

MIPT graduate and ITMO University researcher Alexey Melnikov stated, “The line between quantum and classical behaviors is often blurred. The distinctive feature of our study is the resulting special-purpose computer vision, capable of discerning this fine line in the network space.”

With their co-author Alexander Alodjants, the researchers created a tool that simplifies the development of computational circuits based on quantum algorithms.

Google, Amazon Supporting Quantum Computer Research

Finally, Google and Amazon have recently made moves to support research into quantum computing. In October, Google announced that a quantum computer outfitted with its Sycamore quantum processor completed a test computation in 200 seconds that the fastest supercomputer would have taken 10,000 years to match.

And Amazon in December announced the availability of Amazon Braket, a new managed service that allows researchers and developers to experiment with computers from multiple quantum hardware providers in a single place. Amazon also announced the AWS Center for Quantum Computing adjacent to the California Institute of Technology (Caltech), to bring quantum computing researchers and engineers together to accelerate development in hardware and software.

Tristan Morel L’Horset, the North America intelligent cloud and infrastructure growth lead for Accenture Technology Services

“We don’t know what problems quantum will solve because quantum will solve problems we haven’t thought of yet,” stated Tristan Morel L’Horset, the North America intelligent cloud and infrastructure growth lead for Accenture Technology Services, at an Amazon event in December, according to an account in Information Week.

This is the first opportunity for customers to directly experiment with quantum computing, which is “incredibly expensive to build and operate.” It may help answer some questions. “A lot of companies have wondered how they would actually use it,” L’Horset stated.

Read the source articles in TechCrunch, MeriTalk, SciTechDaily and Information Week.

Source: AI Trends


GSA Unit Launches AI Community of Practice to Boost Agency Adoption


A GSA unit has formed an AI community of practice that aims to accelerate AI adoption across the federal government by compiling helpful use cases. (GETTY IMAGES)

By AI Trends Staff

The General Services Administration’s Technology Transformation Services (TTS) unit has launched an AI community of practice (AI CoP) to capture advances in AI and accelerate adoption across the federal government. The founding was announced in November via a blog post written by Steve Babitch, head of the AI portfolio for TTS.

The action is a follow-up to an Executive Order signed by President Trump in February on Maintaining American Leadership in AI. “The initiative implements a government-wide strategy in collaboration and engagement with the private sector, academia, the public, and like-minded international partners,” Babitch stated in the blog post.

Steve Babitch, head of the AI portfolio for TTS unit of the GSA

He outlined these six areas where the AI CoP will support and coordinate the use of AI technologies in federal agencies:

  • Machine learning and deep learning
  • Robotic process automation
  • Human-computer interaction
  • Natural language processing
  • Rule-based automation
  • Robotics

The executive sponsors of the AI CoP are the Federal Chief Information Officer, Suzette Kent, and the Director of GSA’s Technology Transformation Services, Anil Cheriyan. The CoP will be administered out of the Technology Transformation Services (TTS) Solutions division, led by Babitch, who coordinates with the CIO Council’s Innovation Committee.

Library of AI Use Cases in Government Being Compiled

At a GSA event in January, Babitch described an effort to develop a library of AI use cases that agencies can reference as they start to invest in new AI technology, according to an account in FedScoop. The library could lead to other practice areas being added to the list.

“The harder we start to build that repository of use cases and build in a searchable database, if you will, that can sort of blossom into other facets as well—different themes or aspects of use cases,” Babitch stated. “Maybe there’s actually a component around culture and mindset change or people development.”

Practice areas mentioned include acquisition, ethics, governance, tools and techniques, and workforce readiness. Early use cases across agencies have touched on customer experience, human resources, advanced cybersecurity, and business processes.

In an example, the Economic Indicators Division (EID) of the Census Bureau developed a machine learning model for automating data coding.

“It’s the perfect machine learning project,” stated Rebecca Hutchinson, big data leader at EID. “If you can automate that coding, you can speed up and code more of the data. And if you can code more of the data, we can improve our data quality and increase the number of data products we’re putting out for our data users.”

She reported that the model is performing with about 80% accuracy, leaving only 20% still needing to be manually coded.
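The article does not describe the internals of the EID model. As an illustrative sketch only, automated coding of free-text responses can be framed as text classification with a confidence threshold, so that low-confidence predictions fall back to manual coding; the category names and training examples below are invented, and a production system would use a far richer model and dataset.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesCoder:
    # Tiny multinomial naive Bayes with Laplace smoothing. Predictions whose
    # normalized confidence falls below `threshold` are returned as None,
    # signaling that the record should be routed to a human coder.
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.class_counts = Counter()              # responses seen per code
        self.word_counts = defaultdict(Counter)    # word frequencies per code
        self.vocab = set()

    def fit(self, texts, codes):
        for text, code in zip(texts, codes):
            self.class_counts[code] += 1
            for word in text.lower().split():
                self.word_counts[code][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        scores = {}
        for code, count in self.class_counts.items():
            logp = math.log(count / total)
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in words:
                logp += math.log((self.word_counts[code][w] + 1) / denom)
            scores[code] = logp
        best = max(scores, key=scores.get)
        # turn log scores into a normalized confidence in [0, 1]
        m = max(scores.values())
        exp = {c: math.exp(s - m) for c, s in scores.items()}
        conf = exp[best] / sum(exp.values())
        return (best, conf) if conf >= self.threshold else (None, conf)
```

The design mirrors the workflow Hutchinson describes: the fraction of records the model codes automatically corresponds to its coverage at the chosen threshold, and whatever falls below it (the remaining ~20% in the Census example) still goes to manual coding.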

One-Third of Census Bureau Staff Enrolled in AI Training

The Census Bureau has been offering AI training to interested workers, many of whom are taking advantage of the opportunity.

Interested staff can apply to learn Python, ArcGIS, and Tableau through Coursera courses. Hutchinson reported that one-third of the bureau’s staff has completed training or is currently enrolled, coming away with ML and web scraping skills.

“Once you start training your staff with the skills, they are coming up with solutions,” Hutchinson stated. “It was our staff that came up with the idea to do machine learning of construction data, and we’re just seeing that more and more.”

Read the source articles on the GSA Blog and in FedScoop.



White House Releases 10 AI Principles for Agencies to Follow


The White House’s Office of Science and Technology Policy has released 10 principles that federal agencies must meet when drafting AI regulations. The list was met with mixed reception. (GETTY IMAGES)

By AI Trends Staff

The White House’s Office of Science and Technology Policy (OSTP) this week released what it has described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations. The principles were met with less than universal approval, with some experts suggesting they represent a “hands-off” approach at a time when some regulation may be needed.

The announcement supplements efforts within the federal government over the past year to define ethical AI use. The defense and national security communities have mapped out their own ethical considerations for AI, according to an account in the Federal News Network. The US last spring signed off on a common set of international AI principles with more than 40 other countries.

“The U.S. AI regulatory principles provide official guidance and reduce uncertainty for innovators about how the federal government is approaching the regulation of artificial intelligence technologies,” said US Chief Technology Officer Michael Kratsios. “By providing this regulatory clarity, our intent is to remove impediments to private-sector AI innovation and growth. Removing obstacles to the development of AI means delivering the promise of this technology for all Americans, from advancements in health care, transportation, communication—innovations we haven’t even thought of yet.”

Michael Kratsios, US Chief Technology Officer

The public will have 60 days to comment on the White House’s draft guidance. Following those 60 days, the White House will issue a final memorandum to federal agencies and instruct agencies to submit implementation plans.

Deputy U.S. Chief Technology Officer Lynne Parker said these agency implementation plans will cover a wide range of policy issues and will help avoid a “one-size-fits-all” approach to regulating AI.

“While there are ongoing policy discussions about the use of AI by the government, this action in particular though addresses the use of AI in the private sector,” Parker said. “It’s also important to note that these principles are intentionally high-level. Federal agencies will implement the guidance in accordance with their sector-specific needs. We purposefully want to avoid top-down, one-size-fits-all blanket regulation, as AI-powered technologies reach across vastly different industries.”

Here is a summary of the OSTP’s 10 AI principles:

  1. Public trust in AI: The government’s regulatory and non-regulatory approaches to AI must promote reliable, robust, and trustworthy AI applications.
  2. Public participation: Agencies should provide ample opportunities for the public to participate in all stages of the rulemaking process.
  3. Scientific integrity and information quality: Agencies should develop technical information about AI through an open and objective pursuit of verifiable evidence that both informs policy decisions and fosters public trust in AI.
  4. Risk assessment and management: A risk-based approach should be used to determine which risks are acceptable, and which risks present the possibility of unacceptable harm or harm that has expected costs greater than expected benefits.
  5. Benefits and costs: Agencies should carefully consider the full societal costs, benefits, and distributional effects before considering regulations.
  6. Flexibility: Regulations should adapt to rapid changes and updates to AI applications.
  7. Fairness and non-discrimination: Agencies should consider issues of fairness and non-discrimination “with respect to outcomes and decisions produced by the AI application at issue.”
  8. Disclosure and transparency: “Transparency and disclosure can increase public trust and confidence in AI applications.”
  9. Safety and security: Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of information processed, stored, and transmitted by AI systems.
  10. Interagency coordination: Agencies should coordinate with each other to share experiences and to ensure the consistency and predictability of AI-related policies that advance American innovation and growth in AI.

Some See Regulation of AI As Needed

Some experts believe regulation of AI is needed as the technology advances rapidly into diagnosing medical conditions, driving cars, judging credit risk, and recognizing individual faces in video footage. The inability of AI systems at times to convey how they arrived at a recommendation or prediction leads to questions of how far to trust AI and when to keep humans in the loop.

Terah Lyons, Executive Director of the nonprofit Partnership on AI, which advocates for responsible AI and has backing from major tech firms and philanthropies, said in an account from the Associated Press that the White House principles will not have sweeping or immediate effects. But she was encouraged that they detailed a U.S. approach centered on values such as trustworthiness and fairness.

Terah Lyons, Executive Director, the Partnership on AI

“The AI developer community may see that as a positive step in the right direction,” said Lyons, who worked for the White House OSTP during the Obama administration. However, she noted no clear mechanisms are suggested for holding AI systems accountable. “It’s a little bit hard to see what the actual impact will be,” she stated.

AI Now Having a Geopolitical Impact

The US has so far rejected working with other G7 nations on a project known as the Global Partnership on AI, which seeks to establish shared principles and regulations.

The White House has suggested that the G7 plan would stifle innovation with bureaucratic meddling. In an interview with Wired, Michael Kratsios, the Chief Technology Officer for the United States, said he hopes other nations will follow America’s lead when developing their own regulations for AI. “The best way to counter authoritarian uses of AI is to make sure America and its international partners remain global hubs of innovation, advancing technology in manners consistent with our values,” Kratsios stated.

Some observers question the strategy of going it alone and how effective the principles will be. “There’s a downside to us going down a different path to other nations,” stated Martijn Rasser, a senior fellow at the Center for New American Security and the author of a recent report that calls for greater government investment in AI. Regarding the AI principles, Rasser stated, “A lot of this is open to interpretation to each individual agency. Anything that an agency produces could be shot down, given the vagueness.”

Martijn Rasser, Senior Fellow, Center for New American Security

In examples of the US effort to shape AI policy, the US Commerce Department last year blacklisted several Chinese AI firms after the Trump administration said they were implicated in the repression of Muslims in the country’s Xinjiang region. On Monday, citing national security concerns, the agency set limits on exporting AI software used to analyze satellite imagery.

The controls are meant to limit the ability of rivals such as China to use US software to develop military drones and satellites. However, regulating the export of software is notoriously difficult, especially when many key algorithms and data sets are open source.

Also last fall, the administration placed Chinese companies responsible for developing super computing technology on the blacklist. In May, the Chinese tech giant Huawei was put on the blacklist, along with 70 affiliates, due to concerns over its ties to the government and the potential for its 5G wireless technology to be used for cyber espionage.

Matt Sanchez, CTO and founder, CognitiveScale

Matt Sanchez, CTO of CognitiveScale, which offers a platform for building explainable and trustworthy AI systems, said in a comment for AI Trends, “These 10 AI principles are a good first step to drive more transparency and trust in the AI industry. They address the elements of trust that are critical for safeguarding public data—privacy, explainability, bias, fairness and compliance. This is a great framework, but in order to be truly successful, implementation cannot be driven by the government alone. It must be an interdisciplinary execution with deep involvement of the technology industry, academia, and the American public. In addition, this cannot be seen as proprietary, so ensuring it has a non-proprietary open standard is terribly important to success.”

Read the source articles in the Federal News Network, from the Associated Press, in Wired, and from CognitiveScale.
