Specialized AI Chip Market Seen Expanding Rapidly


The Cerebras Wafer Scale Engine, which powers the CS-1 system shown here, is said by the company to be 56 times the size of the largest GPU. (CEREBRAS)

By AI Trends Staff

The fragmenting and increasingly specialized AI chip market will force developers of AI applications to make platform choices for upcoming projects, choices with potentially long-term implications.

AI chip specialization arguably began with graphics processing units, originally developed for gaming and later deployed for applications such as deep learning. When NVIDIA released its CUDA toolkit for making GPUs programmable in 2007, it opened the market to a wider range of developers, noted a recent account in IEEE Spectrum written by Evan Sparks, CEO of Determined AI.

GPU processing power has advanced rapidly. Chips originally designed to render images are now the workhorses powering AI R&D. Many of the linear algebra routines necessary to make Fortnite run at 120 frames per second are now powering the neural networks at the heart of advanced applications in computer vision, automated speech recognition and natural language processing, Sparks notes.

Market projections for specialized AI chips are aggressive. Gartner projects specialized AI chip sales to reach $8 billion in 2019 and grow to $34 billion by 2023. NVIDIA’s internal projections, as reported by Sparks, have AI chip sales reaching $50 billion by 2023, with most of that anticipated to come from data center GPUs used to power deep learning. Custom silicon research is ongoing at Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, NVIDIA and Qualcomm. Many startups are also in the competition, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems and Wave Computing, which together have raised over $1 billion.

Allied Market Research projects the global AI chip market to reach $91 billion by 2025, growing at 45% a year until then. Market drivers include a surge in demand for smart homes and smart cities, more investment in AI startups, the emergence of quantum computing and the rise of smart robots, according to a release from Allied on GlobeNewswire. Market growth, however, is being slowed by a shortage of skilled workers.

The market is segmented by chip type, application, industry vertical, processing technology and region, according to Allied. Chip types include graphics processing units (GPUs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), central processing units (CPUs) and others. The ASIC segment is expected to register the fastest growth, at 52% per year through 2025.

At the recent International Electron Devices Meeting (IEDM) conference in San Francisco, IBM discussed innovations aimed at building hardware systems that keep pace with the demands of AI software and data workloads, according to an account in Digital Journal.

Among the highlights was nanosheet technology aimed at meeting the requirements of AI and 5G. Researchers discussed how to stack nanosheet transistors and build multiple-Vt solutions (multi-threshold voltage devices).

Phase-change memory (PCM) has emerged as an alternative to conventional von Neumann systems for training deep neural networks (DNNs), in which a synaptic weight is represented by the device conductance. However, the temporal evolution of the conductance values, referred to as conductance drift, poses challenges for the reliability of the synaptic weights. IBM presented an approach to reduce the impact of PCM conductance drift. IBM also demonstrated an ultra-low power prototype chip with the potential to execute AI tasks in real time on edge computing devices.
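To make the drift problem concrete, here is a minimal sketch using the power-law model commonly cited for PCM conductance drift; the drift coefficient and the single global correction factor are illustrative assumptions, not the specific technique IBM presented.

```python
import numpy as np

# Illustrative power-law model commonly used to describe PCM conductance drift:
#   G(t) = G(t0) * (t / t0) ** (-nu)
# where nu is a device-dependent drift coefficient (the value below is assumed).

NU = 0.05   # assumed drift coefficient; real devices vary
T0 = 1.0    # reference time (seconds) at which the conductance was programmed


def drifted_conductance(g0, t, nu=NU, t0=T0):
    """Conductance after drifting from time t0 to time t."""
    return g0 * (t / t0) ** (-nu)


def compensate(g_read, t, nu=NU, t0=T0):
    """Undo the average drift with a single global scale factor."""
    return g_read * (t / t0) ** nu


# Example: synaptic weights stored as conductances, read back one hour later.
rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=5)                 # target synaptic weights
read_after_1h = drifted_conductance(weights, t=3600.0)  # what the array reads back
recovered = compensate(read_after_1h, t=3600.0)         # global drift correction

print("stored   :", np.round(weights, 3))
print("drifted  :", np.round(read_after_1h, 3))
print("corrected:", np.round(recovered, 3))
```

In this toy example every device shares the same drift coefficient, so a single global correction recovers the weights exactly; in real arrays the coefficient varies from device to device, which is why drift compensation remains an active research problem.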

An example of a specific application driving an AI chip design is underway at the Argonne National Laboratory, a science and engineering research institution in Illinois. Finding the drug a cancer patient will best respond to tests the limits of modern science. With the emergence of AI, scientists are able to combine machine learning with genomic sequencing data to help clinicians better understand how to tailor treatment plans to individual patients, according to an account in AIMed (AI in Medicine).

Argonne National Lab Employing CS-1 for Cancer Research

Argonne recently announced the first deployment of a new AI processor, the CS-1, developed by Cerebras, a computer systems startup. It enables a faster rate of training for deep learning algorithms, and the CS-1 is said to house the fastest and largest AI chip ever built.

Rick Stevens, Argonne Associate Lab Director for Computing, Environment and Life Sciences, stated in a press release, “By deploying the CS-1, we have dramatically shrunk training time across neural networks, allowing our researchers to be vastly more productive.”

The CS-1 also has the ability to handle scientific data reliably and in an easy-to-use manner, including higher-dimensional data sets drawn from diverse sources. The deep learning algorithms developed to work with these models are extremely complex compared to those for computer vision or language applications, Stevens stated.

The main job of the CS-1 is to increase the speed of developing and deploying new cancer drug models. The hope is that the Argonne Lab will arrive at a deep learning model that can predict how a tumor may respond to a drug or combination of two or more drugs.

Read the source articles in IEEE Spectrum, on GlobeNewswire, in Digital Journal and in AIMed (AI in Medicine).

Source: AI Trends

AI Bouncing Off the Walls as Growing Models Max Out Hardware


The growing size of AI models is bumping into the limits of the hardware needed to process them, meaning current AI may be hitting the wall. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

Has AI hit the wall? Recent evidence suggests it might be the case.

At the recent NeurIPS event in Vancouver, software engineer Blaise Aguera y Arcas, the head of AI for Google, recognized the progress made in using deep learning techniques to get smartphones to recognize faces and voices, and he also called attention to the limitations of deep learning.

Blaise Aguera y Arcas, the head of AI for Google

“We’re kind of like the dog who caught the car,” Aguera y Arcas said in an account reported in Wired. Problems that involve more reasoning or social intelligence, like sizing up a potential hire, may be out of reach of today’s AI. “All of the models that we have learned how to train are about passing a test or winning a game with a score, [but] so many things that intelligences do aren’t covered by that rubric at all,” he stated.

A similar theme was struck in an address by Yoshua Bengio, director of Mila, an AI institute in Montreal, known for his work in artificial neural networks and deep learning. He noted how today’s deep learning systems yield highly specialized results. “We have machines that learn in a very narrow way,” Bengio said. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”

Both speakers recommended AI developers seek inspiration from the biological roots of natural intelligence, so that for example, deep learning systems could be flexible enough to handle situations different from the ones they were trained on.

A similar alarm was sounded by Jerome Pesenti, VP of AI at Facebook, also in a recent account in Wired on AI hitting the wall. Pesenti joined Facebook in January 2018, inheriting a research lab created by Yann LeCun, a French-American computer scientist known for his work on machine learning and computer vision. Before Facebook, Pesenti had worked on IBM’s Watson AI platform and at BenevolentAI, a company applying the technology to medicine.

Jerome Pesenti, VP of AI at Facebook

“Deep learning and current AI, if you are really honest, has a lot of limitations. We are very very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding. But we’re making progress in addressing some of these, and the field is still progressing pretty fast. You can apply deep learning to mathematics, to understanding proteins, there are so many things you can do with it,” Pesenti stated in the interview.

The compute requirement for advanced AI, the sheer volume of hardware needed, continues to grow, and a continuation of that growth rate appears unrealistic. “Clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost is going up 10-fold. Right now, an experiment might be in seven figures, but it’s not going to go to nine or ten figures, it’s not possible, nobody can afford that,” Pesenti stated. “It means that at some point we’re going to hit the wall. In many ways we already have.”
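Taken at face value, a tenfold annual increase compounds very quickly. The sketch below simply works out where that curve leads, assuming an illustrative $1 million starting cost for a “seven figure” experiment today.

```python
# Back-of-the-envelope projection of training-experiment cost growing
# 10x per year, as Pesenti describes. The $1 million starting point is an
# assumed round figure, not a reported number.
start_cost = 1_000_000   # dollars, illustrative
growth_per_year = 10

for year in range(5):
    cost = start_cost * growth_per_year ** year
    print(f"year {year}: ${cost:,}")
# By year 3 the trajectory reaches ten figures ($1,000,000,000),
# the level Pesenti argues nobody can afford.
```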

The way forward is to work on optimization, getting the most out of the available compute power.

Similar observations are being made by Naveen Rao, VP and general manager of Intel’s AI Products Group. He suggested at the company’s recent AI Summit, according to an account in Datanami, that the growth in the size of neural networks is outpacing the ability of the hardware to keep up. Solving the problem will require new thinking about how processing, network and memory work together.

Naveen Rao, VP and general manager of Intel’s AI Products Group

“Over the last 20 years we’ve gotten a lot better at storing data,” Rao stated. “We have bigger datasets than ever before. Moore’s Law has led to much greater compute capability in a single place. And that allowed us to build better and bigger neural network models. This is kind of a virtuous cycle and it’s opened up new capabilities.”

More data translates to better deep learning models for recognizing speech, text and images. Computers that can accurately identify images, and chatbots that can carry on fairly natural conversations, are prime examples of how deep learning is affecting daily life. However, this cutting-edge AI is available only to the biggest tech firms: Google, Facebook, Amazon and Microsoft. Even so, the approach may be nearing its limits.

Application-specific integrated circuits (ASICs) could help move more AI processing to the edge. Intel is also planning discrete graphics processing units (GPUs), and the company recently unveiled a vision processing unit (VPU) chip.

“There’s a clear trend where the industry is headed to build ASICs for AI,” Rao stated. “It’s because the growth of demand is actually outpacing what we can build in some of our other product lines.”

Facebook AI researchers recently published a report on their XLM-R project, a natural language model based on the Transformer model from Google. XLM-R is engineered to perform translation tasks across 100 different languages, according to an account in ZDNet.

XLM-R runs on 500 of NVIDIA’s V100 GPUs, and it is hitting the wall, running into resource constraints. The model has 24 layers, 16 “attention heads” and 500 million parameters. Still, it has a finite capacity and reaches its limit.

“Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference,” the authors wrote.
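As a rough illustration of why parameter count collides with memory, the back-of-the-envelope estimate below sizes a model of roughly the scale the article cites; the fp32 precision and Adam-style optimizer state are assumptions made for illustration, not details of how XLM-R was actually trained.

```python
# Rough memory estimate for a model around the size the article cites
# (~500 million parameters). The fp32 precision and Adam-style optimizer
# state below are illustrative assumptions.

PARAMS = 500_000_000   # parameter count cited in the article
BYTES_FP32 = 4         # bytes per 32-bit float

weights_gb = PARAMS * BYTES_FP32 / 1e9
# Adam-style training typically also keeps gradients plus two optimizer
# moments, roughly three extra weight-sized buffers (activations excluded).
training_state_gb = weights_gb * (1 + 3)

print(f"weights alone:            {weights_gb:.1f} GB")           # ~2.0 GB
print(f"weights + training state: {training_state_gb:.1f} GB")    # ~8.0 GB
```

Activations typically add far more than this at large batch and sequence sizes, which is why the authors single out memory and speed during training and inference as the practical ceilings on model capacity.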

The experience exemplifies two trends in AI on a collision course. One is the intent of scientists to build bigger and bigger models to get better results; the other is roadblocks in computing capacity.

Read the source articles in Wired, Datanami and ZDNet.

Source: AI Trends