
Growth in Machine Learning Leading to Demand for Automated ML

AutoML solutions have emerged to assist data scientists with the many tasks required to build and maintain robust AI models. (GETTY IMAGES)
By John P. Desmond, AI Trends Editor
Machine learning has been used successfully in many disciplines that increasingly depend on it. However, the success relies on human machine …



AI is Changing the Pattern for How Software is Developed


AI is helping companies to deploy new software more efficiently, and to allow a new generation of developers to learn to code more easily. Credit: Getty Images 

By AI Trends Staff  

Software developers are using AI to help write and review code, detect bugs, test software and optimize development projects. This assistance is helping companies to deploy new software more efficiently, and to allow a new generation of developers to learn to code more easily. 

These are conclusions of a report on AI in software development published by Deloitte and summarized in a recent article in Forbes. Authors David Schatsky and Sourabh Bumb describe how a range of companies have launched dozens of AI-driven software development tools over the past 18 months. The market is growing, with startups raising $704 million in the year ending September 2019.

The new tools can be used to help reduce keystrokes, detect bugs as software is being written and automate many of the tests needed to confirm the quality of software. This is important in an era of increasing reliance on open source code, which can come with bugs. 

While some fear automation may take jobs away from coders, the Deloitte authors see that outcome as unlikely.

“For the most part, these AI tools are helping and augmenting humans, not replacing them,” Schatsky stated. “These tools are helping to democratize coding and software development, allowing individuals not necessarily trained in coding to fill talent gaps and learn new skills. There is also AI-driven code review, providing quality assurance before you even run the code.” 

A study from Forrester in 2018 found that 37 percent of companies involved in software development were using coding tools powered by AI. The percentage is likely higher now, with companies such as Tara, DeepCode, Kite, Functionize and Deep TabNine among the many providing automated coding services.

Success seems to be accelerating the trend. “Many companies that have implemented these AI tools have seen improved quality in the end products, in addition to reducing both cost and time,” stated Schatsky.  

The Deloitte study said AI can help alleviate a chronic shortage of talented developers. Poor software quality cost US organizations an estimated $319 billion last year. The application of AI has the potential to mitigate these challenges. 

Deloitte sees AI helping in many stages of software development, including project requirements, code review, bug detection and resolution, more thorough testing, deployment and project management.

IBM Engineer Learned AI Development Lessons from Watson Project 

IBM Distinguished Engineer Bill Higgins, based in Raleigh, NC, who has spent 20 years in software development at the company, recently published an account on Medium of the impact of AI on software development.

Organizations need to “unlearn” the patterns for how they have developed software in the past. “If it’s difficult for an individual to adapt, it’s a million times harder for a company to adapt,” the author stated.   

Higgins was the lead for IBM’s AI for developers mission within the Watson group. “It turned out my lack of personal experience with AI was an asset,” he stated. He had to go through his own learning journey and thus gained deeper understanding and empathy for developers needing to adapt.  

To learn about AI in software development, Higgins said he studied how others have applied it (the problem space) and the cases in which using AI is superior to alternatives (the solution space). This was important to understanding what was possible and to avoid “magical thinking.” 

The author said his journey was the most intense and difficult learning he had done since getting a computer science degree at Penn State. “It was so difficult to rewire my mind to think about software systems that improve from experience, vs. software systems that merely do the things you told them to do,” he stated.  

IBM developed a conceptual model to help enterprises think about AI-based transformation called the AI Ladder. The ladder has four rungs: collect, organize, analyze and infuse. Most enterprises have lots of data, often siloed in separate IT systems or inherited through acquisitions. A given enterprise may have 20 databases and three data warehouses with redundant and inconsistent information about customers. The same is true for other data types such as orders, employees and product information. “IBM promoted the AI Ladder to conceptually climb out of this morass,” Higgins stated.

In the infusion stage, the company works to integrate trained machine learning models into production systems, and design feedback loops so the models can continue to improve from experience. An example of infused AI is the Netflix recommendation system, powered by sophisticated machine learning models. 
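
As a rough illustration of that infusion step, here is a minimal sketch in Python: a served model logs each prediction with a correlation id, so a later job can join predictions against observed outcomes and assemble fresh training data. The model class, log format and field names are hypothetical placeholders for illustration, not part of any IBM tooling.

```python
import json
import time

class DummyModel:
    """Stand-in for a trained model; a real system would load one from a registry."""
    def predict(self, features):
        return sum(features) > 1.0  # toy decision rule

def serve(model, request):
    prediction = model.predict(request["features"])
    # Log input, output and a correlation id; a downstream job joins these
    # records with observed outcomes to produce new training examples,
    # closing the feedback loop so the model can improve from experience.
    with open("predictions.log", "a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "request_id": request["id"],
            "features": request["features"],
            "prediction": prediction,
        }) + "\n")
    return prediction

if __name__ == "__main__":
    print(serve(DummyModel(), {"id": "r-001", "features": [0.4, 0.9]}))
```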

IBM determined that a combination of APIs, pre-built ML models and optional tooling could encapsulate the collect, organize and analyze rungs of the AI Ladder for common ML domains such as natural language understanding, conversations with virtual agents, visual recognition, speech and enterprise search.

Watson’s Natural Language Understanding (NLU) service, for example, became rich and complex. Machine learning is now good at understanding many aspects of language, including concepts, relationships between concepts and emotional content. The NLU service, and the machine learning-based natural language processing research behind it, can now be made available to developers via an elegant API and supporting SDKs.
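
As a sketch of what that looks like for a developer, the snippet below calls the NLU service through IBM’s Python SDK (the ibm-watson package) to pull entities and sentiment out of a sentence. The API key, service URL and version date are placeholders; the exact endpoint and feature set will vary by account and use case.

```python
# Minimal sketch of calling Watson NLU via the ibm-watson Python SDK.
# The API key, service URL and version date below are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, SentimentOptions)

authenticator = IAMAuthenticator("your-api-key")
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01",
                                     authenticator=authenticator)
nlu.set_service_url("https://api.us-south.natural-language-understanding"
                    ".watson.cloud.ibm.com")

# Ask the service for entities and overall sentiment in one call.
response = nlu.analyze(
    text="IBM Watson brought natural language understanding to developers.",
    features=Features(entities=EntitiesOptions(limit=5),
                      sentiment=SentimentOptions()),
).get_result()
print(response)
```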

“Thus developers can today begin leveraging certain types of AI in their applications, even if they lack any formal training in data science or machine learning,” Higgins stated.

It does not eliminate the AI learning curve, but it does make the curve gentler.

Read the source articles in Forbes and on Medium.

Source: AI Trends


AI Bouncing Off the Walls as Growing Models Max Out Hardware


The growing size of AI models is bumping into the limits of the hardware needed to process them, meaning current AI may be hitting the wall. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

Has AI hit the wall? Recent evidence suggests it might be the case.

At the recent NeurIPS event in Vancouver, software engineer Blaise Aguera y Arcas, the head of AI for Google, recognized the progress in using deep learning techniques to get smartphones to recognize faces and voices, but he also called attention to the limitations of deep learning.

Blaise Aguera y Arcas, the head of AI for Google

“We’re kind of like the dog who caught the car,” Aguera y Arcas said in an account reported in Wired. Problems that involve more reasoning or social intelligence, like sizing up a potential hire, may be out of reach of today’s AI. “All of the models that we have learned how to train are about passing a test or winning a game with a score, [but] so many things that intelligences do aren’t covered by that rubric at all,” he stated.

A similar theme was struck in an address by Yoshua Bengio, director of Mila, an AI institute in Montreal, known for his work in artificial neural networks and deep learning. He noted how today’s deep learning systems yield highly specialized results. “We have machines that learn in a very narrow way,” Bengio said. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”

Both speakers recommended AI developers seek inspiration from the biological roots of natural intelligence, so that for example, deep learning systems could be flexible enough to handle situations different from the ones they were trained on.

A similar alarm was sounded by Jerome Pesenti, VP of AI at Facebook, also in a recent account in Wired on AI hitting the wall. Pesenti joined Facebook in January 2018, inheriting a research lab created by Yann LeCun, a French-American computer scientist known for his work on machine learning and computer vision. Before Facebook, Pesenti had worked on IBM’s Watson AI platform and at BenevolentAI, a company applying the technology to medicine.

Jerome Pesenti, VP of AI at Facebook

“Deep learning and current AI, if you are really honest, has a lot of limitations. We are very very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding. But we’re making progress in addressing some of these, and the field is still progressing pretty fast. You can apply deep learning to mathematics, to understanding proteins, there are so many things you can do with it,” Pesenti stated in the interview.

The compute power required for advanced AI, the sheer volume of hardware needed, continues to grow, and that growth rate appears unsustainable. “Clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost is going up 10-fold. Right now, an experiment might be in seven figures, but it’s not going to go to nine or ten figures, it’s not possible, nobody can afford that,” Pesenti stated. “It means that at some point we’re going to hit the wall. In many ways we already have.”

The way forward is to work on optimization, getting the most out of the available compute power.

Similar observations are being made by Naveen Rao, VP and general manager of Intel’s AI Products Group. Speaking at the company’s recent AI Summit, according to an account in Datanami, he suggested that the growth in the size of neural networks is outpacing the ability of the hardware to keep up. Solving the problem will require new thinking about how processing, network, and memory work together.

Naveen Rao, VP and general manager of Intel’s AI Products Group

“Over the last 20 years we’ve gotten a lot better at storing data,” Rao stated. “We have bigger datasets than ever before. Moore’s Law has led to much greater compute capability in a single place. And that allowed us to build better and bigger neural network models. This is kind of a virtuous cycle and it’s opened up new capabilities.”

More data translates to better deep learning models for recognizing speech, text, and images. Computers that can accurately identify images, and chatbots that can carry on fairly natural conversations, are primary examples of how deep learning is affecting daily life. However, this cutting-edge AI is available only to the biggest tech firms: Google, Facebook, Amazon, Microsoft. Even so, we may be nearing the limit of what current hardware can support.

Application-specific integrated circuits (ASICs) could help move more AI processing to the edge. Intel is also planning discrete graphics processing units (GPUs) and recently unveiled a vision processing unit (VPU) chip.

“There’s a clear trend where the industry is headed to build ASICs for AI,” Rao stated. “It’s because the growth of demand is actually outpacing what we can build in some of our other product lines.”

Facebook AI researchers recently published a report on their XLM-R project, a natural language model based on the Transformer model from Google. XLM-R is engineered to perform translations across 100 different languages, according to an account in ZDNet.

XLM-R runs on 500 of NVIDIA’s V100 GPUs, and it is hitting the wall of resource constraints. The model has 24 layers, 16 “attention heads” and 500 million parameters, yet it still reaches the limit of its capacity.

“Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference,” the authors wrote.
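
Those numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below estimates the parameter count of a standard Transformer encoder of the size described; the hidden size (1024) and shared multilingual vocabulary (250,000 tokens) are assumptions typical of XLM-R-class models, not figures from the article.

```python
# Rough parameter estimate for a Transformer encoder like XLM-R.
layers = 24        # encoder layers, from the article
d_model = 1024     # assumed hidden size (16 heads x 64 dims per head)
vocab = 250_000    # assumed shared multilingual vocabulary

# Per layer: ~4*d^2 for the attention projections (Q, K, V, output)
# plus ~8*d^2 for the feed-forward block (two d-by-4d matrices).
per_layer = 12 * d_model ** 2
embeddings = vocab * d_model   # token embedding table

total = layers * per_layer + embeddings
print(f"~{total / 1e6:.0f}M parameters")  # ~558M, near the article's ~500M
```

The embedding table and the per-layer weight matrices dominate the count, which is why memory during training and inference becomes the binding constraint the authors describe.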

The experience exemplifies two trends in AI on a collision course: the intent of scientists to build bigger and bigger models to get better results, and the roadblock of available computing capacity.

Read the source articles in Wired, Datanami and ZDNet.

Source: AI Trends