
Disney Research neural face-swapping technique can produce photorealistic, high-resolution video

A new paper published by Disney Research in partnership with ETH Zurich describes a fully automated, neural network-based method for swapping faces in photos and videos – the first such method, according to the researchers, to produce results at megapixel resolution. That could make it well suited for use in film and TV, where high-resolution results are key to ensuring the final product reliably convinces viewers.

The researchers specifically intend this tech for use in replacing an existing actor’s performance with a substitute actor’s face – for instance, when de-aging or aging up a character, or potentially when portraying an actor who has passed away. They also suggest it could be used to replace the faces of stunt doubles when the conditions of a scene call for them.

This new method differs from other approaches in a number of ways. One is that any face captured in the input set can be swapped onto any recorded performance, making it relatively easy to re-image actors on demand. Another is that it matches contrast and lighting conditions in a compositing step to ensure the actor looks as if they were actually present in the scene.

You can check out the results for yourself in the video below (as the researchers point out, the effect is actually much better in moving video than in still images). There’s still a hint of the ‘uncanny valley’ effect going on here, and the researchers acknowledge as much, calling this “a major step toward photo-realistic face swapping that can successfully bridge the uncanny valley” in their paper. Basically, it’s a lot less nightmare fuel than other attempts I’ve seen, especially once you’ve seen the side-by-side comparisons with other techniques in the sample video. And, most notably, it works at much higher resolution, which is key for actual entertainment industry use.

[embedded content]

The examples presented are a very small sample, so it remains to be seen how broadly this can be applied; the subjects used appear to be primarily white, for instance. There’s also always the question of the ethical implications of any use of face-swapping technology, especially in video, since it could be used to fabricate credible video or photographic ‘evidence’ of something that didn’t actually happen.

Given, however, that the technology is now in development in multiple quarters, the debate over whether it should be developed at all is essentially moot. Instead, it’s welcome that organizations like Disney Research are following the academic path and sharing the results of their work, so that others concerned about its potential malicious use can find ways to flag, identify and protect against bad actors.



TinyML is giving hardware new life

The latest embedded software technology moves hardware into an almost magical realm

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

Until now, building machine learning (ML) algorithms for hardware has meant complex mathematical models built on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so. If this sounds complex and expensive to build, it is. On top of that, ML-related tasks have traditionally been offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.

But thanks to recent advances, companies are turning to TinyML as the latest way to build product intelligence. Arduino, the company best known for its open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards, such as the Arduino Nano 33 BLE Sense and other 32-bit boards, into powerful embedded ML platforms. With this partnership, you can run learning models based on artificial neural networks (ANNs) that sample tiny sensors and run on low-powered microcontrollers.

Over the past year, great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML is the missing link between edge hardware and device intelligence, and it is now coming to fruition.
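
To give a flavor of that workflow in code, here is a minimal sketch, assuming TensorFlow 2.x, of training a tiny Keras model on placeholder sensor data and converting it into a quantized TensorFlow Lite model that could then be deployed with TensorFlow Lite for Microcontrollers. The layer sizes, gesture classes and random data are illustrative assumptions, not a recommended design.

```python
# Minimal sketch: train a tiny Keras model on placeholder "sensor" data and
# convert it to a quantized TensorFlow Lite flatbuffer that could then be
# deployed with TensorFlow Lite for Microcontrollers. Assumes TensorFlow 2.x;
# the data, layer sizes and gesture classes are illustrative placeholders.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(1000, 128).astype("float32")   # e.g. accelerometer windows
y_train = np.random.randint(0, 3, size=(1000,))          # three made-up gesture classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)

def representative_data():
    # Sample inputs let the converter calibrate integer quantization ranges.
    for sample in x_train[:100]:
        yield [sample.reshape(1, -1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("gesture_model.tflite", "wb") as f:
    f.write(tflite_model)   # this flatbuffer is what gets compiled into MCU firmware
```

On a board such as the Nano 33 BLE Sense, the resulting flatbuffer would typically be embedded as a C array and executed by the TensorFlow Lite for Microcontrollers interpreter; full-integer quantization is used here because constrained microcontrollers have little memory and often no floating-point hardware.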

Tiny devices with not-so-tiny brains



Immunai wants to map the entire immune system and raised $20 million in seed funding to do it

For the past two years, the founding team of Immunai has been working stealthily to develop a new technology to map the immune system of any patient.

Founded by Noam Solomon, a Harvard and MIT-educated postdoctoral researcher, and former Palantir engineer Luis Voloch, Immunai was born from the two men’s interest in computational biology and systems engineering. When the two were introduced to Ansuman Satpathy, a professor of cancer immunology at Stanford University, and Danny Wells, a data scientist at the Parker Institute for Cancer Immunotherapy, the path forward for the company became clear.

“Together we said we bring the understanding of all the technology and machine learning that needs to be brought into the work and Ansu and Danny bring the single-cell biology,” said Solomon. 

Now as the company unveils itself and the $20 million in financing it has received from investors including Viola Ventures and TLV Partners, it’s going to be making a hiring push and expanding its already robust research and development activities. 

Immunai already boasts clinical partnerships with over ten medical centers and commercial partnerships with several biopharma companies, according to the company. And the team has already published peer-reviewed work on the origin of tumor-fighting T cells following PD-1 blockade, Immunai said.

“We are implementing a complicated engineering pipeline. We wanted to scale to hundreds of patients and thousands of samples,” said Wells. “Right now, in the world of cancer therapy, there are new drugs coming on the market that are called checkpoint inhibitors. [We’re] trying to understand how these molecules are working and find new combinations and new targets. We need to see the immune system in full granularity.”

That’s what Immunai’s combination of hardware and software allows researchers to do, said Wells. “It’s a vertically integrated platform for single cell profiling,” he said. “We go even further to figure out what the biology is there and figure that out in a new combination design for the trial.”

Cell therapies and cancer immunotherapies are changing the practice of medicine and offering new treatments for various conditions, but given how complex the immune system is, the developers of those therapies have few insights into how their treatments will affect it. Given the diversity of individual patients, variations in products can significantly change the way a patient responds to treatment, the company said.

Photo: Andrew Brookes/Getty Images

Immunai has the potential to change the way these treatments are developed by using single-cell technologies to profile cells, generating over a terabyte of data from an individual blood sample. The company’s proprietary database and machine learning tools map incoming data to different cell types and create profiles of immune responses based on differentiated elements. Finally, the database of immune profiles supports the discovery of biomarkers that can then be monitored for potential changes.
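
Immunai’s actual pipeline is proprietary and not described in detail here, but purely as a hypothetical illustration of the general idea of mapping single-cell measurements to cell types, a sketch along these lines could be used. The expression matrix, the cell-type labels and the choice of a generic random-forest classifier are all invented for demonstration.

```python
# Hypothetical sketch of the general idea: classify single cells into cell
# types from expression features. This is NOT Immunai's pipeline; the data,
# labels and model choice are illustrative placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_cells, n_genes = 5000, 200                       # toy single-cell expression matrix
X = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)
cell_types = rng.choice(["T cell", "B cell", "NK cell", "monocyte"], n_cells)

X_train, X_test, y_train, y_test = train_test_split(
    X, cell_types, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)                          # learn cell-type signatures
print(classification_report(y_test, clf.predict(X_test)))
```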

“Our mission is to map the immune system with neural networks and transfer learning techniques informed by deep immunology knowledge,” said Voloch, in a statement. “We developed the tools and knowhow to help every immuno-oncology and cell therapy researcher excel at their job. This helps increase the speed in which drugs are developed and brought to market by elucidating their mechanisms of action and resistance.”

Pharmaceutical companies are already aware of the transformational potential of the technology, according to Solomon, and the company is already in the process of finalizing a seven-figure contract with a Fortune 100 company.

One of the company’s earliest research coups was showing how immune systems respond when anti-PD-1 molecules are introduced. Typically, the presence of PD-1 means that T-cell production is being suppressed. What the research from Immunai revealed was that the response wasn’t coming from T cells already within the tumor; rather, new T cells were migrating to the tumor to fight it off, according to Wells.

“This whole approach that we have around looking at all of these indications — we believe that the right way and most powerful way to study these diseases is to look at the immune system from the top down,” said Voloch, in an interview. “Looking at all of these different scenarios. From the top, you see these patterns that wouldn’t be available otherwise.”



AI in Space: NASA Studying Exoplanets, ESA Supporting Satellites


AI is being employed in a wide range of efforts to explore and study space, including the study of exoplanets by NASA. (GETTY IMAGES)

By AI Trends Staff

AI is being employed in a wide range of efforts to explore and study space, including the study of exoplanets by NASA, the support of satellites by ESA, development of an empathic assistant for astronauts and efforts to track space debris.

NASA scientists are partnering with AI experts from companies including Intel, IBM and Google to apply advanced computer algorithms to problems in space science.

Machine learning is seen as helping space scientists learn from data generated by telescopes and observatories such as the James Webb Space Telescope, according to a recent account from NASA. “These technologies are very important, especially for big data sets and in the exoplanet field,” stated Giada Arney, an astrobiologist at NASA’s Goddard Space Flight Center in Greenbelt, Md. (Exoplanets are planets beyond the solar system.) “Because the data we’re going to get from future observations is going to be sparse and noisy, really hard to understand. So using these kinds of tools has so much potential to help us.”

Giada Arney, astrobiologist, NASA’s Goddard Space Flight Center

NASA has laid some groundwork for collaborating with private industry. For the past four summers, NASA’s Frontier Development Lab (FDL) has brought together technology and space innovators for eight weeks every summer to brainstorm and develop code. The program is a partnership between the SETI Institute and NASA’s Ames Research Center, both located in Silicon Valley.

The program pairs early-career doctoral students in science and computer engineering with experts from the space agency, academia and some big tech companies. The companies contribute hardware, algorithms, supercomputing resources, funding, facilities and subject matter experts. Some of the resulting technology has been put to use, helping to identify asteroids, find planets and predict extreme solar radiation events.

Scientists at Goddard have been using different techniques to reveal the chemistry of exoplanets, based on the wavelengths of light emitted or absorbed by molecules in their atmospheres. With thousands of exoplanets discovered so far, the ability to make quick decisions about which ones deserve further study would be a plus.

Arney and Shawn Domagal-Goldman, an astrobiologist at Goddard, working with technical support from Google Cloud, deployed a neural network to compare its performance with a conventional machine learning approach. University of Oxford computer science graduate student Adam Cobb led a study to test the capability of a neural network against a widely used machine learning technique known as a “random forest.” The team analyzed the atmosphere of WASP-12b, an exoplanet discovered in 2008 that had previously been studied with the random forest technique, using data supplied by NASA’s Hubble Space Telescope.

“We found out right away that the neural network had better accuracy than random forest in identifying the abundance of various molecules in WASP-12b’s atmosphere,” Cobb stated. Beyond the greater accuracy, the neural network model could also tell the scientists how certain it was about its prediction. “In a place where the data weren’t good enough to give a really accurate result, this model was better at knowing that it wasn’t sure of the answer, which is really important if we are to trust these predictions,” stated Domagal-Goldman.
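
The study’s actual code and Hubble data are not reproduced here, but as an illustration of the kind of head-to-head comparison described, the sketch below pits a small neural network against a random forest on synthetic “spectra.” The data-generating function and model settings are assumptions made only so the example runs end to end.

```python
# Illustrative sketch of the comparison described above: a small neural
# network vs. a random forest predicting a molecular abundance from a
# spectrum. The data here are synthetic placeholders, not Hubble/WASP-12b data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_spectra, n_wavelengths = 2000, 50
spectra = rng.normal(size=(n_spectra, n_wavelengths))
# Pretend abundance depends non-linearly on a couple of absorption bands.
abundance = np.tanh(spectra[:, 10]) + 0.5 * spectra[:, 25] ** 2

X_tr, X_te, y_tr, y_te = train_test_split(spectra, abundance, random_state=0)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("random forest MAE:", mean_absolute_error(y_te, forest.predict(X_te)))
print("neural network MAE:", mean_absolute_error(y_te, net.predict(X_te)))
```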

The European Space Agency (ESA) is studying how to employ AI to support satellite operations, including relative position, communication and end-of-life management for large satellite constellations, according to an account from ESA.

The ESA has engaged in a number of studies on how to use AI for space applications and spacecraft operations as part of its Basic Activities program. One study examines using AI to support autonomous spacecraft that can navigate, perform telemetry analysis and upgrade their own software without communicating with Earth.

Another study focused on how AI can support the management of complex satellite constellations, to reduce the active workload of ground operators. Greater automation, such as for collision avoidance, can reduce the need for human intervention.

Additional studies are researching how a swarm of picosatellites – very small satellites – could evolve a collective consciousness. The method employed explored known results in crystallography, the study of crystals, which may open a new way of conceiving lattice formation, drawing on lattice theory, a sub-discipline of order theory and abstract algebra.

AI Helping Astronauts Too; An AI Assistant with Empathy Coming

Astronauts traveling long distances for extended periods might be offered assistance from AI-powered emotional support robots, suggests a recent report in yahoo! News. Scientists are working to create an AI assistant that can sense human emotion and “respond with empathy.”

The robots could  be trained to anticipate the needs of crew members and “intervene if their mental health is at stake.” An AI assistant with empathy could be helpful to astronauts on a deep-space mission to Mars.

Astronauts on the International Space Station have an intelligent robot called CIMON that can interact but lacks emotional intelligence, NASA CTO Tom Soderstrom has stated. A team at the organization’s Jet Propulsion Laboratory is working on a more sophisticated emotional support companion that can help fly the spacecraft as well as track the health and well-being of crew members.

AI Employed in Effort to Track Space Debris 

Space debris is becoming a critical issue in space. Scientists count more than 23,000 human-made fragments larger than 4 inches, and another 500,000 particles between half an inch and 4 inches in diameter. These objects move at 22,300 miles per hour; collisions cause dents, pits or worse.

Scientists have begun to augment the lasers employed to measure and track space debris with AI, specifically neural nets, according to a recent account in Analytics India Magazine.

Laser ranging had become a challenge due to poor prediction accuracy, the small size of objects, and the lack of reflection prisms on the surface of debris, making it difficult to spot the exact location of fragments. Scientists began correcting the telescope pointing error of the laser ranging system by enhancing certain hardware. Most recently, AI deep learning techniques are starting to be employed to improve the correction models.

Chinese researchers from the Chinese Academy of Surveying and Mapping in Beijing and Liaoning Technical University in Fuxin have worked to enhance the accuracy of identifying space junk. The team used a backpropagation neural network model, optimized with a proposed genetic algorithm and the Levenberg-Marquardt algorithm (a standard curve-fitting method), to help pinpoint the location of debris. The results showed a three- to ninefold higher probability of accurately locating debris.

“After improving the pointing accuracy of the telescope through deep learning techniques, space debris with a cross-sectional area of one meter squared and a distance of 1,500 kilometres can be identified,” stated Tianming Ma of the Chinese Academy of Surveying and Mapping, Beijing and Liaoning Technical University, Fuxin.
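
The published correction model is not reproduced here, but as a rough illustration of the Levenberg-Marquardt curve-fitting step mentioned above, one could fit a simple harmonic pointing-error model to simulated telescope offsets with SciPy. The model form, noise level and data are assumptions for demonstration only.

```python
# Rough illustration of Levenberg-Marquardt curve fitting for a telescope
# pointing-error model. The harmonic model form and simulated data are
# assumptions for demonstration, not the published correction model.
import numpy as np
from scipy.optimize import curve_fit

def pointing_error(az, a, b, c, d):
    """Toy pointing-error model: offset as a function of azimuth (radians)."""
    return a + b * np.sin(az) + c * np.cos(az) + d * np.sin(2 * az)

rng = np.random.default_rng(1)
azimuth = np.linspace(0, 2 * np.pi, 200)
true_params = (0.5, 1.2, -0.8, 0.3)                        # arcseconds
observed = pointing_error(azimuth, *true_params) + rng.normal(0, 0.05, 200)

# method="lm" selects the Levenberg-Marquardt algorithm.
fit_params, _ = curve_fit(pointing_error, azimuth, observed, method="lm")
print("fitted parameters:", np.round(fit_params, 3))

corrected = observed - pointing_error(azimuth, *fit_params)
print("residual RMS (arcsec):", np.sqrt(np.mean(corrected ** 2)))
```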

Read the source articles from NASA, the ESA, Robotics Business Review, yahoo! News and Analytics India Magazine.



Helm.ai raises $13M on its unsupervised learning approach to driverless car AI

Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning.
Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has …



When it Comes to Neural Networks, We’ve Only Scratched the Surface

Based on the sheer number of articles about deep learning that are published every day, one could be forgiven for thinking that deep learning and neural networks make up the bulk of artificial intelligence innovation. Here is why when it comes to neural networks, we’ve only just scratched the surface.

Despite the incredible technological advances made possible through these deep learning techniques, relatively few organizations have opted to implement them. 

According to Mary Beth Moore, an artificial intelligence and language analytics strategist for SAS, those who do use deep learning tend to do so for specific use cases, such as image recognition with convolutional neural networks (CNNs).

Even when neural networks can be applied to other spaces, such as text analysis, they tend to be less popular than conventional machine learning approaches. 

Why? For one thing, neural networks require a large amount of clean, labeled data. Working with that data, in turn, requires processors capable of handling substantial training sets, as well as engineers who are familiar with applying deep learning frameworks, both of which can impose extra costs on companies that can ill afford them.
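
As a concrete example of the image-recognition use case mentioned above, here is a minimal Keras sketch of a convolutional network trained on MNIST. The tiny architecture and small public dataset stand in for the much larger labeled datasets, compute and engineering effort that production systems typically require.

```python
# Minimal Keras sketch of a convolutional network for image recognition.
# MNIST and this tiny architecture stand in for the large, clean, labeled
# datasets that production deep learning typically requires.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))   # [test loss, test accuracy]
```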

What about the issue of transparency?

Somewhat paradoxically, the more accurate a neural network becomes, the less transparent it is; in other words, as the neural network develops, it becomes harder and harder to pinpoint how it arrives at a particular solution.

Naturally, this makes some companies reluctant to embrace a technology whose results, no matter how accurate, are difficult to explain fully to clients and investors.

However, a recent paper published by researchers from MIT Lincoln Laboratory explored ways to design a neural network that would make it easier to interpret results while maintaining a high level of accuracy. As the authors note, while neural networks “were initially designed with a degree of model transparency, their performance on complex visual reasoning benchmarks was lacking.”

Most current iterations of neural networks “do not provide an effective mechanism for understanding the reasoning process.”

The researchers’ solution was to create “Transparency by Design” networks. Networks capable of “directly [evaluating] the model’s learning process” help to lessen the mystique surrounding neural networks and provide more accountability.

While the development of such techniques will hopefully hasten the adoption of neural networks across a more significant number of industries, it must also be noted that neural networks themselves still have a long way to develop.

The difficulty in adoption is, in part, due to the aforementioned need for extensive data training sets.

Data sets require companies to undertake the arduous process of collection, cleaning, and labeling.

It is estimated that, for a deep learning algorithm to reach or exceed a human’s performance, the training set should contain at least 10 million labeled data examples. That much data is a reasonably high bar to clear, especially for smaller companies who do not have the means or the opportunity to gather that many pieces of information. 

The opportunities that deep learning can offer businesses are enormous.

For instance, deep learning can help companies reduce their manufacturing costs by increasing accuracy and efficiency. It can also identify new business opportunities, personalize interactions between customer and company, and enable businesses to better respond to shifts in supply and demand.

Neural networks are transforming the world of healthcare by pinpointing effective treatment options, analyzing research, and finding patterns that would otherwise have gone unnoticed. 

Neural networks underpin several of the most widely-used AI technologies: image recognition, voice recognition, and translation.

These neural networks are also capable of creating art, composing music, and teaching themselves how to solve a Rubik’s cube – functions that previously only humans could perform to a high level.

Whether we can create sentient AI or not, the fact remains that neural networks are capable of doing much more than carrying out basic analytical tasks. More than any other technology, neural networks can demonstrate, or to some critics, mimic, human intuition and creativity.

Jeremy Fain


Jeremy Fain is the CEO and co-founder of Cognitiv. With over 20 years of interactive experience across agency, publisher, and ad tech management, Jeremy led North American Accounts for Rubicon Project before founding Cognitiv. At Rubicon Project, Jeremy was responsible for the global market success of over 400 media companies and 500 demand partners through real-time bidding, new product development, and other revenue strategies, ensuring interactive buyers and sellers could take full advantage of automated transactions. Prior to Rubicon Project, Jeremy served as Director of Network Solutions for CBS Interactive. With oversight of a $30 million+ P&L, Jeremy was responsible for the development, execution and management of data-driven solutions across CBS Interactive’s network of branded sites, including audience targeting, private exchange, and custom audience solutions. Prior to CBS, Jeremy served as Vice President of Industry Services for the IAB, where he shaped interactive industry policy, standards, and best practices, such as the first VAST standard and the T&Cs 3.0, by working on a daily basis with all the major media companies as well as all the agency holding companies.

Source: ReadWrite


High-quality Deepfake Videos Made with AI Seen as a National Security Threat


Deepfake videos so realistic that they cannot be detected as fakes have the FBI concerned that they pose a national security threat. (GETTY IMAGES)

By AI Trends Staff

The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.

The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. “What we’re concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning systems,” stated Chris Piehota, executive assistant director of the FBI’s science and technology division, in an account in WSJPro.

The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned national security could be compromised by fraudulent videos created to mimic public figures. “As the AI continues to improve and evolve, we’re going to get to a point where there’s no discernible difference between an AI-generated video and an actual video,” Piehota stated.

Chris Piehota, executive assistant director, FBI science and technology division

The word ‘deepfake’ is a portmanteau of “deep learning” and “fake.” It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person’s likeness.

The FBI has created its own deepfakes in a test lab and has been able to produce artificial personas that can pass some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3-D printers powered by AI models can be used to copy someone’s fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.

Threat to US Elections Seen

Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitude of voters. The AI-enhanced deepfakes can undermine the public’s confidence in democratic institutions, even if proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.

“It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that is deadly for democracy,” she stated in the WSJ Pro account.

Suzanne Spaulding, senior adviser, Center for Strategic and International Studies

Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple, according to an account in Live Science.

A GAN algorithm pits two AI systems against each other: one generates content such as photo images, while an adversary tries to guess whether the images are real or fake. The adversary starts off with the advantage, easily distinguishing real photos from fake ones. But over time, the generating AI gets better and begins producing content that looks lifelike.

For an example, see NVIDIA’s project www.thispersondoesnotexist.com which uses a GAN to create completely fake—and completely lifelike—photos of people.
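
To make the generator-versus-adversary loop concrete, here is a heavily simplified PyTorch sketch of a GAN learning to mimic a toy one-dimensional distribution. Image GANs such as the one behind thispersondoesnotexist.com are vastly larger, but the alternating updates follow the same pattern.

```python
# Heavily simplified GAN sketch (PyTorch): a generator learns to mimic a toy
# 1-D Gaussian while a discriminator tries to tell real samples from fakes.
# Real image GANs are far larger, but the alternating updates work the same way.
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data drawn from N(2, 0.5)
    fake = generator(torch.randn(64, 8))       # generator output from noise

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the real mean of 2.0.
print("generated sample mean:", generator(torch.randn(1000, 8)).mean().item())
```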

Example material is starting to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN that could alter a video of former President Barack Obama so that his lips moved in sync with the words of a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, a deepfake system generated realistic video of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files, splicing new words into a video of a person talking to make it appear they said something they never said.

All this will cause attentive viewers to be more wary of content on the internet.

High tech is trying to field a defense against deepfakes.

Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.

The clips Google released were created in collaboration with Alphabet subsidiary Jigsaw. They focused on technology and politics, featuring paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.

Some researchers are skeptical this approach will be effective. “The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” stated Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”
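
As a sketch of what training such a filter might look like in practice (this is not any particular team’s published method), one could fine-tune a standard image classifier on frames labeled real or fake. The frames/real and frames/fake directory layout, the MobileNetV2 base model and the hyperparameters below are assumptions for illustration.

```python
# Hypothetical sketch of training a deepfake "filter": fine-tune an image
# classifier on video frames labeled real vs. fake. Assumes frames were
# already extracted into frames/real/ and frames/fake/ directories; this is
# not any particular research team's method.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "frames", validation_split=0.2, subset="training", seed=0,
    image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "frames", validation_split=0.2, subset="validation", seed=0,
    image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # start by training only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # P(frame is fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=3)
```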

Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the State University of New York at Albany, according to an account in VentureBeat.

Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose; and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.

“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” noted Facebook CTO Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”

The data set contains 100,000-plus videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer.  The data does not include any personal user identification and features only participants who have agreed to have their images used. Access to the dataset is gated so that only teams with a license can access it.

The Deepfake Detection Challenge is overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.

Read the source articles in WSJ Pro, Live Science, Wired and VentureBeat.

Source: AI Trends


Community of AI Artists Exploring Creativity with Technology


AI artists explore the use of AI as a creative medium to produce original works, often exploring themes around the relationship of humans and machines. (GETTY IMAGES)

By AI Trends Staff

Artists are using AI to explore original work in new mediums.

Refik Anadol, for example, creates art installations using pools of data to create what he calls a new kind of “sculpture.” His “Machine Hallucination” installation ran in Chelsea Market, New York City, last fall.

The Turkish artist used machine learning algorithms on a dataset of more than three million images, to create a synthetic reality experiment. The model generates “a data universe of architectural hallucinations in 512 dimensions,” according to an account of the exhibit in designboom.

The exhibit was installed in the boiler room in the basement of Chelsea Market, a 6,000-square-foot space newly opened with the Anadol exhibit. He commented on being selected: “I’m especially proud to be the first to reimagine this historic building, which is more than 100 years old, by employing machine intelligence to help narrate the hybrid relationship between architecture and our perception of time and space. Machine Hallucination offers the audience a glimpse into the future of architecture itself.”

Machine Hallucination was shown on giant screens or projected onto walls, floors, ceilings or entire buildings, using data to produce a kind of AI pointillism, in an immersive experience.

One theme of Anadol’s work is the symbiosis and tension between people and machines, according to an account in Wired.  The artist says his work is an example of how AI—like other technologies—will have a broad range of uses. “When we found fire, we cooked with it, we created communities; with the same technology we kill each other or destroy,” Anadol stated. “Clearly AI is a discovery of humanity that has the potential to make communities, or destroy each other.”

Artists working with AI as a medium have come together to form AIArtists.org to curate works by pioneering AI artists and act as the world’s first clearinghouse for AI’s impact on art and culture. The site was founded by Marnie Benney, an independent, contemporary art curator. The site features the community of AI artists and works they are investigating.

The artists are exploring themes around our relationship with technology. Will AI be the greatest invention or the last one? How can AI expand human creativity? Can AI be autonomously creative in a meaningful way? Can AI help us learn about our collective imagination? How can artists build creative and improvisational partnerships with AI? Can AI write poetry and screenplays? What does a machine see when it looks at the depth and breadth of our human experience?

Lauren McCarthy, LA-based AI artist who examines social relationships

These are fun questions to consider. The site offers resources for AI artists, a timeline of AI art history and a compilation of unanswered questions about AI.

Among the artists listed is Lauren McCarthy, an LA-based artist who examines social relationships in the midst of surveillance, automation and algorithmic living. She is the creator of p5.js, an open-source JavaScript library for learning creative expression through code online. It has over 1.5 million users. She is co-director of the Processing Foundation, a nonprofit with a mission to promote software literacy within the visual arts, and an assistant professor at UCLA Design Media Arts.

See the source articles in designboom and Wired, and visit AIArtists.org.

Source: AI Trends


Find a Dataset to Launch Your Data Science Project, and Tune Your AI Education


Find the right dataset for your data science project, get it off the ground, and keep your AI education tuned up. (GETTY IMAGES)

By AI Trends Staff

Once you have decided to explore a career in data science and need a project to get yourself going, the first step is to decide what dataset to use.

Fortunately, a guide to the best datasets for machine learning has been published in edureka!, written by Disha Gupta, a computer science and technology writer based in India. She notes that without training datasets, machine-learning algorithms would have no way to learn text mining or text classification. Five to 10 years ago, it was difficult to find datasets for machine learning and data science projects. Today the challenge is not finding data, but finding the relevant data.

Here is an excerpt referring to datasets good for Natural Language Processing projects, which need text data. She recommended:

Enron Dataset – Email data from the senior management of Enron that is organized into folders.

Amazon Reviews – It contains approximately 35 million reviews from Amazon spanning 18 years. Data includes user information, product information, ratings, and text review.

Newsgroup Classification – Collection of almost 20,000 newsgroup documents, partitioned evenly across 20 newsgroups. It is great for practicing topic modeling and text classification.

For Finance projects:

Quandl: A great source of economic and financial data that is useful to build models to predict stock prices or economic indicators.

World Bank Open Data: Covers population demographics and many economic and development indicators across the world.

IMF Data: The International Monetary Fund (IMF) publishes data on international finances, foreign exchange reserves, debt rates, commodity prices, and investments.

And for Sentiment Analysis projects:

Multidomain sentiment analysis dataset – Features product reviews from Amazon.

IMDB Reviews – Dataset for binary sentiment classification. It features 25,000 movie reviews.

Sentiment140 – Uses 160,000 tweets with emoticons pre-removed.
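
As a quick way to get started with one of these, the sketch below loads the IMDB reviews dataset bundled with Keras and trains a small binary sentiment classifier. The model is deliberately minimal and meant only as a starting point, not a strong baseline.

```python
# Quick-start sketch: binary sentiment classification on the IMDB reviews
# dataset bundled with Keras. The tiny model is for illustration only.
import tensorflow as tf

vocab_size, max_len = 10000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=vocab_size)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 32),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.2)
print(model.evaluate(x_test, y_test, verbose=0))      # [test loss, test accuracy]
```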

Two Questions for Your Data Science Project

Once you have selected a dataset, you might need some more suggestions for getting your project off the ground. First, ask yourself two questions, suggests a recent article in Data Science Weekly: How would you make some money with it? And how would you save some money with it?

The answers will help you focus on what is important and useful when looking at your data. You will often find that before you get to the modeling or serious math, you may have to work through problems with the data, such as missing, erroneous or biased data. “You will find frequently in the real world that data is incredibly messy and nothing like the squeaky clean data sets found online in contests on Kaggle or elsewhere,” the author states.
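
That clean-up step is often where most of the time goes. A small pandas sketch of the kind of triage involved might look like the following; the column names and cleaning rules are invented purely for illustration.

```python
# Small pandas sketch of typical data triage: duplicates, missing values,
# inconsistent categories, impossible entries. Columns and rules are placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "revenue": [120.0, np.nan, np.nan, -5.0, 300.0],    # missing and invalid values
    "region": ["east", "EAST ", None, "west", "west"],
})

df = df.drop_duplicates(subset="customer_id")                  # drop duplicate rows
df["region"] = df["region"].str.strip().str.lower()            # normalize categories
df = df[df["revenue"].isna() | (df["revenue"] >= 0)]           # drop impossible values
df["revenue"] = df["revenue"].fillna(df["revenue"].median())   # impute missing values

print(df)
```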

Maybe at this stage you feel you need more education on AI. Fortunately, there is BestColleges, a partnership with HigherEducation.com that provides students with direct connections to schools and programs that suit their education goals. The site provides college planning, access to financial aid and career resources.

Tune Up Your AI Education

Success in the AI field usually requires an undergraduate degree in computer science or a related discipline such as mathematics. More senior positions may require a master’s or PhD. Motivation is important. “Curiosity, confidence and perseverance are good traits for any student looking to break into an emerging field and AI is no exception,” states Dan Ayoub, Education Manager for Microsoft. “Unlike careers where a path has been laid over decades, AI is still in its infancy, which means you may have to form your own path and get creative.”

Dan Ayoub, General Manager, Education, Microsoft

The article sketches out sample core subjects in an AI curriculum in math and statistics, computer science and “core AI,” such as machine learning, neural networks and natural language processing. Once you cover some fundamentals, you can begin to explore subjects that interest you personally. Clusters include machine learning, robotics, and human-AI interaction.

Whether you are a college student or already in the workforce, it’s important to proactively define your own AI curriculum, Ayoub suggested.

Example skills that can help you check off the right boxes in your response to the AI job posting include:

  • Programming Languages: Python, Java, C/C++, SQL, R, Scala, Perl
  • Machine Learning Frameworks: TensorFlow, Theano, Caffe, PyTorch, Keras, MXNET
  • Cloud Platforms: AWS, Azure, GCP
  • Workflow Management Systems: Airflow, Luigi, Pinball
  • Big Data Tools: Spark, HBase, Kafka, HDFS, Hive, Hadoop, MapReduce, Pig
  • Natural Language Processing Tools: spaCy, NLTK

Jobs of the future will require a willingness to stay curious. It takes a little time and some patience.

An IBM AI research leader argues that AI needs to be adopted by more people with data science and software engineering skills, as demand for workers skilled in machine learning is doubling every few months. “If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” said Dario Gil, research director at IBM, in an article in VentureBeat.

Read the source articles in edureka!, Data Science Weekly, at BestColleges and in VentureBeat.

Source: AI Trends


Specialized AI Chip Market Seen Expanding Rapidly


The Cerebras Wafer Scale Engine powering the CS-1 system shown here is said by the company to be 56 times the size of the largest GPU. (CEREBRAS)

By AI Trends Staff

The fragmenting and increasingly specialized AI chip market will cause developers of AI applications to have to make platform choices for upcoming projects, choices with potentially long-term implications.

AI chip specialization arguably began with graphics processing units, originally developed for gaming then deployed for applications such as deep learning. When NVIDIA released its CUDA toolkit for making GPUs programmable in 2007, it opened the market up to a wider range of developers, noted a recent account in IEEE Spectrum written by Evan Sparks, CEO of Determined AI.

GPU processing power has advanced rapidly. Chips originally designed to render images are now the workhorses powering AI R&D. Many of the linear algebra routines necessary to make Fortnite run at 120 frames per second are now powering the neural networks at the heart of advanced applications in computer vision, automated speech recognition and natural language processing, Sparks notes.

Market projections for specialized AI chips are aggressive. Gartner projects specialized AI chip sales to reach $8 billion in 2019 and grow to $34 billion by 2023. NVIDIA’s internal projections reported by Sparks have AI chip sales reaching $50 billion by 2023, most of that anticipated to come from data center GPUs used to power deep learning. Custom silicon research is ongoing at Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, NVIDIA and Qualcomm. Many startups are also in the competition, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems and Wave Computing, which together have raised over $1 billion.

Allied Market Research projects the global AI chip market to reach $91 billion by 2025, growing 45% a year until then. Market drivers include a surge in demand for smart homes and smart cities, more investment in AI startups, the emergence of quantum computing and the rise of smart robots, according to a release from Allied on the Global Newswire. Market growth, however, is being slowed by a shortage of skilled workers.

The market is segmented by chip type, application, industry vertical, processing technology and region, according to Allied. Chip types include GPUs, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), central processing units (CPUs) and others. The ASIC segment is expected to register the fastest growth, at 52% per year through 2025.

At the recent International Electron Devices Meeting (IEDM) conference in San Francisco, IBM discussed innovations into making hardware systems that advance with the pace of demands of AI software and data workloads, according to an account in Digital Journal.

Among the highlights: nanosheet technology aims to meet the requirements of AI and 5G. Researchers discussed how to stack nanosheet transistors and multiple-Vt solutions (multi-threshold voltage devices).

Phase-change memory (PCM) has emerged as an alternative to conventional von Neumann systems to train deep neural networks (DNNs) where a synaptic weight is represented by the device conductance. However, a temporal evolution of the conductance values, referred to as conductance drift, poses challenges for the reliability of the synaptic weights. IBM presented an approach to reduce the impact of PCM conductance drift. IBM also demonstrated an ultra-low power prototype chip, with the potential to execute AI tasks in edge computing devices in real time.

An example of a specific application driving an AI chip design is happening at the Argonne National Laboratory, a science and engineering research institution in Illinois. Finding the drug a cancer patient will respond to best tests the limits of modern science. With the emergence of AI, scientists are able to combine machine learning with genomic sequence data to help clinicians better understand how to tailor treatment plans to individual patients, according to an account in AIMed (AI in Medicine).

Argonne National Lab Employing CS-1 for Cancer Research

Argonne recently announced the first deployment of a new AI processor, the CS-1, developed by Cerebras, a computer systems startup. The chip enables a faster rate of training for deep learning algorithms. CS-1 is said to house the fastest and largest AI chip ever built.

Rick Stevens, Argonne Associate Lab Director for Computing, Environment and Life Sciences, stated in a press release, “By deploying the CS-1, we have dramatically shrunk training time across neural networks, allowing our researchers to be vastly more productive.”

The CS-1 also handles scientific data reliably and in an easy-to-use manner, including higher-dimensional data sets drawn from diverse sources. The deep learning algorithms developed to work with these models are extremely complex compared with computer vision or language applications, Stevens stated.

The main job of the CS-1 is to increase the speed of developing and deploying new cancer drug models. The hope is that the Argonne Lab will arrive at a deep learning model that can predict how a tumor may respond to a drug or combination of two or more drugs.

Read the source articles in IEEE Spectrum, on the Global Newswire, in Digital Journal and in AIMed (AI in Medicine).

Source: AI Trends