
AWS introduces new Trn1 instances to speed up training of machine learning models




As more companies move to custom silicon for their customers' workloads, Amazon has been busy on this front. It introduced the Inferentia chip in 2019 to speed up machine learning inference. Last year, the company followed with a second chip, Trainium, designed specifically for training machine learning models. Today, AWS continued to build on that work, introducing Trn1, a new EC2 instance type powered by Trainium.
Adam Selipsky, delivering his first AWS re:Invent keynote, announced the news on stage in Las Vegas this morning.
“So today, I’m excited to announce the new Trn1 instance powered by Trainium, which we expect to deliver the best price-performance for training deep learning models in the cloud and the fastest on EC2,” Selipsky told the re:Invent audience.
“Trn1 is the first EC2 instance with up to 800 gigabits per second of networking bandwidth. So …
