Intel Achieves AI Nervana

All Apologies to GPUs and FPGAs

At CES 2019 in Las Vegas this week, Navin Shenoy, executive vice president of Intel's Data Center Group, announced the Intel Nervana Neural Network Processor for Inference, which will go into production this year. Before its acquisition by Intel, Nervana had reportedly been developing a custom chip for neural network processing that it claimed would outperform GPUs as AI accelerators by a factor of at least ten. The team analyzed a number of deep neural networks and arrived at what it believed to be the best architecture for their key operations. Intel hasn't given specific performance or power-consumption figures for the new devices, other than to say that power consumption will be in the "hundreds of watts," which places Nervana squarely in the data center (in contrast to edge-targeted AI devices such as the company's Movidius and Mobileye offerings). Nervana appears to be aimed at the growing foothold of GPUs and FPGAs as AI accelerators in the data center.

