AMD Moves Into Deep Learning With an Open Source Vengeance

Deep Learning, one of the foundational elements for true Artificial Intelligence (a term many of us think is silly, because intelligence is something that is either there or isn't), is likely the key driver of massive growth in the technology segment going forward. NVIDIA was one of the first companies to jump on this bandwagon, but the market is still young, which means companies like AMD and Intel still have time to make their mark. And given that this is a young market expected to explode, one that no single vendor, or even two, could ever handle alone, right now the more the merrier.

The latest entry in this rush toward massive revenue, one poised to change computing as we've come to know it, comes from AMD, and their unique angle is an Open Source approach designed to get as many people as possible developing for their hardware as quickly as possible. This leveraged approach matters because AMD lacks the resources of either Intel or NVIDIA, so they are looking for advantages that will help them catch up.

Radeon Instinct

At the heart of this effort is a new branded initiative called Radeon Instinct, which consists of accelerators and an open software stack called Radeon Open Compute (ROCm). Because they are starting late, the Open Source approach is critical to getting the kind of initial developer support they will need to advance this initiative quickly. Their other initial advantage is that they have both advanced GPU and CPU architectures to play with, so they can create a more unified architecture than they could coming from a GPU or CPU base alone. Even Apple realized they couldn't do AI alone, and their researchers were finally allowed to publish their progress, largely because this segment is moving so fast that you can't advance quickly enough if you don't collaborate.

MIOpen is their open GPU-accelerated library, currently under development and slated for release in 2017. At launch, the library is expected to provide implementations of standard deep-learning routines, including convolution, pooling, activation functions, normalization, and tensor format operations.
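To make those routine families concrete, here is a minimal NumPy sketch of what each one computes: a naive 2-D convolution, max pooling, a ReLU activation, and a simplified normalization. This is purely illustrative of the math these libraries accelerate; it is not MIOpen's actual API, and every function name below is my own invention.

```python
# Illustrative reference implementations of the routine families MIOpen is
# expected to cover. NOT the MIOpen API (which is a C library running on
# the GPU); these are toy NumPy versions of the same math.
import numpy as np

def conv2d(x, k):
    """Naive 2-D convolution (really cross-correlation, as in most DL libraries)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Max pooling with a square window and stride equal to the window size."""
    h, w = x.shape
    trimmed = x[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Rectified-linear activation function."""
    return np.maximum(x, 0.0)

def normalize(x, eps=1e-5):
    """Normalization: shift to zero mean, scale to unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

image = np.random.rand(8, 8)    # a toy 8x8 single-channel "image"
kernel = np.random.rand(3, 3)   # a 3x3 convolution filter
out = normalize(relu(max_pool(conv2d(image, kernel))))
print(out.shape)                # (3, 3)
```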

Deep learning currently has two phases: "training," which takes the most time and teaches the system how to make decisions, and "inference," when the technology is deployed at scale and has to apply what it has learned. Other firms can argue over whether a CPU or a GPU is better for training or for inference; AMD just has to point customers to the appropriate internal product for their needs.
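As a toy illustration of why training is the expensive phase, the sketch below fits a tiny linear model with hundreds of gradient-descent passes, then performs inference with a single forward pass. Everything here, including the data and names, is invented for illustration and has nothing to do with any AMD product.

```python
# Toy contrast between the two phases: "training" loops over the data many
# times to fit the model; "inference" applies the finished model in one
# cheap forward pass.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))                  # toy dataset: 256 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=256)

# --- Training phase: many iterative gradient-descent updates ---
w = np.zeros(3)
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
    w -= 0.1 * grad                            # update with learning rate 0.1

# --- Inference phase: a single forward pass on new input ---
x_new = np.array([1.0, 2.0, 3.0])
print("prediction:", x_new @ w)                # one multiply-add, no loop
```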

Wrapping Up:

AMD's entry into this segment, following Intel and NVIDIA, fully validates it and suggests the wars have moved from platforms like IBM Watson to components, software, and libraries, which should both accelerate advancement and provide more choices when it comes to adoption, all of which is important to a market ramping rapidly. In a few years, rather than intelligent applications standing out as they largely do today, the applications that aren't intelligent will increasingly be the exception, as those applications move rapidly toward obsolescence. I'm not sure how I feel about more things being smarter than I am.