The Machine Intelligence initiative at Linaro aims to reduce fragmentation in the deep learning neural network acceleration ecosystem through collaboration. Today, every IP vendor forks the existing open source models and frameworks to integrate its hardware blocks, then tunes them for performance. This leads to duplicated effort across all players, a perpetual cost of re-integration with every rebase, and an overall increased total cost of ownership.

The initial focus is on inference on Cortex-A application processors running Linux and Android, targeting both edge computing and smart devices. As part of this remit, the team will collaborate on defining an API and a modular framework for an Arm runtime inference engine architecture based on plug-ins, supporting dynamic modules and optimized shared Arm compute libraries.

Below are some of the Machine Intelligence-related sessions from the previous Linaro Connect:

Speaker | Company | ID | Title
--- | --- | --- | ---
Chris Benson | AI Strategist | YVR18-300K2 | Keynote: Artificial Intelligence Strategy: Digital Transformation Through Deep Learning
Jem Davies | Arm | YVR18-300K1 | Keynote: Enabling Machine Learning to Explode with Open Standards and Collaboration
Robert Elliott | Arm | YVR18-329 | Arm NN intro
Pete Warden | Google Tensorflow | YVR18-338 | Tensorflow for Arm devices
Mark Charlebois | Qualcomm | YVR18-330 | Qualcomm Snapdragon AI Software
Thom Lane | Amazon AWS AI | YVR18-331 | ONNX and Edge Deployments
Jammy Zhou | Linaro | YVR18-332 | TVM compiler stack and ONNX support
Luba Tang | Skymizer | YVR18-333 | ONNC (Open Neural Network Compiler) for ARM Cortex-M
Shouyong Liu | Thundersoft | YVR18-334 | AI Alive: On Device and In-App
Ralph Wittig | Xilinx | YVR18-335 | Xilinx: AI on FPGA and ACAP Roadmap
Andrea Gallo and others | Linaro, Arm, Qualcomm, Skymizer, Xilinx | YVR18-337 | BoF: JIT vs offline compilers vs deploying at the Edge