Skymizer connects ONNX to all deep learning accelerator ASICs

Skymizer, a compiler company founded in 2013, will launch "ONNC" (Open Neural Network Compiler), an open source compiler for ONNX backed by its unique compiler technologies.

Hundreds of AI chips will be released in the near future; the latest figures indicate that 34 IC and IP vendors will provide AI chips and deep learning accelerator (DLA) ASICs in 2018. This reflects the urgent need for an open compiler that supports different AI chips.

Skymizer foresaw this trend and developed the compiler ONNC. Built on ONNX, ONNC provides an efficient way to connect current AI chips, especially DLA ASICs, to the ONNX ecosystem. Skymizer will open source ONNC before the end of July 2018.

The Open Neural Network Exchange (ONNX) format is a standard for representing deep learning models that enables models to be transferred between frameworks. Skymizer introduces ONNC, which supports the ONNX format and mainstream AI frameworks such as Caffe and TensorFlow. ONNC's key advantage over current AI frameworks is direct support for DLA ASIC chips: it can describe variants of hardware performance cost models and provides general optimization passes. A DLA ASIC vendor can reuse these optimization passes simply by describing its chip's performance cost model in ONNC. Together, ONNX and ONNC help DLA ASIC vendors support various AI frameworks within a short time, improve DLA performance, and shorten development schedules.
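To illustrate the idea of reusing a generic optimization pass across different accelerators, the following C++ sketch is a minimal, hypothetical example and not the actual ONNC API: the names Operator, CostModel, fusionPass, and ExampleDLACostModel are assumptions made for illustration. A vendor describes only its chip's cost model, and a hardware-agnostic fusion pass uses that model to decide which operators to merge.

```cpp
// Hypothetical sketch (not the actual ONNC API): a vendor-supplied
// performance cost model plugged into a generic, reusable optimization pass.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// A graph operator, e.g. a Conv or Relu node from an ONNX model.
struct Operator {
  std::string kind;
};

// Interface a DLA vendor implements to describe its chip's cost model.
class CostModel {
public:
  virtual ~CostModel() = default;
  // Estimated cycles to run a single operator on the accelerator.
  virtual double cost(const Operator &op) const = 0;
  // Estimated cycles when two adjacent operators are fused into one kernel.
  virtual double fusedCost(const Operator &a, const Operator &b) const = 0;
};

// A generic optimization pass: fuse neighboring operators whenever the
// vendor's cost model says the fused kernel is cheaper. The pass itself is
// hardware agnostic and can be reused by any backend.
std::vector<Operator> fusionPass(const std::vector<Operator> &ops,
                                 const CostModel &model) {
  std::vector<Operator> out;
  for (std::size_t i = 0; i < ops.size(); ++i) {
    if (i + 1 < ops.size() &&
        model.fusedCost(ops[i], ops[i + 1]) <
            model.cost(ops[i]) + model.cost(ops[i + 1])) {
      out.push_back({ops[i].kind + "+" + ops[i + 1].kind}); // fused kernel
      ++i;                                                   // skip consumed op
    } else {
      out.push_back(ops[i]);
    }
  }
  return out;
}

// Example vendor cost model: fusing Conv with a following Relu is nearly free.
class ExampleDLACostModel : public CostModel {
public:
  double cost(const Operator &op) const override {
    return op.kind == "Conv" ? 100.0 : 10.0;
  }
  double fusedCost(const Operator &a, const Operator &b) const override {
    if (a.kind == "Conv" && b.kind == "Relu")
      return cost(a) + 1.0; // activation folded into the convolution kernel
    return cost(a) + cost(b);
  }
};

int main() {
  std::vector<Operator> graph{{"Conv"}, {"Relu"}, {"Pool"}};
  ExampleDLACostModel model;
  for (const Operator &op : fusionPass(graph, model))
    std::cout << op.kind << "\n"; // prints: Conv+Relu, then Pool
}
```

In this sketch, only ExampleDLACostModel is vendor-specific; fusionPass stays unchanged for every backend, which mirrors the reuse story described above.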

“AI innovation needs an open ecosystem such as ONNX, which guarantees interoperability among frameworks,” said Luba Tang, CEO of Skymizer. “ONNC aims to connect all deep learning accelerators to ONNX with a general approach, in a short time.”

Skymizer will release ONNC, an open source neural network compiler, before the end of July 2018.