Neural Compiler®
Deep learning has become popular in recent years across several application areas involving the processing of image, voice, and other sensor data. Practical examples include object identification with image sensors, keyword recognition with microphones, and gesture detection with inertial sensors. Due to the amount of computation involved, the data is typically processed on power-hungry GPUs in data centers, where there is a constant push to improve the performance and energy efficiency of such processing and reduce total cost of ownership. There is also a constant drive toward local processing of the data in the IoT, wearable, mobile, and automotive spaces, where power budgets are very low and the typical workloads are highly application-specific and inference-only.

This calls for a wide variety of application-specific deep learning accelerator (DLA) cores and chips to serve the multitude of applications. In addition, product development cycles are very short, with dynamic market conditions and rapid evolution of the underlying technologies. The demands on PPAL, the diversity of products serving these markets, shorter time to market, and very short product shelf life, combined with recurring development cycles, create a clear need for a tool like a compiler that can very quickly generate PPAL-optimized physical IP for the DLA compute engine.
Deep learning is a revolutionary technology disrupting almost every sector. Many teams across the world are contributing in all phases: neural networks and training algorithms, applications, implementation architectures, and ASIC development.
DXCorr, with its decade-long expertise in physical IP development, brings the innovation of Neural Compiler® to help system architects, chip designers, and physical implementation teams build differentiating deep-learning accelerator cores and chips while meeting aggressive time-to-market goals.