Samsung has introduced a new NPU (Neural Processing Unit) technology that will allow on-device AI to be faster and more energy-efficient than existing technologies. The company uses Quantization Interval Learning (QIL), which retains the accuracy of a model's data while re-organising it into bit widths smaller than their existing size.
An NPU is a processor optimised for deep learning computation, designed to process thousands of these computations efficiently in parallel. On-device AI technology computes and processes data directly on the device itself. The company claims that the new algorithm is over four times lighter and eight times faster than existing algorithms.
According to the company's statement, the Samsung Advanced Institute of Technology has run experiments in which a server-based deep learning algorithm trained at 32-bit precision was quantized to levels of fewer than 4 bits while retaining higher accuracy than existing solutions.
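To make the idea concrete, the sketch below shows plain uniform quantization of 32-bit floating-point weights down to 4-bit integer codes. This is an illustrative simplification, not Samsung's QIL: QIL additionally learns the quantization intervals during training, whereas this example simply maps values onto evenly spaced levels. All function names here are hypothetical.

```python
def quantize_uniform(weights, bits=4):
    """Map float weights onto 2**bits evenly spaced integer levels.

    Illustrative sketch only -- Samsung's QIL learns the intervals
    during training; this is plain (non-learned) uniform quantization.
    """
    levels = 2 ** bits                       # e.g. 16 levels for 4 bits
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (levels - 1)   # step between adjacent levels
    codes = [round((w - w_min) / scale) for w in weights]
    return codes, w_min, scale               # codes fit in `bits` bits each

def dequantize(codes, w_min, scale):
    """Recover approximate float values from the integer codes."""
    return [c * scale + w_min for c in codes]

# 32-bit weights compressed to 4-bit codes (8x fewer bits per weight)
weights = [-1.0, -0.5, 0.0, 0.4, 1.0]
codes, w_min, scale = quantize_uniform(weights, bits=4)
approx = dequantize(codes, w_min, scale)
```

With 4 bits there are only 16 representable levels, so each reconstructed weight is off by at most half a quantization step; the research challenge QIL addresses is keeping model accuracy high despite this coarse representation.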
Since this system requires less electricity and hardware, it can be mounted directly in a device at the point where data from a photo or fingerprint sensor is captured, before the processed data is transmitted to the necessary endpoints, the statement said.
On-device AI technology can therefore compute data entirely within the device, without depending on cloud servers. Through this new technology, users can cut the cost of building cloud infrastructure for AI and safely store on their mobile devices the personal biometric information used for device authentication, such as fingerprint, iris and face scans.
Samsung launched the Exynos 9 (9820) last year, which featured a proprietary NPU inside the mobile processor.