The Emergence of AI in Technology Accelerates with the Advent of AI Inference Processing

AI-based inference processing is still in its nascent stages, but the technology is growing at a rapid pace and is set to change the overall market dynamics.

Artificial Intelligence (AI) chipsets for edge inference and inference training are set to grow at 65% and 137% respectively between 2018 and 2023, according to a report from market foresight firm ABI Research. The report indicates that shipment revenue for edge AI processing, previously $1.3 Bn, will rise to $23 Bn by 2023.

This exponential growth has in turn resulted in fierce competition between hyperscalers, SMEs and start-ups. With the launch of several chipsets, each of them is hoping to grab a sizeable share of this fast-growing and changing market.

The current trend and hot topic is “inferencing at the edge”, and it is being driven by a couple of major areas, including automotive and smart surveillance cameras. Processing at the edge enables companies to deliver inference without the need to transfer data. Transferring data is a costly process that can affect latency and accuracy because of the reliance on connectivity and data transfer speeds. Such factors can diminish the user experience and, for some applications, could have devastating consequences.

[Image: Artificial Intelligence (AI) chipsets for edge inference and inference training]

Geoff Tate, CEO of Flex Logix gave autonomous vehicles as one example. “In the future there will be cameras located on the car’s exterior and inside too, monitoring, recognising and detecting,” he said. “If you want a system that is able to identify other vehicles and pedestrians at 70mph, speed and precision of inference processing is essential.”

Food for thought: “the more that can be done in the car, the better it will be”. New opportunities spring up as soon as the bandwidth needed to move data off the car is reduced by running neural networks at the edge.

This is not limited to the automobile. Surveillance is another application where neural networks can run on smart cameras, enabling tasks such as gaze tracking and object recognition to enhance safety. Cameras with gait analysis features, which check and verify a person’s body movements against details stored in a computer array, are already in use. Any intrusion can then be quickly detected and stopped with little or no human intervention. Such technologies are deployed in military and high-security installations alike.

But the question remains: why hasn’t more AI inference processing been done at the edge before? The answer lies in the difficulty of balancing price with power and accuracy.

Take your smartphone as an example: to process one picture, a model may need 227 billion multiplies and 227 billion additions (operations). A vehicle that detects objects and its surroundings will need to process 30 of those pictures every second, which adds up to trillions of multiply-accumulates per second.
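As a rough back-of-the-envelope check of those figures, the sketch below simply multiplies the per-image operation counts quoted above by the 30 frames per second the vehicle has to handle; it assumes nothing beyond those two numbers.

```python
# Back-of-the-envelope estimate of the inference workload described above.
multiplies_per_image = 227e9   # 227 Bn multiplies per picture (figure from the article)
additions_per_image  = 227e9   # 227 Bn additions per picture
frames_per_second    = 30      # pictures the vehicle must process every second

ops_per_image  = multiplies_per_image + additions_per_image
ops_per_second = ops_per_image * frames_per_second

print(f"Operations per image : {ops_per_image / 1e9:.0f} billion")
print(f"Operations per second: {ops_per_second / 1e12:.1f} trillion")
# -> roughly 454 billion operations per image and ~13.6 trillion operations per second
```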

The main challenge is that customers want an inference chip that can accomplish this while costing only $20 and burning only a few watts.

To reduce some of this compute consumption and cost, chip designers strip training-only features from the inference chip. Training typically takes place in floating point, such as FP32; once finished, the model is frozen and exported into a smaller format, such as INT8, for the new chip. A lot of the original circuitry is removed, and multiple layers of the model are fused into a single computational step to cut cost and increase speed even more. To improve the chip’s performance further, new types of processing units are now being used to run inference models.
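To illustrate the FP32-to-INT8 step, here is a minimal sketch of symmetric post-training quantisation of a weight tensor using NumPy. It is not any vendor’s actual toolchain; the per-tensor scale factor and rounding scheme are simplifying assumptions chosen for clarity.

```python
import numpy as np

def quantize_int8(weights_fp32: np.ndarray):
    """Symmetric per-tensor quantisation of FP32 weights to INT8 (illustrative only)."""
    # One scale factor maps the largest absolute weight onto the INT8 range [-127, 127].
    scale = np.abs(weights_fp32).max() / 127.0
    q = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values from the INT8 codes."""
    return q.astype(np.float32) * scale

# Example: a small FP32 weight tensor shrinks to a quarter of its size in INT8.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max quantisation error:", np.abs(w - w_hat).max())
```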

 

CPUs, which were originally used, are being phased out, and companies are increasingly turning to GPUs and FPGAs, which have delivered huge improvements in performance per watt. These systems, however, still rely on buses, and buses suffer contention as multiple cores fight for access to the same memory.
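The memory-contention point can be made concrete with a simple roofline-style estimate: when every multiply-accumulate has to pull its weight over a shared bus, the bus bandwidth, not the core count, caps throughput. The bandwidth and bytes-per-operation figures below are illustrative assumptions, not measurements of any particular chip.

```python
# Roofline-style sketch: shared-bus bandwidth as the ceiling on inference throughput.
# All numbers below are illustrative assumptions, not vendor specifications.
bus_bandwidth_gbps = 25.0   # assumed shared DRAM bus, GB/s
bytes_per_mac      = 1.0    # one INT8 weight fetched per multiply-accumulate (no reuse)
required_tmacs     = 6.8    # trillions of MACs/s for 30 images/s (from the earlier example)

achievable_tmacs = bus_bandwidth_gbps * 1e9 / bytes_per_mac / 1e12
print(f"Bus-limited throughput : {achievable_tmacs:.3f} TMAC/s")
print(f"Workload requirement   : {required_tmacs:.1f} TMAC/s")
print(f"Shortfall factor       : {required_tmacs / achievable_tmacs:.0f}x")
# With no on-chip weight reuse, the shared bus falls short by orders of magnitude,
# which is why inference-oriented designs keep data close to the MAC arrays.
```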

Flex Logix’s InferX X1 leverages the company’s interconnect technology from its embedded FPGA and combines it with inference-optimised nnMAX clusters. As a result, the chip can deliver higher throughput in edge applications than existing solutions, and does so with a single DRAM.

Imagination has taken steps to improve inference chip performance by developing IP for a Neural Network Accelerator (NNA). Companies regard this technology as a fundamental class of processor, as significant as the CPU and the GPU.

From 2018 to 2023, the market for AI inferencing is set to grow on a massive scale, so there is an opportunity to combine advances in low-power signal processing and communications expertise to create a custom-built AI accelerator with better performance per watt than what is currently available in the market. Instead of talking to the cloud with hundreds of milliseconds of latency, the device will talk to the cloud edge via 5G, which will revolutionise the end-to-end experience.

The market for inferencing is rising in leaps and bounds and is closely tied to neural networks, where the pace of innovation and change is staggering. New network features appear every couple of months, and computation needs grow quickly over time as more complex networks are developed. The processor therefore has to be efficient, yet flexible and programmable enough to cope with more taxing, and as yet unknown, future requirements and new network features.
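To see why the cloud-edge link matters, consider the simple latency budget below. The 30 frames-per-second rate comes from the earlier vehicle example; the round-trip times for the remote cloud and the 5G edge node are assumed figures for illustration only.

```python
# Simple per-frame latency-budget sketch (illustrative numbers).
frame_rate_fps        = 30                      # frames the vehicle processes each second
frame_budget_ms       = 1000 / frame_rate_fps   # ~33 ms available per frame
cloud_round_trip_ms   = 150                     # assumed round trip to a remote cloud (100s of ms)
edge_5g_round_trip_ms = 10                      # assumed round trip to a 5G cloud-edge node

for name, rtt in [("remote cloud", cloud_round_trip_ms),
                  ("5G cloud edge", edge_5g_round_trip_ms)]:
    verdict = "fits within" if rtt < frame_budget_ms else "blows"
    print(f"{name:13s}: {rtt:3d} ms round trip {verdict} the {frame_budget_ms:.0f} ms frame budget")
```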

 

Hence it is safe to say that, with the rapid advancement in technology, the emergence of AI is clearly a winner on all fronts and is set to challenge existing systems and create the requisite change in the “technosphere”.

For more articles, stay tuned to insourcingmultiplier.com for regular updates.


Story conceived and written by

Sumit Peer, Founder Insourcing Multiplier
