Half a billion years ago, something remarkable happened: an astonishing, sudden increase in new species of organisms. Paleontologists call it the Cambrian explosion, and many of the animals on the planet today trace their lineage back to this event.
Something similar is happening today in embedded vision and artificial intelligence processors, as the recent Embedded Vision Summit made clear. The in-person event, held last month in Santa Clara, California, focused on hands-on know-how for product makers incorporating AI and vision into their products.
These products require AI processors that balance conflicting needs for high performance, low power consumption, and cost sensitivity. The staggering number of embedded AI chips on display at the Summit underscored the industry’s response to this demand. While the number of processors focusing on computer vision and machine learning (ML) is overwhelming, there are some natural groupings that make the field easier to understand. Here are some themes we see.
First, some processor vendors are considering the best way to serve applications that simultaneously apply ML to data from different sensor types, for example audio and video. Synaptics' Katana low-power processor, for instance, fuses inputs from a variety of sensors, including vision, sound and environmental data. Xperi's talk on smart toys of the future touched on this theme as well.
Second, a subset of processor vendors is focused on minimizing power consumption and cost. This is interesting because it enables new applications. For example, Cadence spoke at the Summit about additions to its Tensilica processor portfolio that enable always-on AI applications. Arm presented low-power vision and ML use cases based on its Cortex-M series processors. And Qualcomm covered tools for creating power-efficient computer vision apps on its Snapdragon family.
Third, while many processor vendors focus primarily or exclusively on ML, a few focus on other types of algorithms typically used in conjunction with deep neural networks, such as classical computer vision and image processing. One example is quadric, whose new q16 processor is said to excel at a wide variety of algorithms, including both ML and conventional computer vision.
Finally, an entirely new species seems to be emerging: neuromorphic processors. Neuromorphic computing refers to approaches that mimic the way the brain processes information. For example, biological vision systems respond to events in the field of view, while classical computer vision approaches typically capture and process all pixels in a scene at a fixed frame rate unrelated to the source of the visual information.
The keynote talk, "Event-based neuromorphic perception and computation: the future of sensing and AI," presented an overview of the benefits that neuromorphic approaches can bring. It was delivered by Ryad Benosman, a professor at the University of Pittsburgh and adjunct professor at the CMU Robotics Institute. And Opteran presented its neuromorphic processing approach, inspired by insect brains, to enable vastly improved vision and autonomy.
Whatever your application and whatever your requirements, there's an embedded AI or vision processor out there somewhere that's best for you. This year's Summit highlighted many of them. Come back in 10 years, when we'll see how many of the AI processors of 2032 trace their lineage to this modern Cambrian explosion.
This article was originally published on the sister site EE Times.