Re-Defining Embedded Vision: Smart Imaging for the Vision of Things

Embedded Vision has been a buzzword in the imaging industry for quite a while. Unquestionably, Embedded Vision has huge potential to change industries’ business models, to take vision to the next level, and to allow devices to see and think across all industrial and consumer markets. But how is this different from classic vision technology? How can all industries, and virtually every device and every “thing”, leverage and benefit from the embedded Vision of Things?

The Internet of Things (IoT) creates the swarm intelligence of holistic systems by connecting devices to one another so that they can interact accordingly. Embedded Vision technologies provide the eyes and the brain power (AI) for autonomous decision making without any human interaction, empowering the Vision of Things (VoT) to act intelligently within the Internet of Things.

What differentiates Embedded Vision from Classic Vision?

Figure 1: Embedded vision systems typically combine a camera, processing device and interface

Classic vision systems are mainly built around a camera connected to a host PC via a standard data interface. The system is usually separated into the machine that runs the process and the controller that performs the inspection. Processing of the video stream and images is mostly outsourced and often needs user interaction for validation and/or decision making. A surveillance application may recognize people, but a security officer still needs to validate any abnormal occurrence to determine whether it is a threat that requires an immediate response. In comparison, a security-based Embedded Vision application would be able to assess the threat, determine that it is a person of interest, and alert the authorities to react accordingly. In this case, the vision technology inside the device, a complete system with intelligent on-board processing, is able to provide an appropriate response without any human operator oversight. Embedded Vision is not only part of the device, it is its smart eye. In its entirety, Embedded Vision minimizes or removes human interaction within the imaging pipeline and allows machines to make their own decisions by capturing, analyzing and interpreting the data all in one.
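To make the contrast concrete, here is a minimal, self-contained sketch of such an on-device “capture, analyze, respond” loop. All names (Frame, capture_frame, alert_authorities) and the random detection stub are illustrative placeholders, not a real camera or vendor API.

```python
# Illustrative sketch only: an embedded vision loop that captures, analyzes
# and responds entirely on the device. Names and the detection stub are
# invented placeholders, not a specific product's API.

from dataclasses import dataclass
import random


@dataclass
class Frame:
    """One captured image plus the confidence of an on-board detector
    (a real system would run a neural network here)."""
    person_detected: bool
    confidence: float


def capture_frame() -> Frame:
    # Placeholder for a sensor read; randomly simulates a detection result.
    return Frame(person_detected=random.random() > 0.7,
                 confidence=random.random())


def alert_authorities(frame: Frame) -> None:
    # In a classic system, this escalation would go to a security officer instead.
    print(f"ALERT: person of interest, confidence {frame.confidence:.2f}")


def embedded_vision_loop(num_frames: int = 10, threshold: float = 0.8) -> None:
    """Capture, analyze and respond with no operator in the loop:
    the device itself escalates only what it assesses as a real threat."""
    for _ in range(num_frames):
        frame = capture_frame()
        if frame.person_detected and frame.confidence >= threshold:
            alert_authorities(frame)


if __name__ == "__main__":
    embedded_vision_loop()
```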

From a developer’s standpoint, classic vision systems were mostly built to support numerous verticals with a multitude of possible tasks to be programmed. This broad variety is the main reason for the large off-board processing capacity they require. Embedded Vision tends to be more laser-focused; it is designed for a specific task. This “purpose-built” approach opens new possibilities and frees processing capacity for neural intelligence algorithms. From the vision manufacturer’s perspective, there is no need to provide a one-size-fits-all product that covers every possible use case; the manufacturer can specialize and focus development on the “how” of a specific system, which is later customized by the OEM developer to satisfy unique requirements.

Darren Bessette, Category Manager and machine vision expert, FRAMOS

How IoT applications benefit from Embedded Vision

Embedded Vision does not only make pass/fail or yes/no decisions based on fixed criteria; it provides a broader form of intelligence, leveraging neural networks that process and analyze the image data and information. Embedded Vision systems move from pre-defined actions triggered by specific inputs to specific reactions to spontaneous situations, with real-time decision making and resulting activities. This is similar to how smart cameras work, but it allows for adaptation and expansion, with responses evolving as more scenarios are encountered and evaluated. At the same time, these “smarts” are being integrated ever more deeply into every kind of device. This creates new IoT devices that are more aware and better able to process inputs from their surroundings, further showing how Embedded Vision enables more VoT devices.
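As a purely illustrative sketch (not FRAMOS code), the snippet below contrasts a single fixed pass/fail rule with a graded response driven by a model’s class label and confidence; the class names, thresholds and reactions are invented for the example.

```python
# Invented example: one pre-defined criterion versus a graded, model-driven
# response table that can grow as new scenarios are encountered and evaluated.

from typing import Tuple


def pass_fail(score: float, limit: float = 0.5) -> str:
    # Classic style: one fixed criterion, one binary outcome.
    return "pass" if score >= limit else "fail"


def embedded_response(prediction: Tuple[str, float]) -> str:
    """Map a network's (class, confidence) output to a situation-specific
    reaction; further classes and reactions can be added over time."""
    label, confidence = prediction
    if confidence < 0.4:
        return "log frame for later training"       # uncertain: gather more data
    if label == "person_of_interest":
        return "alert security and track subject"   # immediate response
    if label == "unattended_object":
        return "flag zone and keep watching"        # scene-specific action
    return "no action"


print(pass_fail(0.3))                                    # -> "fail"
print(embedded_response(("person_of_interest", 0.92)))   # -> graded reaction
```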

Most industrial and consumer products are internet-aware today, exchanging data with one another over local networks and the cloud. With the addition of vision, these devices become controllable via eye tracking, face or gesture recognition. As an example, a refrigerator with embedded vision would be able to recognize which food has been consumed and automatically add it to the family’s online shopping list. Using intelligent embedded vision, security applications can count people, create heat maps, or identify persons of interest and share the visual and analytics data across networks. In a self-driving car, embedded vision steers the vehicle within its lane and avoids obstacles that may appear without warning. This example highlights the importance of Embedded Vision not only seeing the scene but understanding it and reacting accordingly.

From a technical perspective, a smart embedded vision system not only recognizes defects or abnormalities based on pre-defined criteria; it is also capable of determining an appropriate response to correct or avoid them. Embedded Vision provides a more comprehensive view of the world by recognizing, understanding and identifying the environment without further external interaction.

Embedded vision examples: Home robotics, e.g. robot vacuum cleaners

What is required for a fully embedded vision product?

It is all about efficiency. Embedded Vision reduces vision technology to its simplest formula: capture, process, respond. The building blocks of a true embedded vision system are listed below, followed by a short sketch of how they fit together:

  • Sensor or Sensor Module
  • Control unit to receive the images and direct them to the processing unit
  • Processing unit, either local or cloud-based, that provides the full image pipeline
  • Purpose-built algorithms and neural networks that provide the intelligent processing of the images
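The following sketch shows one possible way these four blocks could fit together in code; the class names are placeholders, and the brightness check stands in for a real ISP stage and trained network.

```python
# Sketch only: mapping the four building blocks above onto a minimal pipeline.
# A real system would wrap a sensor driver, an ISP/GPU runtime and a trained
# network behind these placeholder classes.

import numpy as np


class SensorModule:
    """Block 1: sensor or sensor module delivering raw frames."""
    def read(self) -> np.ndarray:
        return np.zeros((480, 640, 3), dtype=np.uint8)   # dummy frame


class ProcessingUnit:
    """Blocks 3 and 4: local (or cloud-based) image pipeline plus the
    purpose-built algorithm / network that turns pixels into a decision."""
    def run(self, frame: np.ndarray) -> str:
        brightness = float(frame.mean())                  # stand-in for the ISP stage
        return "object present" if brightness > 10 else "scene empty"  # stand-in for the network


class ControlUnit:
    """Block 2: receives the images and directs them to the processing unit."""
    def __init__(self, sensor: SensorModule, processor: ProcessingUnit) -> None:
        self.sensor = sensor
        self.processor = processor

    def step(self) -> str:
        return self.processor.run(self.sensor.read())


pipeline = ControlUnit(SensorModule(), ProcessingUnit())
print(pipeline.step())   # -> "scene empty" for the dummy all-black frame
```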

Embedded Vision requires more analysis and processing of the image data, so an embedded vision product typically includes back-end processing done on either an ISP or a GPU. Intelligent algorithms running on these devices allow the machine to analyze the incoming video data, process it and interpret it in order to make decisions and react accordingly. Embedded vision products provide not only data but results based on that data.
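As one possible illustration of that back-end step, the sketch below uses PyTorch (an assumption; the article names no framework) to run a tiny stand-in network on a GPU when one is available and to report a decision rather than raw pixels.

```python
# Assumed framework: PyTorch. The tiny network is a stand-in for a real trained
# detector; the point is that the device outputs a result, not an image.

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                     # placeholder for a trained model
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                       # e.g. scores for "empty" / "person"
).to(device).eval()

frame = torch.rand(1, 3, 224, 224, device=device)   # stand-in for a camera frame

with torch.no_grad():
    scores = model(frame)
    label = "person" if scores.argmax(dim=1).item() == 1 else "empty"

print(f"result: {label}")                  # a decision, not raw video data
```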

“Embedded Vision does not have to be small but it has to be smart.” – Darren Bessette

Typical embedded vision components come in small form factors, but this does not mean they can only be used in small or low-cost devices. As in the example above, self-driving cars are the exception to using embedded vision in small, low-cost devices. A better way to think of embedded vision is as smart imaging that does not need human interaction to process and react to the video stream. Embedded Vision is enabling machines to see and think, powering the Vision of Things of the future.
