Embedded VISION Europe conference 2017 – a review

The first edition of this new event revealed a rising trend toward so-called “edge” devices and hybrid solutions combining edge devices with cloud services. Making vision systems smarter through deep learning and artificial intelligence (AI) also means that camera modules for embedded vision need to become smarter.

Embedded VISION Europe 2017

The first Embedded VISION Europe conference (EVE), hosted by the European Machine Vision Association (EMVA) and Messe Stuttgart, took place on October 12-13, 2017 in Stuttgart, Germany, with Allied Vision as its sponsor. With about 200 participants and highly professional presentations, EVE 2017 was an undisputed success.

From cloud-based computing to edge devices

I attended many presentations and spoke with numerous experts about the latest trends in the embedded vision industry. My key take-away from these two days is that the market is now realizing that purely cloud-based services – in which connected devices capture data in the field and upload it to the cloud for further processing – are reaching their limits. In recent years, cloud services were hyped as the dominant solution, but they come with their own set of limitations, such as huge data storage requirements and high latency. After moving from 100% local to 100% cloud computing, the future seems to belong to a middle way: a hybrid of so-called “edge” devices and cloud services.

The concept of edge devices is to not only capture data – in the case of vision systems, image data – but also to perform as much processing as possible locally, minimizing the amount of data to be uploaded, stored and processed in the cloud. In a face recognition application, for example, a system would not simply capture images of a scene and upload them to a server for identification. Instead, it would search for faces in the image, crop the image accordingly, and transmit only the face regions to the server for identification. It might even run a biometric analysis application that analyzes the faces locally and transmits only biometric data to the remote server, where it is compared against a database of wanted persons.
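
To make the face-cropping step concrete, here is a minimal sketch in Python with OpenCV. The Haar cascade detector and the `upload` callback are illustrative assumptions on my part – the article does not prescribe any particular face detector or cloud API.

```python
# Minimal edge-processing sketch: detect faces locally and transmit
# only the cropped face regions instead of the full frame.
# Assumptions: OpenCV's bundled Haar cascade as the face detector,
# and a generic `upload` callback standing in for the cloud API.
import cv2

# Load OpenCV's bundled frontal-face Haar cascade once at startup.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(frame):
    """Return the cropped face regions found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]

def process_frame(frame, upload):
    """Encode and transmit only the face crops; everything else stays local."""
    for crop in extract_faces(frame):
        ok, jpeg = cv2.imencode(".jpg", crop)
        if ok:
            upload(jpeg.tobytes())  # hypothetical cloud-upload callback
```

Instead of streaming full frames, the device now sends only a few kilobytes per detected face – exactly the data reduction the edge approach is after.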

Deep learning and AI for smarter devices

This approach requires local devices – which are typically embedded systems – to be smarter. It is therefore understandable that many presentations and discussions at EVE 2017 emphasized the rise of deep learning and artificial intelligence as a way to improve the performance of vision systems.

This is to some extent a paradigm shift for embedded systems, which were often designed to perform a limited number of well-defined, simple tasks, because it requires more computing power on the embedded host processor. If embedded vision systems need to become smarter, it is a logical step to apply the same logic to embedded cameras: the more advanced image correction and pre-processing tasks an embedded camera module can perform, the more CPU capacity it frees on the host for the complex, application-specific image processing.
