Imagimob/Synaptics: Machine Learning for Edge Devices – Embedded

With Imagimob’s “tinyML” platform, sound event detection systems can be developed quickly and easily, for example for predictive maintenance.

Other examples include motion and gesture recognition, audio systems, and material recognition. The DBM10L from Synaptics is supported by “Imagimob AI”, an end-to-end tinyML machine learning platform for edge devices. The DBM10L is a dual-core SoC that integrates a DSP and a neural network accelerator. The chip was originally developed by DSP Group, which Synaptics acquired at the end of 2021 for $550 million. Thanks to its small footprint and low power consumption, the IC is suitable for battery-powered devices such as smartphones, tablets, smart home devices, remote controls, and wearables.


With Synaptics’ Sound Event Detection (SED) learning algorithms, systems that recognize sounds such as shattering glass, a crying baby, gunshots, or a running microwave oven can be created quickly. From data collection to deployment on the target edge device takes only a few minutes. Imagimob launched “Imagimob AI” as a development platform for machine learning on edge devices. Visitors to embedded world can learn more at the Imagimob stand (Hall 4, Stand 133).
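To illustrate the general idea behind such a sound event detection pipeline, the minimal Python sketch below slides an analysis window over an audio stream, extracts a simple spectral feature, and flags windows whose event score exceeds a threshold. It does not use Imagimob’s or Synaptics’ actual tooling; the feature extractor, the classifier, and all class names, thresholds, and function names are illustrative assumptions.

```python
# Illustrative sound-event-detection sketch (not Imagimob's or Synaptics' API).
# All names, classes, and thresholds below are hypothetical placeholders.
import numpy as np

SAMPLE_RATE = 16000          # typical microphone sample rate for SED
WINDOW_SECONDS = 1.0         # length of each analysis window
HOP_SECONDS = 0.5            # stride between consecutive windows
CLASSES = ["background", "glass_break", "baby_cry"]  # example event classes


def log_spectrogram(frame: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum as a stand-in for the real feature extractor."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log1p(spectrum).astype(np.float32)


def classify(features: np.ndarray) -> np.ndarray:
    """Placeholder for a trained edge model; returns normalized per-class scores."""
    rng = np.random.default_rng(0)
    scores = rng.random(len(CLASSES))
    return scores / scores.sum()


def detect_events(audio: np.ndarray, threshold: float = 0.6):
    """Slide a window over the audio and report non-background events above the threshold."""
    window = int(SAMPLE_RATE * WINDOW_SECONDS)
    hop = int(SAMPLE_RATE * HOP_SECONDS)
    events = []
    for start in range(0, len(audio) - window + 1, hop):
        frame = audio[start:start + window]
        scores = classify(log_spectrogram(frame))
        best = int(np.argmax(scores))
        if CLASSES[best] != "background" and scores[best] >= threshold:
            events.append((start / SAMPLE_RATE, CLASSES[best], float(scores[best])))
    return events


if __name__ == "__main__":
    # Two seconds of silence stand in for a real microphone capture.
    dummy_audio = np.zeros(2 * SAMPLE_RATE, dtype=np.float32)
    print(detect_events(dummy_audio))
```

In a real deployment, the placeholder classifier would be replaced by the quantized model produced by the development platform and the loop would run on the edge device itself rather than on a host PC.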

