Apple And CMU Researchers Are Designing A Learn-By-Listening System For Our Smart Homes
A team of researchers from Apple and Carnegie Mellon University’s Human-Computer Interaction Institute has presented a system that lets embedded AIs learn by listening to noises in their environment, without requiring up-front training data or placing a heavy supervision burden on the user.
The aim is to give smart devices greater contextual or situational awareness. The system, called Listen Learner, relies entirely on acoustic activity recognition: it groups the sounds it hears into clusters, and a pre-trained model supplies a guess about what each acoustic cluster represents, making the interaction with the user less open-ended. While acoustic activity recognition is not a new strategy, the team wanted to improve on existing deployments.
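The cluster-then-confirm loop described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the class names, the distance-based clustering rule, and the `ask_user` callback are all hypothetical stand-ins for whatever Listen Learner actually uses internally.

```python
import numpy as np

class ListenLearnerSketch:
    """Hypothetical sketch: incrementally clusters audio-event embeddings;
    when a novel cluster appears, asks the user to confirm a label
    proposed by a pre-trained classifier (the 'guess')."""

    def __init__(self, distance_threshold=0.5):
        self.distance_threshold = distance_threshold
        self.centroids = []   # one running centroid per cluster
        self.labels = []      # user-confirmed label per cluster
        self.counts = []      # events seen per cluster

    def observe(self, embedding, guess_label, ask_user):
        """embedding: 1-D feature vector for one acoustic event.
        guess_label: label proposed by a pre-trained model.
        ask_user: callback(guess) -> label string confirmed by the user."""
        if self.centroids:
            dists = [np.linalg.norm(embedding - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] < self.distance_threshold:
                # Known sound: refine the centroid, no user query needed.
                self.counts[i] += 1
                self.centroids[i] += (embedding - self.centroids[i]) / self.counts[i]
                return self.labels[i]
        # Novel sound: one-shot question, seeded with the model's guess.
        label = ask_user(guess_label)
        self.centroids.append(np.asarray(embedding, dtype=float).copy())
        self.labels.append(label)
        self.counts.append(1)
        return label
```

The key property this sketch tries to capture is that the user is queried only once per new sound cluster; repeat occurrences of a learned sound are classified silently.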
Listen Learner is designed to increase accuracy while automatically generating acoustic event classifiers. Audio events are segmented using an adaptive threshold, which typically triggers when the microphone input level rises more than 1.5 standard deviations above the mean of the past minute. The paper also discusses privacy concerns, since a microphone necessarily processes environmental audio. The ideal situation remains that all the data would be retained on the sensing device, though it is not practical to expect that all processing can happen locally.
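The adaptive threshold the article describes can be sketched in a few lines. The one-minute window and the 1.5-standard-deviation multiplier come from the article; everything else (class name, timestamped level samples, the exact statistics used) is an illustrative assumption rather than the paper's code.

```python
import time
from collections import deque

import numpy as np

class AdaptiveThreshold:
    """Hypothetical sketch: flags an acoustic event when the input level
    exceeds the rolling mean of the past minute by 1.5 standard deviations."""

    def __init__(self, window_seconds=60.0, num_std=1.5):
        self.window_seconds = window_seconds
        self.num_std = num_std
        self.history = deque()  # (timestamp, level) pairs

    def update(self, level, now=None):
        """Record one level sample; return True if it crosses the threshold."""
        now = time.monotonic() if now is None else now
        self.history.append((now, level))
        # Drop samples older than the rolling window.
        while self.history and now - self.history[0][0] > self.window_seconds:
            self.history.popleft()
        levels = np.array([lvl for _, lvl in self.history])
        return bool(level > levels.mean() + self.num_std * levels.std())
```

Feeding it a steady stream of quiet readings establishes a baseline, after which a sharp spike (a door knock, a faucet) crosses the mean-plus-1.5-sigma line and marks the start of an event to classify.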