
Campaign to Collect and Share Machine Learning Use Cases



Posted by Hee Jung, Developer Relations Community Manager / Soonson Kwon, Developer Relations Program Manager

ML in Action is a virtual event to collect and share cool and useful machine learning (ML) use cases that leverage multiple Google ML products. This is the first run of an ML use case campaign by the ML Developer Programs team.

Let us announce the winners right now, right here. They have showcased practical uses of ML and how ML was adapted to real-life situations. We hope these projects can spark new applied ML project ideas and provide opportunities for ML community leaders to discuss ML use cases.

The four winners of "ML in Action" are:

Detecting Food Quality with Raspberry Pi and TensorFlow

By George Soloupis, ML Google Developer Expert (Greece)

This project helps people with smell impairment by identifying food degradation. The idea came suddenly when a friend revealed that he has no sense of smell due to a bike crash. Even with experience attending several IT conferences, this issue had gone unaddressed, and the power of machine learning was something we could rely on. Hence the goal: to create a prototype that is affordable, accurate, and usable by people with minimal knowledge of computers.

The basic setup of the food quality detection is this: a Raspberry Pi collects data from air sensors over time during the food degradation process. This single-board computer was very useful! With the GUI, it is easy to execute Python scripts and see the results on screen. Eight sensors collected data on chemical compounds such as NH3, H2S, O3, CO, and CH4. After running the prototype for one day, classes were set based on the results: the first hours of the food out of the fridge as "good" and the rest as "bad". Then a model was trained on the dataset with the help of TensorFlow, and inference was executed with TensorFlow Lite.
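To make that flow concrete, here is a minimal sketch (not the project's actual code) of how time-stamped gas-sensor readings could be labeled by elapsed hours and used to train a small TensorFlow classifier, then converted for TensorFlow Lite. The file name, column names, and the "good"-hours cutoff are all assumptions for illustration.

```python
# Minimal sketch, not the project's actual code: label time-stamped gas-sensor
# readings by elapsed hours and train a small TensorFlow classifier on them.
import pandas as pd
import tensorflow as tf

GOOD_HOURS = 6  # assumption: readings from the first hours are labeled "good"

# Hypothetical CSV produced by the Raspberry Pi logging script: one row per
# sample, with an elapsed-hours column plus the eight sensor channels.
df = pd.read_csv("sensor_log.csv")
feature_cols = ["nh3", "h2s", "o3", "co", "ch4", "s6", "s7", "s8"]  # placeholder names

x = df[feature_cols].to_numpy(dtype="float32")
y = (df["elapsed_hours"] > GOOD_HOURS).to_numpy(dtype="float32")  # 0 = good, 1 = bad

normalizer = tf.keras.layers.Normalization()  # scale each sensor channel
normalizer.adapt(x)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(len(feature_cols),)),
    normalizer,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=20, batch_size=32, validation_split=0.2)

# Convert to TensorFlow Lite so the same model can run on the Raspberry Pi.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("food_quality.tflite", "wb") as f:
    f.write(tflite_model)
```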

Since there were no open-source prototypes out there with similar goals, it was a whole journey. Sensors on PCBs and standalone sensors were used to get the best combination of accuracy, stability, and sensitivity. A logic level converter was used to minimize the use of resistors, and capacitors were placed for stability. And the result: a compact prototype! The Raspberry Pi attaches directly, with slots for the eight sensors. It is built in such a way that sensors can be replaced at any time, so users can experiment with different sensors. The values produced at inference time are sent via Bluetooth to a mobile device. As an end result, a user with no advanced technical knowledge is able to see food quality on an app built for Android (Kotlin).
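For the on-device side, the sketch below shows what inference with the TensorFlow Lite Python interpreter typically looks like. The model file name and the example sensor reading are illustrative, and the Bluetooth transport to the Kotlin app is omitted.

```python
# Minimal sketch of on-device inference with the TensorFlow Lite interpreter;
# the model file name and the eight-value sensor reading are illustrative.
# (On a Raspberry Pi the lighter tflite_runtime package is often used in
# place of the full tensorflow package.)
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="food_quality.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(sensor_reading):
    """sensor_reading: the eight current gas-sensor values, in training order."""
    x = np.asarray(sensor_reading, dtype=np.float32).reshape(1, -1)
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    score = float(interpreter.get_tensor(output_details[0]["index"])[0][0])
    return ("bad" if score > 0.5 else "good"), score

# Example: label, score = classify([0.12, 0.03, 0.8, 0.4, 0.2, 0.1, 0.5, 0.9])
# The label/score would then be pushed over Bluetooth to the Android app.
```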

Reference: GitHub, more to read

* This project is supported by the Google Impact Fund.

Election Watch: Applying ML in Analyzing Election Discourse and Citizen Participation in Nigeria

By Victor Dibia, ML Google Developer Expert (USA)

This project explores the use of GCP tools in ingesting, storing, and analyzing data on citizen participation and election discourse in Nigeria. It began on the premise that the proliferation of social media interactions provides an interesting lens to study human behavior and ask important questions about election discourse in Nigeria, as well as interrogate social/demographic questions.

It is based on data collected from Twitter between September 2018 and March 2019 (tweets geotagged to Nigeria and tweets containing election-related keywords). Overall, the dataset contains 25.2 million tweets and retweets, 12.6 million original tweets, 8.6 million geotagged tweets, and 3.6 million tweets labeled (using an ML model) as political.

By analyzing election discourse, we can learn a few important things, including the issues that drive election discourse, how social media was used by candidates, and how participation was distributed across geographic regions in the country. Finally, in a country like Nigeria where up-to-date demographic data is lacking (e.g., on community structures, wealth distribution, etc.), this project shows how social media can be used as a surrogate to infer relative statistics (e.g., the existence of diaspora communities based on election discussion, and wealth distribution based on device type usage across the country).

Data for the project was collected using Python scripts that wrote tweets from the Twitter streaming API (matching certain criteria) to BigQuery. BigQuery queries were then used to generate aggregate datasets used for visualizations/analysis and for training machine learning models (political text classification models to label political text and multi-class classification models to label general discourse). The models were built using TensorFlow 2.0 and trained on Colab notebooks powered by GCP GPU compute VMs.
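As an illustration of the modeling step (not the project's actual code), a compact TensorFlow 2.x binary text classifier of the kind described, labeling a tweet as political or not, could look roughly like this. The toy examples, vocabulary size, and layer sizes are assumptions, and loading the real training data from BigQuery is omitted.

```python
# Illustrative sketch: a small TensorFlow 2.x classifier that labels a tweet
# as political (1) or not (0). Toy data stands in for the BigQuery export.
import tensorflow as tf

train_texts = tf.constant([["vote for the candidate in the upcoming election"],
                           ["the weather in lagos is great today"]])
train_labels = tf.constant([[1.0], [0.0]])  # 1 = political, 0 = not political

vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000,
                                               output_sequence_length=64)
vectorizer.adapt(tf.reshape(train_texts, [-1]))  # build the vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,), dtype=tf.string),
    vectorizer,                                   # raw text -> integer tokens
    tf.keras.layers.Embedding(input_dim=20000, output_dim=64, mask_zero=True),
    tf.keras.layers.GlobalAveragePooling1D(),     # average the word embeddings
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_texts, train_labels, epochs=3)
```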

References: Election Watch website, ML model descriptions: one, two

Bioacoustic Sound Detector (to identify bird calls in soundscapes)

By Usha Rengaraju, TFUG Organizer (India)

 
(Bird image by Krisztian Toth @unsplash)

The "Visionary Perspective Plan (2020-2030) for the conservation of avian diversity, their ecosystems, habitats and landscapes in the country" proposed by the Indian government to aid the conservation of birds and their habitats inspired me to take up this project.

Extinction of bird species is an increasing global concern, as it has a huge impact on food chains. Bioacoustic monitoring can provide a passive, low-labor, and cost-effective strategy for studying endangered bird populations. Recent advances in machine learning have made it possible to automatically identify bird songs for common species with ample training data. This innovation makes it easier for researchers and conservation practitioners to accurately survey population trends, and they will be able to regularly and more effectively evaluate threats and adjust their conservation actions.

This project is an implementation of a bioacoustic monitor using Masked Autoencoders in TensorFlow and Cloud TPUs. The project will be presented as a browser-based application using Flask. The deep learning prototype can process continuous audio data and then acoustically recognize the species.
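A heavily simplified sketch of the masked-autoencoder idea over spectrogram patches in TensorFlow is shown below; the patch sizes, dense encoder, and masking scheme are illustrative assumptions rather than the project's actual architecture.

```python
# Simplified masked-autoencoder sketch over flattened spectrogram patches:
# hide a random subset of patches and train the network to reconstruct them.
import numpy as np
import tensorflow as tf

NUM_PATCHES = 32      # spectrogram patches per audio clip (illustrative)
PATCH_DIM = 16 * 64   # flattened frames x mel bins per patch (illustrative)
EMBED_DIM = 128

class MaskedAutoencoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Dense(EMBED_DIM, activation="gelu"),
            tf.keras.layers.Dense(EMBED_DIM, activation="gelu"),
        ])
        self.decoder = tf.keras.layers.Dense(PATCH_DIM)

    def call(self, inputs):
        patches, mask = inputs["patches"], inputs["mask"]  # mask: 1 = hidden
        visible = patches * (1.0 - mask)  # zero out hidden patches (simplified MAE)
        return self.decoder(self.encoder(visible))

    def train_step(self, data):
        patches, mask = data["patches"], data["mask"]
        with tf.GradientTape() as tape:
            recon = self(data, training=True)
            err = tf.reduce_mean(tf.square(patches - recon), axis=-1, keepdims=True)
            # Reconstruction loss counted only on the masked patches.
            loss = tf.reduce_sum(err * mask) / (tf.reduce_sum(mask) + 1e-8)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"masked_mse": loss}

# Smoke test on random data; real inputs would be mel-spectrogram patches with
# freshly sampled masks. On Cloud TPU the model would typically be created and
# compiled inside a tf.distribute.TPUStrategy scope.
patches = np.random.rand(8, NUM_PATCHES, PATCH_DIM).astype("float32")
mask = (np.random.rand(8, NUM_PATCHES, 1) < 0.75).astype("float32")
model = MaskedAutoencoder()
model.compile(optimizer="adam")
model.fit({"patches": patches, "mask": mask}, epochs=1, batch_size=4)
```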

The goal of the project when I started was to build a basic prototype for monitoring rare bird species in India. In the future I would like to expand the project to monitor other endangered species as well.

References: Kaggle Notebook, Colab Notebook, GitHub, the dataset, and more to read


Persona Labs’ Digital Personas

By Martin Andrews and Sam Witteveen, ML Google Developer Experts (Singapore)

Over the last three years, Red Dragon AI (a company co-founded by Martin and Sam) has been developing real-time digital "Personas". The key idea is to enable users to interact with life-like Personas in a format similar to a Zoom call: talking with them and seeing them respond in real time, just as a human would. Naturally, each Persona can be tailored to the tasks required (by adjusting the appearance, voice, and 'motivation' of the dialog system behind the scenes and their corresponding backend APIs).

The components required to make the Personas work effectively include dynamic face models, expression generation models, Text-to-Speech (TTS), dialog backend(s), and Speech Recognition (ASR). Much of this was built on GCP, with GPU VMs running the (many) deep learning models and combining the outputs into dynamic WebRTC video that streams to users via a browser front-end.

Much of the previous years' work focused on making the Personas' faces behave in a life-like manner, while making sure that the overall latency (i.e. the time between the Persona hearing the user ask a question and its lips starting the response) is kept low, and that the rendering of individual images matches the 25 frames-per-second video rate required. As you might imagine, there were many deep learning modeling challenges, coupled with hard engineering issues to overcome.
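As a small illustration of that timing constraint (not Persona Labs' code): at 25 frames per second, each frame has roughly a 40 ms budget for model inference plus compositing. The sketch below uses hypothetical render_frame/send_frame callables in place of the real face models and WebRTC output.

```python
# Frame-pacing sketch: keep output at 25 fps by spending only what remains of
# each frame's ~40 ms budget in sleep. render_frame/send_frame are stand-ins.
import time

FPS = 25
FRAME_BUDGET = 1.0 / FPS  # 40 ms per frame

def run_render_loop(render_frame, send_frame, num_frames):
    next_deadline = time.monotonic()
    for _ in range(num_frames):
        frame = render_frame()   # must finish comfortably inside the 40 ms budget
        send_frame(frame)
        next_deadline += FRAME_BUDGET
        # Sleep only for whatever remains of this frame's budget, so one slow
        # frame does not push every later frame off schedule.
        time.sleep(max(0.0, next_deadline - time.monotonic()))

# Trivial stand-ins just to make the sketch executable:
run_render_loop(render_frame=lambda: b"frame-bytes",
                send_frame=lambda frame: None,
                num_frames=3)
```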

In terms of backend technologies, Google Cloud GPUs were used to train the deep learning models (built using TensorFlow/TFLite, PyTorch/ONNX, and more recently JAX/Flax), and real-time serving is done by Nvidia T4 GPU-enabled VMs, launched as required. Google ASR is currently used as a streaming backend for speech recognition, and Google's WaveNet TTS is used when multilingual TTS is needed. The system also makes use of Google's serverless stack, with Cloud Run and Cloud Functions being used in some of the dialog backends.

Visit the Persona Labs website (linked below) and you will see videos that demonstrate several aspects: what the Personas look like, their multilingual capability, potential applications, and so on. However, the videos cannot really demonstrate what the interactivity feels like. For that, it is best to get a live demo from Sam and Martin, and see what real-time deep learning model generation looks like!

Reference: The Persona Labs website
