This new (vision) kit from Google, the second after 'voice', seems important.
I think this add-on board for the R-Pi, complete with a tensor-engine compiler & three neural network models, is "something big".
They're releasing their SDKs for DIY… Google Assistant isn't locked up only in their proprietary devices, like "Google Home".
Seems they can run off-line, if you create the 'model' yourself.
I’ve included more links than needed, most relevant first.
I like their suggested projects - e.g. a perceptual dog door: "Open it when she wants to come back in" :)
There were mentions of collections of projects based on Google’s AIY projects, but I wasn’t able to find a concise page.
Perhaps someone else can.
Google AIY Voice Kit for Raspberry Pi - US$24.95 [didn’t look for Vision Kit, too new]
Google Assistant SDK [advanced users / developers]
Building Voice-Controlled Objects with Google’s AIY Projects Voice Kit
- retro analogue phone, complete with Operator (US thing, not AUS)
- 'Magic Mirror', a 12" square with a 7" R-Pi LCD underneath
> The mirror also served as a testbed for my exploration of deep learning at the edge,
> allowing me to test Google’s TensorFlow on the device for simple hotword recognition.
> The ability to run these trained networks “at the edge” nearer the data —
> without the cloud support that seems necessary to almost every task these days, or in some cases even without a network connection —
> could help reduce barriers to developing, tuning, and deploying machine learning applications.
Long, detailed walk-through from unpacking to booting to modifying Voice Kit.
Great tutorial for newbies.
R-Pi forum mentioning #AIYProjects - not a concise list, AFAICS
Google’s current DIY-AI (AIY) offerings:
Voice and Vision, with "more to come"
Introducing AIY Vision Kit: Make devices that see
AIY Voice Kit: Inspiring the maker community
Introducing the AIY Vision Kit: Add computer vision to your maker projects
> The VisionBonnet is an accessory board for Raspberry Pi Zero W
> that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks.
> It can run at speeds of up to 30 frames per second, providing near real-time performance.
> Bundled with the software image are three neural network models:
> • A model based on MobileNets that can recognize a thousand common objects.
> • A model for face detection capable of not only detecting faces in the image, but also scoring facial expressions on a "joy scale" that ranges from "sad" to "laughing."
> • A model for the important task of discerning between cats, dogs and people.
> For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a new model you have (or train) and run it on the Intel® Movidius™ MA2450.
> We hope you'll use it to solve interesting challenges, such as:
> • Build "hotdog/not hotdog" (or any other food recognizer)
> • Turn music on when someone walks through the door
> • Send a text when your car leaves the driveway
> • Open the dog door when she wants to get back in the house
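Side note on that "joy scale": if you poll the face model every frame, the raw scores will be jittery. Here's a minimal Python sketch of one common fix - smoothing per-frame scores with an exponential moving average. The function name and numbers are mine, purely illustrative; this is not the kit's API.

```python
# Sketch: smoothing a noisy per-frame "joy" score (0.0 = sad, 1.0 = laughing)
# with an exponential moving average (EMA). Illustrative only, not the AIY API.

def smooth_scores(scores, alpha=0.3):
    """Return EMA-smoothed scores; alpha weights the newest frame."""
    smoothed = []
    avg = None
    for s in scores:
        # First frame seeds the average; later frames blend in gradually.
        avg = s if avg is None else alpha * s + (1 - alpha) * avg
        smoothed.append(avg)
    return smoothed

# Example: raw per-frame joy scores from a hypothetical camera loop.
frames = [0.1, 0.9, 0.2, 0.8, 0.85, 0.9]
print(smooth_scores(frames))
```

A lower alpha means slower, steadier response - handy if you only want to trigger something (say, the dog door) on a sustained expression rather than a single-frame blip.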
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:[hidden email] http://members.tip.net.au/~sjenkin
linux mailing list