2nd Google 'AIY' project (DIY AI) . US$45 for Vision kit (board + box & wires, but not R-Pi etc)



Samba - linux mailing list
This new (vision) kit from Google, the second after ‘voice’, seems important.
I think this add-on board for the R-Pi, complete with a tensor engine compiler & 3 neural network models, is “something big”.

They’re releasing their SDKs for DIY… Google Assistant isn’t just locked up in their proprietary devices, like “Google Home”.
It seems the models can run off-line, if you create the ‘model’ yourself.

I’ve included more links than needed, most relevant first.
I like their suggested projects - a perceptual Dog Door. "Open it when she wants to come back in" :)

There were mentions of collections of projects based on Google’s AIY projects, but I wasn’t able to find a concise page.
Perhaps someone else can.

cheers
steve

===========================================

Google AIY Voice Kit for Raspberry Pi - US$24.95 [didn’t look for Vision Kit, too new]
<https://www.adafruit.com/product/3602>

        Google Assistant SDK [advanced users / developers]
        <https://developers.google.com/assistant/sdk/>

Building Voice-Controlled Objects with Google’s AIY Projects Voice Kit
Alasdair Allan
<https://blog.hackster.io/building-voice-controlled-objects-with-googles-aiy-projects-voice-kit-352d3272cede>

 - retro analogue phone, complete with Operator (US thing, not AUS)
 - ‘Magic Mirror’, 12” square with a 7” R-Pi LCD underneath

> The mirror also served as a testbed for my exploration of deep learning at the edge,
>  allowing me to test Google’s TensorFlow on the device for simple hotword recognition.
>
> The ability to run these trained networks “at the edge” nearer the data —
>  without the cloud support that seems necessary to almost every task these days, or even in some cases without even a network connection — 
> could help reduce barriers to developing, tuning, and deploying machine learning applications.


Long, detailed walk-through from unpacking to booting to modifying Voice Kit.
        Great tutorial for newbies.
<https://medium.com/@aallan/hands-on-with-the-aiy-projects-voice-kit-7c810856faaf>


R-Pi forum mentioning #AIYProjects - not a concise list, AFAICS
<https://www.raspberrypi.org/forums/viewforum.php?f=114>

———————————

Google’s current DIY-AI (AIY) offerings:
Voice and Vision with “more to come”
<https://aiyprojects.withgoogle.com>
        <https://aiyprojects.withgoogle.com/vision/#project-overview>
        <https://aiyprojects.withgoogle.com/voice#project-overview>

———————————

Introducing AIY Vision Kit: Make devices that see
<https://blog.google/topics/machine-learning/introducing-aiy-vision-kit-make-devices-see/>

AIY Voice Kit: Inspiring the maker community
<https://blog.google/topics/machine-learning/aiy-voice-kit-inspiring-maker-community/>

———————————

Introducing the AIY Vision Kit: Add computer vision to your maker projects
<https://developers.googleblog.com/2017/11/introducing-aiy-vision-kit-add-computer.html>

> The VisionBonnet is an accessory board for Raspberry Pi Zero W
> that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks.
> It can run at speeds of up to 30 frames per second, providing near real-time performance.
>
> Bundled with the software image are three neural network models:
>
> • A model based on MobileNets that can recognize a thousand common objects.
> • A model for face detection capable of not only detecting faces in the image, but also scoring facial expressions on a "joy scale" that ranges from "sad" to "laughing."
> • A model for the important task of discerning between cats, dogs and people.
> For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a new model you have (or train) and run it on the Intel® Movidius™ MA2450.
>
> We hope you'll use it to solve interesting challenges, such as:
>
> • Build "hotdog/not hotdog" (or any other food recognizer)
> • Turn music on when someone walks through the door
> • Send a text when your car leaves the driveway
> • Open the dog door when she wants to get back in the house


--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:[hidden email] http://members.tip.net.au/~sjenkin


--
linux mailing list
[hidden email]
https://lists.samba.org/mailman/listinfo/linux