Kick like a pro with Footy Skills Lab

When I was growing up in Brisbane, Aussie Rules football wasn’t offered as a school sport – and there weren’t any professional female role models to look up to and learn from. Despite these limitations, we got resourceful. We organized football games in our lunch breaks with friends, using soccer or rugby goal posts and adding sticks or cones to serve as point posts. We practised accuracy using rubbish bins as targets.

A decade later, women have truly made their mark in the AFL. There are, however, many barriers still facing aspiring footy players – including access, cost, mobility and, more recently, lockdown restrictions. We still have to be resourceful to keep active and hone our skills.

Three years ago, the AFL and Google first teamed up to help footy fans better connect with the games and players they love. Since then, we’ve been thinking about ways we could improve access to Aussie Rules coaching and community participation – regardless of ability, gender, location, culture or socio-economic background. 

A graphic showing a phone with the Footy Skills Lab app open, in front of a Sherrin football.

Today, we’re thrilled to launch Footy Skills Lab to help budding footy players in Australia and all around the world sharpen their AFL skills – straight from their smartphone. 

Footy Skills Lab is a free platform, powered by Google AI, which helps you improve your skills through activities in ball-handling, decision-making and kicking across three levels of difficulty.

A screenshot showing the AFL activities available on the app, including ball handling, decision making and kicking.

To give Footy Skills Lab a whirl, all you need is a smartphone with an internet connection, a football, something to prop your phone up (like a water bottle) and space to move. 

You’ll get tips on kicking from me, and tips on ball-handling and decision-making from athletes across the AFLW and AFL Wheelchair competitions – including Carlton’s Madison Prespakis and Richmond’s Akec Makur Chuot. Through audio prompts and closed captioning, these tips and activities are also accessible for people with visual and hearing needs. And when you finish the activity, you’ll get a scorecard that you can share with your friends, family, teammates and coaches. 

Screenshots showing still images of AFL athletes Madison Prespakis and Akec Makur Chuot providing football training tips.

Whether you’re in Manchester, Miami or in lockdown in Melbourne, Footy Skills Lab is an easy, convenient way to get motivated and access coaching from the pros. So go on, join in the fun and give us your best kick!


TensorFlow Hub for Real World Impact

Posted by Luiz Gustavo Martins and Elizabeth Kemp on behalf of the TensorFlow Hub team

As a developer, when you’re faced with a challenge, you might think: “How can I solve this?” or “What is the right tool for the job?” For a growing number of situations, machine learning can help! ML can solve many tasks that would be very challenging with classical programming – for example, detecting objects in images, classifying sound events, or understanding text.

But training machine learning models can take a lot of time, use large amounts of data, require deep expertise in the field, and be resource intensive. What if, instead of starting from scratch, you could build on work from someone who has already solved the same problem – or at least a very similar one that gives you a good starting point? This is where TensorFlow Hub can help you!

TensorFlow Hub is an open repository of pre-trained machine learning models ready for fine-tuning and deployable anywhere, from servers to mobile devices, microcontrollers and browsers.

Developers are using models available from TF Hub to solve real world problems across many domains, and at Google I/O 2021 we highlighted some examples of developers changing the world using models from TensorFlow Hub.

In this article, we’ll walk through those use cases, with links so you can check them out.

Images

Image classification is one of the most popular use cases for machine learning. Progress in this area has advanced the field as a whole, delivering strong results and pushing the boundaries of research.

TensorFlow Hub has many models for the image problem domain for tasks like image classification, object detection, image segmentation, pose detection, style transfer and many others.

Many of the available models have a visualizer, like the one below, right on their documentation page, letting you try the model without writing any code or downloading anything.

TF Hub makes transfer learning simpler, making it easy to experiment with many state-of-the-art models, like MobileNetV3 and EfficientNetV2, to find the best one for your data. For a real-world example, see the CropNet tutorial, which builds a model to detect diseases in cassava leaves and deploys it on-device for use in the field.
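To make this concrete, here’s a minimal sketch of transfer learning with a TF Hub image feature extractor using the Keras API. The MobileNetV3 handle is a real tfhub.dev model path, but the five-class head and the training setup are illustrative assumptions – swap in your own dataset and number of classes.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained MobileNetV3 feature extractor from TF Hub
# as a frozen Keras layer.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5",
    trainable=False,  # keep the pre-trained weights frozen
    input_shape=(224, 224, 3),
)

# Stack a small classification head on top for your own labels
# (5 classes here is an arbitrary example).
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(5, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=5)  # train on your labeled data
```

Because the feature extractor is frozen, training only fits the small new head, which is why this approach needs far less data and compute than training from scratch.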

Text

Understanding text has always been a very challenging task for computers because of all the context it requires and the sheer number of words and phrases involved. Many state-of-the-art Natural Language Processing (NLP) models are available on TensorFlow Hub, ready for you to use.

One example is the BERT family of models, and using them from TF Hub is easier than ever. Aside from the base BERT model, there are more advanced versions, available in many languages and ready to use, as described in Making BERT Easier with Preprocessing Models From TensorFlow Hub.

One good example is MuRIL, a multilingual BERT model trained on 17 Indian languages that developers are using to solve local NLP challenges in India.

An animation of the preprocessing model that makes it easy for you to input text into BERT.
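As a rough sketch of that pattern (the handles below are copied from tfhub.dev, though version numbers may change), the preprocessing model and the encoder pair up like this – and a multilingual variant like MuRIL follows the same recipe with its own matched handles:

```python
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 – registers the ops the preprocessing model needs

# A matched pair of models: the preprocessor tokenizes raw strings in-graph,
# and the encoder turns the result into embeddings.
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")

sentences = ["TensorFlow Hub makes BERT easier to use."]
encoder_inputs = preprocess(sentences)
outputs = encoder(encoder_inputs)

pooled = outputs["pooled_output"]      # one embedding per sentence
sequence = outputs["sequence_output"]  # one embedding per token
```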

Developers can also use the TF Hub spam detection model for detecting spam comments in online forums. The model is available for TF.js and TFLite for running in the browser and on-device.

Audio

TF Hub has many audio models that you can use on desktop, for on-device inference on mobile devices, or on the web. There are also audio embedding models that you can adapt to your own data with transfer learning.

A gif of a dog next to a microphone and sound waves.

Developers are using audio classification to understand what’s happening in a forest (How ZSL uses ML to classify gunshots to protect wildlife), inside the ocean (Acoustic Detection of Humpback Whales Using a Convolutional Neural Network), or even closer to home, in your own house (Important household sounds become more accessible).
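For a flavor of how simple this can be, here’s a minimal sketch using YAMNet, a pre-trained audio event classifier on TF Hub. The one-second silent waveform is just a stand-in for real audio:

```python
import numpy as np
import tensorflow_hub as hub

# YAMNet classifies audio events from a mono 16 kHz waveform
# with float32 samples in [-1.0, 1.0].
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

waveform = np.zeros(16000, dtype=np.float32)  # one second of silence as a placeholder
scores, embeddings, spectrogram = yamnet(waveform)

# Average the per-frame class scores and pick the most likely class index.
top_class = scores.numpy().mean(axis=0).argmax()
print("Top class index:", top_class)
```

The returned embeddings can also serve as input features for transfer learning on your own audio data.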

Video

Video processing is increasingly important, and TensorFlow Hub has models for this domain too, like the MoViNet collection for video classification and the I3D model for action recognition.

A gif of a video processor in TensorFlow Hub.

TF Hub also has tutorials for action recognition, video interpolation and text-to-video retrieval.
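As an illustrative sketch, loading a MoViNet classifier looks like the following. The handle is from the TF Hub MoViNet collection, but the clip shape and the `image` signature argument are assumptions based on the model’s documentation – check the model page for the exact input spec.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a MoViNet-A0 video classifier from the TF Hub MoViNet collection.
model = hub.load(
    "https://tfhub.dev/tensorflow/movinet/a0/base/kinetics-600/classification/3")

# A dummy clip: 1 video, 8 RGB frames of 172x172 pixels, values in [0, 1].
video = tf.random.uniform((1, 8, 172, 172, 3), dtype=tf.float32)

# 'image' is the documented input name for the serving signature (assumption).
outputs = model.signatures["serving_default"](image=video)
print({name: t.shape for name, t in outputs.items()})  # inspect the output heads
```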

Summary

Reusing code is usually better than rewriting it, and the same applies to machine learning models. If you can use a pre-trained model for a task, it can save you time and resources, and help you make an impact in the world. TensorFlow Hub has thousands of models available for you to deploy, or to customize to your task with transfer learning.

If you want to know more about how to use TensorFlow Hub and find great tutorials, check out the documentation at tensorflow.org/hub. To find models for your own real world impact, search on tfhub.dev.

Let us know what you build and also share with the community. You can talk to the team on the TensorFlow Forum and find a community that is eager to help!


Ask a Techspert: What’s a neural network?

Back in the day, there was a surefire way to tell humans and computers apart: You’d present a picture of a four-legged friend and ask if it was a cat or dog. A computer couldn’t identify felines from canines, but we humans could answer with doggone confidence. 

That all changed about a decade ago thanks to leaps in computer vision and machine learning – specifically, major advancements in neural networks, which can train computers to learn in a way similar to humans. Today, if you give a computer enough images of cats and dogs and label which is which, it can learn to tell them apart purr-fectly.

But how exactly do neural networks help computers do this? And what else can — or can’t — they do? To answer these questions and more, I sat down with Google Research’s Maithra Raghu, a research scientist who spends her days helping computer scientists better understand neural networks. Her research helped the Google Health team discover new ways to apply deep learning to assist doctors and their patients.

So, the big question: What’s a neural network?

To understand neural networks, we need to first go back to the basics and understand how they fit into the bigger picture of artificial intelligence (AI). Imagine a Russian nesting doll, Maithra explains. AI would be the largest doll, then within that, there’s machine learning (ML), and within that, neural networks (… and within that, deep neural networks, but we’ll get there soon!).

If you think of AI as the science of making things smart, ML is the subfield of AI focused on making computers smarter by teaching them to learn, instead of hard-coding them. Within that, neural networks are an advanced technique for ML, where you teach computers to learn with algorithms that take inspiration from the human brain.

Your brain fires off groups of neurons that communicate with each other. In an artificial neural network (the computer type), a “neuron” (which you can think of as a computational unit) is grouped with a bunch of other “neurons” into a layer, and those layers stack on top of each other. Between each of those layers are connections. The more layers a neural network has, the “deeper” it is. That’s where the idea of “deep learning” comes from. “Neural networks depart from neuroscience because you have a mathematical element to it,” Maithra explains. “Connections between neurons are numerical values represented by matrices, and training the neural network uses gradient-based algorithms.”
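To picture those stacked layers in code, here’s a minimal sketch in Keras (the layer sizes are arbitrary): each Dense layer is a group of “neurons”, and the weight matrix between consecutive layers holds the numerical connection values Maithra describes.

```python
import tensorflow as tf

# A small feed-forward neural network: three stacked layers of "neurons".
# The connections between layers live in weight matrices that training adjusts.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),  # hidden layer 1
    tf.keras.layers.Dense(16, activation="relu"),                    # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),                  # output neuron
])

# Each kernel below is the matrix of connection values between two layers.
for layer in model.layers:
    kernel, bias = layer.get_weights()
    print(layer.name, "weight matrix shape:", kernel.shape)
```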

This might seem complex, but you probably interact with neural networks fairly often — like when you’re scrolling through personalized movie recommendations or chatting with a customer service bot.

So once you’ve set up a neural network, is it ready to go?

Not quite. The next step is training, which is where the model becomes much more sophisticated. Similar to people, neural networks learn from feedback. Going back to the cat and dog example, your neural network would look at pictures and start by randomly guessing. You’d label the training data (for example, telling the computer whether each picture features a cat or a dog), and those labels would provide feedback, telling the neural network when it’s right or wrong. Throughout this process, the neural network’s parameters adjust, and it gradually goes from guessing randomly to learning to tell cats and dogs apart.
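Here’s a minimal sketch of that feedback loop, with random stand-in data playing the role of labeled cat and dog pictures: the loss compares the network’s guesses to the labels, and a gradient-based optimizer adjusts the parameters to reduce it.

```python
import numpy as np
import tensorflow as tf

# Stand-in "images" (200 flattened feature vectors) and 0/1 labels
# playing the role of cat-vs-dog training data.
features = np.random.rand(200, 64).astype("float32")
labels = np.random.randint(0, 2, size=(200,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "dog"
])

# The loss measures how wrong each guess is (the feedback); the optimizer
# nudges the network's parameters to reduce that loss over many passes.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)
```

With real, consistently labeled data, this same loop is what moves the network from random guessing toward reliable predictions.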

Why don’t we use neural networks all the time?

“Though neural networks are based on our brains, the way they learn is actually very different from humans,” Maithra says. “Neural networks are usually quite specialized and narrow. This can be useful because, for example, it means a neural network might be able to process medical scans much quicker than a doctor, or spot patterns a trained expert might not even notice.”

But because neural networks learn differently from people, there’s still a lot that computer scientists don’t know about how they work. Let’s go back to cats versus dogs: If your neural network gives you all the right answers, you might think it’s behaving as intended. But Maithra cautions that neural networks can work in mysterious ways.

“Perhaps your neural network isn’t able to identify between cats and dogs at all – maybe it’s only able to identify between sofas and grass, and all of your pictures of cats happen to be on couches, and all your pictures of dogs are in parks,” she says. “Then, it might seem like it knows the difference when it actually doesn’t.”

That’s why Maithra and other researchers are diving into the internals of neural networks, going deep into their layers and connections, to better understand them – and come up with ways to make them more helpful.

“Neural networks have been transformative for so many industries,” Maithra says, “and I’m excited that we’re going to realize even more profound applications for them moving forward.”
