Building AI talent to accelerate your business

Max Saltonstall
Google Cloud - Community
4 min read · Oct 22, 2020


(All the credit for these insights goes to Peter Grabowski, Google Austin Site Lead, Enterprise AI)

We’ve spent the past few years at Google building a horizontal AI team focused on machine learning (ML) for enterprise applications. It wasn’t easy, but it has had a huge impact. Along the way, we’ve identified several benefits of a centralized ML team: better access to talent, improved engineer retention, more reusable organization-wide solutions, and greater capacity to absorb bursty ML workloads.

But first — what is a horizontal AI team, and how do you build one?

Why a centralized team?

Piles of accumulating data can easily overwhelm analysts and product teams, yet they also present an opportunity: what if we could use all that data, plus machine learning, to solve some tricky, large-scale problems, like patent classification or flagging inappropriate content? A central team focused on applying AI/ML techniques to company-wide problems lets us take experience from one use case (helping Legal with its mass of patents) and bring it to another (finding the right learning course for career advancement).

What is a horizontal AI team?

A horizontal AI team is a centralized group dedicated to helping the organization responsibly and productively make use of machine learning. We help ensure that business groups across Google adhere to best practices for enterprise machine learning by considering things like fairness, privacy and interpretability. We partner with Google Research to apply the latest advances in machine learning to Google’s most important business challenges. Finally, we advise business leaders across the company on where machine learning might best fit into their roadmaps, helping to prioritize the most impactful applications.

How do you build the team?

Our team was formed by combining two vertically aligned functional ML groups. After we demonstrated success in those areas, our remit expanded to include all of Google’s internal enterprise functions. Our team is part of a larger Enterprise Infrastructure horizontal that supports Facilities, Marketing, Finance, and other functions.

Hiring for the team is challenging; this group is unlike any other team I’ve built. Traditional ML teams usually have one core competency: for instance, a team focused specifically on natural language or primarily on computer vision. Given our broad remit, our team is exposed to nearly every area of machine learning, including optimization, time-series analysis, and anomaly detection.

As a result, we had to change the way we hire. No one candidate is an expert in all of the areas we cover. We look first and foremost for candidates with strong applied mathematics backgrounds and demonstrated histories of learning new techniques. From there, we select for candidates with experience in certain domains, especially those in which we currently don’t have expertise. After hiring, we place heavy emphasis on technical design reviews that effectively facilitate knowledge sharing across the team. We also frequently bring in external speakers from across and outside of Google, making sure the team stays up to date on the latest and greatest techniques.

Three things to keep in mind

We’ve learned a lot while building Google’s first Enterprise AI team. Here are three key takeaways from our experience.

1: Impact is key

When you first start the team, you’ll likely have a ton of interested clients. Some of the ideas they pitch will be great; others will be less practical. To sort through the inevitably long list, it’s important to focus on prioritization. On our team, we prioritize projects based on the impact they’re likely to have, whether that’s dollars or hours saved, risk mitigated, or gains in user satisfaction. (Of course, weighing these factors against each other could be a post in itself!)
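To make that weighing a bit more concrete, here is a toy sketch of how a team might roll those impact dimensions into a single comparable score. The fields, weights, and hourly rate are purely illustrative assumptions, not the framework we actually use.

```python
# Illustrative only: a toy impact-scoring helper for triaging project pitches.
# The fields, weights, and hourly rate are hypothetical.
from dataclasses import dataclass

@dataclass
class Pitch:
    name: str
    dollars_saved: float      # estimated annual savings, USD
    hours_saved: float        # estimated annual hours saved
    risk_mitigated: float     # subjective 0-1 score
    satisfaction_gain: float  # subjective 0-1 score

def impact_score(p: Pitch, hourly_rate: float = 100.0) -> float:
    """Collapse the impact dimensions into one comparable number."""
    monetary = p.dollars_saved + p.hours_saved * hourly_rate
    return monetary * (1.0 + 0.5 * p.risk_mitigated + 0.5 * p.satisfaction_gain)

pitches = [
    Pitch("patent triage", 250_000, 2_000, 0.3, 0.4),
    Pitch("course recommendations", 50_000, 500, 0.1, 0.8),
]
for p in sorted(pitches, key=impact_score, reverse=True):
    print(f"{p.name}: {impact_score(p):,.0f}")
```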

2: Education, education, education

We’ve run an extensive series of employee education courses, with tracks designed for recruiters, product managers, engineers, and leadership. The value of these courses can’t be overstated. Helping people learn the fundamentals of machine learning allows them to generate ideas for new projects far more effectively. We’ve found that proposals are noticeably stronger when they come from groups that have received some level of ML education.

This makes sense — the teams we partner with are the experts in their respective fields, so they’ll be best equipped to highlight appropriate problems in their space. As an added bonus, machine learning has become a lot more approachable in recent years, so many tech-savvy teams will likely be able to solve their own problems with a little guidance on problem framing and tools like AutoML, allowing you and your team to focus on the more complicated issues.

3: Be careful with people data

One of the most interesting challenges comes from working with people data. This data is especially important, but also especially sensitive, and requires extra care. Across all kinds of machine learning we’ve identified three key areas of concern: privacy, fairness, and interpretability. They matter most when dealing with data generated by people. Techniques like integrated gradients, differential privacy, federated learning, and MinDiff can help address some of these concerns.
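To make one of those techniques concrete, here is a minimal sketch of the Laplace mechanism from differential privacy applied to a simple count over people data. The epsilon value and data are illustrative, and a production system would rely on a vetted library rather than hand-rolled noise.

```python
# Minimal sketch of the Laplace mechanism (differential privacy) for a count query.
# Values are illustrative; real deployments should use a vetted DP library.
import numpy as np

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of items matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes the
    result by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: a noisy count of employees who completed a training course.
completed = [True, False, True, True, False, True]
print(dp_count(completed, lambda x: x, epsilon=0.5))
```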

How to start

Embarking on this path can be daunting, and many teams struggle with how to get started. We’ve worked to make our Cloud AI products accessible to all kinds of developers and engineers, so you can try out AI tools without much prep time. One of the biggest hurdles in getting started with ML is that hyperparameter tuning and model training can be really hard.

At least, hard for humans.

So we set our AI to work on exactly that, and now you can apply those products to language, images, video, or structured data yourself. Try AutoML and train your own private models that build on Google’s work in structured data and self-tuning models.
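As a rough illustration, here is a sketch of calling a trained AutoML Natural Language text-classification model with the google-cloud-automl Python client. The project ID, model ID, and input text are placeholders, and the exact client surface may differ depending on the library version you install, so treat this as a starting point rather than a definitive recipe.

```python
# Rough sketch: getting a prediction from a trained AutoML text-classification model.
# PROJECT_ID and MODEL_ID are placeholders; check the current Cloud docs for the
# latest client surface (newer projects may use Vertex AI instead).
from google.cloud import automl

PROJECT_ID = "your-project-id"   # placeholder
MODEL_ID = "your-model-id"       # placeholder

prediction_client = automl.PredictionServiceClient()

# Full resource name of the trained model.
model_name = automl.AutoMlClient.model_path(PROJECT_ID, "us-central1", MODEL_ID)

# Wrap the text to classify in the request payload.
snippet = automl.TextSnippet(content="Our patent filing covers ...", mime_type="text/plain")
payload = automl.ExamplePayload(text_snippet=snippet)

response = prediction_client.predict(name=model_name, payload=payload)
for result in response.payload:
    print(result.display_name, result.classification.score)
```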

