Improving marketplace integrations with Pandium

Max Saltonstall
Google Cloud - Community
6 min read · Apr 1, 2020


How do I get my software to all the markets?

Integrating with a huge variety of cloud services, marketplaces and products quickly creates tons of extra work for your development teams, pulling them away from launching features. As businesses continue to move their operations further into the cloud, large companies are using an average of 203 SaaS products. How do you weave all these distinct systems together?

There are too many different places to reach B2B customers.

Pandium helps SaaS companies solve this by enabling businesses to offer native integrations at scale. It’s a platform specifically designed to remove the heavy lifting associated with building and maintaining in-app integration marketplaces. They rely on Google Cloud and Google Kubernetes Engine (GKE) to run their platform, and the ease of use, reliability, and automation of these integrated services have enabled their engineering teams to focus less on infrastructure management and more on building out the core product.

Integrating with multiple third-party marketplace platforms is a hassle.

Make integrations faster

Pandium’s core product came about in response to the growing demand for native integration marketplaces. Business users increasingly don’t want to wait for their own developers to build custom integrations, and a growing number of SaaS companies, like Salesforce, Shopify, and Slack, offer self-serve, ready-made integrations to their customers through an application marketplace.


From a technical perspective, companies face challenges in building these integrations and the infrastructure they require to function at scale. Their developers need to securely build, maintain, and host these integrations, while providing a front-end that business users can easily navigate. This is incredibly complex to build, and as a result, many SaaS companies end up with only a few functional integrations to offer their customers, with high maintenance and customer support costs.

If your product makes developers’ lives easier, they will use it.

The Pandium platform provides the in-app marketplace infrastructure (authentication, security, front-end UI, account provisioning, business-user logging, and hosting) so SaaS developers can focus solely on writing the specific integration configurations their customers need.

Most platform integrations create overhead

Traditional integration platforms offer some simple use cases that can be implemented without code, but for more complex configurations, they require developers to code within and around visualized elements (bundled code) in a fairly rigid system. This means engineers have to learn an esoteric system that only applies to itself. Pandium, by contrast, is designed so developers can securely push simple command-line-based scripts, in whatever language they already write in, from their repo to the Pandium platform, giving engineers maximum flexibility and speed to iterate on their integration configurations without having to learn a new system.

Language-agnostic and command-line tools speed up your engineering.

To make this work at scale, Pandium’s engineering team chose a microservices architecture with containers to ensure the platform runs as efficiently and securely as possible. With this structure, other clients and their customers are not affected if there is ever an issue with one client’s integrations. Because Pandium runs third-party code on its platform, it faces unique security concerns and needs to ensure that no client’s errors or performance issues affect any other client’s account.
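
As a rough illustration of that isolation model (the names here are hypothetical, not Pandium’s actual setup), Kubernetes lets each client’s integrations live in their own namespace with a resource quota, so one tenant’s runaway job can’t starve the others:

```sh
# Hypothetical per-client isolation: a dedicated namespace plus a resource quota.
kubectl create namespace client-acme

kubectl create quota client-acme-quota \
  --hard=requests.cpu=4,requests.memory=8Gi,pods=20 \
  --namespace=client-acme
```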

What if my app goes viral?

In addition, any client’s in-app integration marketplace can quickly gain or lose users, and, as the host of these marketplaces, Pandium needs an architecture that can respond efficiently to rapidly changing utilization without compromising availability or causing a huge spike in costs.

Efficient auto-scaling improves confidence and performance.

They had initially used a different cloud provider, and it took them hours to spin up clusters. They also regularly received night-time pages related to the Kubernetes control plane, such as high memory consumption in the etcd database that backs it. Pandium decided to switch to GKE so they could focus on providing and managing integration marketplaces at scale and leave the work of running cluster subsystems to Google.

With a few simple gcloud commands, they were able to spin up clusters in minutes. Not having to worry about master node health also allowed them to sleep better at night… literally.
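
For illustration, spinning up a zonal GKE cluster really is only a couple of commands (the cluster name and zone below are placeholders):

```sh
# Create a small zonal GKE cluster; Google manages the control plane and etcd.
gcloud container clusters create marketplace-demo \
  --zone us-central1-a \
  --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials marketplace-demo --zone us-central1-a
```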


Running at scale and saving money

GKE node pools can be set up as preemptible, which can save up to 80 percent of the compute cost of running a cluster on Google Cloud. Pandium runs many jobs at punctuated intervals, and those jobs do not need to run for long periods, so these ephemeral workloads can be shifted onto preemptible nodes without compromising their clients’ experience.
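
A sketch of what that can look like, assuming a hypothetical cluster named marketplace-demo: a separate preemptible node pool that scales down to zero when no integration jobs are running.

```sh
# Preemptible nodes cost far less and suit short-lived jobs that
# tolerate being rescheduled.
gcloud container node-pools create integration-jobs \
  --cluster marketplace-demo \
  --zone us-central1-a \
  --preemptible \
  --num-nodes 0 \
  --enable-autoscaling --min-nodes 0 --max-nodes 10
```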

Save 80% with preemptible GKE node pools.

In addition, GKE’s node pools add an extra layer of security and segregation that their clients need. Pandium’s own worker nodes can be separated from the nodes that run client workloads, for example, and first-party nodes, which they completely control, can run at a different security level than the clients’ nodes.
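
One way to express that separation (again a hedged sketch, not Pandium’s exact configuration) is a dedicated node pool for client workloads, with its own least-privilege service account and a taint so nothing is scheduled there unless it explicitly tolerates it:

```sh
# Hypothetical pool for third-party integration workloads only.
gcloud container node-pools create client-workloads \
  --cluster marketplace-demo \
  --zone us-central1-a \
  --service-account integrations-sa@my-project.iam.gserviceaccount.com \
  --node-taints dedicated=client-workloads:NoSchedule \
  --node-labels pool=client-workloads
```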

Multi-tenant systems require strong security boundaries. So, Kubernetes!

From automatic scaling and automatic upgrades to logging through Stackdriver, GKE makes it possible to take full advantage of Kubernetes without devoting significant engineering resources to designing, managing, and maintaining it.
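
For example, autoscaling and node auto-upgrades are just flags on a managed node pool (cluster and pool names below are placeholders):

```sh
# Let the default pool grow and shrink with demand.
gcloud container clusters update marketplace-demo \
  --zone us-central1-a \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 15

# Keep nodes patched and healthy without manual intervention.
gcloud container node-pools update default-pool \
  --cluster marketplace-demo \
  --zone us-central1-a \
  --enable-autoupgrade --enable-autorepair
```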

Over time, the drift between environments (production, dev, and staging) became too great to manage by hand. Cluster deployment time reverted from minutes back to hours. Operations teams were sad.


Speed up deployment => happier operators

To fix this, Pandium turned to Terraform, and in particular Google’s prebaked Terraform modules, to manage their cloud infrastructure as code. Using the HashiCorp Configuration Language (HCL), they could define their infrastructure with clear, concise code, without having to spell out how to get to that state. This is similar to writing a SQL query against a database: you declare the result you want, not the details of how to compute it. This enabled Pandium to get their environment creation process back down to minutes.
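
A minimal HCL sketch of that declarative style (project and resource names are assumptions): you describe the cluster you want, and terraform apply works out the steps to get there.

```hcl
provider "google" {
  project = "my-project-id"   # placeholder project ID
  region  = "us-central1"
}

# Declare the desired cluster; Terraform figures out how to reach this state.
resource "google_container_cluster" "marketplace" {
  name     = "marketplace-demo"
  location = "us-central1-a"

  # Manage node pools separately from the cluster itself.
  remove_default_node_pool = true
  initial_node_count       = 1
}
```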

Terraform lets you define and deploy infrastructure with code.

With the modules provided by Google, they jumped months ahead of their infrastructure roadmap and could now create and configure everything from Cloud projects to IAM service accounts to node and network policies in GKE. Running this way empowers Pandium to ensure high availability even with utilization spikes, as their systems can autoscale up and down.
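
For instance, Google’s kubernetes-engine module on the Terraform Registry bundles cluster, node pool, and networking wiring behind a handful of inputs. The values below are illustrative placeholders; check the registry docs for the module’s current interface.

```hcl
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  project_id        = "my-project-id"
  name              = "marketplace-demo"
  region            = "us-central1"
  network           = "default"
  subnetwork        = "default"
  ip_range_pods     = "pods-range"       # secondary ranges are placeholders
  ip_range_services = "services-range"

  node_pools = [
    {
      name         = "integration-jobs"
      machine_type = "n1-standard-2"
      preemptible  = true
      min_count    = 1
      max_count    = 10
    },
  ]
}
```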

By relying on Google’s infrastructure and its robust support, Pandium has been able to provide the foundation for SaaS companies to build scalable in-app marketplaces. This has freed up those customers’ developers to further enhance their core product, while still offering their own customers the integrations they need. To learn more about how Pandium leveraged GKE, watch the Google Cloud interview of Pandium CEO Cristina Flaschen.
