Ways to execute code in Google Cloud

Guillem

4 min read

Unless Google's products and services are your area of expertise and your day-to-day job, you're almost guaranteed to have questions when it comes to launching a project on Google Cloud. Not just because of the overwhelming number of products, but also because of how quickly they appear and evolve. Let's try to clear up some of those questions here.

First of all, what are the options?

As of today, to run our projects (or parts of them) on Google Cloud, we have:

  • App Engine
  • Compute Engine
  • Kubernetes Engine
  • Cloud Functions
  • Cloud Run

I’ve listed them in the same order they appear in the Google Console, but later on, we’ll organize them differently.

We’d also need to add several options for storing data or the state of our applications, if they have any, but we’ll dive into that another time.

A traditional machine

If what you’re looking for is a traditional machine, even though you won’t have to worry about the power supply failing or upgrading the RAM, the closest option is Google Compute Engine. Compute Engine is itself a universe of possibilities, where you pay for machines based on usage (by the hour), ranging from shared machines with 0.2 cores and 0.6GB of memory to machines with over 400 cores and 11TB of memory. There are various options for persistent disks, machines for low-priority and non-critical tasks, and so on, plus discounts for sustained use and countless machine configurations to choose from. It’s easy to run code on a Compute Engine machine, realize it’s too small for your needs, stop it, edit it to add more memory, more processors, or both, and restart it.
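
As a rough sketch of that resize flow using the gcloud CLI (the instance name, zone, and machine types below are placeholders, and flag syntax can vary slightly between gcloud versions):

    # Stop the instance, switch it to a larger machine type, and start it again.
    gcloud compute instances stop my-vm --zone=europe-west1-b
    gcloud compute instances set-machine-type my-vm --zone=europe-west1-b --machine-type=e2-standard-4
    gcloud compute instances start my-vm --zone=europe-west1-b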

Since it’s the closest thing to a standard machine, it’s relatively straightforward for someone with basic systems knowledge to set something up on Google Compute Engine. You create the machine (it usually takes a couple of minutes), and soon after, you have SSH access. There are various distributions to choose from (Debian by default).
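
Creating such a machine and connecting to it can also be done from the command line. A minimal sketch, assuming placeholder names, a European zone, and a Debian image family:

    # Create a Debian VM and open an SSH session once it is running.
    gcloud compute instances create my-vm --zone=europe-west1-b --machine-type=e2-medium --image-family=debian-12 --image-project=debian-cloud
    gcloud compute ssh my-vm --zone=europe-west1-b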

If you don’t want to install everything from scratch, Google offers pre-configured images, like WordPress, Jenkins, and more, in the Google Cloud Marketplace.

Serverless options

I’m not a big fan of the word serverless because it might make you think your code runs in the air as if by magic, when in reality there’s one (or more) machine(s) behind the scenes; you just don’t see them.

That said, there are basically three options for running serverless code in Google Cloud: App Engine, Cloud Run, and Cloud Functions. As a general summary: if you want to run an entire application, use App Engine. If some components of your application live in Docker containers (if Docker sounds like a brand of cereal to you, here’s a link ;) ), use Cloud Run to run them. And if you want to run very specific functions, use Cloud Functions.

With Cloud Functions, for example, you just define the maximum amount of memory your function will use (and a few security settings), write the function in Go, Node.js, or Python, and upload it. Google gives you an endpoint (a URL) corresponding to your function.
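
Deploying a function looks roughly like this with the gcloud CLI (the function name, runtime, region, and memory limit are placeholders, and the exact flags depend on your gcloud version):

    # Deploy an HTTP-triggered function from the current directory; gcloud prints its URL when it finishes.
    gcloud functions deploy my-function --runtime=python311 --trigger-http --memory=256MB --region=europe-west1 --allow-unauthenticated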

With Cloud Run, you upload a Docker container, and Google gives you an endpoint. Just like in Cloud Functions, you configure some security settings, but not much else.
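
As a sketch (the service name, image, and region are placeholders), a Cloud Run deployment is a single command:

    # Deploy a container image and get back a public HTTPS endpoint.
    gcloud run deploy my-service --image=gcr.io/my-project/my-image --region=europe-west1 --allow-unauthenticated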

Both have some limitations (e.g., 1,000 functions or Cloud Run services per project), but unless you’re building the next Pokémon Go, they shouldn’t prevent you from working on your project. All serverless solutions are capped at 2GB of memory.

Container management

We’ve talked about all the options Google Cloud offers, except for Kubernetes Engine. If you want to deploy your code in Docker containers and also manage the cluster yourself, Kubernetes Engine is Google’s solution. When working with containers, it’s the closest option to Compute Engine—actually, you need to select Compute Engine machines as the nodes of the cluster.
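
A minimal sketch of creating such a cluster with the gcloud CLI (the cluster name, zone, node count, and machine type are placeholders):

    # Create a three-node cluster backed by Compute Engine VMs and fetch its credentials for kubectl.
    gcloud container clusters create my-cluster --zone=europe-west1-b --num-nodes=3 --machine-type=e2-standard-2
    gcloud container clusters get-credentials my-cluster --zone=europe-west1-b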

Scalability management

One of the main differences between the serverless products and Kubernetes Engine or Compute Engine is how scalability is managed. Compute Engine itself does not scale; scaling happens completely outside the product. These are virtual machines, so if you need to scale something you have to stop the instance, change the machine type, and start it again, and that is only vertical scaling (a more powerful machine).

In Kubernetes Engine, you have to configure scalability yourself. You define the minimum and maximum number of nodes in the cluster and the type of those nodes, and when deploying pods (Docker containers) you define the minimum and maximum number of replicas of each of those pods.
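
As a sketch of both levels (names and limits are placeholders): the cluster autoscaler handles nodes, and a horizontal pod autoscaler handles pod replicas:

    # Let the default node pool scale between 1 and 5 nodes...
    gcloud container clusters update my-cluster --zone=europe-west1-b --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=5
    # ...and let a deployment scale between 2 and 10 pod replicas based on CPU usage.
    kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80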

With the serverless products, you get scalability by default. You can configure it, tune it, and put limits on it in case you're afraid things will get out of hand and your credit card will be left in tatters, but by default your function/container/application will scale without you having to worry too much about it (as far as code execution is concerned).
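
For example (the service name and limit are placeholders), capping a Cloud Run service is a single flag:

    # Limit how far the service can scale so a traffic spike can't run away with your bill.
    gcloud run services update my-service --region=europe-west1 --max-instances=5
    # Cloud Functions has an equivalent --max-instances flag on "gcloud functions deploy".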

It's not all about code

Since it's not all about code, in another post we'll talk about the different options when it comes to storing states or data from your application/project. We'll also go into some examples of use cases for the different tools we've discussed in this post.

📫
That’s it for today’s article. As always, feel free to reach out to us on social media, or at hola@softspring.es, with any questions or suggestions!
