Brew Gcloud

Posted by admin

Install the SDK with brew install google-cloud-sdk (the directory structure doesn't need to exist yet). Then authenticate with gcloud auth login and point gcloud at your project with gcloud config set project. Running gcloud version should print something like: Google Cloud SDK 267.0.0, bq 2.0.49, core 2019.10.15, gsutil 4.44. As we can see, gcloud is set up and working. In later posts, we will see how to connect to Google Cloud VMs directly from the Windows command line using gcloud.
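On macOS, the steps above can be sketched as follows (the project ID is a placeholder; older Homebrew versions use brew cask install instead of brew install --cask):

```shell
# Install the Google Cloud SDK via Homebrew.
brew install --cask google-cloud-sdk

# Authenticate and set the active project ("my-project" is a placeholder).
gcloud auth login
gcloud config set project my-project

# Verify the installation.
gcloud version
```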

Hey folks, with the world keeping everyone apart, I thought it would be a good idea to write a small post on how to set up a remote desktop accessible from anywhere in the world, alone or together. The costs for this tutorial range from $0.70/h to $1.20/h for a gaming & ML server with a GPU (half that if you use preemptible / spot instances), but I would highly suggest using free credits.

Advantages of cloud remote

- I found myself using my remote machine mostly to watch movies and play games with friends around the world. I'm currently in San Francisco but have friends in Tokyo and France I often catch up with. A cloud remote lets us essentially be on the same PC, without lag or degradation of performance.
- I am on a MacBook; the remote instance lets me use Windows-only apps and games.
- Want to play the latest games at maximum settings with ray tracing on? The cloud lets you deploy top-of-the-line servers at will and adjust your configuration to your usage, paying only for the hours you actually use. I also found this setup to produce a much better experience than alternatives like Stadia.
- There are a lot of ways to get free AWS or GCP credits. New GCP users start off with $300 of credits, for instance.
- The big caveat for all the claims above is that you need a relatively fast connection, at least 2 Mbps; otherwise you might experience degraded video quality and possibly lag.

How do I get started?

Download Parsec on your local machine and make an account. Parsec will be used as the hub for your distant machines. I have tried different solutions and Parsec gave me by far the fastest and smoothest experience (I'm not affiliated with the company in any way, btw, but if they want to send over a t-shirt, I'll let them).

You will also need an account with a cloud provider, with the appropriate quotas in the case of GCP. If you already have one, you can skip this.


Go here to create your GCP account. GCP gives you $300 of free credit as a new user, but AWS gives you a free Windows Server license so it's a trade-off. Additionally, AWS instructions require minimal use of the terminal and might be easier for some users. In either case, you will need to add a credit card to the account.


Open your terminal (available in the Launchpad > Utilities > Terminal)

You will need brew and brew cask to run the following instructions. You can install them with the below. Alternatively you can just follow the manual instructions here.
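If you don't have Homebrew yet, the official install script sets it up (recent Homebrew versions include cask support out of the box):

```shell
# Install Homebrew using the official install script.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```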

Run the following to install the gcloud CLI.
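A minimal sketch, assuming Homebrew is installed (older Homebrew versions use brew cask install google-cloud-sdk instead):

```shell
# Install the gcloud CLI.
brew install --cask google-cloud-sdk
```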

If you have any issue with the above just check this link.

You will need to initialize the gcloud CLI and activate the GCE API with the following commands.
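A sketch of the initialization steps (gcloud init walks you through sign-in and default project/zone selection):

```shell
# Initialize gcloud: sign in and pick a default project and zone.
gcloud init

# Enable the Compute Engine API for the project.
gcloud services enable compute.googleapis.com
```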

GCP requires you to request an increase in quotas before using GPUs. To do so click here to access the quotas page, or search for 'quotas' in the search bar, and select 'All Quotas'. Click on the dropdown 'All metrics', and deselect all. Select 'GPUs (all regions)', and edit the quota, setting it to 1, before submitting the request. The request should be fulfilled almost instantly.

Start a server with the following command. I recommend using an n1-standard-4, which costs around $0.70/h for most games and movies, or an n1-standard-8 at $1.20/h for particularly demanding tasks. The command below uses a T4 GPU, but you can also go for a P4. Choose the region closest to you, and make sure it has T4 GPUs available. You can use the following command to see the availability of GPUs.
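To see where T4s are offered, you can list accelerator types and filter by name:

```shell
# List the zones where NVIDIA T4 GPUs are available.
gcloud compute accelerator-types list --filter="name=nvidia-tesla-t4"
```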


Sometimes regions run out of available GPUs, in which case you can just try a different one.

Launch your instance with the following command (adjust the zone, machine-type and boot-disk-size as needed).
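A sketch of the create command; the instance name, zone, machine type, and disk size are placeholders to adjust (--maintenance-policy=TERMINATE is required for GPU instances):

```shell
gcloud compute instances create parsec-gaming \
    --zone=us-west1-b \
    --machine-type=n1-standard-4 \
    --accelerator="type=nvidia-tesla-t4,count=1" \
    --maintenance-policy=TERMINATE \
    --image-family=windows-2019 \
    --image-project=windows-cloud \
    --boot-disk-size=120GB \
    --boot-disk-type=pd-ssd
```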

If everything went well, you should see your remote IP in the terminal, and your server running in the Compute Engine > VM instances tab of the GCP console.

Head to the console, and first set up a new Windows password, and make a note of it.
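If you prefer the terminal, the password can also be generated with gcloud (the instance name, zone, and username here are placeholders):

```shell
# Generate a new Windows password for the instance and print it.
gcloud compute reset-windows-password parsec-gaming \
    --zone=us-west1-b --user=parsec-user
```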

Click on 'RDP' to connect to your server (you will need to install a Chrome extension).

Once you are connected, launch the Google Cloud SDK Shell.

Launch a powershell by typing... Well... 'powershell'.

The following should set up everything you need (h/t to the Parsec team, who made that repo; I also recommend entering it line by line).

As you follow the instructions, a reboot may be required.

Log into Parsec and start sharing your remote server. On your local laptop / desktop, launch Parsec, and your machine should appear, ready to go. Connect to it in Parsec, and exit the RDP program.

Don't forget to shut down your instance when you're not using it, otherwise you will be billed for it. You're ready to game your sorrows away, alone, or together!


As I’ve written before, I’m always on the lookout for a great continuous integration and delivery system. For a long time I used CircleCI, but in the last month or so I’ve started hitting some limitations that I needed to work around.

My requirements are pretty standard, I need something that works well with Android, requires minimal maintenance, and can handle somewhat larger projects. After a souring experience with CircleCI, and a short search for other hosted providers, I stumbled upon an early solution from Google that intrigued me.

This post is the first in a series on setting up Google Cloud Build (GCB) for Android developers.

  1. Introducing Google Cloud Build

I’ve been using CircleCI for quite some time to build Pigment. It’s worked quite well, and as a team of one the free tier worked just fine for me.

As Pigment grew I started running into issues with CircleCI’s 4 GB memory limit for its standard docker containers. Initially builds would fail here and there, but Pigment kept growing, including more and more tests and resources, and build failures became more and more frequent.

After perusing the web looking for solutions, I came across several forum posts from CircleCI employees mentioning that if you upgrade to a paid plan and submit a support ticket they’ll bump the limit for you. That seemed like an easy solution, so I got my company credit card, upgraded to a paid account and opened a support ticket. The response said that a salesperson would be in touch.

I received a message from a salesperson with a brochure explaining a system by which you purchase at least 5 seats (quite a lot for my 1 person team), then purchase credits that are consumed with build minutes in quantities depending on the size of machine you need. Doing the math revealed that building Pigment would likely cost $75-$135 per month, which is quite steep.

This felt like a bait-and-switch, and was a quite costly solution with some confusing aspects, so I decided to look elsewhere.

After a brief look at Bitrise, which seemed like a decent option, I came across Google Cloud Build (GCB). While not very prevalent in the Android community, GCB seemed quite promising for Android builds due to its Docker-based build configuration and availability of high memory build machines at great prices.

As I mentioned, GCB isn’t very prevalent in the Android community at the moment, so getting my Android builds running took a good bit of exploration and experimentation, but the result has been quite pleasant.

As I mentioned, Google Cloud Build has quite attractive pricing. You have your choice of three machine sizes, with the first 120 build minutes per day at the smallest size (the same specs as CircleCI's standard containers) for free.

CPUs | Memory  | Price per build-minute
1    | 3.75 GB | $0.0034 / build-minute (first 120 build minutes per day are free)
8    | 7.2 GB  | $0.016 / build-minute
32   | 28.8 GB | $0.064 / build-minute

For Pigment I chose to go with the mid-tier build machine with 7.2 GB of memory. At that price a 10 minute build run 100 times per month will cost $16. This is a really attractive price compared to CircleCI’s base price of $75/month.

Google Cloud Build runs on Google Cloud Platform (GCP), using cloud compute VMs, cloud storage, cloud registry and other Google Cloud Platform services. GCP is easy to work with locally using the gcloud command line tools, is highly scalable, and has some really attractive pricing.

The first step when working with GCP is to install and set up the gcloud command locally. For example, on macOS:
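A minimal macOS setup sketch (older Homebrew versions use brew cask install instead):

```shell
# Install the gcloud CLI via Homebrew, then initialize it
# (sign in and choose a default project).
brew install --cask google-cloud-sdk
gcloud init
```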

Once the gcloud command is set up, you can easily manage your GCP resources from the command line.

Configuration

Like several other solutions, GCB is configured via a yaml file that lives alongside the code in your repo. The convention is to place your configuration in a file called cloudbuild.yaml in the root of your project directory. This is the config file that defines the build steps required to build your project, and will be unique to each project.

Also similar to other solutions, GCB runs builds in isolated Docker environments. Unlike other CI providers that I’ve used, however, each step of a GCB build uses its own Docker container, sharing data via a shared file system.

In the cloudbuild.yaml file, the main build configuration happens in a steps array. Here’s an example of a simple step:
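A sketch of a single step (the bucket name and paths are placeholders); this step runs the gsutil container to copy a config file from Cloud Storage into the workspace:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  id: copy_config
  args: ['cp', 'gs://my-config-bucket/app.properties', 'config/app.properties']
```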


Docker Container per Step

As I mentioned, each step in the cloudbuild.yaml file runs independently using its own Docker container. When configuring steps, the name property identifies the docker container that will be used to execute the step. The Cloud Build convention is to make the container name match the name of the command that is run when the container is launched, though that’s not required.

In the sample above, when the gcr.io/cloud-builders/gsutil docker container is loaded it will run the gsutil command, passing the arguments from the args array.

Though the Docker containers are cached between steps, so later steps that use the same container won’t have to download them again, it’s advantageous to make the containers for your build steps as lightweight as possible to reduce the amount of time spent downloading them.

Bring Your Own Container

While GCB is based on Docker and has several useful containers built in, they don’t yet have built-in Android support, so you need to bring your own container.

Fortunately, there are third party community cloud builders, including one specifically for Android, that you can use for your own builds. The readme in the repository contains instructions to deploy an Android container to your GCP project, but here are some simplified (read: copy and pasteable) instructions:
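A sketch of the deployment, assuming the Android builder lives in the community cloud builders repository:

```shell
# Fetch the community cloud builders and build the Android builder image
# in your project, pushing it to your project's Container Registry.
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/android
gcloud builds submit --config=cloudbuild.yaml .
```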

This will push the Android builder container to the Google Container Registry for your GCP project. While Docker Hub containers would be more convenient, GCB can download containers faster from GCR, making this preferable.

Substitution Variables

Cloud Build supports user defined substitution variables for builds, which are set either via the command line arguments when running manually, or via the trigger that launches builds based on changes to your source repositories. Substitution variables start with an underscore, and can be used in your build config file to allow builds to be customized for different situations.

In this example, the variable _CACHE_BUCKET is used to identify the Google Cloud Storage bucket to use for the build cache. When running locally, you’d supply the value via the --substitutions command line flag.
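A sketch of a step that references the variable (bucket contents and file names are placeholders):

```yaml
- name: 'gcr.io/cloud-builders/gsutil'
  id: copy_build_cache
  # _CACHE_BUCKET is supplied at build time via substitutions.
  args: ['cp', 'gs://${_CACHE_BUCKET}/cache.tgz', 'cache.tgz']
```

When running manually, you would pass something like --substitutions=_CACHE_BUCKET=my-cache-bucket on the gcloud builds submit command line.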

Shared Volumes

Because of the isolation of each build step it can be difficult to share data between steps. The working directory (/workspace) is a default shared directory that can be used to pass information between steps.

This example uses a custom container that I’ve created called buildnum. The gcr.io/$PROJECT_ID/buildnum container runs a buildnum script, passing in a file containing a number, and an output file. The script simply reads the number, if it exists, increments it, and writes the number back to the source file and an environment variable into the output file. This allows later steps to source the .buildenv file to get the BUILD_NUM and any other environment variables we want to add.

In addition to the shared workspace that all build steps receive, this step makes use of a shared volume called config. This is where the first copy_config step copied the remote config files to using the gsutil command earlier. Using this volumes notation allows you to add shared folders that are only used by steps that need them. In this case the only steps that include the config volume are the copy_config, setup_env and save_env steps, which copy the shared config, update the build number, and write that config back to cloud storage.
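A sketch of how the config volume is declared on a step that needs it; the exact arguments to the buildnum container are assumptions based on the description above:

```yaml
- name: 'gcr.io/$PROJECT_ID/buildnum'
  id: setup_env
  waitFor: ['copy_config']
  # Increment the build number stored on the shared config volume and
  # write BUILD_NUM into .buildenv in /workspace for later steps to source.
  args: ['/config/buildnum.txt', '.buildenv']
  volumes:
  - name: 'config'
    path: '/config'
```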

Ordering

GCB supports parallel execution of build steps, which can tremendously speed up the build process. Each step gets an optional id parameter, and you can set the waitFor array to list the steps that must complete before the step executes. This allows for some complex workflows with many simultaneously executed steps, resulting in faster build times.
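A sketch of the id/waitFor pattern (container names and arguments are placeholders):

```yaml
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  id: copy_build_cache
  args: ['cp', 'gs://my-cache-bucket/cache.tgz', 'cache.tgz']

- name: 'gcr.io/$PROJECT_ID/android:28'
  id: build
  # Runs only once copy_build_cache has finished.
  waitFor: ['copy_build_cache']
  args: ['./gradlew', 'assembleDebug']
```

A step with waitFor: ['-'] starts immediately at the beginning of the build, regardless of the steps before it.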

Running Your Builds

You can run Google Cloud Builds locally for testing, manually via the command line in Google Cloud, or automatically based on Git commits.

Local Builds

To run your builds locally, you first need to configure gcloud to access your GCR images, then install the cloud-build-local component; after that you can trigger builds locally, without incurring any costs from GCP.
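A sketch of the local setup and a local run:

```shell
# Let Docker pull images from your project's Container Registry.
gcloud auth configure-docker

# Install the local builder, then execute the build on your machine.
gcloud components install cloud-build-local
cloud-build-local --config=cloudbuild.yaml --dryrun=false .
```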

Manual Builds

To trigger a build manually you simply use the gcloud utility to send a build to the Google Cloud Build environment.
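For example (the substitution value is a placeholder):

```shell
# Submit the current directory as a build to Google Cloud Build.
gcloud builds submit --config=cloudbuild.yaml \
    --substitutions=_CACHE_BUCKET=my-cache-bucket .
```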

Automatic Triggers

Using the GCP web interface you can setup triggers that will watch your git repositories and trigger builds when certain conditions are met.


In my case I execute builds any time a commit is made to any remote branch, and filter certain build steps based on the branch. Depending on your branching or tagging technique you could trigger different build processes, via different cloudbuild.yaml files, for different cases.

Now we’ll take a look at an example cloudbuild.yaml file from the community cloud builders site, step by step.

Extract Cache

To help speed up our builds it’s important to cache the build results so that they can be reused. This prevents Gradle from having to download all of the app’s dependencies every time a build is executed. To do this, we copy the contents of our cache bucket and extract the cache tarball.
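A sketch of the two cache steps (bucket name and paths are placeholders; rsync is used so the first step doesn't fail when the bucket is still empty):

```yaml
- name: 'gcr.io/cloud-builders/gsutil'
  id: copy_build_cache
  # rsync succeeds even when the bucket has no cache yet.
  args: ['rsync', 'gs://${_CACHE_BUCKET}/', '/build_cache/']
  volumes:
  - name: 'build_cache'
    path: '/build_cache'

- name: 'gcr.io/$PROJECT_ID/tar'
  id: extract_build_cache
  waitFor: ['copy_build_cache']
  # Override the entrypoint so the step doesn't fail if cache.tgz is missing.
  entrypoint: 'bash'
  args: ['-c', 'tar xpzf /build_cache/cache.tgz -C /build_cache || echo "No cache found."']
  volumes:
  - name: 'build_cache'
    path: '/build_cache'
```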

The second step here uses a custom container from the community cloud builders repository called tar. This container runs the tar command, but you’ll notice that I override the entrypoint in this example to run bash, instead.

This is needed because we can’t guarantee the cache.tgz file exists, as it won’t the first time the build runs, or if we remove the cache while debugging. Using a bash command allows us to handle that case with echo 'No cache found.' and still continue with the build.
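The fallback pattern is easy to try locally; the path is illustrative, and the || branch keeps the exit status at 0 when the archive is missing:

```shell
# tar fails because the archive doesn't exist, so the fallback runs
# and the command as a whole still succeeds.
tar xpzf /build_cache/cache.tgz -C /build_cache 2>/dev/null || echo 'No cache found.'
```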

Build

Once we have our build cache extracted, we can build our project. For this we use the custom Android Docker container that we created earlier.
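A sketch of the build step (the container tag and Gradle task are placeholders):

```yaml
- name: 'gcr.io/$PROJECT_ID/android:28'
  id: build
  waitFor: ['extract_build_cache']
  args: ['./gradlew', 'assembleDebug']
  # Point Gradle's home at the shared cache volume.
  env: ['GRADLE_USER_HOME=/build_cache/.gradle']
  volumes:
  - name: 'build_cache'
    path: '/build_cache'
```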

Notice that the build step is configured to wait for the extract_build_cache step. It also includes the build_cache volume so it can access the cache from previous builds.

After the build completes, we write the resulting APKs to the artifact Cloud Storage bucket.
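This can be done with Cloud Build's artifacts section; the bucket and APK path below are placeholders:

```yaml
artifacts:
  objects:
    location: 'gs://${_ARTIFACT_BUCKET}/'
    paths: ['app/build/outputs/apk/debug/app-debug.apk']
```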


Cleanup

Finally, once the build is complete we compress the Gradle cache and wrapper and upload them to the cache Cloud Storage bucket.
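A sketch of the cleanup steps, mirroring the extract steps above (paths and bucket name are placeholders):

```yaml
- name: 'gcr.io/$PROJECT_ID/tar'
  id: compress_cache
  # Archive the Gradle caches and wrapper from the shared cache volume.
  args: ['cpzf', '/build_cache/cache.tgz', '-C', '/build_cache', '.gradle']
  volumes:
  - name: 'build_cache'
    path: '/build_cache'

- name: 'gcr.io/cloud-builders/gsutil'
  waitFor: ['compress_cache']
  args: ['cp', '/build_cache/cache.tgz', 'gs://${_CACHE_BUCKET}/cache.tgz']
  volumes:
  - name: 'build_cache'
    path: '/build_cache'
```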

I’ve been working hard to get Google Cloud Build working reliably for Android builds and have been using it for the last couple of months for Pigment. With its highly configurable builds, easy management and super affordable pricing I think Cloud Build is a great CI solution.

In a future post I’ll explore how you can customize your builds to get more out of them.

Special thanks to Sebastiano Poggi, Riccardo Ciovati and Jake Wharton for reviewing this post.