Firebase, Antigravity & TypeScript FTW

Danny Davidson
Intro
Hey everybody. We're gonna begin the vlog portion of this series walking through how to use TypeScript effectively in Firebase.
To begin, make sure you have Antigravity installed as well as the gcloud command line tool. As you can see here, we have them ready and waiting.
Cloning the Template
Now we won't be starting from scratch. I have here a template that sets up a pretty good project for TypeScript development and Firebase. We'll go into details here in a bit, but to get started, you're gonna use the "Use this template" button and create a new repository. Once you've done that, go ahead and give it a name.
We'll make this "case-for-firebase" and I'll do it under our organization. We can go ahead and set it as private, which is probably what you'll want if you're building an application for yourself, and then click "Create repository".
This will generate the repository from the template. It might take just a couple of seconds. Once you've got it, we need to go ahead and create a release branch. As part of this demo, we're gonna go through how to set up both a staging environment and a production environment. So if we go here to view all branches and create a new branch, I typically name my production branch "release".
Go ahead and create it, and we now have a main branch and a release branch.
Reviewing the README.md
Now that we've got the repo created, let's take a look at the README. Our goal with this first video is to demonstrate how you can set up Firebase App Hosting and Firebase Hosting to run TypeScript. Ultimately it's React in both cases, but we're going to do a NextJS application in Firebase App Hosting, and then a traditional client-side React application deployed to Firebase Hosting. We're also set up with PNPM workspaces and shared packages, so you can build packages that are shared across Cloud Functions, Firebase Hosting, and App Hosting while maintaining them in one spot.
We've also set up a full build pipeline allowing you to deploy both to Firebase Hosting and to Firebase App Hosting just with a single push to one of your branches.
Reviewing AGENTS.md
And given that AI development is here to stay, we're also set up pretty well for steering any agents you're working with toward solid TypeScript code. Here's the AGENTS.md file. We've got our persona, a description of our project structure, and our tech stack and versions.
We're making heavy use of the workflows available in Antigravity, as well as putting as many guardrails as we can on agents as they execute in our environment. Our goal is to reach convergence as best we can by putting deterministic layers in place to steer the AI: we're using linting, type checking (obviously), formatters, and commitlint to make sure our commits follow the spec we want. All of those are listed in the agents file here, which gives pretty good guidance and, across all the projects I've done, has given the best chance of getting shippable code out of the LLMs.
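To give you a feel for the shape of a file like that, here's a rough sketch. This is not the actual file in the template, just an illustration of the sections described above, with placeholder contents:

```markdown
# AGENTS.md (illustrative sketch)

## Persona
You are a senior TypeScript engineer working in a pnpm monorepo deployed to Firebase.

## Project structure
- apps/fb-app-hosting   - NextJS app deployed to Firebase App Hosting
- apps/*                - client-side React app deployed to Firebase Hosting
- packages/ui           - shared UI components and theme

## Tech stack & versions
- Node 22, pnpm workspaces, TypeScript, NextJS, Vite, React, Storybook

## Guardrails
- Run lint, type check, and the formatter before proposing a commit.
- Commit messages must pass commitlint.
- Prefer the documented workflows (for example, the commit workflow) over ad-hoc shell commands.
```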
Cloning the Repo
Okay. So with that overview in place, let's go ahead and get the code down to our system and open it in Antigravity. First, go ahead and clone the repo here. Let's grab a copy of the URL. We'll clone it locally, cd into the directory, and use the agy command you get when you install Antigravity, which lets you open the project from the command line.
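Roughly, that sequence looks like this. The repository URL is a placeholder for your own org, and passing a path to agy is an assumption based on how editor launchers usually work:

```bash
git clone https://github.com/<your-org>/case-for-firebase.git
cd case-for-firebase
agy .   # open the current directory in Antigravity
```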
Setting up Antigravity
And with Antigravity open, let's go ahead and get it set up so that we're doing the best job we can with both Google Cloud and Firebase. One of the big things to set up, beyond what's in the file system, is your connection to MCP servers. So up here with the three dots, there's a little helper that lists the MCP options you've got. You can obviously add more in JSON, but this gives you a nice list view of the ones supported by default. You'll definitely want the Firebase one; I've already got it installed. And as you can see, for all the different Google Cloud services that are available, there are quite a few MCP servers that can help as your application needs get more complex.
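If you'd rather wire one up by hand in JSON, an entry for the Firebase MCP server generally follows the common mcpServers convention. The exact config file name and location depend on your Antigravity version, so treat this as a sketch rather than the definitive format:

```json
{
  "mcpServers": {
    "firebase": {
      "command": "npx",
      "args": ["-y", "firebase-tools@latest", "experimental:mcp"]
    }
  }
}
```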
I've also used the Linear one, since that's what we use for our project management; it's quite nice. There's a GitHub one here somewhere. But one thing to keep in mind is the recommendation that you really only want about 25 tools active. The LLMs' effectiveness at using them tends to deteriorate beyond that.
But you are allowed to have up to a hundred tools. So depending on the MCP server, you'll have a different number of tools. Here for Firebase, you've got quite a few that get installed as part of the setup, but you can turn on and off specific ones.
So something to be aware of: MCP servers are available, the agent will lean on them when it's appropriate, and your target is about 25 tools active at any given time. That said, I've had pretty good success with closer to 50, or even 75. It's really only if they all become relevant at once that the LLMs can get confused.
Walking Through Antigravity
So with Antigravity configured, let's go ahead and do just a quick walkthrough so you can be aware of the environment. We're not gonna spend a ton of time. A lot of people have already done quite a few walkthroughs on how to use it. But let's go ahead and close out of there and we'll start with the Agent Manager.
So ultimately, things are organized in Antigravity around conversations in workspaces. Workspaces are typically gonna be your local git clones for different projects. When you're in this view, you're able to give a prompt directly, but it'll be contextualized to the project you have set up.
If you're doing high level work this is a great place to get started. Let's do just a quick review of our code: "review this repo and give me a high level understanding of what it does". So we'll see the agent start up. What's nice is you get its thinking as it goes through organized around tasks, and it will prompt you for feedback when it needs it.
But you can see here that we've finished the work and we've got a high level description. At any given time, I can create another conversation, but can always go back to ones that I've done to see the actual content.
This is quite nice. Things obviously build up quite a bit over time, but being able to go back and review these is really handy. Now, if we go back over to the editor and open up the agent tab here, we'll see that the most recent conversation is available.
So that's nice: you can be in this view without thinking about the specifics of the code, then quickly jump over to the code in context and see your active conversation.
Now, one of the most common features you're going to use as you're editing code is the agent view over here, contextualized to specific code over here in the editor. So let's go ahead and start a new conversation. I'm gonna do "command + shift + L" to pull that up. And let's go ahead and select some text.
So I want to make a very superficial change to this component. I'll do "command + L" and you'll see I get the file and the specific line numbers that are relevant pasted over into the conversation. This gives the LLM contextualization on what we're actually talking about, which vastly improves the chances that it's gonna produce what you want.
So let's make a very superficial change. Let's say "let's change the button text to 'let's go' ". Just like in the manager view, we're gonna see it thinking and organizing its work based on tasks. It is quite nice to monitor all of the different steps it takes.
And after some thinking, you can see it figured out what the change was, and it gives me the option to accept or reject it. We'll accept it here, and we now have the button saying "Let's go". This is extremely useful; oftentimes you can write and refactor quite a bit of code all at once. And of course there's also autocomplete on steroids, which you'll get throughout the experience. I find it quite annoying at times, but it really does accelerate your ability to edit and make changes across the code.
And then one final thing to review before we move on to the specifics of this TypeScript project: you have workflows built in, and they even render in a specific Markdown format here. A lot of the time you'll just have basic commands you want to run, but they can get more complex. Our commit workflow is very specific, to try to get the best possible commit messages out of the LLM.
And so with that in mind, let's finish up our walkthrough with a commit. Using our slash command here, you'll see all of our workflows show up as available in our conversation. We'll do a commit, and let's go ahead and push it when we're done. Because we're going to be doing command-line activities as part of this task, we'll get prompted once it starts. As we watch here, you can see it prompts for permission to do a command-line execution, so we'll accept it with option-enter. It then asks for a diff, which we'll allow, and we'll see the output as it occurs.
And we'll see finally that it comes up with a commit message that does a good job of describing what we did. You'll notice that all of our hooks are running: we've got a formatting hook to guarantee all of the code is formatted, linting that runs, as well as a type check, and as long as all of that succeeds, our commit will go through.
And then finally, since we asked it to push, it prompts us to execute a git push, which we'll accept, and we've now pushed our local branch up to our remote. As you can see, this is quite effective. LLMs are great at summarization, and I've used this really successfully to write better commit messages than I ever have before.
So I definitely recommend you take advantage of these kinds of workflows in Antigravity.
I encourage you to explore more. Antigravity has quite a few features, all of them very familiar if you're used to VS Code, and the agent capabilities are quite powerful.
Setting up Firebase Projects
All right. Now let's get into the specifics of Firebase. We're going to create two projects using the Firebase console, both a staging environment and a production environment. To start, let's go ahead and create the production environment. You see here, there's a big juicy button. It will allow you to create a new Firebase project.
Let's give it a name. We'll do "Case for Firebase", and it always appends a random string at the end. I usually like to be a little more explicit (that one's taken). Let's try it like that. Click continue. It'll then ask about Google Analytics; we can typically just use the default unless you have a more complex setup, and we'll create the project.
And we're ready. So we'll click continue, and we'll get dropped onto the landing page for our project. Now, one thing that helps you keep production environments straight: there's an environment type under project settings. Go ahead and click that and let's switch it to production.
So once we save, you'll see a nice little badge added to this project, which will help you keep it differentiated from your other environments. With production done, let's go ahead and do staging as well. It's going to be the same process. Just like before, the analytics default should be fine, and we'll create the project. We'll click continue and get dropped onto the landing page for our second project, and this will be our staging environment. The recommendation from Firebase is to use a separate project per environment, each ultimately running the same application stack, just for a different use case.
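If you'd rather script project creation instead of clicking through the console, the Firebase CLI can do it too. Project IDs have to be globally unique, so the ones below are illustrative:

```bash
firebase projects:create case-for-firebase-0001 --display-name "Case for Firebase"
firebase projects:create case-for-firebase-0001-staging --display-name "Case for Firebase Staging"
```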
Setting up App Hosting
With this done, let's go ahead and configure our App Hosting. So for App Hosting, you're gonna have to have a Blaze plan initialized, but to get there, you'll go to the "Build" section and click "App Hosting". Now we're in our staging environment, so this will be for staging. You're going to have to upgrade your project.
So let's click that. And if you've used Google Cloud, you'll already have a billing account ready. If not, you'll need to create one. You can follow this button to get that done. I've already got one. I'll click and link it here. And we are now ready to set up App Hosting. So we'll follow the flow here. You can also do this on the command line but just for demonstration, we'll do it here on the console.
Click "Get started". And our App Hosting runs on Cloud Run which is KNative running on Google's infrastructure. We will need to choose a region. And for the demo, we'll do "us-central1." Click next. As you can see, we're not connected. Okay, we'll continue. We're gonna have to authenticate with GitHub. We'll choose our account. In this case it'll be our organization. And we'll click confirm. And while we're here, we'll go ahead and choose a repository. We've got our "case-for-firebase" ready to go. Click next. Now it's gonna wanna know what branch. This is our staging environment, so we'll have that be our mainline environment, so that as we merge into main, our staging environment will reflect it. Also need to choose the app root directory. And since we're doing a monorepo, we've got ours configured to sit at "apps/fb-app-hosting". And by default you can get automatic rollouts to occur whenever you check in to this branch. We are using a custom build setup so that we can deploy more things than just App Hosting at once. So we'll turn off automatic rollouts. Click next and let's give it a name for our backend. For demonstration purposes, we'll use the name of our app. And append a "backend" keyword and "Finish and Deploy". This will get us configured up, which can take a few seconds.
And you'll see that we now have a backend represented. Now, with our initial commit here, it's going to attempt a rollout. That will fail based on how we're configured, but once we've got our full build setup in place, we'll come back here and see how this looks after a successful build.
So we're going to move on and do the same thing for production. I won't make you watch it here. One thing to be aware of, though: the App Hosting backend name we chose here needs to be the same in both your staging and production environments.
Setting up Locally
Now that we're set up in the console, let's get set up locally. There are a few things we need to do to get started. First, we need to associate the projects we just created with our local environment. We can ask the Firebase CLI what projects we have available with a command-line option.
We've got that aliased here as a package script, pnpm list-projects, and we can see a full listing of all the projects available to our user. We want to pick one of these and switch to it so we're in that context. So let's start with the staging environment.
We'll use another package script we have aliased, switch-project, to switch to our staging environment. All right, now that we're there, let's list the projects again, and we'll see a "(current)" decorator on this project ID, shown in blue. Now that we're in this context, all the downstream CLI commands will be associated with this project.
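Those package scripts are thin wrappers around standard Firebase CLI commands. Roughly, and as a sketch (the script names are from this template, and exactly how switch-project takes the project ID is up to the template):

```bash
pnpm list-projects                                  # likely wraps: firebase projects:list
pnpm switch-project case-for-firebase-0001-staging  # likely wraps: firebase use case-for-firebase-0001-staging
```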
But first, we're gonna need to configure our .firebaserc file, which is right here; in it there's a mapping of name to project ID. We want to use the staging project ID as both our "default" and our "staging" keyword, and then our production project ID for the "production" keyword. These keywords are what you'll use with firebase use to switch projects, so keep them in mind; you can always look them up right here. With that done, we also want to set up our firebase.json for App Hosting. You can see here we have an App Hosting section, and it's currently empty. We're going to initialize it using the Firebase CLI. So if we come back to the command line, we'll clear the screen, and it's going to be firebase init, and we're going to init apphosting.
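For reference, the .firebaserc mapping we just described ends up looking something like this (the production ID below is a placeholder for the one you created):

```json
{
  "projects": {
    "default": "case-for-firebase-0001-staging",
    "staging": "case-for-firebase-0001-staging",
    "production": "your-production-project-id"
  }
}
```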
Okay. So we used the console to create the backend in the last step. If you hadn't done that, you could also create the backend here at the command line, but we're going to link it now, and we can see the name of the backend we chose. Hit enter, and it asks where the app's root directory is.
It's going to be the same as what we chose in the console, "apps/fb-app-hosting", and you can see it lists the steps it took: it created an apphosting.yaml file and wrote an entry into the apphosting section of firebase.json.
If we go back to firebase.json, we can see that the array has a new object with the values we just put in: the backendId, the rootDir, and then this ignore list. This is important because whenever you deploy your App Hosting backend, it's built using a buildpack in Cloud Build and then sent up to Cloud Run to actually deploy.
And if you have a lot of large files in your repo, there's a maximum of about a hundred megabytes compressed that you're able to ship for the build. A lot of times I'll have a big art directory with a bunch of Git LFS art assets and the like, and we don't want to deploy those.
So in this ignore array, we can go ahead and include those paths; they won't get shipped up, which keeps us under the 100 MB limit.
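Put together, the App Hosting entry in firebase.json ends up looking something like this. The backendId and rootDir are the ones we chose above; the ignore paths are illustrative, so check your own generated file:

```json
{
  "apphosting": [
    {
      "backendId": "fb-app-hosting-backend",
      "rootDir": "apps/fb-app-hosting",
      "ignore": ["art/**", "node_modules", ".git"]
    }
  ]
}
```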
Dev'ing Locally
All right. With that done, we're now set up to run locally. We've got a nice dev script set up in our package.json (let's go ahead and turn on word wrap) that runs the Firebase emulator, which can emulate many of the databases as well as both static Hosting and App Hosting.
It also runs Storybook, which we use to build up a UI component library separate from our main applications. We run all of those concurrently, so we've got all the dev servers running in one session that we can start and stop together. So if we come over here to the CLI, we run pnpm dev.
We're gonna spin all of these up in parallel, and you can see that the emulator spins up and it puts our App Hosting at 5006, Hosting at 5005, and then we've got Storybook running at 6006. Now, these emulator details are also in your firebase.json. If you scroll into the emulator section, you can see all of those ports listed just as we define them.
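The emulators section being referenced looks roughly like this. The ports match what we just saw; the exact field names may differ slightly by firebase-tools version, and Storybook's 6006 comes from its own config rather than firebase.json:

```json
{
  "emulators": {
    "hosting": { "port": 5005 },
    "apphosting": { "port": 5006 },
    "ui": { "enabled": true }
  }
}
```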
So if you come over to the browser, we have a very simple UI, just for demonstration purposes. Here is our storybook. We've got our basic UI set up so that we can have multiple different versions running and can confirm that everything renders as we expect. Then we have App Hosting running. If we refresh here, you can see this is definitely Firebase App Hosting. And then we have just generic static Hosting. We can look at the applications themselves here in a bit, but that is what we're working with, at least for this demo.
Reviewing our Demo Applications
And let's quickly look at the applications. There's nothing much to them, but we can at least give you a high-level understanding of what's going on. Under fb-app-hosting, this is our NextJS application. We have an app folder here, and we can see that our global.css is using Tailwind but pulling all of the theme information from @packages/ui. This way we can declare all of our styling in one spot and reuse it across any application that needs it. Our very basic page uses a Demo component that we pull from @packages/ui. Similarly, over in our Firebase Hosting app, which is just static hosting, we have a Vite application, and our main.tsx uses our App component here. We're importing index.css, which, just like the CSS file we saw for NextJS, imports the theme from @packages/ui. If we look at App.tsx, we're using the same Demo component and rendering it right here.
If you look at @packages/ui, we can see we're set up with theme.css, which has a few basic overrides just to demonstrate, and then we have our components here, with different stories as needed to represent all of our base components.
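As a sketch of what that sharing looks like in practice, here's roughly how the static app consumes the shared component. The folder name, import path, and props are approximations of what we just saw in the repo, not the exact file:

```tsx
// apps/<hosting-app>/src/App.tsx (illustrative path)
import { Demo } from "@packages/ui";
import "./index.css"; // pulls the shared theme in from @packages/ui

export default function App() {
  // The same Demo component is also rendered by the NextJS page in apps/fb-app-hosting
  return <Demo />;
}
```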
Introducing Cloud Build
Now that we're set up to develop locally, let's go through how to build and deploy our applications. So we're first gonna go over Cloud Build, which is Google's serverless cloud build system. It uses Docker natively to allow you to build up multiple steps for a full build pipeline using Docker containers that configure up all the different utilities you need to complete each step.
The App Hosting backend we configured at the beginning uses Cloud Build by default to build and then deploy the Cloud Run service for App Hosting. But this demo is set up to do a multi-step build that deploys both to App Hosting and to Firebase Hosting, all from one check-in to one of our branches.
Reviewing Cloud Build Setup Scripts
And now, back in our project: in order to set up Cloud Build, we need a few things configured in Google Cloud to make it all work. We need to create a service account that our build runs as, and we need to set permissions on that account so it can do everything it needs to successfully pull, build, and deploy all the components we're shipping to our environments in Google Cloud.
And we're going to go through how we set that up. It can be a pain to do it all in the Cloud console, so for convenience, and to demonstrate the steps, we have some scripts that do the configuration for us. This first script creates the service account; we'll run it in a second. We also need to set up the access management roles the service account needs to do its work, which we have declared in this script under the REQUIRED_ROLES array. We'll review and walk through the steps we need to take in order to run these scripts.
Once we've got the service account created and properly configured, we'll use the Cloud Build console to set up our build to work with the different branches of this repo.
Executing Scripts to Configure Cloud Build
Okay, so now that we've reviewed the scripts, let's get ourselves set up to configure Cloud Build. The first thing we need to do is create a service account, but we need to do that under the specific Firebase project. So let's list our projects again and confirm that we're still on our staging project.
And we are. Now, beyond the Firebase CLI, we also need the gcloud CLI. To authenticate there, we run gcloud auth login. It launches another browser window, and we'll grant the different access controls. We'll allow all of these, and we're now authenticated. Back in the terminal, we'll set the gcloud config's project to "case-for-firebase-0001-staging". Okay, so the Firebase CLI and the gcloud CLI are now both aligned to the same project, and we can run our script for generating the service account. It's going to be pnpm create-builder-sa, and we need a username for the service account email; we'll choose builder.
This is using the gcloud API to create that service account, and we can see we successfully created it. So now we need to provision its IAM roles, which are all needed for it to do the different activities involved in building and deploying. We'll use our script again, this time setup-builder-iam. It also accepts a --sa username, for which we'll use builder again. We'll see the output of every role as it gets added. And we were successful, so we now have a service account ready to go with all of the IAM roles it needs to deploy successfully from Cloud Build.
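Under the hood, those scripts are essentially issuing gcloud commands along these lines. The exact role list lives in the repo's REQUIRED_ROLES array; the role shown here is just one example:

```bash
# Create the build service account ("builder" is the name we chose above)
gcloud iam service-accounts create builder \
  --project=case-for-firebase-0001-staging \
  --display-name="Cloud Build deployer"

# Grant one of the required roles (repeat for each entry in REQUIRED_ROLES)
gcloud projects add-iam-policy-binding case-for-firebase-0001-staging \
  --member="serviceAccount:builder@case-for-firebase-0001-staging.iam.gserviceaccount.com" \
  --role="roles/firebasehosting.admin"
```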
Configuring Cloud Build Triggers
And now, with the service account ready to go, we can configure our triggers in Cloud Build against our different branches. We'll only demonstrate staging, but you're going to do this for both the staging project and the production project. Here we are in the Google Cloud console rather than the Firebase console, and you can see up at the top left that we're in the "Case for Firebase Staging" project we created. Once you've created your projects in Firebase, you'll see them represented when you come over to the Google Cloud console. You can also see the build history for this project: when we first created the App Hosting backend, a build got triggered, and here is that failing build. We've only had the one. We're now going to set up triggers against our repo to run our own custom build. There's a triggers option over in the left menu; let's click that and create a custom trigger. Click the "Create trigger" button and let's give it a name.
This will be "demo-build-and-deploy". Let's put it in the same region as our backends. We can add a description. We won't do it for the purposes of the demo. And we need to associate the trigger to a git source. We're going to use our GitHub account, obviously, and we need to connect it. So there's several different options.
We're gonna use the 2nd-gen repositories that Google Cloud Build supports. It keeps a local mirror of your repos in Google Cloud, so even if GitHub goes down, you still have access to your source. I recommend this as a best practice. We need to first link a repository, and we need a connection to do that.
So we're gonna go ahead and create a host connection here. Pick the region. This is where the mirror is ultimately going to be stored. Keep it alongside App Hosting, give it a name. We can call this "case-for-firebase-demo" and we'll connect. It's going to interact with GitHub and all of its OAuth steps. Click continue. And we're going to install it into our Daywards organization.
We'll confirm. And so we now have a 2nd-gen repository set up for "case-for-firebase-demo" here, associated with our GitHub account. So if we come back to triggers, we can go through the flow again. Let's see, what did we call it? "case-for-firebase-demo".
And now when we come in here, we are going to have the ability to link the repository, but we'll have a connection ready. There it is. Choose a repository, which is "case-for-firebase", okay. And we will link it. So now that we're linked, we can select it here. And this is our staging environment, so we're going to connect that to our main branch.
It's going to be associated with a cloudbuild.yaml file that it looks for in the repo, which we already have configured for this demo project. All the other options are optional, except we do need to pick a service account. The service account we created earlier with our scripts is listed here, and we know it's configured with just the minimum set of roles needed to do its work.
And so we'll use it whenever we do a build. With that trigger configured, we'll click "Create", and we now have this trigger set up. At any given time, if you want to run a build, even if it isn't the result of a new change pushed to your branch, you can click "Run" from this triggers tab, and it will run with whatever's in the branch configured here.
So we could pick "main" here and run it and it will run. But because we did all of the linking to associate it to GitHub, it will automatically run anytime we push a new change to our main branch.
Reviewing cloudbuild.yaml
Now that we've configured Cloud Build, let's review the cloudbuild.yaml file that declares all the steps taken to do a successful build and deploy. Let's start by looking at the top level keys, and then we'll go through the details of the steps. So you can see that we have a "substitutions" key at the top level.
This section allows you to declare variables that can be used in steps and that change based on the environment running the build. You can configure these in Cloud Build for a trigger, and they'll get applied each time it fires in that environment. We then have a "steps" key, which defines an array of steps that run in sequence or in parallel, depending on how they're configured.
We also have an "options" section, which has many different options. The one that you'll need at minimum to work with Cloud Build and how we're configured is a "logging" key with CLOUD_LOGGING_ONLY. This makes sure that all the build steps are written to Cloud Logging, and that is what the console that we saw uses to stream text so you can see the steps in your build running as they run.
Finally, we have an "availableSecrets" section. This is where you configure the specific secrets your build needs, and we depend on a single secret for this Cloud Build: a GITHUB_TOKEN. Now, this wouldn't normally be a requirement; by default, once you've connected Google Cloud to GitHub, it's able to pull your source without any separate keys. But in this build our first step is a Git LFS pull, and that does require a GitHub token. So we'll jump over there and walk through how to set that up, and once we've got that done, we can review the rest of the steps.
Creating our Github Access Token
To create our GitHub token, we'll come back over to GitHub, go up to our user in the top right, come down to settings, and scroll down to "Developer settings". We need to create a "Personal Access Token", and we're going to do a "Fine-grained Token" to show this off. Come up here and click "Generate New Token". I have two-factor auth, so I'll open up my GitHub mobile app and get myself authenticated, and then we'll go in and create a token. We'll name this "Case for Firebase Pull". I'm gonna put it under my organization, and we'll set no expiration just for convenience. Let's only allow our "case-for-firebase" project. And because this is fine-grained, we can grant just the permissions we want; all we need is "Contents", which also adds "Metadata".
So with that done, we have a token that is ready to go and will allow us to do a large file system pull. Let's generate the token. And copy it out. And don't worry, I'll be deleting this token so that you don't try to access anything on my account.
Configuring our Secret
So with that done, we need to store it in Secret Manager. Now, we could do this from the command line, but I'll demonstrate it in the console to start. Let's go over to our hamburger menu and then to "Security > Secret Manager". This is the place in Google Cloud to store all the secrets you want to be able to pull into any service you might be using. Let's click "Create secret".
We're gonna use GITHUB_TOKEN to match what we defined in our Cloud Build, and we'll paste the value. There are a lot of other options, but you don't need to set any of them by default, and with this in place we'll have that secret ready to go. Now, all secrets in Google Cloud are versioned; every time you make a change, it creates a new version.
If we come back to our code down here, we'll see that we reference the project ID, the GITHUB_TOKEN name we just used, and the latest version. If we wanted to be explicit, we could pin version 1, but latest is convenient in most cases.
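That part of cloudbuild.yaml follows the standard Cloud Build secret syntax, roughly:

```yaml
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_TOKEN/versions/latest
      env: GITHUB_TOKEN

# Any step that reads the token also declares:
#   secretEnv: ["GITHUB_TOKEN"]
```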
And we're close, but not quite done. We also need to grant App Hosting access to this secret so that when we run in Cloud Run, it can successfully pull it. The Firebase CLI gives us a command called apphosting:secrets:grantaccess that lets us grant a secret to App Hosting. You can see the options here: we want to grant our GITHUB_TOKEN and associate it with our fb-app-hosting-backend. Once we do that, the IAM binding is set up so the secret will be available when we run up in Cloud Run.
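That command looks roughly like this; run it while the staging project is active and again for production, and depending on your CLI version you may also need to pass the backend's location:

```bash
firebase apphosting:secrets:grantaccess GITHUB_TOKEN --backend fb-app-hosting-backend
```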
Reviewing our Build Steps
Now that we have everything configured to run our builds, let's run through all the different steps, which pull off both our build and our deploy to App Hosting and Firebase Hosting. First, a little bit of mental model. The way Cloud Build works is that the name field in each of these steps points to a Docker image that gets pulled and used to execute a command as part of a build step. Throughout all of the steps there's a single persistent directory that acts as the "workspace"; each image, as it gets pulled down for its step, brings its own utilities and helpers needed to do its work, and it operates against the persistent data in that workspace directory.
So for this first step, we're pulling the alpine/git image and executing this command to do a Git LFS pull, making sure we have all the assets available for the downstream steps. The next several steps all use the node:22 image, and you can see they run the basic build steps of installing, linting, testing, and building our monorepo.
One thing to be aware of for these steps: they run in sequence by default, but you can use a waitFor key to make sure a specific step doesn't begin until a previous step completes. So we have this installDeps step, which installs the dependencies for all the different packages, and once that's done, we can execute linting, testing, and building all in parallel.
And so we have all of those steps waiting for installDeps to complete. We also don't need any LFS assets for linting and testing, but we do for the build, so only the build step depends on the LFS pull being complete. Once we're there, we move on to a custom build step that calls a script in the repo. This is how you can call local scripts to do build steps that need to be maintained alongside the source. This one goes into some Node details, but we swap out the workspace dependency for a packed package dependency so that our @packages/ui dependencies are ready for the buildpack that builds our NextJS app in App Hosting. There are some more details there; feel free to review the source code in the repository, but it's there as a reference.
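As a sketch of that ordering, here's roughly what the step graph looks like. The step ids and commands are simplified placeholders; see the repo's cloudbuild.yaml for the real versions:

```yaml
steps:
  - id: lfs-pull
    name: alpine/git
    entrypoint: sh
    secretEnv: ["GITHUB_TOKEN"]
    # Simplified; the real step also installs git-lfs and wires auth with $GITHUB_TOKEN
    args: ["-c", "git lfs pull"]

  - id: installDeps
    name: node:22
    entrypoint: sh
    args: ["-c", "corepack enable && pnpm install --frozen-lockfile"]

  - id: lint
    name: node:22
    entrypoint: sh
    args: ["-c", "pnpm lint"]
    waitFor: ["installDeps"]

  - id: test
    name: node:22
    entrypoint: sh
    args: ["-c", "pnpm test"]
    waitFor: ["installDeps"]

  - id: build
    name: node:22
    entrypoint: sh
    args: ["-c", "pnpm build"]
    waitFor: ["installDeps", "lfs-pull"]
```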
The next step is where we actually do our deploy. We're going to use firebase-tools, still on our node:22 image, and call the firebase-tools deploy command.
And we're going to deploy hosting,apphosting. We need the project ID, which is always available in any Cloud Build via the $PROJECT_ID variable, and we include a message just to keep a paper trail in the build logs.
And next we need to monitor the rollout for our App Hosting to know that it successfully completes. So here we use another custom script, which is going to poll the App Hosting backend, given the PROJECT_ID and the _APP_HOSTING_BACKEND ID.
And you can see we're using the dollar-sign-underscore form, which maps to our substitutions up top; these can change depending on which environment we're deploying to. This step will poll continuously until it gets a success or failure from the App Hosting rollout endpoint, letting us know whether the buildpack successfully built and deployed our newest changes to Cloud Run for App Hosting.
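Together, those last two steps look roughly like this. The _APP_HOSTING_BACKEND substitution name comes from the substitutions block we discussed; the verify script's path and flags are illustrative stand-ins for the real polling script in the repo:

```yaml
substitutions:
  _APP_HOSTING_BACKEND: fb-app-hosting-backend

steps:
  # ...build steps above...

  - id: deploy
    name: node:22
    entrypoint: npx
    args:
      - firebase-tools
      - deploy
      - --only
      - hosting,apphosting
      - --project
      - $PROJECT_ID
      - -m
      - "Deployed from Cloud Build $BUILD_ID"

  - id: verify-rollout
    name: node:22
    entrypoint: node
    # Illustrative path; the real rollout-polling script lives in this repo
    args: ["scripts/verify-rollout.js", "--project", "$PROJECT_ID", "--backend", "$_APP_HOSTING_BACKEND"]
    waitFor: ["deploy"]
```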
And once we get to this step, if it's all successful, we'll get a green check and the build will have succeeded.
Triggering a Build
So let's move on and see this in action. I've got some minor changes committed, and we want to see how they run up in Cloud Build. We'll do a git push, then come over to Cloud Build, where we'll immediately see a build start and show us it's running. This maps to our main branch, as we can see here, with our "case-for-firebase" source and the trigger we set up. Let's go into the details of the build. Every one of the steps we saw in the cloudbuild.yaml file is represented here, and at any point in the build summary you can scroll through and see all the logging content generated as each step executes during the build.
Now, when we get to the deploy step, we'll step out and take a look at the other build that gets generated to do the actual deploy to App Hosting. So we'll wait until we get to that step. Okay, and now we've reached it: we'll have triggered a rollout, and if we come back out to our build history, we'll see that another build has been created. This is the buildpack that's configured completely by Google for App Hosting. It has three steps: a prepare, a pack, and a publish.
When it comes to your own configuration errors, the place you'll debug the most is this "pack" step. We'll just scroll through here to see the basic steps; as you can see, it's building the NextJS app.
And we successfully built. You'll also notice we're using an App Hosting adapter here. These are maintained by Google; nothing you can change yourself, but there are other adapters for other frameworks.
And we're now using the standalone output and packing it up for deployment as a Docker image to run on Cloud Run.
And we can see the image right here, ready to go. So soon after this, the build will report as successful.
And there it is. But if we go back to our main build, we're still polling to know when our App Hosting rollout has completed; the buildpack build isn't the only thing that happens as part of the rollout, it also needs to deploy to Cloud Run. So our "verify-rollout" step will still be waiting. Once you get a green check on that seventh step, you'll know you've successfully rolled out all the way to Cloud Run.
While we're waiting here, let's go over to the Firebase console, to our staging environment, and then to our App Hosting backend. We can see that in that time we did get an update that our latest release was "Deployed from the Firebase CLI". As you can see, it takes quite a few steps, but once you're set up to use Cloud Build, you're in a great place to deploy not just App Hosting but any other services you want, whether that's Cloud Functions or Firebase Hosting, which we can see here.
If we go to Build and then Hosting, we'll see that we were able to build with our build service account and deploy to our default static Hosting site. So if I click here, we can see our Firebase Hosting. If we go back to App Hosting, we can see Firebase App Hosting. So we've walked through how to set up a local dev environment that runs both static React apps and server-side-rendered NextJS apps, and how to deploy to both with a single push using Cloud Build.
Assigning Homework
All right. And that's pretty much as far as we're gonna go in this video. My homework for you is to set up your production environment using the same steps you just did for staging; it should by now have its production badge ready to go. Once you've done that, you'll have a build pipeline where your team can integrate into main and automatically deploy to staging.
And then when you're ready to release to production, you'll just create a pull request here in GitHub from main into release, and once you create and merge that pull request, a new build will trigger in the production project and your changes will get deployed to production.
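If you prefer the command line, the equivalent with the GitHub CLI would be something like this (titles and body text are just placeholders, and you can merge from the GitHub UI instead):

```bash
gh pr create --base release --head main --title "Release to production" --body "Promote main to release"
gh pr merge --merge   # merging kicks off the Cloud Build trigger on the release branch
```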
Coming up...
Thanks for sticking it out this long. This will hopefully be the longest video in our series, but it is the foundation we'll use to show off many more aspects of Firebase. In our upcoming session, we'll be going through Cloud DNS and how to set up custom domains using Firebase. Should be much shorter than this video.
And then we'll move on into Firebase Auth, which is a requirement to integrate with just about every one of the different databases in Firebase. So please stay tuned. If you like what you saw, give us a like and tune in next time.
Thanks so much.