Introduction: How to Access Google Labs?
Did you know that Google has a hidden AI lab where you can get access to well over 35 AI tools that Google is currently developing and has not yet released to the public? Well, in this article, I will tell you how to access Google Labs and get instant access to all these 35-plus AI tools completely for free.
How to Access Google Labs and Get Instant Access?
Yes, you heard that right. You don’t have to pay anything to use these tools from Google. And the thing is, not a lot of people actually know about this platform or all the tools within Google Labs. So make sure to read the article till the very end to learn how to get access and see all the AI tools available within Google Labs.
So let’s quickly get started. This right here is Google Labs, the platform that I’ve been talking about, and as you can see, it says “the home for AI experiments at Google.”
What is Google Labs?
So basically, it is a platform that gives you access to the latest AI tools that Google is currently developing and has not publicly released yet. In other words, they are not available to the general public. But if you head over to labs.google or click the link below, you will land on this page right here, and you can get access to these unreleased tools via this particular website. So, first things first, head over to labs.google.
To get started:
Visit labs.google
Sign in with your Google account
Explore the tools available under different categories
Finding the AI Tools: The Experiments Page
So, this right here is the address of the website. And as you can see, it says “the home for AI experiments at Google.” Now, if you scroll down, you’ll find a couple of highlighted tools, an overview of what this platform is all about, the events, the community gallery, and a lot more. We are specifically interested in all the different AI tools that are available within the platform, right? Well, to see all of that, all you have to do is click the option that says Experiments. It will open up a new page listing all the experimental tools that Google is currently developing, and you can get access to all of them from this page. It says “tools for anyone to create, learn, develop and play with the future of AI.” Towards the top, you have a couple of categories like All, Create, Develop, Explore, Learn, and Play. And all the cards that you see on the screen are individual AI tools that you can access right within labs.google.
Example Tools in the “Create” Section
Music AI Sandbox – Helps musicians, producers, and songwriters experiment with AI-generated music. (Currently requires joining a waitlist.)
Doppel – An app that lets you virtually try on clothes and explore personal styles.
Flow – A filmmaking tool to generate cinematic clips and full short films using AI.
Sparkify, Image Effects, Video Effects, Whisk – Tools for creating visuals and effects.
Understanding "Learn More" vs. "Try Now" Buttons
So first of all, if I click on this Create option here, we can find a couple of these tools. First up, we have Music AI Sandbox, which is an experimental suite of tools designed to enhance the creative process for musicians, producers, and songwriters. One more thing to note here is that underneath each of these tools, there will be a CTA or button to access the tool.
Now, for each tool, the button is going to be different. For example, if you want to use Doppel or Flow, you have a “Create with Flow” or “Try now” button. But for Music AI Sandbox, you have a “Learn more” button. That means Music AI Sandbox is not even publicly available within this platform.
So if you click on this “Learn more” button, it will open up a Google form. You’ll have to express your interest, fill in all the details, and send a request, and someone from Google will review your request and grant you access.
So for any AI tool here, if you find a button that says something like “Learn more,” it means that particular tool is waitlisted, and you’ll have to send a request to be a part of that waitlist; once access is granted, you’ll be able to use it. But tools like Doppel and Flow are currently available, and you can start using them right away.
For example, here we have Doppel. Doppel is a new experimental app from Google Labs that lets you try on any look and explore your personal style. Then here we have Flow, followed by Sparkify, Image Effects, Video Effects, and Whisk. And in the Develop tab, we have Stacks, SynthID Detector, Stitch, and Jules.
I mean, there are a lot of tools, right? In this article, it’s not possible for me to go through each and every tool, but I’ll open a couple of them and show you how to use them.
Important Note: Regional Availability and Using a VPN
Now let’s quickly go over some tools within Google Labs, and I’ll also show you how to use them. One more important thing to keep in mind is that not all the tools within Google Labs are currently available in all countries.
For example, this tool right here, Google Stacks, is not currently available in my country. So if I click on this “Try now” option, it says, “Sorry, Stacks is not available in your region yet.” Well, in this case, all you have to do is use a VPN, and you will get instant access to it.
So, you can set the region to the US and reload the page, and the website should load without any problem. For example, in this case, I enabled a VPN and connected to a US server. And as you can see, I now get access to the Stacks website. And there you go, I can create a new project and get going from here.
So, if any of these tools is not available in your country or region, all you have to do is use a VPN.
A Closer Look at Google Flow (AI Filmmaking Tool)
Now let’s open and use a couple of AI tools right within Google Labs. For example, if I click on this Create option, here we have Flow, Video Effects, Whisk... okay, we have a lot of them in here, actually. Let’s say I want to use Flow. Flow is a new AI filmmaking tool that lets you seamlessly create cinematic clips, scenes, and stories with consistency using Google’s most advanced AI models, including Veo 3.
How Google Flow Works: Creating a Video Scene by Scene
So you have to click on the button that says “Create with Flow.” There you go, we have opened up Flow right within Google Labs, and it says new users can now try Flow free of charge with 100 monthly credits. I can find the “Create with Flow” button, so I’ll click on it. Okay, “Flow: a new state of creation.” Now it is asking me to sign in, so I’ll select my Google account, click on “Get started,” and there you go, we are now in Google Flow.
Now, to create a new project, I’ll click on the “New project” button right here. But first, what exactly is Google Flow, you ask? Well, Flow is an AI-powered video generation website. The interesting thing about this particular platform is that you can create a video scene by scene, and build up a full-length movie.
For example, let’s say you want to create a short film of 1 or 2 minutes. You can ask the AI to create the individual scenes. As you can see here, we have a text input box and options that say “Text to Video,” “Frame to Video,” and “Ingredients to Video.” For now, we will keep it on the text-to-video option. And let’s say your short film has, like, 18 different scenes; you can give prompts for each of these scenes individually in this box right here, then stitch all these scenes together and get the final movie.
For example, let’s start with a simple prompt: two puppies are playing with a ball on a beach. A simple prompt. I will use the Veo 3 Fast model with landscape output. All that looks good, and now all I have to do is click on this send button right here.
And now, as you can see, Google Flow is generating the video for this particular prompt. After that is done, you will see a preview of it in here. Once that is finalized, you can go ahead and create the next scene by giving another prompt.
And Flow will automatically stitch them together. Once done, you can download the final video. Meanwhile, let’s wait for the AI to create the first video. So there you go, we have the first video ready.
Now you can play the scene. We even have sound effects; that’s good. So this right here is the first video, and it actually looks good. If you want to, you can download it or view the video in full screen. Now let’s say I’m satisfied with this particular scene.
Now you click on the “Add to scene” button, and that particular scene is added. Next up, let’s say I want another frame where the owner is playing with these puppies. So I can go ahead and say: the owner walks toward the two puppies playing with the ball on the beach. Just a random prompt for the time being.
Now you click on the send button, and as you can see, Flow is creating the second video. Here we already have the first video, and here we have the second. If you want to, you can extend a clip by clicking this plus button right here, or jump to a particular clip.
And you can go ahead and start building your movie like this: you give a prompt and add scene after scene, and once you’re happy with the result, you click on this download button right here to download the final video.
So that is basically how Google Flow works. And the thing is, you can instantly get access to Flow by heading over to labs.google. In the meantime, I’ll wait for the AI to finish creating the second scene as well. There you go, here we have the second scene ready; this right here is the second scene, and let me play it real quick. Okay, that also looks good. In a similar fashion, you can keep building scene after scene, and once you’re happy, you can download the result.
And that’s basically how Google Flow works, so now I’ll close it. If you want to access Google Flow, just head over to Google Labs and use it yourself.
How to Use Google Flow (Example)
Go to labs.google and select Flow under the “Create” tab.
Click Create with Flow to start.
Sign in with your Google account.
Enter prompts scene by scene to generate AI videos.
For example:
Prompt 1: Two puppies playing with a ball on the beach.
Prompt 2: The owner walking towards the puppies.
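Flow stitches the scenes together for you in the browser, but if you prefer to download the individual scene clips and combine them yourself, you could do it locally with ffmpeg’s concat demuxer. Here is a minimal sketch; the clip filenames are hypothetical, and it assumes ffmpeg is installed and the clips share the same codec settings:

```python
from pathlib import Path
import subprocess

def build_concat_list(clip_paths):
    """Build the text for ffmpeg's concat demuxer: one "file '...'" line per clip."""
    return "\n".join(f"file '{p}'" for p in clip_paths) + "\n"

def stitch_clips(clip_paths, output="final_movie.mp4"):
    """Concatenate the scene clips into one video without re-encoding."""
    list_file = Path("scenes.txt")
    list_file.write_text(build_concat_list(clip_paths))
    # -f concat reads the list file; -c copy joins the clips without re-encoding
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0",
         "-i", str(list_file), "-c", "copy", output],
        check=True,
    )

# Hypothetical filenames for the two downloaded scenes:
# stitch_clips(["scene1_puppies.mp4", "scene2_owner.mp4"])
```

Since `-c copy` skips re-encoding, the join is nearly instant, which suits short clips generated scene by scene.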
A Look at Other Tools: Doppel, Sparkify, and Synth ID
We have Google Doppel, and if I click on it, it seems like it is an app for Android and iOS. So basically, the idea here is that you upload a photo of a garment and also a photo of yourself, and you can virtually try on the clothes.
So that’s basically the idea. Next up, we have Sparkify. It says “explore short videos created with the latest Google AI innovations including Gemini and Veo.” We also have Image Effects, Video Effects, and Whisk. Okay, so all of these are there in the Create tab.
Next up, let’s move to the Develop tab, where we have something called SynthID Detector. Basically, the idea here is that you can upload an image, audio file, or video to this particular tool.
Google’s SynthID Detector will then analyze the image, or whatever you uploaded, and tell you whether it is AI-generated.
And if I click on this “Learn more” button, it seems like SynthID Detector is currently waitlisted. So, you’ll have to fill in this form, and they have to grant you access manually.
A Closer Look at Google Stitch (UI Design Tool)
Next up, we have Google Stitch. It says “turn simple prompts or images into intricate desktop/mobile UI designs and front-end code, then refine via AI chat and export to Figma.” And if I click on this “Try now” button, I’ll be able to access it.
How Google Stitch Works: Generating a UI from a Prompt
So basically, the idea here is that you can give simple text prompts and turn them into a full-fledged UI.
For example, let’s say I want to create a UI for a music streaming app. In this case, I just gave a very simple, basic prompt: “A UI for a music streaming app.” In your case, you can give a very detailed and elaborate prompt with all the elements that you want within your UI. Okay, there are two modes here: Experimental mode and Standard mode.
I’ll keep it in Experimental mode. You have the option to create a UI for a web app or a mobile app; in this case, I’ll keep it as mobile. You also have the option to upload a sketch, mockup, or visual inspiration. Just to show you a demo, I’ll give a simple prompt and click on the “Generate design” button, and let’s see what happens.
And there you go. It says that to design a music streaming app, we could consider the following screens: a home screen, search screen, library screen, and now-playing screen. That looks good. If you want any more screens, you can just describe them in here, but this looks good for now, so I’ll click on the “Design all the screens” option.
And within seconds, the screens will be ready. Let’s wait for it. All right, all four screens are ready. So this is what the AI built from a simple prompt like this:
“A UI for a music streaming app.” Here we have the home screen, search screen, library screen, and also the now-playing screen. Now, if you want to make any changes, you can click on this edit button towards the top and give follow-up prompts.
For example, you could change the color palette or styling. And if you click on this button right here, you’ll be able to get the code for that particular screen, and you can even copy the design to Figma as well. So that is basically how Google Stitch works, and that is one more tool available within Google Labs.
A Look at More Tools: Jules, Help Me Script, and Project Mariner
And next up, here we have Jules, an asynchronous AI coding agent that automates tasks within GitHub workflows, like bug fixes, writing tests, and adding new features.
And if I click on this “Try now” button, there you go, it has opened up, and it says Jules does the coding tasks that you don’t want to do, like bug fixes, version bumps, fixing tests, and all that. So it is actually publicly available, and if you want to use it, all you have to do is click on this “Try Jules” button, sign in, and you can start using it right away. Next up, we have AI-first Colab, Help me script, and Firebase Studio, which is again an AI coding platform that allows you to build full-stack apps using Firebase. And next up, here we have Project Mariner.
Project Mariner is a research prototype exploring the future of human-agent interaction, starting with browsers. Next up, there are a few more, including Project Astra. I mean, there are a lot of tools that you can explore within Google Labs, and if I were to open and explore every one of them here, it would easily take at least two to three hours to cover everything.
Conclusion: How to Get Started with Google Labs
So yeah, this is all I wanted to show about Google Labs. If you want to get instant access to all these tools, just head over to labs.google, sign in with your Google account, and start using them right away. And again, if any of these tools are not available in your region, just use a VPN and connect to a US server, and you should be pretty much good to go.
So, there’s a platform called Google Labs that gives you instant access to a lot of AI tools that Google is currently developing, and you can access it completely for free by heading over to labs.google.