In a bold but long-anticipated move, Google is finally expanding 'Opal' beyond the U.S., signaling the next phase in the AI-driven programming revolution.
By opening access to 15 more countries (among them India, Indonesia, Brazil, Singapore, Argentina), the company is clearly trying to democratize “vibe coding” globally. In these new regions, creators and curious users can now dream up mini web apps just by typing what they want, and let Opal stitch together the logic, UI, and data flow behind the scenes.
The tool works by transforming a natural language description into a visual workflow, in which prompt nodes, generation steps, inputs, and outputs are all laid out on a canvas.
Users can click into any step to inspect or tweak the prompt, inject new logic, or rerun parts of the flow.
Errors pop up in context, making debugging more intuitive. Google also added support for running multiple steps in parallel, helping more complex apps execute more efficiently.
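Conceptually, a workflow like this can be modeled as a graph of steps, where any step whose inputs are ready may run, and independent steps run at the same time. Opal's internals are not public, so the sketch below is purely illustrative: names like PromptNode and run_workflow are hypothetical, and the "generation steps" are stand-in functions rather than real model calls.

```python
from concurrent.futures import ThreadPoolExecutor

class PromptNode:
    """A hypothetical workflow step: a name, a function, and its dependencies."""
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

def run_workflow(nodes):
    """Run each node once its dependencies have results; independent nodes
    run in parallel, echoing the parallel-step support described above."""
    results, remaining = {}, list(nodes)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n in remaining if all(d in results for d in n.deps)]
            futures = {n.name: pool.submit(n.fn, *(results[d] for d in n.deps))
                       for n in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
            remaining = [n for n in remaining if n not in ready]
    return results

# Two independent "generation steps" run in parallel; a final step
# combines their outputs, like a node joining two branches on the canvas.
nodes = [
    PromptNode("outline", lambda: "1. intro 2. demo"),
    PromptNode("title",   lambda: "My Mini App"),
    PromptNode("page",    lambda o, t: f"{t}: {o}", deps=["outline", "title"]),
]
print(run_workflow(nodes)["page"])  # My Mini App: 1. intro 2. demo
```

Because each node records its dependencies, the same structure also makes in-context debugging natural: a failed step can be inspected and rerun without restarting the whole flow.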
In early use, Opal reportedly took ~5 seconds to spin up a new app; recent improvements have notably sped that up.
Today, Opal is rolling out to Canada, India, Japan, South Korea, Vietnam, Indonesia, Brazil, Singapore, Colombia, El Salvador, Costa Rica, Panamá, Honduras, Argentina and Pakistan. We are working hard to make Opal available in even more countries soon.
— Google Labs (@GoogleLabs) October 7, 2025
What’s striking is that when Opal first launched in the U.S., Google expected only “simple, fun tools” to emerge.
Instead, early adopters built surprisingly sophisticated, useful mini-apps. That mismatch between expected toy prototypes and real productivity tools was no accident.
For years, Google has been building powerful tools, as well as large language models (LLMs) like Gemini. But now the consumer-facing layer matters more than ever. Not everybody can code or wants to learn a new language, let alone put the logic together to create an app. Opal lets "ordinary" users compose the idea, not the syntax.
Opal lets someone build a multi-step workflow, complete with conditional branching, parallel runs, and tool calls, without writing a line of code. This is proof that LLMs are becoming logic engines, not just linguistic mirrors.
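To make "conditional branching without code" concrete, here is a minimal sketch of what a visual if-node and a tool call reduce to under the hood. None of these names come from Opal; branch, word_count, and summarize are hypothetical stand-ins for what a user would wire up on the canvas.

```python
def branch(condition, if_true, if_false):
    """A visual if-node: pick the next step based on a condition."""
    return if_true if condition else if_false

def word_count(text):
    """Stand-in for a 'tool call' step that computes something from input."""
    return len(text.split())

def summarize(text):
    """Stand-in for a generation step on short inputs."""
    return f"Summary: {text}"

def expand(text):
    """Stand-in for a generation step on long inputs."""
    return f"Expanded report on: {text}"

# The user describes this flow in plain language; the engine assembles it:
# count the words, then branch to a different generation step either way.
text = "launch notes"
next_step = branch(word_count(text) > 50, expand, summarize)
print(next_step(text))  # Summary: launch notes
```

The point is not the code itself but that the user never sees it; they only describe the decision, and the workflow engine composes the logic.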
Opal helps illustrate how the gulf between generative creativity and computational logic is narrowing.
It reflects a shift in how large language models are being seen: not just as chatbots or creative copywriters, but as logical, compositional engines. Megan Li, senior product manager at Google Labs, acknowledged that this surge in inventiveness made it essential to expand Opal’s reach.
This crescendo of innovation didn’t start with Google. The arrival of ChatGPT was a turning point.
That moment ignited what many call an arms race among tech companies. As a result, newer models are judged not merely on their fluency, but increasingly on their capacity to reason and program.
In that sense, Google is catching up.
But perhaps the most revealing insight is how few people yet grasp the full potential of large language models for logic and programming. Many still see them chiefly as storytelling engines, content assistants, or problem solvers in natural language. Yet under the surface, these models are doing algorithmic reasoning, branching, planning, and even error recovery.
Of course, limitations remain. AI-generated logic isn’t infallible; there’s risk in hidden bugs, security gaps, or opaque reasoning. The user must still validate, test, and understand the behavior. But as these tools improve, the entry barrier to building meaningful applications is collapsing.
So yes, Google is playing catch-up in the consumer AI frontier, but Opal demonstrates how even a late entrant can ride the wave if it leans into usability, interactivity, and visual logic.