Google Jules Hands-on Review

I got hands-on with Google's Jules AI coding assistant. Here's what I thought...
Google enters the AI coding assistant race with Jules

Jules: How it works

Like Codex, Jules is currently available through a research preview with limited access.

After receiving an invitation, you're guided through a setup process that includes connecting to your GitHub account and authorizing the Jules GitHub app to access your repositories.

Jules main interface

Jules creates sandboxed environments for your repositories, allowing it to run commands and generate code changes without affecting your production branches until you explicitly approve them.

Things I like about Jules

The UX feels more polished

In early testing, Jules' user experience feels noticeably more refined than my initial experience with Codex.

One standout feature is how it formulates an execution plan before taking any action:

Jules formulates a plan before executing

For each task you submit, Jules presents a detailed plan and asks for your approval. What's particularly clever is that if you don't respond within 5 minutes, Jules assumes approval and proceeds with executing the plan.

Jules approval flow with 5-minute automatic proceed

This strikes an excellent balance between giving you control and maintaining momentum - you don't need to babysit the process, but you still have the opportunity to course-correct if needed.

Once the plan is approved, Jules gets to work making code changes and adding files in a way that's familiar to anyone who has used Canvas in a ChatGPT chat or Artifacts in a Claude chat.

What particularly caught my eye is that Jules added tests alongside its implementations, a behavior I rarely see in other flagship LLMs and integrations.

Once you approve the plan, Jules starts coding

A more intuitive branching and PR flow

The branching and PR workflow in Jules feels more natural than what I experienced with Codex.

When Jules completes a task, it allows you to optionally publish the branch containing its changes, but leaves you to author and open the pull request yourself.

Jules branch publishing flow

While this approach is technically less automated than Codex's one-click PR creation, it actually feels smoother in practice.

By separating branch creation from PR submission, Jules avoids some of the error handling issues I encountered with Codex. This separation of concerns gives you more control over the final PR description and review process.
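For what it's worth, this two-step flow maps cleanly onto standard git tooling. Here's a minimal sketch of what the "review locally, then open the PR yourself" half looks like; the branch name and commit messages are hypothetical, and the `gh pr create` step is shown as a comment since it needs a real remote:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

# Step 1: the assistant's changes land on their own branch
# (simulated here with an empty commit on a hypothetical branch name).
git switch -qc jules/add-tests
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "add tests"

# Step 2: you review the diff locally, then author the PR yourself,
# e.g. with the GitHub CLI:
#   gh pr create --base main --head jules/add-tests --title "Add tests"
git log --oneline main..jules/add-tests
```

Because the PR is opened by you rather than the tool, the title, description, and reviewers are entirely under your control, which is exactly the separation of concerns described above.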

Current limitations

Usage restrictions

In its current research preview, Jules limits users to 5 tasks per day:

Jules 5 task per day limit

If you need more capacity, you have to contact Google directly to request an upgrade. This contrasts sharply with Codex, which is available only to Pro subscribers paying $200 per month but encourages spinning up as many parallel tasks as you need.

This fundamental difference shapes how you prioritize work with each platform. With Jules, I find myself being more selective about which tasks I delegate, saving my limited quota for the highest-value items.

Jules vs. Codex: Early comparisons

Having used both platforms in their early stages, I can offer some initial comparisons:

User experience

Jules feels more polished, with its plan-first approach and thoughtful timeout features creating a smoother experience. Codex, while powerful, still shows rough edges in its execution and error handling.

Task execution model

Codex encourages parallel task execution but sometimes struggles with reliability. Jules takes a more measured approach with its daily task limits but executes them with higher reliability.

GitHub integration

Both platforms integrate with GitHub, but Jules' separation of branch publishing from PR creation feels more aligned with how developers typically work. Codex's automated PR creation is convenient when it works but frustrating when it fails.

Conclusion

It's still early days for both Jules and Codex. Each platform shows tremendous promise while revealing the current limitations of AI coding assistants. Jules impresses with its polished UX and thoughtful workflow design, though its current task limits are restrictive.

For now, I'll likely use both platforms strategically - Jules for high-priority, complex tasks where I value reliability and a smoother git workflow, and Codex for lower-priority tasks where I can leverage its unlimited parallel execution.

I'm excited to see how both platforms evolve as they move from research previews to general availability, and how competition between Google and OpenAI drives innovation in this space.