Do Programmers Love or Hate AI? A Google Manager Tells Us Everything

Image showing Human and AI Developer Collaboration

The AI Revolution in Software Development: A Conversation with a Google Manager

The rapid advancement of Artificial Intelligence (AI) fascinates us with its dual nature. Some see algorithms as an unprecedented opportunity to advance science, cure diseases, or generate immense wealth. Others fear catastrophic errors, inexplicable decisions made by “black box” models, or AI escaping human oversight.

Ultimately, developers and major tech companies hold the keys to our AI future. Their understanding and approach to artificial intelligence shape the millions of applications and services humanity relies on daily. Even minor architectural changes can have a massive ripple effect across the globe.

So, how do developers at top Silicon Valley corporations view AI today? Where do these tools fall short, and where are they completely irreplaceable? We sit down with Kamil Pluta, an Engineering Manager at Google, to find out.

AI Through the Eyes of Its Creators

For several years now, AI has been dynamically changing the workflow and expectations placed on programmers. But is this impact also visible higher up the corporate ladder, among IT team managers?

According to Kamil Pluta, the shift is undeniable. Generative models have become proficient enough to assist with organizational duties, internal communications, and team documentation. While these tasks might not be glamorous, they are essential for keeping engineering teams aligned and informed.

The role of a software engineer is shifting from purely technical coding to deeply understanding the business context and the specific needs of the end user.

Where Does AI Excel and Where Does It Fail?

When asked about the most useful AI support today, Pluta highlighted inbox management. For team leaders, parsing through hundreds of emails to identify urgent tasks versus background noise is a massive challenge. AI excels at categorizing and prioritizing this influx of information.
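To make the triage idea concrete, here is a minimal sketch of categorizing an inbox by priority. It uses a toy keyword rule, and every name and keyword in it is invented for the example; a real assistant would use a language model rather than hand-written rules.

```python
# Toy keyword set standing in for a learned urgency signal.
URGENT_KEYWORDS = {"outage", "deadline", "blocked", "incident"}

def triage(subject: str) -> str:
    """Return a priority bucket for an email subject line."""
    words = {w.strip(".,:!?").lower() for w in subject.split()}
    return "urgent" if words & URGENT_KEYWORDS else "background"

inbox = [
    "Production outage in the EU region",
    "Weekly newsletter: team events",
    "Release blocked by failing tests",
]
for subject in inbox:
    print(f"{triage(subject):>10}  {subject}")
```

The point of the sketch is the shape of the workflow, not the rule itself: urgent items surface first, and everything else waits as background noise.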

However, AI still has significant limitations. Current AI models struggle with:

  • Strategic Thinking: Making long-term decisions where the consequences might only become visible weeks or months later.
  • Team Alignment: Ensuring all team members understand shared goals and are moving in the same direction without getting distracted.
  • Human Resources: Recognizing unique human talents, fostering skill development, and proposing tailored career challenges.

Despite these limitations, the adoption of AI tools among developers is accelerating. As code generation quality improves, even early skeptics are returning to experiment with new methodologies and build best practices for large organizations.

Will Google Programmers Stop Writing Code?

There is a growing theory that soon, programmers will stop writing code entirely and instead become reviewers of code generated by AI. Pluta agrees that the industry is trending toward AI agents and multi-agent systems.

Developers are evolving into software orchestrators or architects. Instead of typing out every line, they delegate the heavy lifting to subordinate AI agents. They then step in to refine the product, catch edge-case bugs, and polish the final solution—the ultimate realization of working smarter, not harder.

The New Quality Standards

In the past, the core challenge was writing clean syntax line by line. Today, almost anyone can generate thousands of lines of code in seconds. The new challenge is understanding the operational context and designing a system that delivers genuine value.

This means the industry must develop new quality criteria for AI-generated code. Developers must verify functional correctness, architectural integrity, and compatibility with existing legacy systems. Much of this verification process will likely become fully or semi-automated in the near future.
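One way such verification could be semi-automated, sketched below under assumptions: the AI's output is treated as an untrusted string, loaded into a scratch namespace, and gated on a small functional test suite before it goes anywhere. The `slugify` function and its tests are invented for the illustration.

```python
# AI-generated code arrives as a plain string (example function invented here).
generated_code = """
def slugify(title):
    return "-".join(title.lower().split())
"""

def verify(code: str, tests: list[tuple[str, str]]) -> bool:
    """Load untrusted code into a scratch namespace and run a test suite.
    Assumes the code has already been screened in a sandboxed environment."""
    namespace: dict = {}
    exec(code, namespace)
    slugify = namespace["slugify"]
    return all(slugify(given) == expected for given, expected in tests)

print(verify(generated_code, [("Hello World", "hello-world")]))  # True
```

A gate like this checks functional correctness only; architectural integrity and compatibility with legacy systems still require human or tool-assisted review.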

Who is to Blame When AI Makes a Mistake?

Currently, AI often gets a “free pass” when it makes an error: mistakes are dismissed as mere “hallucinations,” whereas human developers face strict consequences for shipping fatal bugs. As we rely more heavily on AI, this double standard must be addressed.

According to Pluta, the industry is currently building the safety paradigms of the future. Key strategies include:

  • Creating Sandboxes: AI agents must operate in isolated environments where they cannot accidentally delete crucial production databases or overstep their permissions.
  • Intent Code Review: Instead of reviewing every single line of syntax, developers are moving toward reviewing the “intent” of the AI’s code to ensure it aligns with the project’s goals.
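The sandbox idea can be sketched in a few lines. This is an illustration only, not a production isolation scheme: it runs untrusted code in a separate process with a stripped environment, a scratch working directory, and a hard timeout, whereas a real sandbox would add OS-level isolation such as containers and restricted credentials.

```python
import subprocess
import sys
import tempfile

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Execute untrusted, AI-generated Python in a constrained subprocess."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            [sys.executable, "-c", code],
            cwd=scratch,          # the agent can only touch the scratch dir
            env={},               # no inherited secrets or credentials
            timeout=timeout,      # kill runaway agents
            capture_output=True,
            text=True,
        )
    return result.stdout

print(run_sandboxed("print('hello from the sandbox')"))
```

Because the subprocess starts with an empty environment and a throwaway working directory, an agent that misbehaves cannot reach production databases or leak credentials through this path.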

Ultimately, a human must always bear responsibility for the final code. AI is a tool, not a legal entity. The human orchestrator who approves and deploys the code is accountable for its performance and any resulting bugs.

Do Programmers Hate AI?

How do developers feel about transitioning from creative coders to glorified proofreaders? The sentiment is highly individual and generally falls into two distinct developer archetypes:

  • The Code Magicians: Highly technical engineers who derive immense satisfaction from building complex logic from the ground up. For them, AI might feel like an intruder automating the very aspect of their job they love the most.
  • The Goal-Oriented Builders: Developers motivated by the final product—seeing an app go live and users enjoying new features. For this group, typing out boilerplate code was always tedious. They embrace AI because it clears the path to faster deployments and allows them to focus purely on creative problem-solving.

Vibe Coding: A Gimmick or a Real Tool?

A new trend called “vibe coding” suggests that with AI, anyone can be a programmer, even without knowing how to read code. But is it dangerous to deploy software whose underlying mechanics you cannot independently verify?

Pluta sees AI as a powerful skill multiplier. In the hands of a seasoned developer, it leads to incredible efficiency. However, multiplying zero knowledge by any factor would seem to yield zero. Even so, Pluta defends the vibe coding trend as a powerful force for democratization.

Previously, a non-technical person with a brilliant idea was completely blocked unless they hired an expensive developer. Today, they can build a functional prototype in a single evening.

Vibe coding serves as an excellent entry point. By experimenting with AI-generated apps, users gradually learn how software works, naturally building their technical literacy over time. It transforms AI from a mere shortcut into a personalized interactive tutor.

Frequently Asked Questions (FAQ)


How is the role of a software engineer changing with AI?

Software engineers are transitioning from writing code line-by-line to becoming software architects or orchestrators. They delegate basic coding tasks to AI agents and focus on system design, business context, and verifying AI-generated output.


What is “intent code review”?

Intent code review is an emerging practice where developers focus on verifying whether the AI-generated code successfully fulfills the intended goal or business logic, rather than scrutinizing every single line of syntax for traditional perfection.


Who is legally responsible if AI writes buggy code that causes damage?

The human developer or the deploying organization remains responsible. AI is considered a tool, not an independent entity. The engineer who oversees, tests, and deploys the AI’s code is ultimately accountable for its real-world effects.


Can AI completely manage a software engineering team?

Not currently. While AI is excellent at organizing tasks, summarizing emails, and tracking statuses, it lacks the human empathy required for team alignment, talent recognition, and strategic, long-term decision-making.

Source: Gemini & Opening photo: Gemini
