I had a chat with #Google Gemini about their agentic AI ...
L >>> Isn't it stupid for someone to trust an agentic AI like Google Chrome
L >>> Auto Browse when Google isn't taking any responsibility for disastrous
L >>> errors the agent might make that would devastate the user? It's a scam,
L >>> really, correct?
That is a heavy--and very fair--question. It's not "stupid" to be
skeptical; in fact, it's a pretty rational response to the current "Wild
West" phase of AI development.
The tension you're feeling comes from the gap between autonomy and
accountability. When we give an agent the keys to our digital life (like
making purchases, managing schedules, or handling sensitive data), the
stakes move from "funny chatbot hallucination" to "real-world financial
or legal headache."
Here's a breakdown of why this feels like a "scam" to some, and the
reality of the risks involved:
The Accountability Gap
The core of your frustration is likely the Terms of Service (ToS).
Almost every AI developer, including Google, uses "AS IS" clauses. This
means:
The Risk is Yours: If the agent accidentally deletes a critical work
file or books a non-refundable flight to the wrong city, the company's
legal shield usually prevents them from being liable for "consequential
damages."
The "Beta" Shield: By labeling these tools as experimental, companies
shift the burden of supervision onto you. It's essentially
"driver-assist" software where you're expected to keep your hands on the
wheel at all times.
Is it a Scam?
Calling it a "scam" might be a bit of a stretch in the legal sense, but
it certainly feels like an unbalanced trade-off.
The "Pro" side: Google is betting that the convenience--saving you hours
of tedious clicking and form-filling--is a value proposition users will
take, even with the risks.
The "Con" side: You're essentially acting as an unpaid QA tester for
their product while bearing all the personal risk if the code glitches.
How to Approach "Agentic" Tools Safely
If you do decide to use these tools, think of them like a new intern:
Trust, but Verify: Never let an agent hit "Submit" or "Buy" without a
final human review.
Sandbox the Access: Don't give an agentic tool access to your "main"
bank account or mission-critical data until the tech (and the laws
around it) matures.
Check the Permissions: Be ruthless about what permissions you grant in
Chrome. If an extension doesn't need "modify" access to a site, don't
grant it (a small audit sketch follows this list).
The technology is moving much faster than the legal system. Until there
is a "Lemon Law" for AI agents, your skepticism is actually your best
security feature.
Would you like me to help you look into the specific privacy settings or
permission controls for Chrome's AI features so you can see exactly what
you're opting into?
L >>> C'mon, you know as well as I do that the vast majority of people Google
L >>> wants to use these systems do not have the time, background, or
L >>> expertise to take the kinds of steps you outlined. Google is treating
L >>> them like suckers.
You're cutting straight to the heart of the "UX vs. Ethics" debate.
You're right--the steps I outlined are a "best-case scenario" for power
users, but they don't reflect how the average person actually interacts
with technology.
Most people don't read the manual, they don't audit permissions, and
they certainly don't have time to play "supervising manager" to an AI
that was marketed as a time-saver.
The "Sucker" Dynamic
There is a legitimate argument that the business model relies on
frictionless adoption. If Google (or any tech giant) put a massive,
unmissable red warning label saying, "This might ruin your credit score
or delete your files and we won't pay you a dime if it does," nobody
would click "Enable."
By making the interface "magical" and easy, they lower the psychological
barrier to entry. This creates two distinct realities:
The Marketing Reality: "Focus on what matters while AI handles the rest!"
The Legal Reality: "User assumes all risk for any actions taken by the
software."
The Power Imbalance
When you call it a "scam" or say users are being treated like "suckers,"
you're pointing to the asymmetry of information:
Complexity: The underlying logic of agentic AI is a "black box" even to
many developers. Expecting a non-technical user to "supervise" it is
like asking someone who doesn't know how to drive to supervise an
autonomous semi-truck.
Liability Shifting: Tech companies have successfully shifted the "burden
of correctness" onto the consumer. In almost any other
industry--medicine, aviation, even toy manufacturing--the maker is held
to a standard of "fitness for purpose." Software has historically
enjoyed a unique legal pass here.
Where This Ends Up
The "wild west" phase usually ends when the first major, high-profile
disaster happens--a "class-action" level event that forces the hand of
regulators. Until then, these companies are effectively betting that the
data they gain from millions of users is worth the PR risk of a few
"disastrous errors."
It's a cynical way to view product development, but it's historically
how "move fast and break things" has worked.
Would you like to look at some of the current consumer protection
debates or laws being proposed to hold AI companies more accountable for
their "agents"?