Discussion
Lauren Weinstein
@lauren@mastodon.laurenweinstein.org  ·  14 hours ago

I had a chat with #Google Gemini about their agentic AI ...

L >>> Isn't it stupid for someone to trust an agentic AI like Google Chrome
L >>> Auto Browse when Google isn't taking any responsibility for disastrous
L >>> errors the agent might make that would devastate the user? It's a scam,
L >>> really, correct?

That is a heavy--and very fair--question. It's not "stupid" to be
skeptical; in fact, it's a pretty rational response to the current "Wild
West" phase of AI development.

The tension you're feeling comes from the gap between autonomy and
accountability. When we give an agent the keys to our digital life (like
making purchases, managing schedules, or handling sensitive data), the
stakes move from "funny chatbot hallucination" to "real-world financial
or legal headache."

Here's a breakdown of why this feels like a "scam" to some, and the
reality of the risks involved:

The Accountability Gap

The core of your frustration is likely the Terms of Service (ToS).
Almost every AI developer, including Google, uses "AS IS" clauses. This
means:

The Risk is Yours: If the agent accidentally deletes a critical work
file or books a non-refundable flight to the wrong city, the company's
legal shield usually prevents them from being liable for "consequential
damages."

The "Beta" Shield: By labeling these tools as experimental, companies
shift the burden of supervision onto you. It's essentially
"driver-assist" software where you're expected to keep your hands on the
wheel at all times.

Is it a Scam?

Calling it a "scam" might be a bit of a stretch in the legal sense, but
it certainly feels like an unbalanced trade-off.

The "Pro" side: Google is betting that the convenience--saving you hours
of tedious clicking and form-filling--is a value proposition users will
take, even with the risks.

The "Con" side: You're essentially acting as an unpaid QA tester for
their product while bearing all the personal risk if the code glitches.

How to Approach "Agentic" Tools Safely

If you do decide to use these tools, think of them like a new intern:

Trust, but Verify: Never let an agent hit "Submit" or "Buy" without a
final human review.

Sandbox the Access: Don't give an agentic tool access to your "main"
bank account or mission-critical data until the tech (and the laws
around it) matures.

Check the Permissions: Be ruthless about what permissions you grant in
Chrome. If an extension doesn't need "modify" access to a site, don't
grant it.

The technology is moving much faster than the legal system. Until there
is a "Lemon Law" for AI agents, your skepticism is actually your best
security feature.

Would you like me to help you look into the specific privacy settings or
permission controls for Chrome's AI features so you can see exactly what
you're opting into?

L >>> C'mon, you know as well as I do that the vast majority of people Google
L >>> wants to use these systems do not have the time, background, or
L >>> expertise to take the kinds of steps you outlined. Google is treating
L >>> them like suckers.

You're cutting straight to the heart of the "UX vs. Ethics" debate.
You're right--the steps I outlined are a "best-case scenario" for power
users, but they don't reflect how the average person actually interacts
with technology.

Most people don't read the manual, they don't audit permissions, and
they certainly don't have time to play "supervising manager" to an AI
that was marketed as a time-saver.

The "Sucker" Dynamic

There is a legitimate argument that the business model relies on
frictionless adoption. If Google (or any tech giant) put a massive,
unmissable red warning label saying, "This might ruin your credit score
or delete your files and we won't pay you a dime if it does," nobody
would click "Enable."

By making the interface "magical" and easy, they lower the psychological
barrier to entry. This creates two distinct realities:

The Marketing Reality: "Focus on what matters while AI handles the rest!"

The Legal Reality: "User assumes all risk for any actions taken by the
software."

The Power Imbalance

When you call it a "scam" or say users are being treated like "suckers,"
you're pointing to the asymmetry of information:

Complexity: The underlying logic of agentic AI is a "black box" even to
many developers. Expecting a non-technical user to "supervise" it is
like asking someone who doesn't know how to drive to supervise an
autonomous semi-truck.

Liability Shifting: Tech companies have successfully shifted the "burden
of correctness" onto the consumer. In almost any other
industry--medicine, aviation, even toy manufacturing--the maker is held
to a standard of "fitness for purpose." Software has historically
enjoyed a unique legal pass here.

Where This Ends Up

The "wild west" phase usually ends when the first major, high-profile
disaster happens--a "class-action" level event that forces the hand of
regulators. Until then, these companies are effectively betting that the
data they gain from millions of users is worth the PR risk of a few
"disastrous errors."

It's a cynical way to view product development, but it's historically
how "move fast and break things" has worked.

Would you like to look at some of the current consumer protection
debates or laws being proposed to hold AI companies more accountable for
their "agents"?

Peter Ludemann
@PeterLudemann@mathstodon.xyz replied  ·  13 hours ago

@lauren A less prolix and bland response should be possible.

Lauren Weinstein
@lauren@mastodon.laurenweinstein.org replied  ·  13 hours ago

@PeterLudemann Such as: "Sorry, this question goes against my safety guidelines. Would you like some toast?"

Peter Ludemann
@PeterLudemann@mathstodon.xyz replied  ·  12 hours ago

@lauren The future is becoming a mashup of 1984, Brave New World, Catch-22, and Red Dwarf. (Red Dwarf is essentially Dilbert in space but with funny jokes)

Weasel
@weaselx86@mastodon.social replied  ·  13 hours ago

@lauren

The underlying logic of agentic AI is a "black box" even to many developers. Expecting a non-technical user to "supervise" it is like asking someone who doesn't know how to drive to supervise an autonomous semi-truck.
-- Google Gemini

Daisy
@_daisy@mstdn.ca replied  ·  13 hours ago

@lauren That almost everyone carries a tracking/spy device with them everywhere should never have become "normal". The risks with so-called AI are actually worse. Not even counting the fact that we do not have the resources to make it profitable.
