It's been nearly 13 years since Gmail offered automatic classification of e-mails (sorting them into "Primary," "Promotions," "Social," etc.). I appreciated the way this organized my in-box, and I soon began creating filters that would override Gmail's decisions when necessary. Even so, I wondered what Google was doing with the data it must have been collecting in order to sort my mail that way.
More recently, Gmail has been trying to finish my sentences, and has been offering summaries of my incoming mail and my e-mail conversations. It offers to help me write letters.
I posted the following query into Google's search engine:
"Examples of unwanted and uninvited use of AI in our everyday Internet-based tasks (e-mail, for example)."
I'll let AI speak for itself (but I'll return at the end).
Note: Google provides the sources of the information and advice in this essay through links in the actual Google-constructed reply to my request.
Platforms are moving from simple "tools" to "collaborators" without always being asked.
It’s a fascinating (and sometimes frustrating) time to be online. We are currently in the "opt-out" era of the internet, where AI features are often pushed live first, leaving users to figure out how to disable them later.
While some find these tools helpful, many users experience them as "feature creep"—unnecessary clutter that interrupts flow or compromises privacy.
1. Communication & Email
This is where most people encounter unwanted AI daily.
Smart Compose & Predictive Text: Gmail and Outlook now suggest the end of your sentences. While it saves seconds, many feel it flattens their personal voice or makes emails sound robotic.
AI-Generated Summaries: Some email clients now place an AI-generated summary at the top of long threads. If the AI misses a nuance or gets a deadline wrong, it can lead to costly misunderstandings.
Priority Inboxes: Algorithms decide which emails are "Important." If the AI "hallucinates" that a bill or a personal note is "Promotions," you might miss it entirely. [This happens to me frequently.]
2. Search Engines & Information
The way we look for facts has fundamentally shifted in the last year, often removing the "human" element of the web.
AI Overviews (SGE): Google and Bing now place an AI-generated answer at the very top of search results. This pushes actual websites (written by humans) further down the page and can sometimes present "hallucinations" as factual truth.
Chatbot Sidebars: Many browsers (like Edge with Copilot) now have persistent sidebars that analyze the page you are reading, even if you just wanted a clean reading experience.
3. Social Media & Content Creation
Social platforms are using AI to keep you scrolling longer and to fill gaps in original content.
AI-Generated Commenting: LinkedIn and other platforms now suggest "AI replies" to posts. This can lead to comment sections filled with shallow, automated praise rather than genuine networking.
Feed Manipulation: Algorithms have moved beyond "who you follow" to "what the AI thinks will keep you hooked," often surfacing AI-generated "slop" (synthetic images or low-quality AI articles) in your main feed.
Invisible Editing: Instagram and Facebook have integrated AI tools that can "expand" your photos or remove people from backgrounds automatically, sometimes blurring the line between reality and digital fabrication.
4. Workplace & Productivity Tools
The "Software as a Service" (SaaS) world has integrated AI into almost every text box.
The "Ask AI" Pop-up: In tools like Notion, Slack, or Google Docs, highlighting text often triggers a "Write with AI" button that can get in the way of manual editing.
Meeting Transcription Bots: You may join a Zoom call only to find an "AI Note-taker" bot joined too. This can feel like uninvited surveillance, especially if the host didn't ask for everyone's consent first.
Why this feels "unwanted"
The tension usually comes down to three things:
- Cognitive Load: Having to constantly dismiss pop-ups or "X out" of AI suggestions is exhausting.
- Privacy Concerns: Users often worry that their private drafts or emails are being used to "train" the next version of the AI.
- Loss of Agency: When a tool suggests how you should finish a sentence, it subtly influences your thoughts and communication style.
Even more unwanted! Google's sources did not mention perhaps the most disastrous aspect of AI's invasion into our online lives: the electricity it takes to make those often-unwanted features possible.
I asked Google, "How much additional electricity (beyond what would be used simply by the Internet) is required to power AI?" Here is its full answer. In brief,
AI-optimized data centers require 3–5 times more power per square foot than traditional facilities, with a single AI server rack consuming 50–150 kilowatts compared to 10–15 kW for conventional computing. AI searches use roughly 10 times more electricity than standard, non-AI internet searches, driving a potential 10%–20% increase in total U.S. power demand by 2030.
I admit that I appreciate that AI is probably helping Google process the questions I ask it. Instead of guessing at the best key words and their best order, as I used to do, I can frame my queries in natural language. However, my occasional and voluntary use of AI in this direction is not the same as having it intrude when not invited.
At the end of the original response to my query, Google asks, "Would you like me to show you how to disable some of these specific AI features in Gmail, Google Search, or LinkedIn?" I answered "yes," and it linked to these suggestions.
In the context of ICE and other U.S. Homeland Security officers' abusive behavior, it's not surprising that audiences yearn for evidence that justice is on the way. A whole new AI-powered trope has arisen to meet this hunger: videos of officers making ludicrous and cruel arrests (it happens!) and then getting scolded by angry citizens, business owners, local police, and judges. Here's a YouTube channel specializing in such videos. The channel's front page makes it clear that every video is fictional, a similar note appears on each video's individual page, and the videos themselves bear all the typical marks of fakes; yet the vast majority of the comments on many of these videos are cheering on those righteous resisters. Occasionally there's a plaintive "It's probably AI but I wish this were true."
Other AI-generated videos tell stories of miraculous landings of stricken airplanes, or detailed accounts of Ukrainian drone strikes, and no doubt far worse material ... and we're all paying for the power that's needed to compose these AI fakes, and the mental and spiritual pollution they spread, just when civil society needs true discernment more than ever. Among the names of the early Quaker movement were "Publishers of Truth" and "Children of the Light"; I guess we're still needed, if we're up to it!
Was it inevitable? AI agents have their own social network?? Benj Edwards on Moltbook.
Micah Bales on the humility of God. "Be encouraged, brothers and sisters."
Sunita Viswanath: The "theology of showing up" is making Minneapolis a holy place.
Artemis II's lunar mission is delayed. Amy Shira Teitel's sad and blunt commentary on this rocket and its ultra-expensive path to irrelevance: her video and article.
A Guardian report on Dezer Development and Palestinian deportations: an infuriating glimpse into the world of wealthy presidential friends who earn big fees by transporting deportees and treating them as utter nobodies.
One of my favorite versions of "Baby Scratch My Back" ... Jason Ricci with John Lisi, Sam Hotchkiss, Andy Kurz, and Adam Baumol.
