
How to win fake friends and influence fake people


We’re all talking to fake people now, but most people don’t realize that interacting with AI is a subtle and powerful skill that can and should be learned.

The first step in developing this skill set is to acknowledge to yourself what kind of AI you’re talking to and why you’re talking to it.

AI voice interfaces are powerful because our brains are hardwired for human speech. Even babies’ brains are tuned to voices before they can talk, picking up language patterns early on. This built-in conversational skill helped our ancestors survive and connect, making language one of our most essential and deeply rooted abilities.

But that doesn’t mean we can’t think more clearly about how we talk to AI. After all, we already speak differently to other people in different situations. For example, we talk one way to our colleagues at work and a different way to our spouses. Yet people still talk to AI like it’s a person, which it’s not; like it can understand, which it cannot; and like it has feelings, pride, or the ability to take offense, which it doesn’t.

The two main categories of talking AI

It’s helpful to break the world of talking AI (both spoken and written) into two categories:

1. Fantasy role-playing, which we use for entertainment.
2. Tools, which we use for some productive end, either to learn information or to get a service to do something useful for us.

Let’s start with role-playing AI.

AI for pretending

You may have heard of a site
and app called Status AI, which is often
described as a social network where everyone else on the network is an AI
agent. A better way to think about it
is that it’s a fantasy role-playing game in which the user can pretend to be a
popular online influencer. Status AI is a virtual world
that simulates social media platforms. Launched as a digital playground, it
lets people create online personas and join fan communities built around shared
interests. It “feels” like a social network, but every interaction—likes,
replies, even heated debates—comes from artificial intelligence programmed to
act like real users, celebrities, or fictional characters.

It’s a place to experiment,
see how it feels to be someone else, and interact with digital versions of
celebrities in ways that aren’t possible on real social media. The feedback is
instant, the engagement is constant, and the experience, though fake, is
basically a game rather than a social network. Another basket of role-playing
AI comes from Meta, which has launched
AI-powered accounts on Facebook, Instagram, and WhatsApp that let users
interact with digital personas — some based on real celebrities like Tom Brady
and Paris Hilton, others entirely fictional. These AI accounts are clearly
labeled as such, but (thanks to AI) can chat, post, and respond like real
people. Meta also offers tools for influencers to use AI agents to reply to
fans and manage posts, mimicking their style. These features are live in the
US, with plans to expand, and are part of Meta’s push to automate and
personalize social media.

Because these tools aim to provide make-believe engagements, it’s reasonable for users to pretend that
they’re interacting with real people. These Meta tools attempt to
cash in on the wider and older phenomenon of virtual online influencers. These
are digital characters created by companies or artists, but they have social
media accounts and appear to post just like any influencer. The best-known
example is Lil Miquela, launched in 2016 by the Los Angeles startup Brud; the character
has amassed 2.5 million Instagram followers. Another is Shudu, created in 2017
by British photographer Cameron-James Wilson, presented as the world’s first
digital supermodel. These characters often partner with big brands. A post by one of the major
virtual influencer accounts can get hundreds or thousands of likes and
comments. The content of these comments ranges from admiration for their style
and beauty to debates about their digital nature. Presumably, many people think
they’re interacting with a real person, but most probably engage with a role-playing
mindset. By 2023, there were hundreds
of these virtual influencers worldwide, including Imma from Japan and Noonoouri
from Germany. They’re especially popular in fashion and beauty, but some, like
FN Meka, have even released music. The trend is growing fast, with the global
virtual influencer market estimated at over $4 billion by 2024.

AI for knowledge and productivity

We’re all familiar with
LLM-based chatbots like ChatGPT, Gemini, Claude, Copilot, Meta AI, Mistral, and
Perplexity. The public may be even more
familiar with non-LLM assistants like Siri, Google Assistant, Alexa, Bixby, and
Cortana, which have been around much longer.

I’ve noticed that most people make two general mistakes when interacting with these chatbots or assistants.

The first is that they
interact with them as if they’re people (or role-playing bots). And the second
is that they don’t use special tactics to get better answers. People often treat AI chatbots
like humans, adding “please,” “thank you,” and even
apologies. But the AI doesn’t care or remember, and it is not significantly affected
by these niceties. Some people even say “hi” or “how are
you?” before asking their real questions. They also sometimes ask for
permission, like “Can you tell me…” or “Would you mind…”
which adds no value. Some even sign off with “goodbye” or
“thanks for your help,” but the AI doesn’t notice or care. Politeness to AI wastes time —
and money! A year ago, Wharton professor Ethan Mollick pointed out that people
using “please” and “thank you” in AI prompts add extra
tokens, which increases the compute power needed by the LLM chatbot companies.
This concept resurfaced on April 16 of this year, when OpenAI CEO Sam Altman replied to a user on X, confirming that polite words in prompts have cost OpenAI “tens of millions of dollars.”
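To make the token math concrete, here’s a rough sketch of how you might measure the overhead yourself, assuming the tiktoken package (OpenAI’s open-source tokenizer) and the cl100k_base encoding purely as an example:

```python
# Rough sketch: how many extra tokens does politeness add?
# Assumes the tiktoken package is installed (pip install tiktoken);
# "cl100k_base" is one common OpenAI encoding, used here only as an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "Summarize this article in three bullet points."
polite = ("Hi! How are you? Could you please summarize this article "
          "in three bullet points? Thank you so much for your help!")

terse_count = len(enc.encode(terse))
polite_count = len(enc.encode(polite))

print(f"Terse prompt:  {terse_count} tokens")
print(f"Polite prompt: {polite_count} tokens")
print(f"Extra tokens:  {polite_count - terse_count}")
```

A handful of extra tokens per prompt sounds trivial, but multiplied across the enormous number of prompts these services handle every day, it adds up to the kind of compute bill Altman was talking about.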
“But wait a second, Mike,” you say. “I heard that saying ‘please’ to AI chatbots gets you better results.”

And that’s true — sort of. Several studies and user experiments have found that AI chatbots can give more helpful, detailed answers when users phrase requests politely or add “please” and “thank you.” This happens because the AI models, trained on vast amounts of human conversation, tend to interpret polite language as a cue for more thoughtful responses.

But prompt engineering experts
say that clear, specific prompts — such as giving context or stating exactly
what you want — consistently produce much better results than politeness. In other words, politeness is
a tactic for people who aren’t very good at prompting AI chatbots. The best way to get
top-quality answers from AI chatbots is to be specific and direct in your
request. Always say exactly what you want, using clear details and context. Another powerful tactic is
something called “role prompting” — tell the chatbot to act as a
world-class expert, such as, “You are a leading cybersecurity
analyst,” before asking a question about cybersecurity. This method,
backed by research like Sander Schulhoff’s 2025 review of over 1,500 prompt engineering papers, leads to more accurate and relevant answers because it steers the chatbot toward content in its training data produced by experts, rather than lumping expert opinion in with uninformed viewpoints.

Also, give background if it matters, such as the audience or purpose. (And don’t forget to fact-check responses. AI chatbots often lie and hallucinate.)
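To put these tactics together, here’s a rough sketch of a role-prompted, context-rich request using the OpenAI Python client; the model name and wording are placeholders, and the same structure works in any chatbot window or API:

```python
# Rough sketch: role prompting plus a specific, context-rich request.
# Assumes the openai package (v1 or later) and an OPENAI_API_KEY environment
# variable; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        # Role prompt: tell the chatbot which expert voice to favor.
        {"role": "system",
         "content": "You are a leading cybersecurity analyst."},
        # Specific, direct request with context, audience, and format.
        {"role": "user",
         "content": (
             "Explain how phishing-resistant MFA works for an audience of "
             "non-technical small-business owners, then list three concrete "
             "steps they can take this week. Keep it under 200 words."
         )},
    ],
)

print(response.choices[0].message.content)
```

The system line nudges the model toward expert-sounding material, and the user line supplies the context, audience, format, and length. You still need to fact-check whatever comes back.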
It’s time to up your AI chatbot game. Unless you’re into using AI for fantasy role-playing, stop being polite. Instead, use prompt engineering best practices for better results.

