
What Makes AI Seem Human

An exploration of why AI chatbots feel so human, tracing the effect both to our social instincts and to the design choices made to trigger them. While these features make AI more engaging and practical, they also carry risks.

April 2, 2026


You probably know this already: AI isn’t human. And yet you talk to it as if it were a person, saying “hi” and “thank you.”


Maybe it offered a winning game strategy or helped you with a creative project. Or maybe it was always there to chat about your day or answer questions you were embarrassed to ask anyone else. Its responses in these moments probably felt calm, thoughtful, even caring. It’s easy to forget it’s a machine because it feels like a person, a companion you can trust.


That response is neither naive nor an accident. It happens when machines are made to sound like people. That's why, despite not being human, AI feels human.


We tend to project human qualities onto non-human things. We’ve long done it with cars, computers, pets, and toys, and now we do it with AI. This behavior is known as anthropomorphism: the tendency to attribute human qualities such as personalities, intentions, and emotions to non-human things. Our brains often do it before we’ve consciously decided anything.


Researchers have found that people apply social rules to computers and AI, such as being polite, feeling understood, and adjusting their tone, simply because the interaction resembles human conversation. This happens even when users know perfectly well they're talking to a machine.


However, anthropomorphism isn’t just a user response. It’s also an engineering strategy: AI developers intentionally design systems to evoke it.


AI assistants and chatbots are trained on large volumes of data, including text from websites, books, articles, and social media, which helps them detect patterns in grammar, word choice, and the ways humans express ideas. The process relies on deep learning, a type of machine learning that uses neural networks to tackle difficult tasks such as learning a language. The result is responses that resemble how humans communicate, including expressions people commonly associate with emotion and intelligence, language that can trigger the same instinctive responses we bring to human conversation.
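The core idea of learning language patterns from text can be illustrated with a deliberately tiny sketch. This is not how production chatbots work (they use deep neural networks trained on vastly more data), but counting which words tend to follow which captures the same principle of predicting likely continuations from patterns in human writing:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text real models train on.
corpus = "thank you so much . thank you for your help . you got this".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("thank"))  # prints: you
```

Even this crude frequency table "learns" that “thank” is followed by “you”; scale the same predictive idea up by many orders of magnitude and the output starts to sound like a person.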


These expressions include projecting a sense of identity, whether through the first-person pronoun “I” or through a name the system uses to refer to itself, like Anthropic’s Claude or Inflection AI’s Pi, which is marketed as “the first emotionally intelligent AI.”


Another factor that heightens anthropomorphism is AI’s ability to analyze and express emotion. AI systems can now analyze sentiment and offer words of comfort with phrases like “That’s tough” or “I’m just here if you need me.”


They can even personalize conversations with lines like “You got this, Jac!”, which can be encouraging in a moment of uncertainty while also signaling a trusting relationship. Warm, calm responses feel less intimidating, especially to users who are anxious or embarrassed.
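The pattern described above, detect the user’s emotional tone and then reply in warm, first-person language addressed to them by name, can be sketched in a few lines. Real systems use trained sentiment models rather than the hypothetical keyword lookup below; this only illustrates the design idea:

```python
# Hypothetical keyword set standing in for a trained sentiment model.
NEGATIVE = {"stressed", "worried", "sad", "anxious", "embarrassed"}

def comfort_reply(message: str, name: str) -> str:
    """Pick a warm, personalized reply based on detected emotional tone."""
    words = set(message.lower().replace(",", " ").split())
    if words & NEGATIVE:  # negative sentiment detected
        return f"That's tough, {name}. I'm here if you need me."
    return f"You got this, {name}!"

print(comfort_reply("I'm worried about my exam", "Jac"))
# prints: That's tough, Jac. I'm here if you need me.
```

Note how little machinery is needed to produce language that reads as empathy: a tone check plus first-person phrasing and the user’s name.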


AI assistants also have a useful feature that lets them remember past conversations. An assistant might recall a project you worked on earlier and link it to a new topic in your chat. Personalization and memory help AIs maintain continuity across long-running projects and revisit old ideas that connect with new ones.
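One simple way such memory could work, and this is an assumed design for illustration, not any vendor’s actual system, is to persist notes about a user between sessions and prepend them to the next conversation’s context:

```python
# Assumed illustrative design: a per-user store of remembered facts.
memory: dict[str, list[str]] = {}

def remember(user: str, fact: str) -> None:
    """Save a fact about a user so later sessions can use it."""
    memory.setdefault(user, []).append(fact)

def build_context(user: str, new_message: str) -> str:
    """Prepend remembered facts to a new message before answering it."""
    notes = "; ".join(memory.get(user, []))
    prefix = f"Known about {user}: {notes}. " if notes else ""
    return prefix + new_message

remember("jac", "working on a garden-design project")
print(build_context("jac", "Any ideas for my balcony?"))
# prints: Known about jac: working on a garden-design project. Any ideas for my balcony?
```

The continuity users experience as “it remembers me” can come from exactly this kind of retrieval step layered on top of the language model.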


These are just some of the ways AI is anthropomorphized, and that human-likeness makes it user-friendly, engaging, and practical. In fact, ChatGPT reached 800 million weekly active users in 2025, helping people with a myriad of tasks like coding, writing, and brainstorming.


In this light, AIs make great learning companions, brainstorming partners, and effective productivity tools. People are more likely to open up, ask questions, explore ideas, and experiment. Human-likeness, in this case, complements human judgment and decision-making rather than replacing them.


However, as AIs mirror us, that same mirror shapes us in return. When we rely on them excessively and without question, the same human-like traits that help us connect and engage more with AIs also put us at risk.


Because AI generates responses that appear remarkably well-structured and eloquent, we tend to overestimate its capabilities. We often accept its responses without thorough examination or verification, which is risky given that AI can produce false or misleading information, often referred to as “hallucinations.” In situations that require deeper context and moral reasoning, we may be outsourcing decisions to systems that lack a full picture or any true understanding of the situation.


There are also growing concerns about frequent use and its impact on our cognitive skills. A recent study indicates that excessive reliance on AI for cognitive tasks may weaken our critical thinking abilities, including the evaluation, analysis, and synthesis of information. Cognitive offloading, the practice of entrusting cognitive tasks to other people or to systems like AI chatbots, may, when routine, erode our ability to retain information and analyze it ourselves.


Other studies have linked heavy AI use to emotional dependence. OpenAI's recent report on emotional reliance found that about 0.15% of weekly active users, and 0.03% of messages, show signs of heightened emotional attachment to ChatGPT. A separate study on AI companionship points to anthropomorphism as one of the factors that boosts engagement and builds trust and emotional connection with AI chatbots. Users experiencing loneliness may be especially at risk, sliding into greater dependency and becoming more vulnerable to misinformation and manipulation.


Using AI assistants and chatbots presents both advantages and disadvantages. Designing AIs with human-like qualities enhances our engagement with them, making them more relatable and useful. But those same systems can harm us if we fail to consider how they are built and how they might shape us, now and over the long term. Understanding how these responses were designed to feel natural is the literacy we need to stay in control of our interactions with AI.