The Potential of AI as Beings

Nakamura Hiroki
7 min read · Jul 28, 2024


Since the release of ChatGPT, I’ve been seeing the term “AI agent” more frequently. I believe that AI will evolve from being a “tool” to becoming a “being,” and that its role will greatly expand in the future. The difference lies not just in accurately providing the answers humans expect, but in making suggestions, thinking together, sometimes being a confidant for personal concerns, and taking on more human-like social roles such as colleague, friend, or even influencer.

According to the paper mentioned below, over 30% of surveyed users of image generation AI perceive AI not just as a tool, but as a collaborator, like a colleague, assistant, or teammate. (However, since about 70% of the survey respondents were Gen Z, this tendency may vary by generation.)

By becoming a collaborator-like being, AI has the potential to exceed its current value proposition. For example, it could draw out human creativity, provide emotional care as a companion, and deliver value in many other ways.

On the other hand, AI becoming an influential being also brings greater risks. One of these is the risk to human autonomy: AI directly or indirectly exerting a strong influence on people’s thoughts and decision-making.

Under the EU’s AI Act, prohibited AI systems include those that intentionally and maliciously manipulate people’s decision-making. While direct malicious intent should certainly be eliminated, even without it, we can’t definitively say there will be zero impact on human autonomy. Given AI’s current capabilities and its likely future development, this concern is not unfounded. Therefore, to make better use of this technology, it’s crucial for both providers and users to understand how AI as a being can influence people.

In this post, I’d like to explore the potentials of AI as a being and its impacts, using some examples.

Providing Emotional Support

AI as a being has the potential to offer emotional support through interactions with users.

In a previous AI character product I was involved with, 86% of users who responded to a survey said they felt emotional support from the AI.

Additionally, in a survey targeting users of the AI Companion app Replika, many users reported feeling a sense of security, viewing the AI as a being with whom they could share their inner feelings and thoughts without restrictions at any time.

The percentages in these surveys are specific to certain user groups using these applications and do not prove that similar effects apply broadly to the general public.

However, with the development of conversational AI technologies, it has become possible to create beings that feel more familiar and with which users can build better relationships. Several elements are involved: two-way interaction, an abundance of shared experiences, and communication that makes users feel understood. In essence, it has become possible to recreate the process of building good relationships much as humans do with each other.

Encouraging Self-Disclosure

AI can do more than mimic human relationships; there are areas where it can outperform humans. One such area is encouraging people to disclose information about themselves.

As the Replika user survey above suggests, AI can listen to users’ true feelings as a being less likely to trigger the “fear of being judged.” That fear is a difficult problem in human-to-human communication: revealing negative aspects of oneself, such as failures, raises the worry of lowering one’s social standing.

On the other hand, virtual beings, including AI, can reduce this “fear of being judged.” There are many studies, particularly in the healthcare field, on the effects of robots and virtual beings. For example, the following study investigated the content of life reviews with elderly participants, comparing human and robot facilitators. It suggests that people can disclose themselves more freely to robots without fear of social judgment.

Ability to Influence People’s Purchasing Intention

Recently, I’ve been seeing many surveys about virtual influencers. Among these, there seems to be a growing number of studies suggesting positive effects on people’s purchasing intention.

According to the report below, the influence of virtual influencers is still small compared to that of human influencers.

https://twicsy.com/ai-influencers-vs-human-influencers

However, according to a different report from two years ago, 75% of social media users aged 18 to 24 follow at least one virtual influencer, and 40% or more of users aged 18 to 44 have purchased items recommended by virtual influencers. It seems that either the influence of virtual influencers is growing, or resistance to virtual influencers is decreasing, or perhaps both. At the very least, it appears that the impact on younger generations is increasing.

In terms of influence on purchasing intention, I sense a potential that differs from conventional recommendations. The study below investigates how digital humans affect the intention to purchase eco-products. It confirms that the narrative of digital humans has a positive effect on purchase intention. Furthermore, it suggests that a shared narrative, in which the digital human conveys its own story rather than simply explaining the product, is more effective than a persuasive narrative.

Regarding the appearance of digital humans, the above study states that anime-style digital humans evoke more positive emotions and higher eco-product purchase intentions. On the other hand, another study found that posts by realistic virtual influencers gained more “likes” and comments compared to other virtual influencers. This result suggests that as the human-likeness of virtual influencers increases, user engagement improves.

Yet another study verifies that perceived homophily and perceived authenticity of AI influencers promoting fashion products increase the intention to follow the AI influencer’s recommendations.

https://www.tandfonline.com/doi/full/10.1080/23311975.2024.2380019

In this field, trends may change with technological development, but it seems that there isn’t a single solution to increase purchase intention. What’s important might be how appropriately the product or brand can be expressed, and whether the narrative is easily acceptable to each user. As technology develops and expands the range of expression, it may be possible to cater to a wider range of characters, from human-like realistic expressions to anime-style and non-human expressions like animals, potentially leading to greater influence.

Furthermore, as it becomes possible to generate characters in real time with consistent conversation and appearance, communication can go beyond one-to-many broadcasting to true one-to-one interaction: one-to-many communication in mass media, and one-to-one communication on the internet that understands each individual user.

As these capabilities become realized, they seem likely to expand into new business areas such as advertising and fan communication.

Summary

I’ve considered the potential influences that AI as a being might have.

The products and research mentioned here may not yet have a large social impact. Also, differences in cultural background or generation may affect the degree to which AI as a being is accepted. On the other hand, some of these already exist as actual products, and it seems their influence will grow even larger with future technological developments.

In this post, I looked at three examples: emotional support, encouragement of self-disclosure, and influence on purchasing intention. If used correctly, these could lead to positive effects such as reducing anxiety and guiding towards positive emotions, obtaining appropriate information for better decision-making, and more accurately conveying the value of products and brands.

On the other hand, the ability to exert deeper influence naturally raises ethical concerns. Of course, anything based on clear malicious intent or intentionally manipulating people’s decision-making is out of the question. However, just as social media unintentionally created problems such as filter bubbles leading to division and excessive dependence, “AI as a being” could also indirectly have negative effects on people’s emotions and decision-making.

As a product manager, it’s essential to understand both the positive and negative aspects of these influences. I intend to continue developing products while keeping track of related research on both sides of this issue in the future.
