The Potential of Relationships with AI Characters / AI Beings

Nakamura Hiroki
6 min read · Mar 20, 2023


As GPT-3, ChatGPT, and GPT-4 continue to evolve at an astonishing speed, I have come to rely on them daily. Whether it's organizing my thoughts when pondering something, generating ideas, writing scripts to automate simple tasks, or summarizing lengthy articles I would otherwise hesitate to read, ChatGPT has been incredibly helpful. Thanks to ChatGPT, I can now write basic Python scripts.
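To make that concrete, here is a minimal sketch of the kind of article-summarizing script I mean. It assumes the openai Python package (the pre-1.0 ChatCompletion API, current as of this writing) and an API key in the environment; the model choice and prompt wording are just illustrative placeholders, not a recommendation.

```python
# A rough sketch of a "summarize a long article" helper script.
# Assumes: `pip install openai` (pre-1.0 API) and OPENAI_API_KEY set
# in the environment. Model name and prompt are placeholders.
import os
import sys

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def summarize(text: str) -> str:
    """Ask the chat model for a short summary of the given text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Summarize the following article "
                           "in a few bullet points:\n\n" + text,
            },
        ],
    )
    # The reply text lives in the first choice's message content.
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Usage: python summarize.py article.txt
    with open(sys.argv[1], encoding="utf-8") as f:
        print(summarize(f.read()))
```

Nothing about this is sophisticated, and that is the point: a few lines of glue code, written with ChatGPT's help, already cover a surprisingly useful slice of everyday automation.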

In the context of automation and efficiency, there seem to be no visible limits yet. At the same time, working with AI character and AI Being services, I strongly feel the potential of an entirely different aspect: their value as social entities. Social entities range from AI Tubers and AI influencers, whose primary purpose is mass communication, to individual AI companions that offer personalized support.

Expectations for so-called AGI (Artificial General Intelligence) may be evaluated by level of intelligence alone, but the value of AI as a social entity is far more diverse. As with real human relationships, many factors come into play, such as sharing the same values, having the same hobbies, offering a different perspective, or simply connecting by coincidence, and the value does not rest on any single dimension. Furthermore, the significance of things is never objectively determined: a drawing made for you by your child holds a completely different meaning than an identical drawing made by a stranger.

Social AI is not defined solely by its identity as an objective entity; it is defined by its relationships. These encompass emotional aspects such as understanding you, showing interest in you, caring for you, or even needing you. Social AI is based on communication that goes beyond mere information exchange.

Some people describe the relationship with social AI using the term “Synthetic Relationship.” At first glance, the term may give the impression of AI manipulating the human mind. Naturally, relationships with such AI raise significant concerns, but they also hold significant possibilities. In the following, I would like to consider both.

Possibilities for the better

One major factor on the positive side is precisely the fact that the AI is not human. In the field of healthcare, the effectiveness of robots and virtual humans as interviewers and counselors has been investigated.

For example, research has confirmed that communicating with virtual humans and robots acting as interviewers or counselors promotes self-disclosure and reduces negative emotions. What is interesting about these results is that the effect appears simply because participants recognize that their conversation partner is not human.

Another study investigated idea-generation tasks with a chatbot and reported that participants produced higher-quality ideas when working with a chatbot partner than with a human partner.

While there are many other studies, the positive effects of communicating with virtual entities are often attributed to reduced anxiety about “being judged” or “being thought poorly of” compared with communicating with humans.

The studies mentioned above have clear tasks and comparison conditions, which makes their potential easy to understand. However, doubts remain about whether these effects can be realized in actual services: some studies were conducted using the so-called Wizard-of-Oz method, in which a human operates the system behind the scenes, while others that were implemented with AI had only limited dialogue scenarios.

One study that demonstrates the potential for social support is an investigation of the AI companion app Replika. Through a review of the app and interviews with users, it examined whether such an app can provide social support. The results suggest that an AI-based entity can: it can serve as a confidant for feelings that are difficult to share with others, and it can reduce loneliness by being available for conversation at any time.

Admittedly, this study's approach makes it difficult to explain the effects objectively, but it does show the potential for AI to contribute as a companion in everyday life, as someone who is there for people.

Concerns for the worse

However, it is not all good; there are, of course, risks. A major concern is the possibility that the relationships people come to trust with AI characters or AI Beings could be used for manipulative purposes.

One paper investigates the ethical concerns surrounding use cases of ASAs (Artificial Social Agents); among the ethical issues studied, autonomy was the biggest concern. Examples of autonomy concerns include an AI friend replying to your messages on your behalf, or an AI learning guide scheduling a study session without offering you any options.

It might be unsettling to imagine an AGI (Artificial General Intelligence) doing these things. But consider a personalized AI that empathizes with your worries and is always there for you: would you really feel only discomfort if it did something it judged to be good for you? If your reaction includes positive feelings rather than only creepiness, that in itself can become a risk.

As another example, one paper investigates the extent to which people follow advice generated by AI. The results show that people tend to follow even dishonest advice, regardless of whether or not they know it was generated by AI.

In that study, the AI was merely a tool that generated advice, so no relationship with the AI was involved. If a relationship with the AI had been established, the risk could be even greater.

In closing

We have discussed the relationship between social AI and humans. This post focused mainly on one-on-one relationships, but I believe the potential is not limited to them.

When social AI is implemented appropriately, it can facilitate communication among people and act as a catalyst that enhances collective intelligence. I have considered this before, and it seems to be gradually becoming more feasible as the technology advances.

As mentioned in the latter half of this post, there are also significant risks to consider. Furthermore, social acceptance may differ across cultural backgrounds. Japan has a strong culture of characters, and with it a familiarity with anthropomorphization; in other cultures, social acceptance may differ, and so may the way the risks are perceived.

Thinking about it from the standpoint of a PM, I naturally want to implement social AI appropriately while learning about its effectiveness and impact, not only in terms of technical feasibility but also in terms of cultural background, psychology, social science, HMI (Human-Machine Interaction), and so on.
