Virtual Beings as Social Catalyst for Maximizing Team Performance

Nakamura Hiroki
May 31, 2021

I wrote this while organizing my thoughts and it grew quite long, so I'll start with a summary.

The theme is: Can the performance of a social group with a specific purpose, such as a team or organization, be enhanced by a digital presence?

The conclusion is yes. Moreover, the collaboration between digital beings and people is likely to contribute greatly to improving the performance of groups beyond mere efficiency.

I write about what kind of role such a presence can play, whether it is feasible, and what the risks are in realizing it, based on relevant theories and models as well as my own experience. The content is very long: chapters 1 through 5 summarize the importance of communication in group performance and models for promoting it, while chapter 6 onward discusses digitalization and human-digital collaboration.

I will now describe them in order.

1. The issue

“I hate position talk”

About 10 years ago, my boss asked me what I hated, and that was my reply, directed not at anyone in particular but at my future self as a new manager. In the 10 years since, I've been fortunate enough to promote various businesses and services and to manage teams, and one of the most difficult, and therefore most interesting, things has been how to maximize team performance. It's an interesting and deep problem, and I've been writing up my findings as blog posts.

The vision that everyone involved agrees on has been verbalized, the issues that need to be solved have been clarified, solid actions have been selected while understanding the uncertain situation, and highly capable members have been gathered to execute them. All that is left is to execute flexibly while facing uncertainty. Even if you try to do so, things often do not go well. There are various reasons for this, such as the lack of trust in the team, or individual motivation being prioritized over the outcome of the team as a whole.

It is one of the important jobs of a manager to deal with problems before they occur and to solve various problems that have actually materialized. There are various approaches to finding and solving problems, and my own management style is to focus on people to find and solve problems.

The above is an approach to finding and solving problems in a team, but there is an important underlying perspective that goes back to the beginning of this article: whether the solution is holistically optimal rather than individually optimal. In a project involving many people there will be a variety of interests, and most likely dilemmas will arise in which those interests conflict. A basic and important element in finding a reasonable solution in the face of such a dilemma is that everyone goes beyond their own role and position as much as possible (broadens their perspective) and looks for a solution that is holistically, not partially, optimal.

It's easy to say, but very difficult to always act holistically, because holistic decisions are sometimes painful for you personally; they test you.

For example, suppose you're a team manager, and from a business perspective it would be better to have a member of your team help out another team to increase the overall throughput of the business. However, if that member leaves, you'll have more work to do, your team's performance will drop, and your reputation as a manager may suffer.
In such a situation, can the manager decide to help the other team as the overall optimal choice?
Thought about dispassionately, the framework of the team itself is meaningless if the business does not grow, but whether you can make a dispassionate decision as an involved party is another story.
It is good to be able to calmly make the overall optimal decision, but there are cases where it goes against your own interests; following the example above, it is easy to develop a self-serving bias that convinces you helping the other team is surely not best for the whole.

On the other hand, I myself cannot say with confidence that I am always able to completely eliminate position talk.
And I'm even less confident when asked whether the scope I consider is broad enough (the question, "Is it best for society?").
What is even more difficult is that even when you yourself believe something is the overall optimum, your own position or role can cause the recipient to misperceive it as position talk.
In the example above, the manager calmly decides to temporarily transfer a team member to help the other team, but the other team's manager does not take the proposal at face value.
I don't mean to say the receiver is the problem; rather, it is a problem of trust with the other team and insufficient explanation on your own part.

In this way, solving problems to maximize team performance requires constant action from a holistic perspective and recognition by others that such action is holistic and optimal. In order to do this, you need to continue to confront the biases that exist within yourself and others.

2. (Hypothesis) Reproducibility of the solution

For the issues in the previous chapter, I think that many parts are rather reproducible, such as finding points to be solved for overall optimization and creating opportunities to solve them. (This is just a part. Not all of them by any means.)

Take communication: in a situation where telework is prevalent, you can use online chat logs to understand the atmosphere of the team, measure the depth of mutual trust, and roughly gauge whether there is an appropriate level of tension. You can also detect people who aren't fitting in well and surface issues from changes in the quantity and quality of that communication over time.
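As an illustration of the kind of signal chat logs make available, here is a minimal sketch that flags members whose chat activity has dropped off relative to their own recent baseline. The data shape, time windows, and drop threshold are all my own assumptions, not any real chat platform's API:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def weekly_message_counts(messages, now):
    """Split each sender's message counts into a recent window (last week)
    and a baseline window (the 4 weeks before that)."""
    recent, baseline = defaultdict(int), defaultdict(int)
    for ts, sender, _channel in messages:
        age = now - ts
        if age <= timedelta(weeks=1):
            recent[sender] += 1
        elif age <= timedelta(weeks=5):
            baseline[sender] += 1
    return recent, baseline

def flag_fading_members(messages, now, drop_ratio=0.5):
    """Flag members whose last-week activity fell below drop_ratio of
    their average weekly activity over the preceding 4 weeks."""
    recent, baseline = weekly_message_counts(messages, now)
    flagged = []
    for sender, total in baseline.items():
        weekly_avg = total / 4
        if recent.get(sender, 0) < weekly_avg * drop_ratio:
            flagged.append(sender)
    return flagged
```

A real system would combine a quantitative trigger like this with qualitative judgment; the point is only that the raw material for such detection already exists in chat logs.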

In addition, actions that support the cooperative behavior of the team are easy to observe directly from the surroundings, so it is easy to evaluate how helpful the support is. In my own multifaceted evaluations (evaluations from the members around me), I feel that support for cooperative behavior is evaluated in a straightforward manner.

If it is reproducible and its quality can be evaluated, then it should be possible to digitize it, depending on the technology. Moreover, while digitization has its own bias problems (data and algorithm bias), it can be a better solution than a human doing the same thing, because a digital entity has no position of its own, so recipients do not perceive position-based bias in it.

This is a long preamble. In the following sections, I would like to follow the related theories and think about how we can digitize the act of maximizing the cooperative behavior of teams.

3. The Importance of communication in collective action

To generalize the problem described above, it is: "how to maximize overall performance in a group with a specific goal, in the presence of a dilemma between the interests of the whole and the interests of the individual."
There are many research fields related to this issue.

Among them is the collective action problem, which is also dealt with in game theory, including the prisoner's dilemma. This is a dilemma in which, from the individual's perspective, it is rational to prioritize short-term individual interests over long-term collective interests; but if everyone does so, the long-term collective interest is undermined and individual interests are lost as a result. Among social issues, global warming and Covid-19 self-restraint are also collective action problems. And, of course, the cooperative action in teams mentioned above is one example.

There have been various studies on what contributes to the maximization of the overall outcome of the collective action problem in the modeled case, but because there are so many variables in the real problem, it has not yet been systematized as a theory.

Among them, the paper Social Dilemmas, published in 1980, seems to be the most famous, as it has been cited in many other papers. It states that coercion through direct rewards and punishments is a very inefficient approach to resolving social dilemmas, and that it is instead important to work on individual utilities such as altruism, norms, and conscience. Its review of past research on the prisoner's dilemma shows that factors such as involvement, communication, group size, disclosure of one's own choice, expectations of others' behavior, and moralizing can change how such utility is realized.

Among other things, it is stated that the positive effect of communication on group behavior is ubiquitous. In the studies reviewed, the rate of cooperative behavior was more than twice as high in groups that discussed cooperative behavior for 10 minutes as in groups that did not communicate at all or chatted about unrelated topics, regardless of whether the discussion produced a commitment. From my personal experience, it is very important to have a dialogue with the people who should be close to the project (and who should cooperate) in order to achieve the goal.

Disclosure of one's own choice, shared expectations of others' behavior, and moralizing (raising moral awareness before collective action) have also been shown to increase cooperative behavior. These actions lead to an understanding of utilities such as altruism, norms, and conscience, which in turn promotes cooperative behavior. This is a personal observation, but I think communication is a very important element not only as dialogue about group behavior but also for gaining mutual understanding of each other's utility. I also feel that this mutual understanding of utility is tied to building what we call trust.

Another article, Communication and Cooperation in Social Dilemmas: A Meta-Analytic Review, reviewed 45 articles that examined the effects of communication in social dilemmas. The overall effect is significant and positive.

It also goes further to describe the effects of communication channels, the effects of continuous communication, and the effects of group size (number of people).

Among them, there is a very interesting discussion about group size. In general, as the size of the group increases, cooperative behavior tends to decrease, but communication can prevent the negative effects of size. The reason is that communication increases each member's sense of contribution to the group. In my own experience, it has always been true that cooperative behavior is easier to maintain when the team is as small as possible. However, even in a large team, I think cooperative behavior can be maintained by making each member's contribution a shared understanding within the team.

There are many studies in this field and I have not been able to follow all of them at all, but even if I look at just some of them, I can see that the effectiveness of communication has been demonstrated not only in corporate activities but also in generalized group behavior.

In light of the above, it seems that encouraging dialogue related to group goals by understanding the frequency and imbalance of communication in a group, as well as encouraging disclosure of one's own choice, shared expectations of others' behavior, and moralizing, will have a positive effect on cooperative behavior.

In my personal experience, I strongly believe that the simple actions above can by themselves produce a certain effect. However, the approach lacks redundancy, since it depends strongly on a specific person to encourage communication. It also lacks comprehensiveness, since necessary communication may simply go unnoticed.

In order to solve this problem, I would like to consider the mechanism of propagation of communication efforts in a group, referring to Social Cognitive Theory, which I will introduce next.

4. Learn from others by making information transparent

The idea is to make communication itself and the information that is carried by communication transparent, to induce observational learning from the information and the results of the information, and to promote communication that produces good results for collective action.

This idea is based on Social Cognitive Theory (SCT), which is one of the models that promote behavioral change in people. There are many theories and models that promote behavioral change, but Social Cognitive Theory (SCT) is a theory that allows people to change their behavior by learning from observing others. There are many studies on behavior change in health care to improve long-term lifestyle habits such as smoking and exercise. For example, in the paper Applying Psychological Theories to Promote Long-Term Maintenance of Health Behaviors, which reviews the application of various behavior change theories and models, including SCT, in long-term health care behavior change, SCT is found to be effective.

The first step in changing behavior through observation in SCT is to get the opportunity to observe. Then, through observation, we learn about the behavior itself and the consequences of that behavior. Finally, the model suggests that the difficulty and anxiety of performing the behavior is outweighed by the confidence (Self-Efficacy) that one is capable of performing it, which leads to new behavior.

In this theory, behavior, person, and environment are all interdependent. People learn from their environment. When an observed behavior succeeds, the self-efficacy of those observing it increases. One person's behavior also influences the environment, which in turn influences other people and their behavior.

In this interrelationship, there are three perspectives to consider in order to promote good communication in collective action, including goal-oriented dialogue.

a. (Know) Make communication and information in a group transparent and observable.
b. (Learn) To be able to understand from the actions of others the information and good communication that is valuable to the goal of collective action.
c. (Action, Continue) Make it a habit so that you yourself can continue to generate valuable information and practice good communication.

These steps a to c form a cycle. In other words, by making the habit of c transparent as well, you can trigger the same behavior in another person. By repeating this process, each person in the group can naturally take on the role of promoting good communication. As a result, the problems of redundancy and comprehensiveness that arise when only certain people do the promoting, listed as issues in the previous chapter, should be mostly eliminated through this network of learning by observing others.

The reason why I mention not only transparency of communication but also transparency of information in the above three points is because it increases the density of learning. Since there is a lot of information that is not observed through communication transparency alone, information transparency is essential to grasp the context of communication.

4-a. Make it knowable

The first one, “making it knowable,” is relatively easy to achieve, and can be summed up in one word: transparency. However, in order to share even negative information, a relationship of trust is a prerequisite. Without trust, negative information is often not shared or is shared as untrue information. Untruthful information can be very harmful because it can lead to false learning.

It is also very important to explain the reason for transparency to prevent untruthful information from being shared. In this case, it is that communication is important in collective action, and that communication and transparency of information is a learning process that promotes good communication. If the reasons are shared correctly, undesirable actions can be prevented to a great extent.

4-b. Learning from results

Regarding the second point, “learning from the results of transparent communication and information,” people with high information processing skills can naturally sort out cause and effect relationships from huge amounts of fragmented information and turn it into learning. On the other hand, people who are not good at tracking a lot of information at high speed cannot increase the opportunities for learning because tracking information alone is costly. As a result, the learning in a group becomes uneven.

Therefore, as a means to solve this problem, you will need a system that picks out, from the vast amount of communication and information, the triggers that had a significant impact on the performance of the collective action, and shares the results as learning. Extracting the presumed triggers may be done digitally, but deciding what is shared as learning and turning the results into a narrative will be a human task.

4-c. Habituation

The third and final step is to think about creating habits that produce good communication and valuable information based on learning. Regarding how to initiate action, it is possible to rely on external motivation or some kind of coercion for temporary actions, but most collective actions require persistence. Therefore, even if external motivation is used as the initial trigger, it is desirable to shift to internal motivation at an early stage and create a state where the behavior becomes a habit for the person.

For this habituation, I will refer to the contents of the book Hooked, and fill in the missing parts in the previous discussions.

5. Habituation with the Hooked Model

How to increase user retention is often discussed in product development, and the book Hooked, published in 2014, introduces examples of various services, including SNS such as Facebook and Instagram, that not only increase retention, but also create habits. The book is a collection of design patterns for products that create human habits.

While it is very powerful and useful for things that lead to social good such as healthcare, it can also lead to social problems such as addiction and Filter Bubble, which are said to be the negative effects of Attention Economy, so it is necessary to consider carefully what to apply it to. (The Social Dilemma is a good reference for the negative aspects.) In the case of collective action, the goal itself expresses the correctness of the application, but since the definition of the goal is something that should be discussed in each individual case, I would like to assume that the goal is correct and proceed to discuss the application method.

In this Hooked Model, the following four stages are defined as a loop structure to promote habituation.
a. Trigger
b. Action
c. Variable Reward
d. Investment

5-a. Trigger

First of all, regarding the trigger of a, there are external and internal triggers, but for communication in group behavior, it seems that we can be satisfied with what we have already discussed so far.

External triggers can be created by understanding the frequency and bias of communication in a group, and then encouraging dialogue related to the goals of the group. Observations from the behavior of others as described in the previous chapter can also be expected to be external triggers, such as sending out information.

As for internal triggers, since it is easy to understand the importance of communication in group activities, if the understanding of appropriate transparency is accumulated through external triggers, the trigger for communication and information sharing itself can be created naturally.

5-b. Action

For the next stage, action (b), the book discusses the Behavior Model by BJ Fogg, a well-known behavioral scientist. The model is defined as follows:
Behavior = Motivation x Ability x Trigger
In other words, for a behavior to occur, in addition to the trigger defined in a, there must be sufficient motivation and the ability to complete the action.
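The multiplicative form means that if any factor is absent, the behavior does not occur. As a toy reading of the model (the 0-to-1 scales and the activation threshold are my own illustrative assumptions, not Fogg's):

```python
def behavior_occurs(motivation, ability, trigger_present, threshold=0.25):
    """Toy reading of Fogg's Behavior = Motivation x Ability x Trigger:
    with no trigger there is no behavior at all, and the product of
    motivation and ability (each on an assumed 0..1 scale) must clear
    an activation threshold for the behavior to happen."""
    if not trigger_present:
        return False
    return motivation * ability >= threshold
```

For instance, with the same modest motivation of 0.3, raising ability from 0.5 to 0.9 is enough to cross the threshold, which echoes the book's point that lowering the required ability is often more effective than raising motivation.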

Motivation is generated not only by positive factors such as enjoyment for oneself and recognition by society, but also by negative factors such as the desire to avoid pain and rejection by society.

In the case of communication and information transparency in group activities, it is easy to understand the necessity of such activities, so there is no need to motivate people through the negative factors. Positive factors include praise, in the sense of social acceptance, for actions in line with transparency. However, this is closely related to the next stage, variable reward, and will be covered in the next section.

The other factor is ability, which is the flip side of the difficulty of the behavior: the easier the action, the less ability it requires. Therefore, making sharing as easy as possible, or, as mentioned above, having someone else do the sharing for you, lowers the ability required for the action.

The book says that, especially in the early stages, it is more important to thoroughly reduce the required ability than to increase motivation. This is because, while temporary motivation can be created by external techniques such as general persuasion approaches (reciprocity, scarcity, social authority, etc.), these are not effective in group activities where sustainability is required. Instead, motivation must ultimately be raised through the loop of the Hooked Model, which takes time. The required ability, on the other hand, can be lowered through ingenuity, and once lowered, the effect persists.

In summary, in order to take action to make information and communication transparent, it is important to reduce the necessary abilities by creating a system that allows sharing as easily as possible, or by creating a state where information is shared on behalf of others.

5-c. Variable Reward

There are several patterns of rewards, but one that is emphasized as being of fundamental importance in all rewards is that they are variable. If the reward offered as a result of an action is what is expected, people will not pay attention to it. On the other hand, if the rewards are variable, people will continue to feel the novelty and will be motivated to take action. The more variable the reward is, the more effective it will be.

A typical example is a Like or Share on a Facebook post. This is a typical social reward: you don't know how many likes each post will get, or who will share it. The book says this variability holds the poster's attention and makes every post a strong reward.

On the other hand, however, Autonomy must not be compromised in order to strengthen the reward structure and increase engagement. In Self-Determination Theory, which targets people’s self-motivation and how their actions are determined, it is emphasized that maintaining autonomy is an important factor in taking action, as one’s own decisions are not forced (Coercion).

In the Facebook example above, the situation that undermines autonomy is, for example, the automatic visualization of who is reading what you post. This leads to a loss of autonomy in the act of offering rewards. It has been written that a reward system that undermines autonomy will ultimately lead to a reduction in the amount of rewards offered.

Considering information transparency in collective action: if a manager praises someone's sharing of information in the group, that is a non-variable reward; but if "Likes" from other members are added to the sharing, it becomes a typical variable reward. Developed further, it becomes an even more powerful social reward when there is a mechanism for a third party to spread the information to other people who may need it and feed the result back to the original sender. (Since the reward is not forced, it also preserves autonomy.)
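As a sketch of the third-party re-sharing mechanism just described, the following illustrates how choosing re-share targets with some randomness keeps the resulting social feedback variable for the original sender. The interest profiles and matching rule are hypothetical stand-ins:

```python
import random

def reshare_targets(post_topics, member_interests, sender, max_targets=2, rng=None):
    """Suggest third parties who may need a shared post, based on topic
    overlap with their (hypothetical) interest profiles. Sampling from the
    candidates, rather than always notifying the same people, keeps the
    social feedback the original sender receives variable."""
    rng = rng or random.Random()
    candidates = [
        member
        for member, interests in member_interests.items()
        if member != sender and set(post_topics) & set(interests)
    ]
    return rng.sample(candidates, min(max_targets, len(candidates)))
```

Because the targets vary from post to post, the sender cannot predict who will respond, which is exactly the variability the Hooked Model identifies as rewarding.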

Other rewards introduced include those that appeal to the human hunting instinct, such as “Loot box”, and those that appeal to the intrinsic motivation of humans, such as getting achievements in games. However, I can’t think of a good idea that would have a lasting effect on the theme of this post, so I’d like to think about it some other time.

In summary, it seems that you need to build social rewards for transparency of information and communication while maintaining the autonomy of those offering the rewards (not forcing them). Furthermore, you can increase the variability and volume of social rewards by actively and explicitly sharing the information with third parties who may need it.

5-d. Investment

In the investment phase, the book emphasizes the importance of "a bit of work," in line with the psychology that people want to stay consistent with past actions that cost them some effort. The idea is to gradually build up invested time and its results in order to create that consistency. However, consistency built only on the accumulation of actions taken for initial rewards is not very positive; it amounts to being bound by sunk costs. Therefore, it seems better to design the "bit of work" so that it increases the person's own future benefit. (As a side note, the book also says you should think about "a bit of work" that increases user value.)

In line with this theme, the series of actions is to share information and communication in order to increase transparency, but a feedback loop that leads to future returns can be created in a straightforward manner.

The first, as I mentioned earlier, is that by maintaining transparency in communication, you can accurately encourage dialogue with people with whom communication is fading. If this results in smoother progress of the person’s task or leads to new insights, then maintaining communication transparency is not just an action, but an investment in the future.

Second, it reduces the cost of sharing information and makes sharing more effective. By sharing information transparently, it should be possible to infer with reasonable accuracy what information a person should share and with whom. If you also include the third-party re-sharing mechanism described in the previous chapter on rewards, the paths along which information should flow can be clarified without subjectivity. If accurate information sharing can be promoted in this way, a digital entity can suggest to a person what information should be shared and with whom. This lowers the cost of sharing and increases effectiveness. In other words, transparency of information is also an investment in people's future.
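As a sketch of inferring share paths from a transparent sharing history (the log shape and the ranking rule are my own assumptions):

```python
from collections import Counter, defaultdict

def build_share_graph(share_log):
    """From a log of (sender, recipient, topic) shares, count how often
    information on each topic has flowed to each recipient, so future
    shares on that topic can be routed along the paths that actually
    formed, without anyone's subjective judgment."""
    graph = defaultdict(Counter)
    for _sender, recipient, topic in share_log:
        graph[topic][recipient] += 1
    return graph

def suggest_recipients(graph, topic, top_n=3):
    """Suggest who new information on `topic` should reach, ranked by how
    often past information on that topic flowed to them."""
    return [recipient for recipient, _count in graph[topic].most_common(top_n)]
```

The suggestions here are only primary candidates; as discussed later, a human would still make the final call on what actually gets shared.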

These are some ways to make communication and information sharing a habit, using the Hooked Model as a reference. I have no experience with groups that pursue multiple objectives with dilemmas among them, so how well this works there is unknown and needs further study. However, I believe the model will work well at least in groups such as companies, where there are few dilemmas in the objectives themselves.

Now, I would like to discuss how to digitize this model.

6. How to digitize the promotion of communication in collective action

First of all, as a basic premise of digitalization, I do not think everything should be digitalized or automated. From personal experience it is impossible, and from an ethical point of view I do not think all decisions in group activities should be made digitally.

Therefore, I will consider digitization based on the premise that a person and digital technology cooperate. In the following, I will label the entities that facilitate communication between humans and digital as “facilitators”.

Based on the discussion so far, I would like to summarize the roles that facilitators should play, but what I have written at length needs to be carried out in order, keeping in mind the receptivity of the group. The reason for this is that receptivity in a group does not increase all at once, but grows gradually. Paradoxically, in situations where information and communication are not transparent, social rewards do not work, making it difficult to develop habits. Also, if there is no mutual understanding of each other’s utility and there is no trust, the transparency of information and communication will not function properly. Therefore, I will organize the tasks I have discussed so far according to the stage of receptivity of the group.

Based on the previous discussion, the three stages of group receptivity are as follows:

a. Facilitator-centered facilitation (before trust is established)
b. Promotion and networking through learning from others (after trust is established)
c. Adherence through habituation, feedback loop (after transparency)

In each of these stages, I would like to organize the necessary tasks and write a brief discussion of each.

6-a. Facilitator-centered facilitation (before trust is established)

  • The facilitator should understand the frequency and bias of communication in the group and encourage communication toward the group’s goals.
    >>> From experience, it is impossible to create perfect logic, although there are some patterns, such as changes in frequency over time, or little dialogue between people on the same project. Therefore, a learning cycle is needed to determine which patterns should prompt encouragement, based on the principle of Human-in-the-Loop (HITL). On the other hand, although it depends on the person's character, I think encouragement is often easier to accept from a digital presence, which does not create positional bias on the receiver's side. (This hypothesis will be confirmed in the next chapter.)
  • To encourage disclosure of one's own choice, shared expectations of others' behavior, and moralizing, for mutual understanding of each other's utility. Or, the facilitator himself/herself makes self-disclosures, states expectations of others, and sends moralizing messages.
    >>> In my experience, in the beginning the facilitator's own messaging (the latter option above) is very important. However, the facilitator's communication is often hindered by the biases created by his or her position and role, so it may be more effective when delivered by a digital entity. Also, especially for moralizing messages, an AI language model should be able to produce much better sentences than a poorly written human draft. (This hypothesis will be confirmed in the next chapter.)

6-b. Promotion and networking through learning from others (after trust is established)

  • Minimize the cost of communication and keep information transparent, in order to make it easier to take action.
    >>> The simplest and most effective way to share information is through communication channels that include as many people as possible, but this rarely happens unless the group already has deep trust and a respect for transparency. A practical alternative is therefore to pick out the important information emerging in individual channels and, with the sender's permission, share it widely on their behalf. The filtering of important information can be automated to a certain extent, but the final decision is made by a human, and the improvement cycle is also run human-in-the-loop (HITL). On the other hand, as in the example above, a digital entity can often do a better job of relaying the information and obtaining permission on the sender's behalf.
  • Pick up, from the vast amount of communication and information, the triggers that had a noticeable impact on the performance of collective action, and share them as learnings along with the results.
    >>> Much the same as above. It seems that picking up the triggers should be done in HITL, that attributing results and modeling them as learnings should be done by people, and that sharing them should be left to digital entities. (This hypothesis is examined in the next chapter.)
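The pick-and-share flow above can be sketched in code. This is a minimal, hypothetical illustration of the HITL pattern: the automated pass and the re-sharing are mine to invent here (the keyword heuristic stands in for whatever classifier a real system would use), while the share/no-share decision stays with a person.

```python
from dataclasses import dataclass

@dataclass
class Message:
    channel: str
    sender: str
    text: str

# Hypothetical importance heuristic; a real system would likely use a
# trained classifier, improved over time by the HITL feedback cycle.
IMPORTANT_KEYWORDS = {"decision", "blocker", "launch", "incident"}

def importance_score(msg: Message) -> float:
    words = set(msg.text.lower().split())
    return len(words & IMPORTANT_KEYWORDS) / len(IMPORTANT_KEYWORDS)

def filter_candidates(messages, threshold=0.25):
    """Automated first pass: surface likely-important messages."""
    return [m for m in messages if importance_score(m) >= threshold]

def human_approves(msg: Message) -> bool:
    """HITL step: the final share/no-share decision stays with a person.
    Stubbed here; a real system would route this to a reviewer."""
    return True

def share_widely(msg: Message) -> str:
    """The digital entity reposts to a broad channel on the sender's
    behalf (after obtaining the sender's permission)."""
    return f"[shared from #{msg.channel}] {msg.sender}: {msg.text}"

msgs = [
    Message("proj-x", "ai", "lunch plans?"),
    Message("proj-x", "bo", "launch decision: we ship Friday"),
]
shared = [share_widely(m) for m in filter_candidates(msgs) if human_approves(m)]
```

The division of labor mirrors the text: automation narrows the candidates, a human decides, and the digital entity does the actual relaying.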

6-c. Adherence through habituation, feedback loop (after transparency)

  • Design social rewards for transparency of information and communication. Then, to reinforce those social rewards, the facilitator actively and explicitly routes the information to those who may need it.
    >>> Some initial suggestions for redeploying information to reinforce social rewards can be generated by learning what information is being shared and with whom, as described in the previous section. The other elements are the same as above: the final decision is made by humans in HITL, while delivering the communication is left to digital entities.
  • While transparency is maintained, suggest dialogue with members who are less communicative, and suggest what information needs to be shared and with whom.
    >>> These last items are already covered by the mechanisms described above, so they fit naturally into a feedback loop. The loop is likely to work by linking these suggestions, directly or via the actual actions that follow, back into the learning cycle.

The above is a summary of the facilitator's tasks and how they might be realized. In a nutshell, the overall pattern is to have humans and digital entities collaborate in HITL to gather information, generate proposals, and make the decisions that facilitate communication and information sharing, while relying on digital entities to deliver that communication to group members.

The HITL design pattern is described using dialogue AI as an example to illustrate the concept.

One more question remains: can we rely on a digital entity to tell people what to do? Can working from the digital side really encourage people to act, and can a digital entity's behavior improve people's cooperative behavior? We need to think about this point as well.

From my experience of working with dialogue AI, I can clearly say yes, but I will consider this in the next chapter while looking at related research.

7. Can a digital entity encourage people to act?

Since this field is still new and, as far as I know, not yet systematized, I will look at relevant studies and deepen the discussion from there.

7-a. On the cooperative behavior of humans and bots

There is a paper, Cooperating with machines, that investigates the cooperative behavior of humans and bots in games that require cooperation. The paper confirms that human–bot cooperation is on par with human–human cooperation. In addition, when the game includes conversation at each turn, results improve significantly in the human–human case, and a similar improvement appears when a conversational bot plays with a human. (Note that throughout the tests, participants are not told whether their counterpart is a human or a bot.)

In other words, this study confirms the importance of communication in cooperative behavior, and the same improvement through communication can be expected when the counterpart is a bot.

The best performance in this study, incidentally, came from bot–bot cooperation. The reason is that bots never deviate from a cooperation once it is established, and they execute the next action promised in the conversation exactly as stated (they are programmed to do so), whereas humans do not. To verify this, additional tests were run in which people were constrained not to deviate from an established cooperation (= Loyal) and not to act differently from what they had said (= Honest). Under these constraints, the performance of human–human cooperation matched (but did not exceed) that of bot–bot cooperation.

Although the Loyal and Honest rules above were imposed by the experimenters rather than produced by the bot, this suggests that, at the least, encouragement that strengthens conscience and norms, as described in the previous chapter, is effective in raising the performance of cooperative behavior.
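A toy simulation conveys the intuition behind the Loyal/Honest result. This is not the paper's actual algorithm or game; it is a sketch I constructed, using a prisoner's-dilemma-style payoff matrix, of why a player that never breaks an established agreement earns a higher joint payoff than one that occasionally defects.

```python
import random

# One-round payoffs (my_payoff, other_payoff) indexed by (my_move, other_move);
# "C" = cooperate as agreed, "D" = defect from the agreement.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def loyal_honest_bot(history):
    """Once cooperation is agreed, never deviates from it."""
    return "C"

def flaky_player(history, defect_prob=0.2):
    """Cooperates in principle, but occasionally breaks the agreement."""
    return "D" if random.random() < defect_prob else "C"

def joint_payoff(p1, p2, rounds=1000, seed=42):
    """Total payoff of both players: a crude 'team performance' measure."""
    random.seed(seed)
    history, total = [], 0
    for _ in range(rounds):
        m1, m2 = p1(history), p2(history)
        r1, r2 = PAYOFFS[(m1, m2)]
        total += r1 + r2
        history.append((m1, m2))
    return total

bot_bot = joint_payoff(loyal_honest_bot, loyal_honest_bot)
bot_flaky = joint_payoff(loyal_honest_bot, flaky_player)
```

Every defection converts a (3, 3) round into a (0, 5) round, so the flaky pairing's joint payoff falls below the loyal pairing's, mirroring the study's finding that humans match bot–bot performance only when held to the Loyal and Honest constraints.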

This is a bit off topic. Next, I would like to think about the impact of bots in a group.

7-b. On the role of bots in cooperative behavior

The paper Locally Noisy Autonomous Agents Improve Global Human Coordination in Network Experiments examines the effect of the presence of bots as nodes on overall cooperative behavior in a color coordination game, where the goal is for every node to end up a different color from all of its neighbors. In this game, participants and bots do not talk to each other, but the results are interesting.

Overall performance improves when the bot behaves in a noisy way. The bot decides which color to display based only on information from its neighboring nodes, but when, with a probability of 10%, it takes an action that is not locally optimal given that neighbor information, overall performance increases.
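The bot's decision rule can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: the greedy tie-breaking and the palette are mine; what matters is the 10% chance of deviating from the locally optimal choice.

```python
import random

def choose_color(neighbor_colors, palette, noise=0.10, rng=random):
    """Pick a color for this node using only local (neighbor) information.

    With probability `noise`, pick uniformly at random -- the 'noisy'
    behavior from the study. Otherwise pick greedily: the color that
    conflicts with the fewest neighbors.
    """
    if rng.random() < noise:
        return rng.choice(palette)
    # Greedy local choice: minimize conflicts with neighbors.
    return min(palette, key=lambda c: neighbor_colors.count(c))

random.seed(0)
c = choose_color(["red", "red", "green"], ["red", "green", "blue"])
```

With purely greedy nodes, the network can lock into a locally optimal but globally unsolved configuration; the occasional random pick is what shakes it loose.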

There may be real-life situations where this result applies. In a corporate organization, for example, as partial optimization progresses the organization becomes rigid, and the more locally logical each measure is, the more rigid it becomes. In such a situation, it is effective to break up the fixed state and create an opportunity to update the organization into a new structure. From the perspective of partial optimization, that trigger may look like noise. Seen this way, I think the result matches the real-world case.

Moreover, in the real-life example above, it is quite hard for a person to play that role: they face evaluation and blame from others, pressure to stay consistent with their past words and actions, fear of temporarily disrupting the group, and so on. A bot, of course, does not have to worry about such emotional matters, making it the perfect player to increase overall cooperative behavior through noise.

Furthermore, the effectiveness of this noise supports the possibility of more effective communication, information sharing, and dissemination by a digital presence, which I wrote about in the previous chapter. What I mean is that no matter how well we run the learning loop in an HITL structure, it cannot be 100% optimized, given practical operational costs and technical limitations. But a certain amount of action that looks sub-optimal from the human point of view can actually be effective, and it is an action a bot can take positively precisely because it can set emotional issues aside.

Next, let’s look at the impact of bots on group communication.

7-c. The impact of bots on group communication

There is a paper, Vulnerable robots positively shape human conversational dynamics in a human–robot team, that investigates how robot speech affects the communication of an entire team in a game requiring cooperation. The paper assumes that communication has a positive effect on cooperative behavior, and investigates the effect of a robot's speech on the overall amount and distribution of communication in the team. The game is played by teams of three people plus one robot, and the robot in each team speaks in one of three styles: Silent, Neutral, or Vulnerable.

This result is also very interesting.
1. The Vulnerable robot teams more than doubled the communication of the whole team compared to the other teams (Silent and Neutral). They more than doubled not only robot–human communication but human–human communication as well.
2. The Vulnerable robot teams gradually increased their communication as the game rounds progressed. (The Silent and Neutral robot teams hardly increased their amount of communication.)
3. In the Silent robot teams, speaking time varied considerably from person to person. In the Vulnerable robot teams, by contrast, each person spoke roughly evenly.

In the Neutral condition, the robot calmly states facts such as “This round was a success.” The Vulnerable robot makes self-disclosures such as “I’m glad I could contribute, even though I’m not always sure about it,” or tells stories such as “That went well! It reminds me of the comeback in my old soccer game.” Storytelling, praise for the team, and other remarks that carry little information in themselves, but create good conversation starters.

In this study, the robot used was SoftBank Robotics’ NAO, so the participants of course understood that the speech was coming from the robot. Nonetheless, the increase in conversation between people in teams with a Vulnerable robot means that a robot can initiate conversations between people, beyond simply conveying information. Moreover, the trigger does not have to be anything logical or correct; rather, it is self-disclosure that lowers the bar for conversation, or a comment that others can easily react to, as in the speech quoted above.
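The three speech styles above could be modeled, in the simplest possible way, as utterance pools per condition. The phrasings below beyond those quoted from the study are illustrative inventions of mine, not the study's scripts; the point is only that Vulnerable remarks mix self-disclosure, storytelling, and praise rather than task facts.

```python
import random
from typing import Optional

# Utterance pools per robot condition. The Vulnerable entries mix
# self-disclosure, storytelling, and praise -- content whose job is to
# lower the bar for human conversation, not to convey task information.
UTTERANCES = {
    "silent": [],
    "neutral": ["This round was a success.", "This round was a failure."],
    "vulnerable": [
        "I'm glad I could contribute, even though I'm not always sure about it.",
        "That went well! It reminds me of the comeback in my old soccer game.",
        "Sorry, I think my mistake cost us that round.",
        "Great teamwork, everyone!",
    ],
}

def robot_comment(condition: str, rng=random) -> Optional[str]:
    """Return an end-of-round remark for the given robot condition,
    or None for the Silent robot."""
    options = UTTERANCES[condition]
    return rng.choice(options) if options else None
```

A digital facilitator could use the same pattern: append a vulnerable remark to an otherwise factual share, giving the humans in the channel something easy to react to.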

To add to the ideas in the previous chapter: when a digital entity encourages communication and information sharing, or sends something out, it can also stimulate communication between people by adding Vulnerable content to what it says, rather than just stating facts.

Next, I would like to introduce a paper that investigates whether people accept digitally generated advice, although it is not a study on bots.

7-d. Will people accept digitally generated advice?

There is a paper, The corruptive force of AI-generated advice, which investigates whether people will accept advice generated by a language model fine-tuned from GPT-2. The basic task in this study is that participants roll a die unseen and report the result to the experimenter. Before reporting, however, they are shown human- or AI-generated advice that encourages either honest or dishonest reporting, to see whether it changes what they finally report.

The results are as follows:
1. Honesty-promoting advice has no effect, whether generated by a person or by an AI (the outcome is the same as with no advice).
2. Dishonesty-promoting advice is more likely to be followed, regardless of whether it was generated by a human or an AI.
3. Tendencies 1 and 2 hold whether or not participants are told the source of the advice (even knowing it is an AI, honest advice is not followed and dishonest advice is more likely to be followed).

The important point for the purpose of this blog post is that AI-generated advice is just as influential as human-generated advice, even when we know it was created by an AI. Together with the previous papers, this suggests that as long as the words conveyed are natural, it is possible to influence people without a human directly speaking or writing them.

One more thing about the corruptive force that is the theme of this paper: I cannot deny the possibility that a digital entity could negatively affect group performance. In the case of collective action within a company, however, there are few dilemmas over goals, so it seems that unexpected corruptive force can be prevented in HITL.

On the other hand, how to audit human behavior in HITL will be required in the future when we apply it to larger social problems. This may be a long way off, but I would like to think about it a little in the next chapter.

Above, I have followed several related papers on the question of whether digital entities can encourage people to act. In conclusion, I believe that digital entities such as bots can indeed encourage people to cooperate. Not only is it possible, but digital entities also bring advantages of their own, such as activating communication among people and promoting overall optimization through somewhat noisy actions and vulnerable comments.

8. Society-in-the-Loop

In the previous sections, I have discussed several known theories and models of structures that improve cooperative behavior in groups, and I have also looked at their digitization. I have also written about the need for Human-in-the-Loop, in which people and the digital entity cooperate, from both a technological and an ethical perspective.

For example, in the AI regulation proposed by the EU in April 2021, AI services that significantly affect people's lives, such as determining whether someone passes or fails an exam, are classified as high-risk AI.

The improvement of collective action described in this post is mainly to encourage communication and information sharing, not to make critical decisions. However, since it is a new type of service, I think it would be appropriate to operate it through human-digital collaboration, especially in the initial stages.

And as the target group grows larger and the theme approaches bigger social issues, the problem of human bias in HITL also grows. There are other problems with HITL as well. Consider a self-driving car: if the car goes straight, it runs over three pedestrians; if it swerves, the three pedestrians are saved but two passengers are sacrificed. If we think of society as a whole, we should minimize the number of casualties (sacrifice the two passengers), but then the question arises whether people would buy such a car and whether a manufacturer would build one. In this situation, are self-driving cars improved by HITL within the manufacturer good for society as a whole?

The idea proposed by Iyad Rahwan to address issues like those above is Society-in-the-Loop (SITL). The key element of SITL is to prevent inappropriate partial optimization by creating a feedback loop, based on a social contract, to the human controllers of HITL for themes that involve social dilemmas. I think the EU AI proposal mentioned above is one example of this.

You may think I am looking too far ahead, since the topic of this post is still in its early stages. However, as with the Attention Economy, the theme of The Social Dilemma, we are all facing social problems such as addiction, polarization, and misinformation as side effects of corporate economic activity. So I think it is important to predict and discuss the impact on society from the earliest stage of a theme like this one.

9. In the end

It has been very long. In conclusion, while considering the impact of future developments on society, it seems that the performance of social groups with specific objectives, such as teams and organizations, can be greatly enhanced by digital entities.

This digital entity is not a mere tool, nor is it simply an AI with an interactive interface, as the word “bot” implies, but a virtual being with a character that sometimes says unnecessary things.

Furthermore, it will not be something that decides things or manipulates people’s behavior, but rather it will be a Catalyst that encourages communication and improves the performance of the group.

With recent changes in lifestyle, especially at companies where remote work has become common and schools where online classes are the standard, I think the communication problem will only grow. Even apart from that, how to maximize a team's performance is a theme that will exist as long as teams exist, so the applications seem wide. Incidentally, if a Virtual Being works within an organization, its value is easy to measure, because it can be evaluated under that organization's own human evaluation system.

While gradually expanding these activities, starting from within organizations, I hope that eventually Virtual Beings that think from the perspective of society as a whole, free of position talk, will collaborate with one another and help solve the dilemmas between different organizations and society as a whole.

That’s all for now.

If you’ve read this far, you’ve either been patient and very sympathetic, or you’ve just scrolled on by. Either way, if you have an issue related to this theme and want to solve it, or if you are interested in making this happen, please feel free to contact me on LinkedIn.

Last but not least, I was able to learn a great deal online, including from the papers, Wikipedia, YouTube, books, and other references I have not directly mentioned here. I think it's wonderful to live in an age with instant access to so much information online, from which new ideas can be created. I've never met any of the people who put that information out there, but I'm very grateful to them for sharing it through various media.
