Influence of augmented humans in online interactions during voting events

  • Massimo Stella (Scholar)
  • Marco Cristoforetti (Scholar)
  • Manlio De Domenico (Scholar)
  • Abstract: Overwhelming empirical evidence has shown that online social dynamics mirrors real-world events. Hence, understanding the mechanisms leading to social contagion in online ecosystems is fundamental for predicting, and even maneuvering, human behavior. It has been shown that one of such mechanisms is based on fabricating armies of automated agents that are known as social bots. Using the recent Italian elections as an emblematic case study, here we provide evidence for the existence of a special class of highly influential users, that we name “augmented humans”. They exploit bots for enhancing both their visibility and influence, generating deep information cascades to the same extent of news media and other broadcasters. Augmented humans uniformly infiltrate across the full range of identified clusters of accounts, the latter reflecting political parties and their electoral ranks.
  • Bruter and Harrison [19] shift the focus on the psychological influence that electoral arrangements exert on voters by altering their emotions and behavior. The investigation of voting from a cognitive perspective leads to the concept of electoral ergonomics: Understanding optimal ways in which voters emotionally cope with voting decisions and outcomes leads to a better prediction of the elections. (pg 1)
  • Most of the Twitter interactions are from humans to bots (46%); humans tend to interact with bots in 56% of mentions, 41% of replies and 43% of retweets. Bots interact with humans in roughly 4% of the interactions, independently of the interaction type. This indicates that bots play a passive role in the network but are rather highly mentioned/replied/retweeted by humans. (pg 2)
  • bots’ locations are distributed worldwide and they are present in areas where no human users are geo-localized such as Morocco.  (pg 2)
  • Since the number of social interactions (i.e., the degree) of a given user is an important estimator of the influence of the user itself in online social networks [17, 22], we consider a null model fixing users’ degrees while randomizing their connections, also known as the configuration model [23, 24]. (pg 2)
  • During the whole period, bot-bot interactions are more likely than random (Δ > 0), indicating that bots tend to interact more with other bots than with humans (Δ < 0) during the Italian elections. Since interactions often encode the spread of a given content online [16], the positive assortativity highlights that bots share content mainly with each other and hence can resonate with the same content, be it news or spam. (pg 2)
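The degree-preserving null-model comparison described in these notes can be sketched as follows. This is a toy illustration, not the paper's code: accounts, labels, and edges are invented, and the configuration model is approximated by stub shuffling (self-loops tolerated for simplicity).

```python
# Sketch: is the bot-bot interaction fraction higher than expected under a
# null model that fixes each user's degree but randomizes connections
# (configuration model)? Toy data; Δ > 0 would indicate bot-bot assortativity.
import random

random.seed(0)

# edges: (user_a, user_b); is_bot: user -> bool (all invented)
edges = [("b1", "b2"), ("b1", "h1"), ("b2", "h2"), ("h1", "h2"), ("b2", "b3")]
is_bot = {"b1": True, "b2": True, "b3": True, "h1": False, "h2": False}

def bot_bot_fraction(edge_list):
    return sum(is_bot[u] and is_bot[v] for u, v in edge_list) / len(edge_list)

observed = bot_bot_fraction(edges)

# Null model: keep every user's degree by shuffling the stubs (half-edges)
# and re-pairing them, then average over many randomizations.
stubs = [u for e in edges for u in e]
null_vals = []
for _ in range(2000):
    random.shuffle(stubs)
    rewired = list(zip(stubs[0::2], stubs[1::2]))
    null_vals.append(bot_bot_fraction(rewired))

delta = observed - sum(null_vals) / len(null_vals)
print(f"observed={observed:.3f}, delta={delta:+.3f}")
```

On this toy graph the bots' stubs are over-represented among each other relative to chance, so Δ comes out positive, mirroring the paper's qualitative finding.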
  • Differently from previous works, where the semantic content of bots and humans differs in its emotional polarity [12], here we find that bots mainly repeat the same political content of human users, thus boosting the spreading of hashtags strongly related to the electoral process, such as hashtags referring to the government or to political victory, names of political parties or names of influential politicians (see also 3). (pg 4)
  • Frequencies of individual hashtags during the whole electoral process display some interesting shifts, reported in Table III (Top). For instance, the hashtag #exitpoll, indicating the electoral outcome, becomes 10000 times more frequent on the voting day than before March 4. These shifts indicate that the frequency of hashtags reflects real-world events, thus underlining the strong link between online social dynamics and the real-world electoral process. (pg 4)
  • TABLE II. Top influencers are mostly bots. Hubs characterize influential users and broadcasters in online social systems [17], hence we use degree rankings for identifying the most influential users in the network. (pg 5)
  • bots are mostly influential nodes which tend to interact mostly with other bots rather than humans and, when they interact with human users, they preferentially target the most influential ones. (pg 5)
  • we first filter the network by considering only pairs of users with at least one retweet, in either direction, because re-sharing content is often a good proxy of social endorsement [21]. However, retweets alone are not sufficient to wash out the noise intrinsic to systems like Twitter, therefore we apply a more selective restriction, by requiring that at least another social action – i.e., either mention or reply – must be present in addition to a retweet [12]. This restrictive selection allows one to filter out all spurious interactions among users with the advantage of not requiring any thresholding approach with respect to the frequency of interactions themselves. (pg 5)
  • The resulting network is what we call the social bulk, i.e. a network core of endorsement and exchange among users. By construction, information ows among users who share strong social relationships and are characterized by similar ideologies: in fact, when a retweet goes from one user to another one, both of them are endorsing the same content, thus making non-directionality a viable approach for representing the endorsement related to content sharing. (pg 5)
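The two-step filter that produces the social bulk can be sketched in a few lines. This is a minimal illustration on an invented interaction log, not the paper's pipeline: a pair survives only if it has at least one retweet plus at least one mention or reply.

```python
# Sketch of the "social bulk" filter: keep a pair of users only if they
# share a retweet (either direction) AND at least one other social action
# (mention or reply). Toy interaction records, all names invented.
from collections import defaultdict

# (source, target, type) interaction records
interactions = [
    ("alice", "bob", "retweet"),
    ("bob", "alice", "mention"),
    ("alice", "carol", "retweet"),   # retweet only: filtered out
    ("bob", "carol", "mention"),     # no retweet: filtered out
]

by_pair = defaultdict(set)
for src, dst, kind in interactions:
    pair = frozenset((src, dst))     # undirected: both endpoints endorse the content
    by_pair[pair].add(kind)

social_bulk = [
    tuple(sorted(pair))
    for pair, kinds in by_pair.items()
    if "retweet" in kinds and kinds & {"mention", "reply"}
]
print(social_bulk)  # [('alice', 'bob')]
```

Note the non-directionality argued for in the note above: the pair is stored as a frozenset, so a retweet in either direction counts toward the same link.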
  • Fiedler partitioning
  • The relevant literature has used the term “cyborg” for identifying indistinctly bot-assisted human or human-assisted bot accounts generating spam content over social platforms such as Twitter [5, 35]. Here, we prefer to use the term “augmented human” for indicating specifically those human accounts exploiting bots for artificially increasing, i.e. augmenting, their influence in online social platforms, analogously to physical augmentation improving human performances in the real world [36]. (pg 8)
  • Baseline social behavior is defined by the medians of the two observables, as shown in Fig. 6c. This map allows one to easily identify four categories of individuals in the social dynamics: i) hidden influentials, generating information cascades rapidly spreading from a small number of followers; ii) influentials, generating information cascades rapidly spreading from a large number of followers; iii) broadcasters, generating information cascades slowly spreading from a large number of followers; iv) common users, generating information cascades slowly spreading from a small number of followers. (pg 9)
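The median-split map can be sketched as a two-axis classifier. All user names and numbers below are invented for illustration; the paper derives the two observables from actual cascade data.

```python
# Four-quadrant classification of users by cascade spreading speed and
# follower count, relative to the median of each observable (toy values).
import statistics

# user -> (cascade_speed, n_followers)
users = {
    "u1": (9.0, 120),     # fast, few followers  -> hidden influential
    "u2": (8.0, 90000),   # fast, many followers -> influential
    "u3": (1.0, 50000),   # slow, many followers -> broadcaster
    "u4": (0.5, 80),      # slow, few followers  -> common user
}

speed_med = statistics.median(s for s, _ in users.values())
foll_med = statistics.median(f for _, f in users.values())

def category(speed, followers):
    fast, many = speed > speed_med, followers > foll_med
    if fast and not many:
        return "hidden influential"
    if fast and many:
        return "influential"
    if not fast and many:
        return "broadcaster"
    return "common user"

labels = {u: category(s, f) for u, (s, f) in users.items()}
print(labels)
```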
  • Hidden influentials, known to be efficient spreaders in viral phenomena [45], are mostly humans: into this category fall the augmented humans, assisted by social bots to increase their online visibility. (pg 10)
  • We define augmented humans as human users having at least 50% + 1 of bot neighbours in the social bulk. We discard users having less than 3 interactions in the social bulk. (pg 10)
  • The most central augmented human in terms of number of social interactions is Utente01, who interacts with 2700 bots and 55 humans in the social bulk. (pg 10)
  • The above cascade analysis reveals that almost 2 out of 3 augmented humans turned out to play an important role in the flow of online content: 67% of augmented humans were either influentials, hidden influentials or broadcasters. These results strongly support the idea that via augmentation even common users can become social influencers without having a large number of followers/friends, but rather by resorting to the aid of either armies of bots (e.g., Utente01, a hidden influential) or the selection of a few key helping bots. (pg 11)
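The augmented-human rule quoted in these notes (strict bot majority among social-bulk neighbours, at least 3 interactions) can be sketched directly. The edges and bot labels below are invented toy data:

```python
# Detect "augmented humans": human users whose social-bulk neighbours are
# majority bots (at least 50% + 1), discarding users with < 3 interactions.
from collections import defaultdict

# undirected social-bulk edges and a bot/human label per account (toy)
edges = [("u1", "b1"), ("u1", "b2"), ("u1", "b3"), ("u1", "h1"),
         ("h1", "b1"), ("h1", "h2")]
is_bot = {"u1": False, "h1": False, "h2": False,
          "b1": True, "b2": True, "b3": True}

neighbours = defaultdict(set)
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)

augmented = [
    u for u, nbrs in neighbours.items()
    if not is_bot[u]
    and len(nbrs) >= 3                                  # discard < 3 interactions
    and sum(is_bot[n] for n in nbrs) * 2 > len(nbrs)    # strict bot majority
]
print(augmented)  # ['u1']
```

Here u1 has 3 bot neighbours out of 4 and is flagged; h1 has only 1 bot out of 3 neighbours and is not.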

Beyond Individual Choice: Teams and Frames in Game Theory

  • Michael Bacharach
  • Natalie Gold
  • Robert Sugden
  • From Amazon.com: In the classical tradition of game theory, Bacharach models human beings as rational actors, but he revises the standard definition of rationality to incorporate two major new ideas. He enlarges the model of a game so that it includes the ways agents describe to themselves (or “frame”) their decision problems. And he allows the possibility that people reason as members of groups (or “teams”), each taking herself to have reason to perform her component of the combination of actions that best achieves the group’s common goal. Bacharach shows that certain tendencies for individuals to engage in team reasoning are consistent with recent findings in social psychology and evolutionary biology.
  • The following list of notes is oldest (bottom) to newest (top)
  • It is a central component of resolute choice, as presented by McClennen, that (unless new information becomes available) later transient agents recognise the authority of plans made by earlier agents. Being resolute just is recognising that authority (although McClennen’s arguments for the rationality and psychological feasibility of resoluteness apply only in cases in which the earlier agents’ plans further the common ends of earlier and later agents). This feature of resolute choice is similar to Bacharach’s analysis of direction, explained in section 5. If the relationship between transient agents is modelled as a sequential game, resolute choice can be thought of as a form of direction, in which the first transient agent plays the role of director; the plan chosen by that agent can be thought of as a message sent by the director to the other agents. To the extent that each later agent is confident that this plan is in the best interests of the continuing person, that confidence derives from the belief that the first agent identified with the person and that she was sufficiently rational and informed to judge which sequence of actions would best serve the person’s objectives. (pg 197)
  • The problem posed by Heads and Tails is not that the players lack a common understanding of salience; it is that game theory lacks an adequate explanation of how salience affects the decisions of rational players. All we gain by adding preplay communication to the model is the realisation that game theory also lacks an adequate explanation of how costless messages affect the decisions of rational players. (pg 180)
  • The fundamental principle of this morality is that what each agent ought to do is to co-operate, with whoever else is co-operating, in the production of the best consequences possible given the behaviour of non-co-operators’ (Regan 1980, p. 124). (pg 167)
  • Ordered On Social Facts
    • Are social groups real in any sense that is independent of the thoughts, actions, and beliefs of the individuals making up the group? Using methods of philosophy to examine such longstanding sociological questions, Margaret Gilbert gives a general characterization of the core phenomena at issue in the domain of human social life.
  • Schema 3: Team reasoning (from a group viewpoint) pg 153
    • We are the members of S.
    • Each of us identifies with S.
    • Each of us wants the value of U to be maximized.
    • A uniquely maximizes U.
    • Each of us should choose her component of A.
  • Schema 4: Team reasoning (from an individual viewpoint) pg 159
    • I am a member of S.
    • It is common knowledge in S that each member of S identifies
      with S.
    • It is common knowledge in S that each member of S wants the
      value of U to be maximized.
    • It is common knowledge in S that A uniquely maximizes U.
    • I should choose my component of A.
  • Schema 7: Basic team reasoning pg 161
    • I am a member of S.
    • It is common knowledge in S that each member of S identifies
      with S.
    • It is common knowledge in S that each member of S wants the
      value of U to be maximized.
    • It is common knowledge in S that each member of S knows his
      component of the profile that uniquely maximizes U.
    • I should choose my component of the profile that uniquely
      maximizes U.

      • Bacharach notes to himself the ‘hunch’ that this schema is ‘the basic rational capacity’ which leads to high in Hi-Lo, and that it ‘seems to be indispensable if a group is ever to choose the best plan in the most ordinary organizational circumstances’. Notice that Schema 7 does not require that the individual who uses it know everyone’s component of the profile that maximizes U.
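The contrast between the team-reasoning schemas above and classical best-response reasoning can be made concrete on Hi-Lo. The payoff numbers below are assumed for illustration (both play "hi": 2 each; both "lo": 1 each; mismatch: 0): team reasoning selects the profile that uniquely maximizes the shared payoff U, while the classical Nash criterion cannot choose between the two equilibria.

```python
# Hi-Lo: team reasoning vs classical equilibrium reasoning (toy payoffs).
payoff = {
    ("hi", "hi"): 2,
    ("lo", "lo"): 1,
    ("hi", "lo"): 0,
    ("lo", "hi"): 0,
}
actions = ["hi", "lo"]
profiles = [(a, b) for a in actions for b in actions]

# Team reasoning: find the profile that uniquely maximizes U, then each
# member chooses her component of it (Schema 3/4 above).
team_profile = max(profiles, key=payoff.get)

# Classical reasoning: a profile is a Nash equilibrium if no player gains
# by deviating alone -- this criterion leaves both equilibria standing.
def is_nash(p):
    return all(
        payoff[p] >= payoff[tuple(dev if j == i else p[j] for j in range(2))]
        for i in range(2) for dev in actions
    )

nash = [p for p in profiles if is_nash(p)]
print(team_profile, nash)
```

The point of the book's "mystery facts" about Hi-Lo is exactly this gap: team reasoning yields ("hi", "hi") directly, whereas the Nash criterion returns both ("hi", "hi") and ("lo", "lo") with no way to select between them.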
  • His hypothesis is that group identification is an individual’s psychological response to the stimulus of a particular decision situation. It is not in itself a group action. (To treat it as a group action would, in Bacharach’s framework, lead to an infinite regress.) In the theory of circumspect team reasoning, the parameter w is interpreted as a property of a psychological mechanism-the probability that a person who confronts the relevant stimulus will respond by framing the situation as a problem ‘for us’. The idea is that, in coming to frame the situation as a problem ‘for us’, an individual also gains some sense of how likely it is that another individual would frame it in the same way; in this way, the value of w becomes common knowledge among those who use this frame. (Compare the case of the large cube in the game of Large and Small Cubes, discussed in section 4 of the introduction.) Given this model, it seems that the ‘us’ in terms of which the problem is framed must be determined by how the decision situation first appears to each individual. Thus, except in the special case in which w = 1, we must distinguish S (the group with which individuals are liable to identify, given the nature of the decision situation) from T (the set of individuals who in fact identify with S). pg 163
  • The psychology of group identity allows us to understand that group identification can be due to factors that have nothing to do with the individual preferences. Strong interdependence and other forms of common individual interest are one sort of favouring condition, but there are many others, such as comembership of some existing social group, sharing a birthday, and the artificial categories of the minimal group paradigm. (pg 150)
  • Wherever we may expect group identity we may also expect team reasoning. The effect of team reasoning on behavior is different from that of individualistic reasoning. We have already seen this for Hi-Lo. This has wide implications. It makes the theory of team reasoning a much more powerful explanatory and predictive theory than it would be if it came on line only in games with the right kind of common interest. To take just one example, if management brings it about that the firm’s employees identify with the firm, we may expect them to team-reason and so to make choices that are not predicted by the standard theories of rational choice. (pg 150)
  • As we have seen, the same person passes through many group identities in the flux of life, and even on a single occasion more than one of these identities may be stimulated. So we will need a model of identity in which the probability of a person’s identification is distributed over not just two alternatives-personal self-identity or identity with a fixed group-but, in principle, arbitrarily many. (pg 151)
  • The explanatory potential of team reasoning is not confined to pure coordination games like Hi-Lo. Team reasoning is assuredly important for its role in explaining the mystery facts about Hi-Lo; but I think we have stumbled on something bigger than a new theory of behaviour in pure coordination games. The key to endogenous group identification is not identity of interest but common interest giving rise to strong interdependence. There is common interest in Stag Hunts, Battles of the Sexes, bargaining games and even Prisoner’s Dilemmas. Indeed, in any interaction modelable as a ‘mixed motive’ game there is an element of common interest. Moreover, in most of the landmark cases, including the Prisoner’s Dilemma, the common interest is of the kind that creates strong interdependence, and so on the account of chapter 2 creates pressure for group identification. And given group identification, we should expect team reasoning. (pg 144)
  • There is a second evolutionary argument in favour of the spontaneous team-reasoning hypothesis. Suppose there are two alternative mental mechanisms that, given common interest, would lead humans to act to further that interest. Other things being equal, the cognitively cheapest reliable mechanism will be favoured by selection. As Sober and Wilson (1998) put it, mechanisms will be selected that score well on availability, reliability and energy efficiency. Team reasoning meets these criteria; more exactly, it does better on them than the alternative heuristics suggested in the game theory and psychology literature for the efficient solution of common-interest games. (pg 146)
  • BIC_pg 149 (pg 149)
  • I think MB is getting at the theory for why there is explore/exploit in populations
  • We have progressed towards a plausible explanation of the behavioural fact about Hi-Lo. It is explicable as an outcome of group identification by the players, because this is likely to produce a way of reasoning, team reasoning, that at once yields A. Team reasoning satisfies the conditions for the mode-P reasoning that we concluded in chapter 1 must be operative if people are ever to reason their way to A. It avoids magical thinking. It takes the profile-selection problem by the scruff of the neck. What explains its onset is an agency transformation in the mind of the player; this agency transformation leads naturally to profile-based reasoning and is a natural consequence of self-identification with the player group. (pg 142)
  • Hi-Lo induces group identification. A bit more fully: the circumstances of Hi-Lo cause each player to tend to group-identify as a member of the group G whose membership is the player-set and whose goal is the shared payoff. (pg 142)
  • If what induces A-choices is a piece of reasoning which is part of our mental constitution, we are likely to have the impression that choosing A is obviously right. Moreover, if the piece of reasoning does not involve a belief that the coplayer is bounded, we will feel that choosing A is obviously right against a player as intelligent as ourselves; that is, our intuitions will be an instance of the judgemental fact. I suspect, too, that if the reasoning schema we use is valid, rather than involving fallacy, our intuitions of reality are likely to be more robust. Later I shall argue that team reasoning is indeed nonfallacious. (pg 143)
    • I think this is more than “as intelligent as ourselves”, I think this is a position/orientation/velocity case. I find it compelling that people with different POVs regard each other as ‘stupid’
    • When framing tendencies are culture-wide, people in whom a certain frame is operative are aware that it may be operative in others; and if its availability is high, those in it think that it is likely to be operative in others. Here the framing tendency is-so goes my claim-universal, and a fortiori it is culture-wide. (pg 144)
    • But for the theory of endogenous team reasoning there are two differences between the Hi-Lo case and these other cases of strong interdependence. First, outside Hi-Los there are counterpressures towards individual self-identification and so I-framing of the problem. In my model this comes out as a reduction in the salience of the strong interdependence, or an increase in that of other features. One would expect these pressures to be very strong in games like Prisoner’s Dilemma, and the fact that C rates are in the 40 per cent range rather than the 90 per cent range, so far from surprising, is a prediction of the present theory. (pg 144)
      • This is where MB starts to get to explore/exploit in populations. There are pressures that drive groups together and apart. And as individuals, our thresholds for group identification vary
  • Now it is the case, and increasingly widely recognized to be, that in games in general there’s no way players can rationally deliberate to a Nash equilibrium. Rather, classical canons of rationality do not in general support playing in Nash equilibria. So it looks as though shared intentions cannot, in the general run of games, by classical canons, be rationally formed! And that means in the general run of life as well. This is highly paradoxical if you think that rational people can have shared intentions. The paradox is not resolved by the thought that when they do, the context is not a game: any situation in which people have to make the sorts of decisions that issue in shared intentions must be a game, which is, after all, just a situation in which combinations of actions matter to the combining parties. (pg 139)
  • Turn to the idea that a joint intention to do (x,y) is rationally produced in 1 and 2 by common knowledge of two conditional intentions: Pl has the intention expressed by ‘I’ll do x if and only if she does y’, and P2 the counterpart one. Clearly P1 doesn’t have the intention to do x if and only if P2 in fact does y whether or not Pl believes P2 will do y; the right condition must be along the lines of:
    (C1) P1 intends to do x if and only if she believes P2 will do y. (pg 139)

    • So this is in belief space, and belief is based on awareness and trust
  • There are two obstacles to showing this, one superable, the other not, I think. First, there are two Nash equilibria, and nothing in the setup to suggest that some standard refinement (strengthening) of the Nash equilibrium condition will eliminate one. However, I suspect that my description of the situation could be refined without ‘changing the subject’. Perhaps the conditional intention Cl should really be ‘I’ll do x if and only if she’ll do y, and that’s what I would like best’. For example, if x and y are the two obligations in a contract being discussed, it is natural to suppose that Pl thinks that both signing would be better than neither signing. If we accept this gloss then the payoff structure becomes a Stag Hunt – Hi-Lo if both are worse off out of equilibrium than in the poor equilibrium (x’ ,y’). To help the cause of rationally deriving the joint intention (x,y), assume the Hi-Lo case. What are the prospects now? As I have shown in chapter 1, there is no chance of deriving (x,y) by the classical canons, and the only (so far proposed) way of doing so is by team reasoning. (pg 140)
  • The nature of team reasoning, and of the conditions under which it is likely to be primed in individual agents, has a consequence that gives further support to this claim. This is that joint intentions arrived at by the route of team reasoning involve, in the individual agents, a ‘sense of collectivity’. The nature of team reasoning has this effect, because the team reasoner asks herself not ‘What should I do?’ but ‘What should we do?’ So, to team-reason, you must already be in a frame in which first-person plural concepts are activated. The priming conditions for team reasoning have this effect because, as we shall see later in this chapter, team reasoning, for a shared objective, is likely to arise spontaneously in an individual who is in the psychological state of group-identifying with the set of interdependent actors; and to self-identify as a member of a group essentially involves a sense of collectivity. (pg 141)
  • One of the things that MB seems to be saying here is that group identification has two parts. First is the self-identification with the group. Second is the mechanism that supports that framing. You can’t belong to a group you don’t see.
  • To generalize the notions of team mechanism and team to unreliable contexts, we need the idea of the profile that gets enacted if all the agents function under a mechanism. Call this the protocol delivered by the mechanism. The protocol is, roughly, what everyone is supposed to do, what everyone does if the mechanism functions without any failure. But because there may well be failures, the protocol of a mechanism may not get enacted, some agents not playing their part but doing their default actions instead. For this reason the best protocol to have is not in general the first-best profile o*. In judging mechanisms we must take account of the states of the world in which there are failures, with their associated probabilities. How? Put it this way: if we are choosing a mechanism, we want one that delivers the protocol that maximizes the expected value of U. (pg 131)
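The point that the best protocol need not be the first-best profile o* can be shown with a tiny worked example. The payoffs, failure probability, and default action below are all invented: agents fail independently and, on failure, play a default action, so we score each candidate protocol by the expected value of U over all failure patterns.

```python
# Sketch: choosing the protocol that maximizes expected U when agents can
# fail (prob. p_fail) and fall back to a default action. Toy payoffs.
from itertools import product

p_fail = 0.4                       # per-agent failure probability (assumed)
default = "safe"

# shared payoff U: coordinated "risky" is first-best, a lone "risky" is bad
def U(a1, a2):
    table = {("risky", "risky"): 10, ("safe", "safe"): 6,
             ("risky", "safe"): 0, ("safe", "risky"): 0}
    return table[(a1, a2)]

def expected_U(protocol):
    """Average U over all failure patterns; a failed agent plays default."""
    total = 0.0
    for fails in product([False, True], repeat=2):
        prob = 1.0
        acts = []
        for intended, failed in zip(protocol, fails):
            prob *= p_fail if failed else (1 - p_fail)
            acts.append(default if failed else intended)
        total += prob * U(*acts)
    return total

first_best = ("risky", "risky")
candidates = [first_best, ("safe", "safe")]
best = max(candidates, key=expected_U)
print({c: expected_U(c) for c in candidates}, "->", best)
```

With these numbers the first-best profile ("risky", "risky") yields an expected U of only 4.56 once failures are priced in, while the modest ("safe", "safe") protocol guarantees 6: exactly the gap between first-best profile and best protocol that the passage describes.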
  • Group identification is a framing phenomenon. Among the many different dimensions of the frame of a decision-maker is the ‘unit of agency’ dimension: the framing agent may think of herself as an individual doer or as part of some collective doer. The first type of frame is operative in ordinary game-theoretic, individualistic reasoning, and the second in team reasoning. The concept-clusters of these two basic framings center round ‘I/ she/he’ concepts and ‘we’ concepts respectively. Players in the two types of frame begin their reasoning with the two basic conceptualizations of the situation, as a ‘What shall I do?’ problem, and a ‘What shall we do?’ problem, respectively. (pg 137)
  • A mechanism is a general process. The idea (which I here leave only roughly stated) is of a causal process which determines (wholly or partly) what the agents do in any simple coordination context. It will be seen that all the examples I have mentioned are of this kind; contrast a mechanism that applies, say, only in two-person cases, or only to matching games, or only in business affairs. In particular, team reasoning is this kind of thing. It applies to any simple coordination context whatsoever. It is a mode of reasoning rather than an argument specific to a context. (pg 126)
  • In particular, [if U is Paretian] the correct theory of Hi-Lo says that all play A. In short, an intuition in favour of C’ supports A-playing in Hi-Lo if we believe that all players are rational and there is one rationality. (pg 130)
    • Another form of dimension reduction – “We are all the same”
  • There are many conceivable team mechanisms apart from simple direction and team reasoning; they differ in the way in which computation is distributed and the pattern of message sending. For example, one agent might compute o* and send instructions to the others. With the exception of team reasoning, these mechanisms involve the communication of information. If they do I shall call them modes of organization or protocols. (pg 125)
  • BIC_102 (pg 102)
  • BIC107 (pg 107)
  • BIC107b (pg 107)
  • Evolutionary reasons for cooperation as group fitness, where group payoff is maximized. This makes the stag salient in stag hunt.
  • Explaining the evolution of any human behavior trait (say, a tendency to play C in Prisoner’s Dilemmas) raises three questions. The first is the behavior selection question: why did this trait, rather than some other, get selected by natural selection? Answering this involves giving details of the selection process, and saying what made the disposition confer fitness in the ecology in which selection took place. But now note that ‘When a behavior evolves, a proximate mechanism also must evolve that allows the organism to produce the target behavior. Ivy plants grow toward the light. This is a behavior, broadly construed. For phototropism to evolve, there must be some mechanism inside of ivy plants that causes them to grow in one direction rather than in another’ (Sober and Wilson 1998, pp. 199-200). This raises the second question, the production question: how is the behavior produced within the individual-what is the ‘proximate mechanism’? In the human case, the interest is often in a psychological mechanism: we ask what perceptual, affective and cognitive processes issue in the behavior. Finally, note that these processes must also have evolved, so an answer to the second question brings a third: why did this proximate mechanism evolve rather than some other that could have produced the same behavior? This is the mechanism selection question. (pg 95)
    • These are good questions to answer, or at least address. Roughly, I think my answers are
      • Selection Question: The three phases are a very efficient way to exploit an environment
      • Production Question: Neural coupling, as developed in physical swarms and moving on to cognitive clustering
      • Mechanism Question: Oscillator frequency locking provides a natural foundation for collective behavior. Dimension reduction is how axes are selected for matching.
  • “We need to know, in detail, what deliberations are like that people engage in when they group-identify”. Also, agency transformation: AgencyTransformation
  • Dimension reduction is a form of induced conceptual myopia (pg 89)? Conceptual Myopia
  • GroupIdentification
  • Group as Frame
  • Categorization and bias