
This page is an open forum for proposing and discussing definitions of terms used for describing emotions and related concepts. A more formal recommendation for a glossary of definitions will be issued by the HUMAINE workpackage on Theories and Models of emotion.

As a starting point, this page contains provisional working definitions from the first round of HUMAINE deliverables.

(originally from deliverable D3c):

Appraisal

In emotion psychology, the term appraisal refers specifically to the cognitive evaluation antecedent to an emotional episode. Central to this concept is the notion that different individuals (with different motives, goals, norms, …) will appraise the same event/situation in different ways and, consequently, show different emotional reactions. Appraisal models are characterized by the appraisal dimensions they include – i.e. the aspects of the event/situation that have to be appraised by an organism in order to elicit an emotional reaction. Scherer has labeled the appraisal dimensions included in his model "stimulus evaluation checks" (SECs). In Scherer's model, the SECs are processed sequentially. In his view, the relevance of the situation/event is appraised first, followed by the implications of the situation/event for the goals and needs of the individual. The assessment of the individual's coping potential (evaluated control over, and personal power in, the situation) takes place subsequently and, finally, the compatibility of the event/situation with the norms and standards of the individual is assessed.
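As a rough illustration, this sequential view can be rendered as a fixed-order series of checks. The sketch below is only a caricature: the event encoding, check names, numeric ranges, and relevance threshold are invented simplifications, not Scherer's actual model.

```python
# Illustrative sketch of sequential appraisal in the spirit of Scherer's SECs.
# The event encoding, check names and threshold are invented for illustration.

def appraise(event, relevance_threshold=0.1):
    """Run the checks in the fixed order described above: relevance,
    implications, coping potential, normative significance."""
    outcomes = {"relevance": event.get("novelty", 0.0)}
    if outcomes["relevance"] < relevance_threshold:
        return outcomes  # an irrelevant event triggers no further checks
    outcomes["implications"] = event.get("goal_conduciveness", 0.0)
    outcomes["coping"] = event.get("control", 0.0) + event.get("power", 0.0)
    outcomes["norm_compatibility"] = event.get("norm_compatibility", 0.0)
    return outcomes
```

Note that the early return captures the sequential claim: checks downstream of relevance simply never run for an irrelevant event.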

 

Motivation

Motivation is closely related to emotional reactions, both as an antecedent factor and as an outcome (consequence).

As antecedents, motives contribute to the differentiation of emotional reactions. Individuals will differ in the emotional reaction they show in a similar situation, depending on their motivations in that situation (see the notion of appraisal above; in this sense motives include needs, interests, desires, …).

In turn, motives are affected by emotional reactions. For example, 'fear' would entail a motivation to flight and 'anger' a motivation to fight.

Some authors consider motivation to be a central component of emotional reactions (see next section: motivational models). In their view, the action tendency component (also called action readiness or motor preparation) is the essence of the emotional reaction (see Frijda, 1986).

Approach and avoidance are two fundamental motives (or action tendencies). They are sometimes considered as the two ends of a single dimension. But this view is largely questionable (as indicated by the existence of possible ambivalences, when one aspect of a situation triggers avoidance, whereas another aspect of the same situation triggers approach).

 

 

Feeling

The term emotion is sometimes used in reference to the emotional feeling. The famous controversy between William James and his opponents relied largely on this confusion. When William James stated:

"My thesis is that […] the bodily changes follow directly the PERCEPTION of the exciting fact, and that our feeling of the same changes as they occur IS the emotion" (James, 1884, p. 190), he was referring to what we would call today the emotional feeling. With this statement, James was stressing the importance of the peripheral physiological reactions for the subjective experience of the emotional reaction, as one of his later statements indicates: "Common sense says, we lose our fortune, are sorry and weep; we meet a bear, are frightened and run; we are insulted by a rival, are angry and strike. The hypothesis here to be defended says that this order of sequence is incorrect, that the one mental state is not immediately induced by the other, that the bodily manifestations must first be interposed between, and that the more rational statement is that we feel sorry because we cry, angry because we strike, afraid because we tremble, and not that we cry, strike, or tremble, because we are sorry, angry, or fearful, as the case may be. Without the bodily states following on the perception, the latter would be purely cognitive in form, pale, colourless, destitute of emotional warmth. We might then see the bear, and judge it best to run, receive the insult and deem it right to strike, but we could not actually feel afraid or angry." (James, 1884, p. 190).

This view – of the embodiment of emotional feelings – is today largely accepted by most researchers in emotion psychology and has recently received particular support from Damasio (1994).

Scherer has proposed that the emotional feeling could be considered to function as a monitoring system, integrating all information about the continuous patterns of change in the autonomic and motor/expressive systems, as well as in the appraisal and motivational systems. In this view, the feeling corresponds to the reflection and integration of all the other emotional components.

Following Wundt's early proposal (introspection as the method of choice for the study of mental states, 1897), emotional feelings have often been considered from a phenomenological perspective. Different subjective dimensions have been put forward by various authors (see next section: dimensional models) to account for the subjective experience of emotion. The most common dimensions used to describe this subjective feeling are (a) Valence – the degree of pleasantness/unpleasantness of the emotional state – and (b) Arousal – which corresponds to the perception of the bodily activation associated with the emotional reaction. Other subjective dimensions used to describe the emotional feeling include control, power, tension, intensity, etc.
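For computational purposes, such a dimensional description is often rendered as a point in a low-dimensional space. The sketch below encodes a feeling state by its valence and arousal; the numeric ranges and the example states are illustrative conventions, not a standard.

```python
import math
from dataclasses import dataclass

# Hypothetical two-dimensional encoding of a subjective feeling state using
# the valence and arousal dimensions described above.

@dataclass
class FeelingState:
    valence: float  # -1.0 (unpleasant) .. +1.0 (pleasant)
    arousal: float  #  0.0 (calm)       ..  1.0 (highly activated)

    def distance(self, other: "FeelingState") -> float:
        """Euclidean distance between two states in valence-arousal space."""
        return math.hypot(self.valence - other.valence,
                          self.arousal - other.arousal)

serene = FeelingState(valence=0.7, arousal=0.2)
angry = FeelingState(valence=-0.6, arousal=0.9)
```

A distance measure of this kind is what dimensional models implicitly rely on when they treat some feeling states as "closer" to each other than others.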

 

Basic emotions

Although basic emotions (also called fundamental or discrete emotions) can be considered basic (or fundamental) in a variety of different ways, this concept habitually refers to a research tradition that emphasizes the role evolution has played in shaping emotional reactions and displays (see next section: discrete emotion models). Basic emotions are defined as corresponding to inborn, phylogenetically selected, neuro-motor programs. They are limited in number and are universal reactions (universality operates across ages, across cultures and across species). The reliance on basic emotions gave rise to secondary notions such as emotion blends (mixed emotions), to account for the large variety of observed emotional reactions, and display rules, to account for individual and cultural variations in emotional expressions.


Primary/secondary emotions

This distinction is especially problematic. It covers several different definitions and should therefore be used with caution.

In his emotion wheel, Plutchik (1962) classified emotion categories along four dimensions (positive/negative, primary/mixed, polar opposites, varying intensity). In this system, he distinguishes eight primary emotions (fear, surprise, sadness, disgust, anger, anticipation, joy, acceptance). In this view, secondary emotions are produced by combinations of primary emotions. Hence this definition of secondary emotions is close to the concept of emotion blends (see above).

Another definition of primary/secondary emotions is used in the field of neuropsychology. In this field, primary emotions are innate, triggered by sensory input, and processed through the limbic system, whereas secondary emotions (also called social emotions) are acquired through learning/experience, are generated through higher cortical processing (frontal cortices send signals to limbic structures to generate an emotional response), and are not necessarily "embodied".

 

Emotional intelligence

The concept of emotional intelligence (EI) was first introduced by Salovey & Mayer (1990) and relatively quickly popularized by Goleman's (1995) best-selling book. A central notion in this concept is that a variety of emotional skills/competences are related and reflect a – more general – underlying emotional competence. The emotional skills/competences included vary according to the multiple models of EI that have been proposed during the past decade. Skills/competences generally included are related to several aspects, for instance:

·        regulation/coping – the ability to "manage" one's emotional responses, reducing/suppressing them or activating them

·        emotional resilience – the ability to recover from "traumatic" experiences

·        expressivity and regulation of expressivity – the ability to control (suppress, substitute or simulate) emotional expressions

·        emotional sensitivity – the ability to recognize emotions expressed by others

·        abstract understanding of emotional reactions and strategic use of this knowledge – the ability to understand and manipulate emotions in others (Machiavellianism or empathy)

Current studies of EI frequently rely on verbal reports (questionnaire studies). Past results in the fields of nonverbal skills, emotional sensitivity, regulation and coping research suggest that some of the emotional competences included in the broad concept of EI might actually be relatively independent.

 

Emotion

Scherer's definition of emotion is the following: "Emotions are episodes of massive, synchronized recruitment of mental and somatic resources allowing to adapt to or cope with a stimulus event subjectively appraised as being highly pertinent to the needs, goals, and values of the individual."

In this definition the notion of synchronization is a central feature. Emotions are seen as occurring when the cognitive, physiological and motor/expressive components – which are usually more or less dissociated in serving separate functions – synchronize, as a consequence of a situation/event appraised as highly relevant for an individual.

For the more general definition of emotions, one crucial aspect is the distinctive features of emotions as compared with other psychological states – that may have an affective element to them but that can hardly be considered to be full-fledged emotions. Scherer has proposed a design feature approach to distinguish the following classes of affective states:

          Emotions (e.g., angry, sad, joyful, fearful, ashamed, proud, elated, desperate)

          Moods (e.g., cheerful, gloomy, irritable, listless, depressed, buoyant)

          Interpersonal stances (e.g., distant, cold, warm, supportive, contemptuous)

          Preferences/Attitudes (e.g., liking, loving, hating, valuing, desiring)

          Affect dispositions (e.g., nervous, anxious, reckless, morose, hostile)

The design features proposed for the differential definition of these states are partly based on a) response characteristics, such as intensity and duration or the degree of synchronization of different reaction modalities (e.g., physiological responses, motor expression, and action tendencies); b) antecedents (e.g., whether they are elicited by a particular event on the basis of cognitive appraisal); and c) consequences in terms of stability and impact on behavior choices. Table 1 shows a proposal for the specific feature profiles of each state. The more important the feature to the definition of the affect, the bolder the dot will be.

 


Table 1 – Defining different types of affect: A design feature approach

 

All of these states have relevance for HMI. However, one can expect that the underlying mechanisms are variable and may interact in complex ways for the different states. For example, each of these states is characterized by a specific pattern of interaction between "push effects" (the biologically determined externalization of naturally occurring internal processes of the organism, particularly information processing and behavioral preparation) and "pull effects" (socioculturally determined norms or moulds concerning the signal characteristics required by the socially shared codes for the communication of internal states and behavioral intentions). Given that the underlying biological processes are likely to be dependent on both the idiosyncratic nature of the individual and the specific nature of the situation, relatively strong interindividual differences in the expressive patterns will result from push effects. Conversely, for pull effects, a very high degree of symbolization and conventionalization, and thus comparatively few and small individual differences, are expected. With respect to cross-cultural comparison, one would expect the opposite: very few differences between cultures for push effects and large differences for pull effects. In consequence, computational models of affect that are to serve useful functions in an HMI context need to make clear choices as to which kind of state is to be modeled.

 

Model (in Cognitive Neurosciences)

The goal of Cognitive Neurosciences is to build models of cognition. A model is a representation that describes and explains the different components, or sub-processes, involved in a cognitive process, as well as the interactions between them. Building such a model consists of identifying the sub-processes and the organization that structures them. Given that several models can be used to describe the same cognitive activity, it is very important to establish rules and criteria regarding the purpose of the model. For example, such a model must respect two principles: biological plausibility, and computational coherence and adequacy.

 

Computational modeling

Computational modeling has been inspired by computer science approaches. It separates the information (the data manipulated by the system) from the processing (actions described in terms of rules). Representations, such as flow charts, are then built to reflect what occurs in reality.

In order to do so, one has to:

          identify the data (the signifier and the signified);

          identify the correlations between them;

          define the actions (processing) applied to the data;

          take into account the influences between the processes described.

One of the first computational models in psychology was proposed by Atkinson and Shiffrin (1968, 1979). It describes memory in terms of components through which information transits. Each component is characterized by the quantity of information it can store and the length of time the information is retained. The authors distinguish three memory stores (buffers): sensory memory, short-term memory, and long-term memory.

Although this model is no longer accepted by the research community, it helped establish this new approach.
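The multi-store idea can nevertheless be caricatured in code, which illustrates the style of such early computational models. In the sketch below, the buffer capacities and the transfer operations (attention, rehearsal) are illustrative assumptions, not the authors' parameters.

```python
from collections import deque

class MultiStoreMemory:
    """Toy sketch of the Atkinson-Shiffrin multi-store idea."""

    def __init__(self, sensory_capacity=12, short_term_capacity=7):
        self.sensory = deque(maxlen=sensory_capacity)        # large but fleeting
        self.short_term = deque(maxlen=short_term_capacity)  # famously ~7 items
        self.long_term = set()                               # effectively unbounded

    def perceive(self, item):
        """New input enters the sensory buffer, displacing the oldest item."""
        self.sensory.append(item)

    def attend(self, item):
        """Attention transfers an item from the sensory to the short-term store."""
        if item in self.sensory:
            self.short_term.append(item)

    def rehearse(self, item):
        """Rehearsal transfers a short-term item into the long-term store."""
        if item in self.short_term:
            self.long_term.add(item)
```

The bounded deques capture the model's core claim: each store is identified by what it can hold and for how long, and information only survives by being transferred onward.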


(originally from deliverable D7b):

Action (or behavior) selection (versus decision making)

The problem of action (or behavior) selection for an autonomous agent consists in deciding what behavior to execute next in order to fulfill several time-dependent, conflicting goals. It contrasts with the more analytic, functional, “high-level” decision-making problem, which optimizes the behavioral choice using mathematical modeling of both agent and environment. An action selection mechanism provides a “low-level” arbitration between behavioral alternatives, following the synthetic approach to artificial intelligence of “Behavior-Based Robotics” and “Embodied Artificial Intelligence”.
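A minimal form of such “low-level” arbitration is winner-take-all selection over behavior activations: each behavior computes an activation from current stimuli and motivations, and the most activated behavior gains control. In the sketch below, the behaviors, context variables, and weightings are invented for illustration.

```python
# Winner-take-all arbitration between behavioral alternatives.
# The behavior names and activation functions are hypothetical.

def select_action(behaviors, context):
    """behaviors maps a name to an activation function of the context."""
    activations = {name: fn(context) for name, fn in behaviors.items()}
    return max(activations, key=activations.get)

behaviors = {
    "flee":    lambda c: c["threat"] * 2.0,               # safety weighted highly
    "feed":    lambda c: c["hunger"] * (1 - c["threat"]), # eating is risky
    "explore": lambda c: 0.2,                             # constant low curiosity
}
```

Note how the time-dependent, conflicting nature of the goals appears directly: the same hunger level can lose to fleeing in one context and win in another.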

Appraisal

Magda Arnold introduced the term appraisal in the 1960s, in the sense of direct, immediate, and intuitive evaluations, to account for qualitative distinctions among emotions. “Appraisal is the process triggered by an eliciting event wherein the subjective potential or actual significance of an event or situation is assessed; i.e., with respect to the subject’s own goals, needs, and concerns on the one hand, and the capacity to adapt on the other hand.” (Kappas, 2001, p.157).

A controversy about whether cognition is involved in appraisal can be considered mostly settled, along the lines cautiously put down by Frijda (1993, p. 379): "Then, how should one conceive of the basic processes of emotion elicitation? First, it must be admitted that it hinges upon a noncognitive step... primary appraisal often involves elaborate steps of inference and the intervention of knowledge. ...This most basic appraisal process may perhaps not meaningfully be called cognitive, as it may not always involve comparison between two representations, which might be taken as the minimal attribute of 'cognition'." Still, it involves some "computation" (LeDoux 1989, p. 271), and an appraisal process is thus a necessary condition for emotional experience and major aspects of emotional response.

Architecture

“The main goal of research in autonomous agents is to understand better the principles and organizations that underlie adaptive, robust, effective behavior. A secondary goal is to also develop tools, techniques, and algorithms for constructing autonomous agents that embody these principles and organizations. We call the totality of a set of principles and organizations, and the set of tools, algorithms and techniques that support them an “architecture” for modeling autonomous agents.” (Maes, 1995, page 138)

Architectures operationalized in robots are often called “controllers”.

In the context of Action Selection, an (action selection) architecture specifies the way in which different architectural elements, such as internal and external stimuli, motivations, emotions, behaviors, etc. are combined to produce the final selection of one behavioral alternative.

Artificial Intelligence (AI)

“Broadly (and somewhat circularly) defined, is concerned with intelligent behavior in artifacts. Intelligent behavior, in turn, involves perception, reasoning, learning, communicating, and acting in complex environments. AI has as one of its long-term goals the development of machines that can do these things as well as humans can, or probably even better. Another goal of AI is to understand this kind of behavior whether it occurs in machines or in humans or other animals.” (Nilsson, 1998)

Autonomous (and adaptive) agent

“An agent is a system that tries to fulfill a set of goals in a complex, dynamic environment. An agent is situated in the environment: It can sense the environment through its sensors and act upon the environment using its actuators. An agent’s goals can take many different forms: They can be “end goals” or particular states the agent tries to achieve, they can be a selective reinforcement or reward that the agent attempts to maximize, they can be internal needs or motivations that the agent has to keep within certain viability zones, and so on. An agent is called autonomous if it operates completely autonomously, that is, if it decides itself how to relate its sensor data to motor commands in such a way that its goals are attended to successfully. An agent is said to be adaptive if it is able to improve over time, that is, if the agent becomes better at achieving its goals with experience. Notice that there is a continuum of ways in which an agent can be adaptive, from being able to adapt flexibly to short-term, smaller changes in the environment, to dealing with more significant and long-term (lasting) changes in the environment, that is, being able to change and improve behavior over time.” (Maes, 1995, page 136)

Behavior-Based Robotics

A subdiscipline of (embodied) AI and autonomous robotics that conceives robot architectures in terms of “behaviors” or competence modules implementing the various activities that a robot can perform in the particular environment it inhabits. A behavior-based robot has a set of behavior modules that compete with one another to gain control of the robot’s actuators. The discipline was born in the mid-1980s as a response to the apparent “failure” of the more traditional “knowledge-based” or “top-down” Artificial Intelligence (AI) in building intelligent autonomous robots. It uses a “bottom-up” methodology to synthesize systems incrementally by adding behavioral modules. It is closely related to “Embodied Artificial Intelligence”. (cf. Arkin, 1998)

Belief-Desire-Intention (BDI) Architecture

Within the research community concerned with software agents, the term beliefs-desires-intentions (BDI) has been used variously to denote a position on theoretically useful mental-state distinctions, particular models of how these mental states affect reasoning, and a genre of architectures or frameworks for developing software agents.

“BDI agents are rational agents having certain mental attitudes of Belief, Desire and Intention, representing, respectively, the information, motivational and deliberative states of the agent. These mental attitudes determine the agent's behavior and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds.” (Rao and Georgeff, 1995)
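The three mental states and a deliberation step can be sketched schematically as follows. This is only a caricature of the genre; real BDI frameworks are far richer, and the "can:<goal>" belief convention used here to mark achievable goals is an invented simplification.

```python
# Schematic BDI agent: beliefs, desires (with priorities), and intentions.
# The "can:<goal>" belief convention is an invented simplification.

class BDIAgent:
    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)    # informational state
        self.desires = list(desires)   # motivational state: (priority, goal)
        self.intentions = []           # deliberative state: committed goals

    def perceive(self, percepts):
        """Revise beliefs with new information about the world."""
        self.beliefs |= set(percepts)

    def deliberate(self):
        """Commit to the highest-priority desire believed to be achievable."""
        achievable = [d for d in self.desires if "can:" + d[1] in self.beliefs]
        if achievable:
            self.intentions.append(max(achievable)[1])
```

Deliberation here is deliberately cheap (one pass over the desires), echoing the quotation's point that these attitudes matter precisely when reasoning is subject to resource bounds.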

Concern

“A concern is a disposition to desire occurrence or non-occurrence of a given kind of situation; the dispositions that turn given kinds of events into satisfiers or annoyers, into positive or negative reinforcers, for the subject or the species as a whole. The dispositions can be conceived as internal representations serving as standards against which actual situations are tested. These representations need not be explicit or reified or consciously accessible or consciously modifiable.” (Frijda 1986, p.335)

“Concerns are defined as internal representations of preferred states that serve as standards against which actual states of the world are tested. People seek to achieve them and events may agree or disagree with them.” (Frijda et al. 1991, p.213)

Embodied Artificial Intelligence

New approach to studying Artificial Intelligence (AI) in the context of “complete” (embodied, situated) autonomous agents. It exploits the richness of behavior shown by an embodied agent that acts in the real world (as complex as it is) obtaining its (partial) information about the environment through its sensors in continuous interaction with the real world (situated agent). The development of Embodied AI has gone in parallel with “Behavior-Based Robotics”, the discipline that first pointed out the need to study intelligence in the framework of complete autonomous robots and that provides a natural test-bed for its theories.

Embodied Conversational Agent (ECA)

“Embodied conversational agents are computer-generated cartoonlike characters that demonstrate many of the same properties as humans in face-to-face conversation, including the ability to produce and respond to verbal and nonverbal communication. They constitute a type of (a) multimodal interface where the modalities are those natural to human conversation: speech, facial displays, hand gestures, and body stance; (b) software agent, insofar as they represent their human users in a computational environment (as avatars, for example); and (c) dialogue systems where both verbal and nonverbal devices advance and regulate the dialogue between the user and the computer.” (Cassell et al., 2000, cover)

Emergence (emergent behavior, emergent functionality)

“Emergence is a classical concept in system theory, where it denotes the principle that the global properties defining higher order systems or ‘wholes’ (e.g. boundaries, organization, control, …) can in general not be reduced to the properties of the lower order subsystems or ‘parts’. Such irreducible properties are called emergent.” (Heylighen 1989)

“Agents can become more complex in two ways. First, a designer (or more generally a designing agency) can identify a functionality that the agent needs to achieve, then investigate possible behaviors that could realize the functionality, and then introduce various mechanisms that sometimes give rise to the behavior. Second, existing behavior systems in interaction with each other and the environment can show side effects, in other words, emergent behavior. This behavior may sometimes yield new useful capabilities for the agent, in which case we talk about emergent functionality. In engineering, increased complexity through side effects is usually regarded as negative and avoided, particularly in computer programming. But it seems that in nature, this form of complexity buildup is preferred.” (Steels, 1994). This notion is highly exploited by the new approach to Artificial Intelligence (AI) characterized as “Embodied AI”, “Bottom-Up AI” or Behavior-Based Robotics.

However one has to be careful not to mistake emergence for the unexpected effects produced by a lack of understanding of the system:

“We are often told that certain wholes are ‘more than the sum of their parts.’ We hear this expressed with reverent words like ‘holistic’ and ‘gestalt,’ whose academic tones suggest that they refer to clear and definite ideas. But I suspect the actual function of such terms is to anesthetize a sense of ignorance. We say ‘gestalt’ when things combine to act in ways we can’t explain, ‘holistic’ when we are caught off guard by unexpected happenings and realize we understand less than we thought we did.” (Minsky, 1986, p. 27)

Emotional Contagion

“The tendency to automatically mimic and synchronize facial expressions, vocalizations, postures and movements with those of another person and, consequently, to converge emotionally.” (Hatfield et al., 1992, pages 153-154)

Ethology

The study of animal behavior under natural conditions, i.e., the animal’s responses are interpreted within the context of its actual environmental situation. Its aim is to interpret behavioral acts and whole patterns of animal behavior in ways that emphasize their functions and evolutionary history. Tinbergen (1963) categorized four areas of study in ethology: function, causation, ontogeny and evolution of behavior.

Goals

“One hallmark of an active goal is that the individual will persist on the task, striving to reach the desired goal, in spite of obstacles and interruptions.” (Bargh and Chartrand 1999, p. 472)

“Once activated, a goal operates in the same way whether activated by will or by the environment” (ibid., p. 470)

“Goals do not require an act of will to operate and guide information processing and behavior. They can be activated instead by external, environmental information and events. Once they are put into motion they operate just as if they had been consciously intended, even to the point of producing changes in mood and in self-efficacy beliefs depending on one's degree of success or failure at reaching the goal. The goal does not know the source of its activation and behaves the same way regardless of where the command to do its thing came from (...). Note that this argument applies to complex self-regulatory goals - such as those that serve achievement motives - as well as to simpler behavioral goals.” (ibid., p. 473)

“The process of goal pursuit does not stop with the behavioral attempt to attain the goal, however. Inevitably, the individual either achieves or does not achieve (in varying degrees) the pursued goal and tends to evaluate his or her performance following the attempt. Many researchers have demonstrated the consequences of success or failure at conscious goal pursuit for one's mood and beliefs of self-efficacy (...). ... Our approach suggests that there are such consequences of succeeding and failing, even at goals of which one was not aware of pursuing.” (ibid., p. 472)

Neural Network

A neural network is a network of nerve cells (neurons) in an organism. Artificial Neural Networks (ANNs) are the computational models, studied in computer science, that mimic such biological networks in order to exploit their computational properties. (cf. Rolls and Treves, 1998; Arbib, 2003)

Neuromodulation

Neuromodulation refers to the action on neurons of a large family of chemicals called neuromodulators, e.g., dopamine, serotonin and norepinephrine. Each neuromodulator activates specific receptors on the neural membrane, having specific effects on the functioning of the neuron. Since neurons in different parts of the brain may have different receptors in their membranes, the same neuromodulator can thus have distinct effects in different parts of the brain. The overall result is that a single neuromodulator can modulate the functioning of a neural network. (cf. Kravitz, 1988; Fellous, 1999, 2004)

Perception-Action Model (or Hypothesis)

“The Perception-Action Hypothesis (a term from motor behavior) is grounded in the theoretical idea, adopted by many fields over time, that perception and action share a common code of representation in the brain.” (Preston et al., 2002) This hypothesis is closely related to the principle of sensory-motor coordination in Embodied AI and Behavior-Based Robotics, which states that all (intelligent) behavior is to be conceived as sensory-motor coordination that serves to structure the sensory input (cf. Pfeifer and Scheier, 1999).

Regulation (of emotions)

Emotion regulation refers to a broad constellation of processes that serve to either amplify, attenuate or maintain the strength of emotional reactions. Included among these processes are certain features of attention that regulate the extent to which an organism can be distracted from a potentially aversive stimulus and the capacity for self-generated imagery to replace unwanted emotions with more desirable imagery scripts. Emotion regulation can be both automatic and controlled. Automatic emotion regulation may result from the progressive automatization of processes that initially were voluntary and controlled and have come to be generated without recruiting the associated regulatory processes. For this reason, it is often conceptually difficult to distinguish sharply between where an emotion ends and regulation begins. Even more problematic is the methodological challenge of operationalizing these different components in the stream of affective behavior. (Davidson 1999, p. 104)

Rule-Based System

A rule-based system is a particular instance of symbolic AI. As the name suggests, a rule-based system uses a library of operators or rules (e.g., of the form If CONDITION(S) then ACTION(S)) specific to a particular problem domain. The term ‘expert system’ describes a kind of rule-based system whose rules have been supplied by a human expert. An example is Prospector, an expert system used to assist geologists in locating valuable mineral deposits such as oil, coal or precious metals.
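A toy forward-chaining interpreter for rules of this If-CONDITION-then-ACTION form can be sketched as follows. The working-memory facts and the geology-flavoured rules are invented for illustration; they are not Prospector's actual knowledge base.

```python
# Minimal forward-chaining rule-based system. Facts live in a set (the
# working memory); rules fire until no rule can add a new fact.
# The rules and facts below are invented for illustration.

rules = [
    # (condition over the fact set, fact asserted when the condition holds)
    (lambda f: "ore_sample" in f and "quartz_veins" in f, "possible_deposit"),
    (lambda f: "possible_deposit" in f,                   "recommend_drilling"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The second rule only becomes applicable after the first has fired, which is the chaining behavior that gives such systems their inferential power.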

Standards

Standards are a major determinant of the psychological significance of an event. A “standard” is a criterion or rule established by experience, desires, or authority for the measure of quantity and extent or quality and value. Both people and situations can be differentiated in terms of associated standards. Personal standards are seen to play an important role for individual differences in motivation, self-regulation and self-evaluation. Standards can function either as reference points or as regulatory criteria (e.g., tendency to surpass the performance of another person).

Standards constitute different kinds of knowledge – general declarative knowledge (social category standards), episodical knowledge (e.g., autobiographical standards), and procedural knowledge (e.g., normative guides).

Social standards are established by past interpersonal experiences, knowledge of self and others, and current social contexts. Action that occurs in relation to social standards is social action. (cf. Higgins, 1990)

Symbolic Artificial Intelligence

Symbolic AI is best defined with the help of the classical water jugs problem: We have one 3-liter jug, one 5-liter jug and an unlimited supply of water. The goal is to get exactly one liter of water into either jug. Either jug can be emptied or filled, or poured into the other. One approach to implementing a solution would be to define a set of rules that encapsulate the behavior of water levels in the jugs after each action has been carried out. Consequently, the task can be regarded as the manipulation of the rules until the goal is reached, perhaps by depth-first search. Therefore, symbolic AI, as illustrated by this example, considers intelligence as problem solving that can be characterized by a set of rules and a method for manipulating them in order to satisfy some goal. Importantly, a result of this approach to AI is that the solution can be “human interpretable” – the solution is the sequence of rules applied to an initial state that solves the problem.
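The approach described above can be made concrete in a short program. The state encoding and operator set below follow the problem statement (a state is the pair of water levels; operators empty, fill, or pour); the search itself is a plain depth-first search, as suggested above.

```python
# Depth-first search over the water-jug state space. A state is
# (litres in the 3-litre jug, litres in the 5-litre jug).

def successors(state, caps=(3, 5)):
    a, b = state
    nxt = {(0, b), (a, 0), (caps[0], b), (a, caps[1])}  # empty or fill either jug
    pour = min(a, caps[1] - b)                          # pour 3-litre into 5-litre
    nxt.add((a - pour, b + pour))
    pour = min(b, caps[0] - a)                          # pour 5-litre into 3-litre
    nxt.add((a + pour, b - pour))
    nxt.discard(state)
    return nxt

def dfs(state, goal, visited=None):
    """Return a list of states from `state` to one satisfying `goal`, or None."""
    visited = visited if visited is not None else set()
    if goal(state):
        return [state]
    visited.add(state)
    for succ in successors(state):
        if succ not in visited:
            path = dfs(succ, goal, visited)
            if path:
                return [state] + path
    return None
```

The returned path is exactly the "human interpretable" solution mentioned above: the sequence of rule applications that transforms the initial state into a goal state.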

Uphill Analysis and Downhill Invention (Braitenberg’s Law of)

“It is pleasurable and easy to create little machines that do certain tricks. It is also quite easy to observe the full repertoire of behavior of these machines - even if it goes beyond what we had originally planned, as it often does. But it is much more difficult to start from the outside and try to guess internal structure just from the observation of behavior. It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior. Quite apart from this, analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path. A psychological consequence of this is the following: when we analyze a mechanism, we tend to overestimate its complexity. In the uphill process of analysis, a given degree of complexity offers more resistance to the workings of our mind than it would if we encountered it downhill, in the process of invention.” (Braitenberg, 1984, page 20)

User Model

A formal representation of the main characteristics of a user that may affect his/her interaction with software products or, more generally, with technology.

User models can have static or dynamic components. The static component includes a description of long-term characteristics that are not likely to vary during interaction (typically gender, name, social status). The dynamic component includes a description of less stable characteristics (typically knowledge, as in interaction with intelligent tutoring systems). As far as characteristics with some affective connotation are concerned, the following is a list of the main ones, in decreasing order of stability: personality traits, values, norms, goals, preferences, mood, beliefs, intentions, attitudes, emotions.

User Modeling

The method (and the software component that performs it) used to build an initial user model and to update it consistently during interaction. Building and updating may be performed implicitly or explicitly. In implicit user modeling, data are acquired by the system without directly requesting them from the user. In explicit user modeling, data are acquired by direct interviewing. In both cases, some form of reasoning on the acquired data is needed to infer the user model features. An example of implicit acquisition in affective user modeling would be the “recognition” of the user’s emotional state from biological signals; an example of explicit acquisition would be filling in personality questionnaires.

A key problem in user modeling is ensuring consistency after updating. To this end, typical methods of artificial intelligence may be applied: if the model is represented in logical form, truth maintenance and non-monotonic reasoning methods are used; if uncertainty is represented in the model, Bayesian updating is the widely recognized method to apply. Other methods (such as fuzzy logic, neural networks, or learning algorithms) are appropriate in more specific domains and cases.
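For the uncertain case, Bayesian updating of a single binary user-model feature can be sketched as follows. The feature (“frustrated”) and the sensor probabilities are hypothetical, chosen only to illustrate the mechanics:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(feature | observation) for a binary user-model feature,
    given the prior and the probability of the observation under each case."""
    num = likelihood_if_true * prior
    denom = num + likelihood_if_false * (1.0 - prior)
    return num / denom

# Hypothetical example: P(user is frustrated) starts at 0.2. A biosignal
# classifier reports "high arousal", an observation that occurs with
# probability 0.7 when the user is frustrated and 0.1 otherwise.
posterior = bayes_update(0.2, 0.7, 0.1)
```

Each new observation is folded in the same way, with the previous posterior serving as the next prior, so the model stays probabilistically consistent as evidence accumulates.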



(originally from deliverable D8b):

Persuasion = the definitions that have historically been given can be divided according to what they refer to.

  1. Definitions referring to the goal of persuasion (behaviour, attitude, or action inducement).
  2. Definitions referring to the functioning of persuasion (e.g. the peripheral vs. central route in the elaboration of a message [Petty & Cacioppo, 86]; persuasion uses the peripheral route of the receiver).

We adopt the first point of view: persuading a (human or artificial) agent implies planning how to modify its predispositions to certain actions and its complex of beliefs and judgments (see also the concept of “argumentation”). According to work by linguists, philosophers, and cognitive psychologists, persuasion may appeal to both the informational and the emotional route of the recipients [Petty & Cacioppo, 86; Sillince & Minors, 91]. In defining persuasion we differentiate a “broad” definition (behaviour inducement) from a “narrow” one (action inducement). Another distinction can be made between the weak notion (capturing the idea that the persuadee is not already planning to perform the required action/behaviour) and the strong notion (capturing the idea that the persuadee also has some barriers against the required action/behaviour).

 

Influence = when speaking loosely about persuasion, we are in the field of (social) influence, defined as “affecting or changing how someone behaves or thinks”.

 

Argumentation = Argumentation is closely connected with the concept of rationality. It is a resource for persuasion because:

  1. Planning of the message involves some sort of ‘rational’ activity, even when emotion inducement is employed as a means to increase persuasive strength. On the other hand, the way persuasion is performed (the items selected, their order of presentation, their ‘surface’ formulation) also depends on the emotional state of the persuader.
  2. Argumentation is concerned with the goal of making the receiver believe a certain proposition (influencing his mental state) and, coercion apart, the only way to make someone do something (persuasion) is to change his beliefs [Castelfranchi, 96].

Persuasion also includes a-rational elements and is therefore a “superset” of argumentation. This does not rule out a role for emotion within argumentation [Miceli et al.]: through the arousal of emotions (see Rhetorics) or through appeal to expected emotions. In classical argumentation, though, these problems are not addressed, since emotional argumentation is often considered a sort of ‘deceptive’ argumentation [Grasso et al., 00].

Coercion = using force to “persuade” someone to do something he is not willing to do. Obviously, coercion falls outside our definition of persuasion.

Rhetorics = the study of the ways of using language effectively. This area of study concerns the linguistic means of persuasion (one of the main means, but not the only one).

 

Affective verbal communication = natural language communication aimed either at informing the hearer about an affective state or at inducing emotions, affective attitudes, opinions, and evaluations.

 

Affective induction = the communication process that induces affective states/attitudes in the recipients.

 

Affective attitudes = complex mental states such as beliefs, feelings, values, and dispositions to act in certain ways.

 

Evaluative language = the kind of language that expresses an evaluation/appreciation of the object of the discourse. The evaluations/appreciations reflect the opinions and/or attitudes of the speakers. Evaluative language is also called subjective language.

Slanting = Slanted writing springs from our conscious or subconscious choice of words and images. In particular, we speak of slanted writing whenever we load our description of a specific situation with vivid, connotative words and figures of speech. Below are some examples of a denotative (no-slant) word and its positive and negative associates.

NO SLANT        POSITIVE SLANT    NEGATIVE SLANT
Eats            Dines             Gorges
Doctor          Physician         Quack
Car             Sedan             Jalopy
Old age         Golden years      Decrepitude
Intoxicated     Tiddly            Smashed

 

Polarity or gradability in the lexicon = the valence of emotion words. It is related to the semantic orientation of words (e.g. positive and negative lexicons). Some recent work in NLP shows that it is possible to partially learn these features from corpora automatically.
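One such corpus-based approach scores a word by comparing its co-occurrence with a positive and a negative seed word, in the style of Turney's pointwise-mutual-information method. The sketch below is illustrative only: the toy corpus and seed words are invented, and sentence-level co-occurrence with add-one smoothing is a deliberate simplification of the original technique:

```python
import math
from collections import Counter
from itertools import combinations

def semantic_orientation(word, corpus, pos_seed="excellent", neg_seed="poor"):
    """Estimate a word's polarity as PMI(word, pos_seed) - PMI(word, neg_seed),
    using sentence-level co-occurrence counts with add-one smoothing."""
    unigrams = Counter()   # sentences containing each word
    cooc = Counter()       # sentences containing each word pair
    for sentence in corpus:
        tokens = set(sentence.lower().split())
        unigrams.update(tokens)
        for a, b in combinations(sorted(tokens), 2):
            cooc[(a, b)] += 1
    n = len(corpus)

    def pmi(w1, w2):
        pair = tuple(sorted((w1, w2)))
        p_joint = (cooc[pair] + 1) / (n + 1)     # smoothed probabilities
        p1 = (unigrams[w1] + 1) / (n + 1)
        p2 = (unigrams[w2] + 1) / (n + 1)
        return math.log(p_joint / (p1 * p2))

    return pmi(word, pos_seed) - pmi(word, neg_seed)

# Toy corpus (invented): "dines" co-occurs with the positive seed,
# "gorges" with the negative one, echoing the slanting examples above.
corpus = [
    "the guest dines and the meal is excellent",
    "she dines at an excellent restaurant",
    "he gorges on poor leftovers",
    "he gorges himself on the poor food",
]
```

On this corpus the score for "dines" comes out positive and the score for "gorges" negative; on real corpora the same signal emerges statistically rather than by construction.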

 

Emotion words, affective lexicon = It is important to have lexical resources that contain words referring to emotions (e.g. anger, fear), moods (animosity, affable), emotion-related cognitive states (confusion, dazed), emotional responses (tremble, cry), etc. An affective lexicon is in itself an important resource for many applications, based both on language recognition and on language production. The potential applications in natural language processing are the basis for those in human-computer interaction.

 

Computational humour = An emerging computational field in artificial intelligence that deals with building systems capable of inducing amusement and affecting the emotional state of users.

Empathy = The process by which one agent’s (the target’s) affective state modulates toward that of another agent (the source), drawing on the situation or on expression. In cognitive empathy the target understands the affective state of the source (a specific variant of theory of mind) – for example, seeing the source lose its wallet, the target understands that the source is sad; or seeing the source cry, the target understands that it is sad. In affective empathy the target itself feels the affective state of the source. Finally, in ideomotor empathy the motor action of the target is modulated by that of the source – for example, seeing the source dancing makes the target want to dance too.

Empathic agent = An empathic agent is either able to produce a feeling of empathy in another agent or to itself respond empathically, or both.

Emergent narrative = A participative style of narrative in which the lower levels of narrative structure emerge from interaction between characters rather than being scripted as part of a pre-defined plot.

Story-telling = A style of narrative in which a particular agent - the story-teller – presents a narrative to one or more agents, usually through verbal and accompanying expressive behaviour.


(originally from deliverable D9b):

Usability: The process of creating and evaluating a system which delivers a positive user experience.

Evaluation: The process of submitting a design (whether concept, prototype, or finished system) to examination through some form of observed use, towards gathering information that will be helpful for further iteration.

User-centred design: Design of systems which integrally incorporates the needs, context, and insights of future users.

Participatory design: Design which incorporates members of the target user group as part of the design team.

Phenomenology: a 20th-century philosophical movement dedicated to describing the structures of experience as they present themselves to consciousness, without recourse to theory, deduction, or assumptions from other disciplines such as the natural sciences (Microsoft Encarta)

User Experience: The overall set of perceptions and reactions during a system’s use, which are co-emergent from the person and the system with which s/he is engaged.

Quantitative Measures: Instruments to collect data in controlled settings, which will be analyzed statistically. (See Ebling and John, 2000, for an interesting discussion of the relative merits of quantitative and qualitative data in usability evaluation).

Qualitative Measures: Methods for collecting data which seek to preserve context and subjectivity, and which lean toward thick description rather than statistical summaries. (See Ebling and John, 2000, for an interesting discussion of the relative merits of quantitative and qualitative data in usability evaluation).

Validity: The extent to which an instrument adequately and accurately captures the theoretical construct it seeks to measure (e.g. does a Likert scale asking how ‘usable’ a system was capture the theoretical construct of successful user experience).





