Judgment
Short and Synthetically Distort: Investor Reactions to Deepfake Financial News
Recent advances in artificial intelligence have led to new forms of misinformation, including highly realistic “deepfake” synthetic media. We conduct three experiments to investigate how and why retail investors react to deepfake financial news. Results from the first two experiments provide evidence that investors use a “realism heuristic,” responding more intensely to audio and video deepfakes as their perceptual realism increases. In the third experiment, we introduce an intervention to prompt analytical thinking, varying whether participants make analytical judgments about credibility or intuitive investment judgments. When making intuitive investment judgments, investors are strongly influenced by both more and less realistic deepfakes. When making analytical credibility judgments, investors are able to discern the non-credibility of less realistic deepfakes but struggle with more realistic deepfakes. Thus, while analytical thinking can reduce the impact of less realistic deepfakes, highly realistic deepfakes are able to overcome this analytical scrutiny. Our results suggest that deepfake financial news poses novel threats to investors.
Distinctive features of experiential time: Duration, speed and event density
William James’s notions of “time in passing” and the “stream of thoughts” may be two sides of the same coin, emerging as the brain segments the continuous flow of information into discrete events. Starting from that idea, we investigated how the content of a realistic scene affects two distinct temporal experiences: felt duration and the speed of the passage of time. I will present results from an online study in which we used a well-established experimental paradigm, the temporal bisection task, extended here to passage of time judgments. A total of 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or as slow or fast (passage of time). The videos contained a varying number and type of events. We found that a larger number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time than at estimating duration. The perception of duration depended heavily on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.
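As a rough illustration of how such bisection data are typically summarized (hypothetical numbers, not the study’s data or analysis), one can fit a logistic psychometric function to the proportion of “long” responses and read off the bisection point; the same fit applies to “slow”/“fast” passage-of-time responses.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bisection data: video durations (s) and proportion of "long"
# responses at each duration (illustrative values only).
durations = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
p_long = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])

def psychometric(duration, bisection_point, slope):
    """Logistic psychometric function: P("long") as a function of duration."""
    return 1.0 / (1.0 + np.exp(-slope * (duration - bisection_point)))

(bisection_point, slope), _ = curve_fit(psychometric, durations, p_long, p0=[2.5, 2.0])
print(f"bisection point ~ {bisection_point:.2f} s, slope ~ {slope:.2f}")
# Replacing "short"/"long" with "slow"/"fast" responses yields the analogous
# indifference point for felt passage of time.
```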
Using Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive them as a single event, even though they occurred separately. In recent years, our lab and others have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models yield metrics consistent with those of humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents also demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model for studying the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
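As a sketch of how the metrics mentioned above are commonly extracted from synchrony-judgment data (hypothetical numbers, not data from these rat studies), a Gaussian fit to the proportion of “synchronous” reports across stimulus onset asynchronies gives the point of subjective simultaneity (its peak) and an index of temporal acuity (its width).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical synchrony-judgment data: audiovisual stimulus onset asynchronies
# (ms; negative = audio first) and proportion of "synchronous" responses.
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_synchronous = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

def gaussian(soa, amplitude, pss, width):
    """Proportion "synchronous" peaks at the point of subjective simultaneity (PSS)."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2.0 * width ** 2))

(amplitude, pss, width), _ = curve_fit(gaussian, soas, p_synchronous, p0=[1.0, 0.0, 150.0])
print(f"PSS ~ {pss:.0f} ms, temporal window width (SD) ~ {width:.0f} ms")
```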
Explaining an asymmetry in similarity and difference judgments
Explicit similarity judgments tend to emphasize relational information more than do difference judgments. In this talk, I propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same (i.e., as not-same). This proposal implies that processing difference is more cognitively demanding than processing similarity. Both for verbal comparisons between word pairs, and for visual comparisons between sets of geometric shapes, participants completed a triad task in which they selected which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard, either by virtue of featural similarity or by virtue of relational similarity. On ambiguous trials, one option was more featurally similar (but less relationally similar) to the standard, whereas the other was more relationally similar (but less featurally similar). Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. These results provide convergent evidence for a representational asymmetry between the relations same and different.
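The weighting account can be sketched in a few lines (illustrative weights and match scores, not the fitted model from the talk): overall match blends featural and relational components, with the relational component weighted more heavily for similarity than for difference judgments.

```python
# Hypothetical match scores (0-1) between each option and the standard on an
# ambiguous triad trial: one option matches the standard featurally but not
# relationally, the other matches it relationally but not featurally.
options = {
    "featural-match option": {"featural": 0.8, "relational": 0.3},
    "relational-match option": {"featural": 0.3, "relational": 0.8},
}

def overall_match(scores, w_relational):
    """Weighted blend of featural and relational match with the standard."""
    return (1 - w_relational) * scores["featural"] + w_relational * scores["relational"]

# Illustrative weights: relational information counts for more when judging
# similarity than when judging difference.
for judgment, w_relational in (("similarity", 0.7), ("difference", 0.4)):
    scored = {name: overall_match(s, w_relational) for name, s in options.items()}
    # "More similar" = highest overall match; "more different" = lowest.
    if judgment == "similarity":
        choice = max(scored, key=scored.get)
    else:
        choice = min(scored, key=scored.get)
    print(f"{judgment} judgment: choose the {choice}")
# The similarity choice is settled by the relational match, whereas the
# difference choice ends up driven by the featural mismatch, mirroring the
# reported asymmetry.
```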
Analogical inference in mathematics: from epistemology to the classroom (and back)
In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the presentation, we will present some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.
Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present results from an experiment in which participants performed an orientation discrimination task followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (which would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
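To make the mechanism concrete, here is a toy two-population attractor network (illustrative parameters, not the published model): each unit excites itself and inhibits the other, the first unit to cross a threshold determines the choice and response time, the activity gap at that moment serves as a confidence readout, and carrying the decayed end state into the next trial produces sequential effects without any explicit feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(initial_state, coherence, dt=1e-3, tau=0.02, threshold=1.0):
    """One trial of a toy two-population attractor network (illustrative parameters).

    Two units excite themselves and inhibit each other; the unit whose activity
    first crosses `threshold` determines the choice and the response time, and
    the gap between the two activities at that moment is read out as confidence.
    """
    r = initial_state.copy()
    inputs = np.array([0.5 + coherence, 0.5 - coherence])
    w_self, w_inhib = 1.2, 1.0
    for step in range(5000):
        drive = inputs + w_self * r - w_inhib * r[::-1]
        noise = 0.3 * np.sqrt(dt / tau) * rng.normal(size=2)
        r = np.maximum(r + dt / tau * (-r + np.maximum(drive, 0.0)) + noise, 0.0)
        if r.max() > threshold:
            break
    choice = int(np.argmax(r))
    return choice, (step + 1) * dt, abs(r[0] - r[1]), r

# The (decayed) end state of one trial seeds the next, so sequential effects
# arise from the dynamics alone, without any explicit trial-to-trial feedback.
state = np.zeros(2)
for trial in range(5):
    choice, rt, confidence, state = run_trial(0.3 * state, coherence=0.1)
    print(f"trial {trial}: choice={choice}, RT={rt * 1000:.0f} ms, confidence={confidence:.2f}")
```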
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
The bounded rationality of probability distortion
In decision-making under risk (DMR), participants’ choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving judgments of relative frequency (JRF). These distortions limit performance in a wide variety of tasks, and an obvious question is why we systematically fail in our use of probability and relative frequency information. We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded, and the bounds change dynamically with the task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved superior to all of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
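The three assumptions can be rendered schematically as follows (a simplified sketch with illustrative parameter values, not the fitted BLO model): probabilities are encoded as log-odds, clipped to a bounded internal range, and mixed with an anchor in proportion to the unreliability of the evidence, which yields the familiar inverted-S distortion.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def logistic(x):
    return 1 / (1 + np.exp(-x))

def blo_sketch(p, scale=0.6, anchor_logodds=0.0, reliability=0.8, bound=1.5):
    """Schematic version of the three BLO ingredients (illustrative parameters).

    1. log-odds: objective probability is encoded on a log-odds scale.
    2. boundedness: the encoded value is squeezed into a fixed range.
    3. variance compensation: the bounded value is mixed with an anchor,
       with a weight reflecting the reliability of the evidence.
    """
    encoded = scale * logit(p)                  # log-odds encoding
    encoded = np.clip(encoded, -bound, bound)   # bounded internal scale
    encoded = reliability * encoded + (1 - reliability) * anchor_logodds
    return logistic(encoded)                    # back to a probability

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"objective p = {p:.2f} -> distorted p = {blo_sketch(p):.2f}")
# Small probabilities come out overweighted and large ones underweighted.
```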
What the fluctuating impact of memory load on decision speed tells us about thinking
Previous work with complex memory span tasks, in which simple choice decisions are imposed between presentations of to-be-remembered items, shows that these secondary tasks reduce memory span. It is less clear how reconfiguring and maintaining varying amounts of information affects decision speed. We documented and replicated a non-linear effect of accumulating memory items on concurrent processing judgments, and showed that this pattern could be made linear by introducing "lead-in" processing judgments prior to the start of the memory list. With lead-in judgments, there was a large and consistent cost to processing response times upon the introduction of the first item in the memory list, and this cost grew gradually with each additional item as the list accumulated. However, once presentation of the list was complete, decision responses sped up rapidly: within a few seconds, decisions were at least as fast as when remembering a single item. This pattern of findings is inconsistent with the idea that merely holding information in mind conflicts with attention-demanding decision tasks. Instead, it is possible that reconfiguring memory items for responding provokes conflict between memory and processing in complex span tasks.
Why does online collaboration work? Insights into sequential collaboration
The last two decades have seen a rise in online projects such as Wikipedia or OpenStreetMap in which people collaborate to create a common product. Contributors to such projects often work together sequentially: the first contributor independently generates an entry (e.g., a Wikipedia article), which other contributors then adjust by adding or correcting information. We refer to this way of working together as sequential collaboration. This process has not yet been studied in the context of judgment and decision making, even though research has demonstrated that Wikipedia and OpenStreetMap yield very accurate information. In this talk, I will give first insights into the structure of sequential collaboration, how adjusting each other’s judgments can yield more accurate final estimates, which boundary conditions need to be met, and which underlying mechanisms may be responsible for successful collaboration. A preprint is available at https://psyarxiv.com/w4xdk/
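A toy simulation of this process (hypothetical numbers, not the studies described in the talk) shows why sequential adjustment can help: when later contributors move an entry part of the way toward their own independent estimates, the chain implicitly averages over contributors, and the final entry tends to sit closer to the truth than a typical individual estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0                       # hypothetical true quantity (e.g., a population count)
n_chains, chain_length = 2000, 6

final_errors, individual_errors = [], []
for _ in range(n_chains):
    # Each contributor has an independent, noisy estimate of the truth.
    estimates = truth + rng.normal(0.0, 20.0, size=chain_length)
    individual_errors.append(abs(estimates[0] - truth))
    entry = estimates[0]            # the first contributor creates the entry
    for est in estimates[1:]:
        # Later contributors either leave the entry alone or adjust it
        # part of the way toward their own estimate.
        if rng.random() < 0.6:      # probability of adjusting at all
            entry += 0.5 * (est - entry)
    final_errors.append(abs(entry - truth))

print(f"mean error of a single contributor:      {np.mean(individual_errors):.1f}")
print(f"mean error after sequential adjustment:  {np.mean(final_errors):.1f}")
```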
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, along with several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people’s causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system’s stability. It also captures the cognitive processes underlying these judgments, as revealed by spontaneous eye movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people’s inferences about what happened are well accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
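The counterfactual idea can be sketched in one dimension (a deliberately stripped-down toy, not the full framework from the talk): to judge whether ball A caused ball B to go through the gate, re-run noisy simulations of the scene with the collision removed and let the causal judgment track how often the outcome would have been different.

```python
import numpy as np

rng = np.random.default_rng(2)

def ball_b_goes_through_gate(collision, noise_sd=0.4):
    """Noisy 1D simulation: does ball B end up past the gate line?

    Ball B drifts toward the gate only if ball A collides with it; the noise
    stands in for the observer's uncertainty about the physics.
    """
    velocity = 1.0 if collision else 0.0
    final_position = velocity * 2.0 + rng.normal(0.0, noise_sd)
    return final_position > 1.0     # gate line sits at position 1.0

# What actually happened: A hit B and B went through the gate.
# Counterfactual question: would B have gone through had A been absent?
n_samples = 10_000
counterfactual_outcomes = [ball_b_goes_through_gate(collision=False)
                           for _ in range(n_samples)]
# Causal judgment ~ probability that removing the cause changes the outcome.
causal_strength = 1.0 - np.mean(counterfactual_outcomes)
print(f"judged causal strength of the collision: {causal_strength:.2f}")
```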
What is serially-dependent perception good for?
Perception can be strongly serially dependent (i.e., biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence could also be deleterious in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward the position presented on the preceding trial. Taken together, these results reveal two opposite sides of serial dependence: it can be a beneficial mechanism that promotes perceptual stability, but at the same time a deleterious one that impairs perception when fine recognition is needed.
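A common way to model this kind of bias (illustrative parameters, not the specific studies above) is to let each report blend the current noisy percept with the previous report; the sketch below shows the two-sided consequence: lower error when the attended feature changes slowly, higher error when successive stimuli are unrelated.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_reports(stimuli, pull, sensory_noise=2.0):
    """Each report blends the current noisy percept with the previous report.

    `pull` is the weight given to the previous report (the serial dependence);
    the parameter values are illustrative.
    """
    reports, previous = [], None
    for s in stimuli:
        sample = s + rng.normal(0.0, sensory_noise)
        report = sample if previous is None else (1 - pull) * sample + pull * previous
        reports.append(report)
        previous = report
    return np.array(reports)

n = 5000
stable = 45 + np.cumsum(rng.normal(0, 0.5, n))   # slowly drifting feature (stable scene)
unrelated = rng.uniform(0, 90, n)                # independent from trial to trial

for name, stimuli in (("stable scene", stable), ("unpredictable series", unrelated)):
    for pull in (0.0, 0.3):
        error = np.mean(np.abs(simulate_reports(stimuli, pull) - stimuli))
        print(f"{name}, serial dependence weight = {pull}: mean |error| = {error:.2f}")
# Serial dependence lowers error for the stable scene but inflates it when
# consecutive stimuli are unrelated.
```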
Human cognitive biases and the role of dopamine
Cognitive bias is a "subjective reality" that is uniquely created in the brain and affects many of our behaviors. It may lead to what behavioral economics broadly calls irrationality, such as inaccurate judgment and illogical interpretation, but it also has an adaptive aspect in terms of mental hygiene. If cognitive bias is regarded as a product of information processing in the brain, then clarifying its underlying brain mechanisms can help reveal direct relations between the brain and the mind. In my talk, I will introduce our studies investigating the neural and molecular bases of cognitive biases, focusing especially on the role of dopamine.
Time perception: how is our judgment of time influenced by regularity and change in the stimulus distribution?
To organize various experiences into a coherent mental representation, we need to properly estimate the duration and temporal order of different events. Yet our perception of time is noisy and vulnerable to various illusions. Studying these illusions can elucidate the mechanisms by which the brain perceives time. In this talk, I will review a few studies on how the brain perceives the duration of events and the temporal order between self-generated motion and sensory feedback. Combined with computational models at different levels, these experiments illustrate that the brain incorporates prior knowledge of the statistical distribution of stimulus durations, together with the decay of memory, when estimating the duration of an individual event, and that it adjusts its perception of temporal order to changes in the statistics of the environment.
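The central-tendency account sketched above reduces to a standard Bayesian cue combination (illustrative numbers, not the models from the talk): the duration estimate is a precision-weighted mix of the noisy measurement and the prior over recently experienced durations, and the more the memory of the measurement has decayed, the more the estimate is pulled toward the prior mean.

```python
def bayesian_duration_estimate(measured, prior_mean, prior_sd, measurement_sd):
    """Posterior mean for a Gaussian prior combined with Gaussian measurement noise."""
    w_prior = measurement_sd**2 / (measurement_sd**2 + prior_sd**2)
    return w_prior * prior_mean + (1 - w_prior) * measured

prior_mean, prior_sd = 0.8, 0.2   # seconds; a prior learned over the session
for measured in (0.4, 0.8, 1.2):
    for measurement_sd, label in ((0.1, "short delay"), (0.3, "after memory decay")):
        estimate = bayesian_duration_estimate(measured, prior_mean, prior_sd, measurement_sd)
        print(f"measured {measured:.1f} s ({label}): estimate {estimate:.2f} s")
# Short and long durations regress toward the prior mean, and more strongly so
# when memory decay has made the measurement less reliable.
```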
Neural correlates of belief updates in the mouse secondary motor cortex
To make judgments, the brain must be able to infer the state of the world from evidence that is often incomplete and ambiguous. To probe the neural circuits that perform the computations underlying such judgments, we developed a behavioral task for mice that required them to detect sustained increases in the speed of a continuously varying visual stimulus. In this talk, I will present evidence that the responses of the secondary motor cortex to stimulus fluctuations in this task are consistent with updates to the animal’s belief that the change has occurred. These results establish a framework for mechanistic inquiries into the neural circuits underlying inference during perceptual decision-making.
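One way to picture such belief updates (a generic sketch with made-up generative assumptions, not the analysis of the recordings) is sequential Bayesian change detection: on each time step, the belief that the sustained speed increase has begun is raised slightly by a hazard rate and then reweighted by how well the latest speed sample fits the "changed" versus "baseline" hypothesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def update_belief(belief, sample, hazard=0.01,
                  baseline_speed=1.0, changed_speed=1.5, noise_sd=0.3):
    """One Bayesian update of the belief that the sustained speed increase has begun.

    The generative assumptions (hazard rate, speeds, noise level) are illustrative.
    """
    prior = belief + (1.0 - belief) * hazard   # the change may have just occurred
    like_changed = np.exp(-(sample - changed_speed) ** 2 / (2.0 * noise_sd ** 2))
    like_baseline = np.exp(-(sample - baseline_speed) ** 2 / (2.0 * noise_sd ** 2))
    return prior * like_changed / (prior * like_changed + (1.0 - prior) * like_baseline)

# Simulate one trial in which the stimulus speed actually increases at t = 60.
speeds = np.concatenate([np.full(60, 1.0), np.full(40, 1.5)]) + rng.normal(0.0, 0.3, 100)
belief = 0.0
for t, sample in enumerate(speeds):
    belief = update_belief(belief, sample)
    if belief > 0.9:
        print(f"change reported at t = {t} (belief = {belief:.2f})")
        break
```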
Predicting Patterns of Similarity Among Abstract Semantic Relations
In this talk, I will present data showing that people’s similarity judgments among word pairs reflect distinctions between abstract semantic relations such as contrast, cause-effect, or part-whole. Further, the extent to which individual participants’ similarity judgments discriminated between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models according to their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in their representation of relations: the first model does not represent relations at all, the second represents relations implicitly, and the third represents relations explicitly. Across the three models, the third was the best predictor of human similarity judgments, suggesting that explicit relation representation is needed to fully account for human semantic cognition.
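The contrast between having no relation representation and an implicit one can be sketched with toy word vectors (made-up numbers standing in for trained embeddings; the actual models in the talk differ): representing a pair’s relation implicitly as the difference between its word vectors lets relational similarity be computed as the cosine between difference vectors, which a word-level comparison alone misses.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word vectors standing in for distributional embeddings; dimensions are
# roughly [temperature, height, vehicle, polarity, whole-vs-part].
vec = {
    "hot":   np.array([1.0, 0.0, 0.0,  1.0,  0.0]),
    "cold":  np.array([1.0, 0.0, 0.0, -1.0,  0.0]),
    "tall":  np.array([0.0, 1.0, 0.0,  1.0,  0.0]),
    "short": np.array([0.0, 1.0, 0.0, -1.0,  0.0]),
    "wheel": np.array([0.0, 0.0, 1.0,  0.0, -1.0]),
    "car":   np.array([0.0, 0.0, 1.0,  0.0,  1.0]),
}

def no_relation_similarity(pair1, pair2):
    """Baseline with no relation representation: compare the words themselves."""
    return float(np.mean([cosine(vec[a], vec[b]) for a in pair1 for b in pair2]))

def implicit_relation_similarity(pair1, pair2):
    """Implicit relation representation: compare word-vector differences."""
    rel1 = vec[pair1[0]] - vec[pair1[1]]
    rel2 = vec[pair2[0]] - vec[pair2[1]]
    return cosine(rel1, rel2)

comparisons = (
    ("contrast vs contrast", ("hot", "cold"), ("tall", "short")),
    ("contrast vs part-whole", ("hot", "cold"), ("wheel", "car")),
)
for name, p1, p2 in comparisons:
    print(f"{name}: no-relation = {no_relation_similarity(p1, p2):.2f}, "
          f"relation-as-difference = {implicit_relation_similarity(p1, p2):.2f}")
# Only the difference-vector score distinguishes pairs that share the contrast
# relation from pairs that do not.
```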