Topic: Neuroscience · fairness

Latest

Seminar · Neuroscience

Behavioral and neurobiological mechanisms of social cooperation

Yina Ma
Beijing Normal University
Jun 30, 2021

Human society operates on large-scale cooperation and shared norms of fairness. However, individual differences in cooperation and incentives to free-ride on others’ cooperation make large-scale cooperation fragile and can reduce social welfare. Deciphering the neural codes representing potential rewards and costs for self and others is crucial for understanding social decision-making and cooperation. I will first discuss how we integrate computational modeling with functional magnetic resonance imaging to investigate the neural representation of social value, and its modulation by oxytocin, a nine-amino-acid neuropeptide, in participants evaluating monetary allocations to self and other (self-other allocations). I will then introduce our recent studies examining the neurobiological mechanisms underlying intergroup decision-making using hyper-scanning, and describe how we alter intergroup decisions using psychological manipulations and pharmacological challenges. Finally, I will present our ongoing project revealing how individual cooperation spreads through human social networks. Our results help clarify the neurocomputational mechanisms underlying interpersonal and intergroup decision-making.

Seminar · Neuroscience · Recording

Data-driven Artificial Social Intelligence: From Social Appropriateness to Fairness

Hatice Gunes
Department of Computer Science and Technology, University of Cambridge
Mar 16, 2021

Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia give us a positive outlook; however, the artificial social and emotional intelligence of current technology is still limited. My lab’s research has been pushing the state of the art across a wide spectrum of topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic, and group settings; longitudinal human-robot interaction studies for wellbeing; and methods to mitigate the bias that creeps into these systems. In this talk, I will present some of my research team’s explorations in these areas, including the social appropriateness of robot actions, virtual-reality-based cognitive training with affective adaptation, and bias and fairness in data-driven emotionally intelligent systems.

Seminar · Neuroscience · Recording

Protecting Machines from Us

Pelonomi Moila
Nedbank
Sep 23, 2020

The possibilities of machine learning, and of neural networks in particular, are ever expanding. With increased opportunities to do good, however, come just as many opportunities to do harm; even when good intentions are at the helm, evidence suggests that opportunities for good may eventually prove to be the opposite. The greatest threat to what machine learning can achieve, and to us as humans, is machine learning that does not reflect the diversity of the users it is meant to serve. We must not be so preoccupied with advancing technology into the future that we fail to invest the energy needed to engineer the security measures this future requires. It is important to investigate now, as thoroughly as we investigate differing deep neural network architectures, the complex questions raised by the fact that humans, and the society in which they operate, are inherently biased and loaded with prejudice, and that these traits find their way into the machines we create (and increasingly allow to run our lives).

