
Neural Nets

Topic spotlight · World Wide


Discover seminars, jobs, and research tagged with neural nets across World Wide.
4 curated items · 4 Seminars
Updated almost 2 years ago
4 items · neural nets
Seminar · Neuroscience · Recording

Deepfake Detection in Super-Recognizers and Police Officers

Meike Ramon
University of Lausanne
Feb 12, 2024

Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from among the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or via the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test for Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos or pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs, non-SRs, and police officers whose face identity processing skills had been extensively tested with challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP.

Seminar · Neuroscience · Recording

On the implicit bias of SGD in deep learning

Amir Globerson
Tel Aviv University
Oct 19, 2021

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data? I will describe our recent work showing that gradient-descent optimization indeed leads to 'simpler' models, where simplicity is captured by lower weight norm and, in some cases, clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning Boolean functions, and learning convolutional pattern-detection architectures.
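The simplest concrete instance of this kind of implicit bias (an illustrative sketch, not a result from the talk itself; data and hyperparameters are invented): for underdetermined linear regression started from zero, gradient descent stays in the row space of the data and therefore converges to the minimum-norm interpolating solution, even though infinitely many zero-loss solutions exist.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50  # more parameters than training points: infinitely many interpolants
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on squared loss, initialized at zero.
w = np.zeros(d)
lr = 0.005
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-norm interpolant, computed directly via the pseudoinverse.
w_min = np.linalg.pinv(X) @ y

# GD interpolates the data AND lands on the min-norm solution.
print(np.linalg.norm(X @ w - y), np.linalg.norm(w - w_min))
```

Among all zero-training-error solutions, gradient descent implicitly selects the one with the smallest weight norm; the talk's point is that analogous "simplicity" biases appear in nonlinear settings such as ReLU networks.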

Seminar · Neuroscience · Recording

Making neural nets simple enough to succeed at universal relational generalization

Kenneth Kurtz
Binghamton University
Nov 16, 2020

Traditional brain-style (connectionist) approaches basically hit a wall when it comes to relational cognition. As an alternative to the well-known approaches of structured connectionism and deep learning, I present an engine for relational pattern recognition based on minimalist reinterpretations of first principles of connectionism. I will discuss results of computational experiments on problems testing relational learning and universal generalization.

Seminar · Neuroscience · Recording

Logical Neural Networks

Ndivhuwo Makondo
IBM Research-Africa & the University of Witwatersrand
Oct 20, 2020

The work to be presented in this talk proposes a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
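As a rough illustration of the idea (a minimal sketch under my own assumptions, not IBM's LNN implementation), a neuron can realize a weighted real-valued AND in the style of Łukasiewicz logic while propagating (lower, upper) truth bounds, so "unknown" inputs remain representable under the open-world assumption:

```python
def weighted_and(bounds, weights, bias=1.0):
    """Łukasiewicz-style weighted AND over (lower, upper) truth bounds.

    Each input is a (lower, upper) pair in [0, 1]; (0, 1) means "unknown".
    The neuron computes clamp(bias - sum(w_i * (1 - x_i))) for each bound.
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    lo = clamp(bias - sum(w * (1.0 - l) for (l, _), w in zip(bounds, weights)))
    hi = clamp(bias - sum(w * (1.0 - u) for (_, u), w in zip(bounds, weights)))
    return (lo, hi)

# True AND True -> True; True AND False -> False.
print(weighted_and([(1, 1), (1, 1)], [1, 1]))  # (1.0, 1.0)
print(weighted_and([(1, 1), (0, 0)], [1, 1]))  # (0.0, 0.0)
# True AND Unknown -> Unknown: the bounds stay wide open.
print(weighted_and([(1, 1), (0, 1)], [1, 1]))  # (0.0, 1.0)
```

Because the neuron's output is itself a pair of bounds, inference can tighten truth values in any direction through the formula graph, which is what makes the reasoning omnidirectional rather than tied to a fixed output variable.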