
Mathematics

Discover seminars, jobs, and research tagged with mathematics across World Wide.
67 curated items · 41 Seminars · 25 Positions · 1 ePoster
Updated 1 day ago
Position

Mitra Baratchi

Leiden Institute of Advanced Computer Science, Leiden University
Leiden University, Netherlands
Dec 5, 2025

We are looking for an excellent candidate with a master’s degree (MSc) in Artificial Intelligence, Computer Science, Mathematics, Statistics, or a closely related field to join a project focused on developing an advanced transparent machine learning framework with application to movement behaviour analysis. Smartwatches and other wearable technologies allow us to continuously collect data on our daily movement behaviour patterns. We would like to understand how machine learning techniques can be used to learn causal effects from time-series data to identify and recommend effective changes in daily activities (i.e., possible behavioural interventions) that are expected to result in concrete health improvements (e.g., improving cardiorespiratory fitness). This research, at the intersection of machine learning and causality, aims to develop algorithms for finding causal relations between behavioural indicators learned from the time-series data and associated health outcomes.

Position

Xavier Alameda-Pineda

RobotLearn team, Inria Grenoble
Inria Grenoble
Dec 5, 2025

The internship aims to explore the usefulness of the Fisher-Rao metric combined with deep probabilistic models. The main question is whether this metric has some relationship with the training of deep generative models. In plain terms, we would like to understand whether the training and/or fine-tuning of such probabilistic models follows optimal paths on the manifold of probability distributions. Your task will be to design and implement an experimental framework for measuring what kinds of paths are followed on the manifold of probability distributions when such deep probabilistic models are trained. To that aim, one must first be able to measure distances on this manifold, and this is where the Fisher-Rao metric comes into play. The candidate does not need to be familiar with the specifics of the Fisher-Rao metric, but needs to be open to learning new mathematical concepts. The implementation of these experiments will require knowledge of Python and PyTorch.
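
The Fisher-Rao metric admits closed forms in simple cases. As a toy illustration only (not part of the project description), the sketch below computes the Fisher-Rao geodesic distance between two univariate Gaussians, where the metric reduces to a rescaled hyperbolic geometry on the (mean, standard deviation) half-plane; the function name and parameters are ours, not the project's.

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two univariate Gaussians.

    Closed form obtained by mapping the (mu, sigma) half-plane to
    hyperbolic geometry: the Fisher metric for a univariate Gaussian is
    ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2.
    """
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

# Identical distributions are at distance zero.
print(fisher_rao_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
# The distance is symmetric in its two arguments.
d_ab = fisher_rao_gaussian(0.0, 1.0, 1.0, 2.0)
d_ba = fisher_rao_gaussian(1.0, 2.0, 0.0, 1.0)
print(abs(d_ab - d_ba) < 1e-12)  # True
```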

Position · Computational Neuroscience

Yashar Ahmadian

Computational and Biological Learning Lab, University of Cambridge
University of Cambridge
Dec 5, 2025

The postdoc will work on a collaborative project between the labs of Yashar Ahmadian at the Computational and Biological Learning Lab (CBL), and Zoe Kourtzi at the Psychology Department, both at the University of Cambridge. The project investigates the computational principles and circuit mechanisms underlying human visual perceptual learning, particularly the role of adaptive changes in the balance of cortical excitation and inhibition resulting from perceptual learning. The postdoc will be based in CBL, with free access to the Kourtzi lab in the Psychology department.

Position

Felipe Tobar

Universidad de Chile
Universidad de Chile
Dec 5, 2025

The Initiative for Data & Artificial Intelligence at Universidad de Chile is looking for Postdoctoral Researchers to join a collaborative team of PIs working on theoretical and applied aspects of Data Science. The role of the postholder(s) is twofold: first, they will engage and collaborate in current projects at the Initiative related to statistical machine learning, natural language processing and deep learning, with applications to time series analysis, health informatics, and astroinformatics. Second, they are expected to bring novel research lines aligned with those currently featured at the Initiative, possibly in the form of theoretical work or applications to real-world problems of general interest. These positions are offered on a fixed-term basis for up to one year, with the possibility of a further one-year extension.

Position

Max Garagnani

Department of Computing, Goldsmiths, University of London
Goldsmiths, University of London, Lewisham Way, New Cross, London SE14 6NW, UK
Dec 5, 2025

The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning. Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour.

Position · Computational Neuroscience

N/A

Istituto Italiano di Tecnologia
Istituto Italiano di Tecnologia
Dec 5, 2025

IIT welcomes applicants with an outstanding track record in Computational Neuroscience. Appropriate research areas include computational and modelling approaches for understanding the function of the nervous system. Investigators with expertise in mathematics, physics, statistics, and machine learning for neuroscience are also encouraged to apply. The position can be either tenured or tenure-track, depending on seniority and expertise. If tenure-track, the position is for an initial period of 5 years with renewal depending on evaluation. We provide generous support for salary, start-up budget, and annual running costs.

Position · Computational Neuroscience

Vinita Samarasinghe

Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum
Ruhr University Bochum, Faculty of Computer Science, Institute for Neural Computation
Dec 5, 2025

The position is part of the Collaborative Research Center “Extinction Learning” (SFB 1280) and studies the principles underlying spatial learning and its extinction with reinforcement learning models. A particular focus is the role of episodic-like memory in learning and extinction processes. The research group is highly dynamic and uses diverse computational modeling approaches including biological neural networks, cognitive modeling, and machine learning to investigate learning and memory in humans and animals.

Position

Rainer Stiefelhagen

Karlsruhe Institute of Technology (KIT) and the Hochschule Karlsruhe (HKA)
Karlsruhe Institute of Technology (KIT) and the Hochschule Karlsruhe (HKA)
Dec 5, 2025

The Cooperative Graduate School Accessibility through AI-based Assistive Technology (KATE - www.kate.kit.edu) is a new cooperative and interdisciplinary graduate school between the Karlsruhe Institute of Technology (KIT) and the Hochschule Karlsruhe (HKA). The program revolves around investigating state-of-the-art artificial intelligence methods in order to improve the autonomy and participation of persons with special needs. Dissertation topics will range from AI-based methods for text, audio, and multimedia document processing, through AI methods for interactive training and assistance systems, to investigating the consequences and ethical, legal, social, and societal implications of AI systems for people with disabilities. Doctoral candidates will study their selected topic in scientific depth and, through exchange within the graduate school, gain an overview of all relevant topics, including medical causes and their effects, the needs of the target groups, AI-based approaches, ethics, technology assessment, and societal aspects.

Position

N/A

N/A
N/A
Dec 5, 2025

We are announcing one or more 2-year postdoc positions in identification and analysis of lexical semantic change using computational models applied to diachronic texts. Our languages change over time. As a consequence, words may look the same, but have different meanings at different points in time, a phenomenon called lexical semantic change (LSC). To facilitate interpretation, search, and analysis of old texts, we build computational methods for automatic detection and characterization of LSC from large amounts of text. Our outputs will be used by the lexicographic R&D unit that compiles the Swedish Academy dictionaries, as well as by researchers from the humanities and social sciences whose work includes textual analysis as a central methodological component. The Change is Key! program and the Towards Computational Lexical Semantic Change Detection research project offer a vibrant research environment in this exciting and rapidly growing field of NLP. There is a unique opportunity to contribute to the field of LSC, but also to the humanities and social sciences, through our active collaboration with international researchers in historical linguistics, analytical sociology, gender studies, conceptual history, and literary studies.

Position · Neuroscience

N/A

New York University
New York University
Dec 5, 2025

New York University is seeking exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.

Position · Neuroscience

Geoffrey J Goodhill

Washington University School of Medicine
St. Louis, MO
Dec 5, 2025

An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.

Position

KongFatt Wong-Lin

Intelligent Systems Research Centre (ISRC), Ulster University
Ulster University, UK
Dec 5, 2025

The successful candidate will develop and apply computational modelling, and theoretical and analytical techniques, to understand brain and behavioural data across primate species, and will apply biologically based neural network modelling to elucidate mechanisms underlying perceptual decision-making. The duration of the position is 24 months, from January 2024 until the end of 2025. The personnel will be based at the ISRC in Ulster University, working with Prof. KongFatt Wong-Lin and his team, while collaborating closely with international collaborators in the USA and the Republic of Ireland.

Position · Neuroscience

Professor Geoffrey J Goodhill

Department of Neuroscience, Washington University School of Medicine
Washington University School of Medicine, 660 S. Euclid Avenue, St. Louis, MO 63110
Dec 5, 2025

The Department of Neuroscience at Washington University School of Medicine is currently recruiting investigators with the passion to create knowledge, pursue bold visions, and challenge canonical thinking as we expand into our new 600,000 sq ft purpose-built neurosciences research building. We are now seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, and cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. We are particularly interested in outstanding researchers who are both creative and collaborative.

Position · Machine Learning

Carl Rasmussen, Bernhard Schölkopf

University of Cambridge, Max Planck Institute for Intelligent Systems
University of Cambridge, Max Planck Institute for Intelligent Systems
Dec 5, 2025

The University of Cambridge Machine Learning Group and the Max Planck Institute for Intelligent Systems Empirical Inference Department in Tübingen are two of the world’s leading centres for machine learning research. In 2014, we launched a new and exciting initiative whereby a small group of select PhD candidates are jointly supervised at both institutions. The principal supervisors are Carl Rasmussen, Neil Lawrence, Ferenc Huszar, Jose Miguel Hernandez-Lobato, David Krueger, Adrian Weller and Rika Antonova at the University of Cambridge, and Bernhard Schölkopf and other research group leaders at the Max Planck Institute in Tübingen. This program is specifically for candidates whose research interests are well matched to both the principal supervisors in Cambridge and the MPI for Intelligent Systems in Tübingen. The overall duration of the PhD will be four years, with roughly three years spent at one location and one year at the other, including initial coursework at the University of Cambridge. The PhD degree will be officially granted by the University of Cambridge.

Position

Prof. Sacha Jennifer van Albada

Jülich Research Center, University of Cologne
Jülich, Germany
Dec 5, 2025

PhD and postdoc opportunities with a focus on the simulation of large-scale biological neural networks are available in the Theoretical Neuroanatomy group at Jülich Research Center, Germany. The projects will advance a research program that centers on the full-scale simulation of thalamocortical networks using the simulator NEST. The postdoc position is available in the context of the Henriette Herz Scouting Program of the Humboldt Foundation, and will be offered to a female candidate. The program is particularly aimed at candidates from countries underrepresented in the Humboldt Foundation. We will jointly define a research project, and the selected candidate will receive a Humboldt Research Fellowship. The position is available for 24 months for postdocs up to 4 years after the PhD defense, and for 18 months for experienced researchers 4-12 years after the PhD defense. The PhD defense should not have been more than 12 years ago, and candidates should not have previous or existing links to Germany in terms of study, research stays, or citizenship. Due consideration will be given to any gaps in the CV due to family care or other personal circumstances. The PhD position is open to candidates regardless of gender. The candidate should have a background in physics, mathematics, computer science, biology (or specifically neuroscience), or engineering. Excellent quantitative and analytical skills are highly valued. We offer a structured program guiding doctoral researchers through the PhD work and plenty of opportunities for local and international collaboration. The researchers will be embedded in a vibrant research institute and have links to the University of Cologne, so that candidates can gain teaching/tutoring experience.

Position

Włodzisław Duch

University Centre of Excellence “Dynamics, Mathematical Analysis and Artificial Intelligence” - Nicolaus Copernicus University
Nicolaus Copernicus University, Toruń, Poland
Dec 5, 2025

Grants are available for young researchers (no more than 5 years after PhD) who during the last 3 years did not stay in Poland for a period longer than 6 months. The grants are for 6 or 12 months at the University Centre of Excellence “Dynamics, Mathematical Analysis and Artificial Intelligence” - Nicolaus Copernicus University. There is also a second contest for experienced researchers (at least 6 years after the PhD) with outstanding scientific achievements.

Position · Computer Science

Dr. Amir Aly

Center for Robotics and Neural Systems (CRNS), School of Engineering, Computing, and Mathematics, University of Plymouth
University of Plymouth, UK
Dec 5, 2025

The University of Plymouth has several available positions in Computer Science.

Position

Md Sahidullah

Institute for Advancing Intelligence (IAI), TCG Centres for Research and Education in Science and Technology (TCG CREST)
India
Dec 5, 2025

Ph.D. fellowships are available in broad areas of Computer Science and Mathematics, with a special focus on Cryptography and Security, Quantum Information and Quantum Cryptography, Mathematics and its Applications, and Artificial Intelligence and Machine Learning. The Ph.D. degree will be awarded in collaboration with AcSIR (Academy of Scientific and Innovative Research).

Position · Computational Neuroscience

Dr. Udo Ernst

Computational Neurophysics Lab, University of Bremen
University of Bremen, Hochschulring 18, D-28359 Bremen, Germany
Dec 5, 2025

In this project we want to study organization and optimization of flexible information processing in neural networks, with specific focus on the visual system. You will use network modelling, numerical simulation, and mathematical analysis to investigate fundamental aspects of flexible computation such as task-dependent coordination of multiple brain areas for efficient information processing, as well as the emergence of flexible circuits originating from learning schemes which simultaneously optimize for function and flexibility. These studies will be complemented by biophysically realistic modelling and data analysis in collaboration with experimental work done in the lab of Prof. Dr. Andreas Kreiter, also at the University of Bremen. Here we will investigate selective attention as a central aspect of flexibility in the visual system, involving task-dependent coordination of multiple visual areas.

Position · Neuroscience

Jean-Pascal Pfister

Theoretical Neuroscience Group, Department of Physiology, University of Bern
University of Bern, Bühlplatz 5, 3012 Bern, Switzerland
Dec 5, 2025

The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims at answering a nearly century-old question in neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication as well as the computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.

Position

Zoran Tiganj, PhD

College of Arts and Sciences, Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington
Indiana University Bloomington
Dec 5, 2025

The College of Arts and Sciences and the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington invite applications for multiple open-rank, tenured or tenure-track faculty positions in one or more of the following areas: artificial intelligence, human intelligence, and machine learning to begin in Fall 2025 or after. Appointments will be in one or more departments, including Cognitive Science, Computer Science, Informatics, Intelligent Systems Engineering, Mathematics, and Psychological and Brain Sciences. We encourage applications from scholars who apply interdisciplinary perspectives across these fields to a variety of domains, including cognitive science, computational social sciences, computer vision, education, engineering, healthcare, mathematics, natural language processing, neuroscience, psychology, robotics, virtual reality, and beyond. Reflecting IU’s strong tradition of interdisciplinary research, we encourage diverse perspectives and innovative research that may intersect with or extend beyond these areas. The positions are part of a new university-wide initiative that aims to transform our understanding of human and artificial intelligence, involving multiple departments and schools, as well as the new Luddy Artificial Intelligence Center.

Position

N/A

New York University
New York University
Dec 5, 2025

New York University is home to a thriving interdisciplinary community of researchers using computational and theoretical approaches in neuroscience. We are interested in exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. A listing of faculty, sorted by their primary departmental affiliation, is given below. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Nevertheless, admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.

Seminar · Neuroscience

“Brain theory, what is it or what should it be?”

Prof. Guenther Palm
University of Ulm
Jun 26, 2025

In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation is normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and therefore theoretical physics is the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which is inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.

Seminar · Artificial Intelligence · Recording

Computational modelling of ocular pharmacokinetics

Arto Urtti
School of Pharmacy, University of Eastern Finland
Apr 21, 2025

Pharmacokinetics in the eye is an important factor for the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration and the dosing levels and intervals, and they impact eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, peroral) administration. Mathematical models integrate the joint impact of various barriers on ocular pharmacokinetics (PKs), thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and probing of disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).

Seminar · Artificial Intelligence · Recording

Why age-related macular degeneration is a mathematically tractable disease

Christine Curcio
The University of Alabama at Birmingham Heersink School of Medicine
Aug 18, 2024

Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
• Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves.
• These steep gradients give rise to huge, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
• Widely used clinical imaging technology provides cellular- and subcellular-level information.
• Data are now available at all timelines: clinical, lifespan, evolutionary.
• Snapshots are available from tissues (histology, analytic chemistry, gene expression).
• A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
• The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there have been decades of public health success.
• Animal and cell model systems are emerging to test these ideas.
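
A difference-of-Gaussians profile, as mentioned for the radial cell-density gradients, is simply a narrow central Gaussian minus a broader surround Gaussian. A minimal sketch (illustrative only; all amplitudes and widths below are made-up placeholders, not measured photoreceptor densities):

```python
import math

def difference_of_gaussians(r, a_center, sigma_center, a_surround, sigma_surround):
    """Radially symmetric difference-of-Gaussians profile at eccentricity r:
    a narrow central Gaussian minus a broader surround Gaussian."""
    center = a_center * math.exp(-r**2 / (2.0 * sigma_center**2))
    surround = a_surround * math.exp(-r**2 / (2.0 * sigma_surround**2))
    return center - surround

# Sample the profile over eccentricity (arbitrary units, placeholder parameters).
profile = [difference_of_gaussians(r * 0.1, 1.0, 0.3, 0.4, 1.0)
           for r in range(30)]
print(profile[0])                     # 0.6 -- peak value at the center (r = 0)
print(max(profile) == profile[0])     # True -- the modeled density peaks centrally
```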

Seminar · Artificial Intelligence · Recording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 13, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow the light to go through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

Seminar · Neuroscience · Recording

Brain network communication: concepts, models and applications

Caio Seguin
Indiana University
Aug 23, 2023

Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
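
As a toy illustration of the shortest-path routing assumption the review calls into question (not code from the review itself), the sketch below computes shortest-path hop counts by breadth-first search on a hypothetical 5-node binary network; the graph and function names are ours:

```python
from collections import deque

def shortest_path_length(adj, source, target):
    """Hop count of the shortest path between two nodes of an unweighted
    network (breadth-first search) -- the classic 'signalling via shortest
    paths' assumption for connectome communication."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return dist
        for nbr, connected in enumerate(adj[node]):
            if connected and nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None  # target unreachable from source

# Hypothetical 5-node network: a chain 0-1-2-3-4 plus a shortcut edge 0-3.
adj = [
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
]
print(shortest_path_length(adj, 0, 4))  # 2: via the shortcut 0-3, then 3-4
```

Alternative communication models surveyed in such work (e.g. diffusion- or ensemble-based measures) relax exactly this single-optimal-route assumption.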

Seminar · Artificial Intelligence · Recording

Computational and mathematical approaches to myopigenesis

C. Ross Ethier
Georgia Institute of Technology and Emory University
Jul 31, 2023

Myopia is predicted to affect 50% of all people worldwide by 2050, and is a risk factor for significant, potentially blinding ocular pathologies, such as retinal detachment and glaucoma. Thus, there is significant motivation to better understand the process of myopigenesis and to develop effective anti-myopigenic treatments. In nearly all cases of human myopia, scleral remodeling is an obligate step in the axial elongation that characterizes the condition. Here I will describe the development of a biomechanical assay based on transient unconfined compression of scleral samples. By treating the sclera as a poroelastic material, one can determine scleral biomechanical properties from extremely small samples, such as those obtained from the mouse eye. These properties provide proxy measures of scleral remodeling, and have allowed us to identify all-trans retinoic acid (atRA) as a myopigenic stimulus in mice. I will also describe nascent collaborative work on modeling the transport of atRA in the eye.

SeminarArtificial IntelligenceRecording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 1, 2023

The eye is a multi-component biological system in which mechanics, optics, transport phenomena, and chemical reactions are tightly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims to be entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents important support for the characterization of corneal tissues, especially in pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.

SeminarNeuroscienceRecording

Cognitive supports for analogical reasoning in rational number understanding

Shuyuan Yu
Carleton University
Mar 2, 2023

In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism by which children can learn novel, abstract concepts from only limited input. However, retrieving a proper analogy requires cognitive supports. In this talk, I propose and examine number lines as a mathematical schema of the number system that facilitates both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention involving spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitated explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas, and they shed light on helpful practices for STEM education curricula and instruction.

SeminarArtificial IntelligenceRecording

Unique features of oxygen delivery to the mammalian retina

Robert Linsenmeier
Northwestern University
Feb 6, 2023

Like all neural tissue, the retina has a high metabolic demand and requires a constant supply of oxygen. Second- and third-order neurons are supplied by the retinal circulation, whose characteristics are similar to those of the brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as from the retinal circulation. By fitting diffusion models to measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.
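The simplest member of the family of diffusion models described here is a steady-state one-dimensional diffusion–consumption equation, D d²P/dx² = Q, for oxygen tension P across an avascular layer with fixed tensions at its two boundaries. The sketch below solves it by finite differences; all parameter values are illustrative placeholders, not values from the talk.

```python
import numpy as np

# Finite-difference solution of D * d2P/dx2 = Q on [0, L] with Dirichlet
# boundary conditions (fixed oxygen tension at the choroidal and retinal
# sides of the avascular layer). Units are arbitrary.
n = 101
L = 1.0                            # normalized layer thickness
D_coef = 1.0                       # oxygen diffusivity
Q = 4.0                            # uniform consumption rate
P_choroid, P_retinal = 1.0, 0.6    # boundary oxygen tensions

x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal system: (P[i-1] - 2 P[i] + P[i+1]) / h^2 = Q / D
A = np.zeros((n, n))
b = np.full(n, Q * h**2 / D_coef)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = P_choroid, P_retinal
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

P = np.linalg.solve(A, b)
# Consumption pulls the profile below both boundary values mid-layer
print(round(P.min(), 3))   # 0.28 for these parameters
```

The characteristic dip of the profile below both boundary tensions is what makes the photoreceptor region vulnerable when either supply is compromised, e.g. in detachment or vascular occlusion.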

SeminarNeuroscience

Maths, AI and Neuroscience Meeting Stockholm

Roshan Cools, Alain Destexhe, Upi Bhalla, Vijay Balasubramnian, Dinos Meletis, Richard Naud
Dec 14, 2022

To understand brain function and to develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.

SeminarNeuroscienceRecording

The impact of analogical learning approaches on mathematics education

Bing Ngu
University of New England
Nov 24, 2022
SeminarNeuroscienceRecording

Learning by Analogy in Mathematics

Pooja Sidney
University of Kentucky
Nov 9, 2022

Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

SeminarNeuroscienceRecording

How Children Discover Mathematical Structure through Relational Mapping

Kelly Mix
University of Maryland
Jun 29, 2022

A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.

SeminarNeuroscienceRecording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 5, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that the P-cells forming a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus complex spikes can act not only as a teaching signal for a P-cell; through complex spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output-layer neuron (DCN) that was responsible for the error, as well as to the hidden-layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
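The learning requirement described here — that error information must reach both the output-layer neuron and the hidden-layer cells driving it — can be caricatured with a generic two-layer linear network trained by a single broadcast error signal. This is a minimal sketch of that requirement only, not a model of cerebellar circuitry; all names, sizes, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer linear network: "P-cells" (hidden) -> "DCN" (output).
# A single scalar error (the olivary signal in the analogy) updates
# both the readout and the hidden layer.
n_in, n_hidden = 10, 5
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # input -> hidden
w2 = rng.standard_normal(n_hidden) * 0.1           # hidden -> output
v = rng.standard_normal(n_in)                      # target linear mapping
lr = 0.01

losses = []
for _ in range(2000):
    x = rng.standard_normal(n_in)
    t = v @ x                          # desired output
    h = W1 @ x                         # hidden-layer activity
    y = w2 @ h                         # output-layer prediction
    e = t - y                          # broadcast error signal
    losses.append(e * e)
    w2 += lr * e * h                   # error reaches the output layer
    W1 += lr * e * np.outer(w2, x)     # ...and the hidden layer

print(np.mean(losses[:100]) > np.mean(losses[-100:]))  # True: loss falls
```

If the second update is removed — error never reaching the hidden layer — the network can no longer fit arbitrary targets, which is the failure mode the weak olivo-DCN projection would naively imply.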

SeminarPhysics of LifeRecording

4D Chromosome Organization: Combining Polymer Physics, Knot Theory and High Performance Computing

Anna Lappala
Harvard University
Mar 6, 2022

Self-organization is a universal concept spanning numerous disciplines including mathematics, physics and biology. Chromosomes are self-organizing polymers that fold into orderly, hierarchical and yet dynamic structures. In the past decade, advances in experimental biology have provided a means to reveal information about chromosome connectivity, allowing us to directly use this information from experiments to generate 3D models of individual genes, chromosomes and even genomes. In this talk I will present a novel data-driven modeling approach and discuss a number of possibilities that this method holds. I will discuss a detailed study of the time-evolution of X chromosome inactivation, highlighting both global and local properties of chromosomes that result in topology-driven dynamical arrest, and present and characterize a novel type of motion we discovered in knots that may have applications to nanoscale materials and machines.

SeminarNeuroscienceRecording

Neural correlates of temporal processing in humans

Andre M. Cravo
Center for Mathematics, Computing and Cognition, Federal University of ABC
Jan 25, 2022

Estimating intervals is essential for adaptive behavior and decision-making. Although several theoretical models have been proposed to explain how the brain keeps track of time, there is still no evidence toward a single one. It is often hard to compare different models due to their overlap in behavioral predictions. For this reason, several studies have looked for neural signatures of temporal processing using methods such as electrophysiological recordings (EEG). However, for this strategy to work, it is essential to have consistent EEG markers of temporal processing. In this talk, I'll present results from several studies investigating how temporal information is encoded in the EEG signal. Specifically, across different experiments, we have investigated whether different neural signatures of temporal processing (such as the CNV, the LPC, and early ERPs): 1. Depend on the task to be executed (whether or not it is a temporal task or different types of temporal tasks); 2. Are encoding the physical duration of an interval or how much longer/shorter an interval is relative to a reference. Lastly, I will discuss how these results are consistent with recent proposals that approximate temporal processing with decisional models.

SeminarNeuroscience

Maths, AI and Neuroscience meeting

Tim Vogels, Mickey London, Anita Disney, Yonina Eldar, Partha Mitra, Yi Ma
Dec 12, 2021

To understand brain function and to develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; on explainable AI; on how AI can help neuroscience; and on the extent to which the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.

SeminarNeuroscience

When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?

Eric Shea-Brown
Department of Applied Mathematics, University of Washington
Nov 17, 2021

There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general computing or information. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
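A standard way to quantify the dimension of collective dynamics discussed here is the participation ratio, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ², computed over the eigenvalues λᵢ of the activity covariance. The sketch below contrasts low- and high-dimensional activity in a toy setting (three shared latent factors vs. independent neurons); this is an illustration of the measure, not the analysis from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Effective dimension of activity X (time x neurons):
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

n_time, n_neurons = 5000, 100

# Low-dimensional collective dynamics: neurons driven by 3 shared factors
low_d = rng.standard_normal((n_time, 3)) @ rng.standard_normal((3, n_neurons))

# High-dimensional activity: every neuron fluctuates independently
high_d = rng.standard_normal((n_time, n_neurons))

print(participation_ratio(low_d))   # at most 3 (rank-3 covariance)
print(participation_ratio(high_d))  # close to the full 100 dimensions
```

The same network of 100 neurons can thus sit anywhere between a handful of effective dimensions and nearly the full count, which is exactly the range of outcomes the abstract describes across neural datasets.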

SeminarNeuroscienceRecording

3 Reasons Why You Should Care About Category Theory

Hayato Saigo
Nagahama Institute of Bio-Science and Technology
Nov 3, 2021

Category theory is a branch of mathematics which has been used to organize various regions of mathematics and related sciences from a radical “relation-first” point of view. Why should consciousness researchers care about category theory? There are (at least) three reasons: (1) everything is relational; (2) everything is relation; (3) relation is everything. In this talk we explain these reasons more concretely and introduce ideas for utilizing basic concepts of category theory in consciousness studies.

SeminarNeuroscienceRecording

Physical Computation in Insect Swarms

Orit Peleg
University of Colorado Boulder & Santa Fe Institute
Oct 7, 2021

Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.

SeminarNeuroscienceRecording

Comparing Multiple Strategies to Improve Mathematics Learning and Teaching

Bethany Rittle-Johnson
Vanderbilt University
May 19, 2021

Comparison is a powerful learning process that improves learning in many domains. For over 10 years, my colleagues and I have researched how we can use comparison to support better learning of school mathematics within classroom settings. In 5 short-term experimental, classroom-based studies, we evaluated comparison of solution methods for supporting mathematics knowledge and tested whether prior knowledge impacted effectiveness. We next developed supplemental Algebra I curriculum and professional development for teachers to integrate Comparison and Explanation of Multiple Strategies (CEMS) in their classrooms and tested the promise of the approach when implemented by teachers in two studies. Benefits and challenges emerged in these studies. I will conclude with evidence-based guidelines for effectively supporting comparison and explanation in the classroom. Overall, this program of research illustrates how cognitive science research can guide the design of effective educational materials as well as challenges that occur when bridging from cognitive science research to classroom instruction.

SeminarNeuroscienceRecording

Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖

Grace Lindsay
Gatsby Unit for Computational Neuroscience
May 9, 2021

Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories. At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience. Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.

SeminarNeuroscienceRecording

Structure-mapping in Human Learning

Dedre Gentner
Northwestern University
Apr 1, 2021

Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.

SeminarNeuroscienceRecording

One Instructional Sequence Fits all? A Conceptual Analysis of the Applicability of Concreteness Fading

Dr Tommi Kokkonen / Prof Lennart Schalk
University of Helsinki / University of Education Schwyz
Feb 10, 2021

According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.

SeminarNeuroscienceRecording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 3, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.

SeminarNeuroscience

European University for Brain and Technology Virtual Opening

Virtual Opening
European University for Brain and Technology (NeurotechEU)
Dec 15, 2020

The European University for Brain and Technology, NeurotechEU, is opening its doors on the 16th of December. From health & healthcare to learning & education, neuroscience has a key role in addressing some of the most pressing challenges we face in Europe today. Whether the challenge is the translation of fundamental research to advance the state of the art in the prevention, diagnosis, or treatment of brain disorders, or explaining the complex interactions between the brain, individuals, and their environments to design novel practices in cities, schools, hospitals, or companies, brain research is already providing solutions for society at large. There has never been a branch of study as inter- and multidisciplinary as neuroscience. From the humanities, social sciences, and law to the natural sciences, engineering, and mathematics, all traditional disciplines in modern universities have an interest in brain and behaviour as a subject matter. Neuroscience holds great promise as an applied science, providing brain-centred or brain-inspired solutions that could benefit society and kindle a new economy in Europe. The European University of Brain and Technology (NeurotechEU) aims to be the backbone of this new vision by bringing together eight leading universities, 250+ partner research institutions, companies, societal stakeholders, cities, and non-governmental organizations to shape education and training for all segments of society and in all regions of Europe. We will educate students at all levels (bachelor’s, master’s, and doctoral, as well as life-long learners), train the next generation of multidisciplinary scientists, scholars, and graduates, and provide them direct access to cutting-edge infrastructure for fundamental, translational, and applied research to help Europe address this unmet challenge.

SeminarNeuroscienceRecording

Analogies, Games and the Learning of Mathematics

Jairo Navarrete
O’Higgins University
Oct 21, 2020

Research on analogical processing and reasoning has provided strong evidence that the use of adequate educational analogies has strong, positive effects on the learning of mathematics. In this talk I will show some experimental results suggesting that analogies based on spatial representations might be particularly effective at improving mathematics learning. Since fostering mathematics learning also involves addressing psychosocial factors, such as the development of mathematical anxiety, providing social incentives to learn, and fostering engagement and motivation, I will argue that applying analogical research to the development of learning games is an area with great potential for improving math learning. Finally, I will show some early prototypes of an educational project devoted to developing games designed to foster the learning of early mathematics in kindergarten children.

SeminarNeuroscienceRecording

Abstraction and Analogy in Natural and Artificial Intelligence

Lindsey Richland
University of California, Irvine
Oct 7, 2020

Learning by analogy is a powerful tool in children’s developmental repertoire, as well as in educational contexts such as mathematics, where the key knowledge base involves building flexible schemas. However, noticing and learning from analogies develops over time and is cognitively resource-intensive. I review studies that provide insight into the mechanisms driving children’s developing analogy skills, highlighting environmental inputs (parent talk and prior experiences priming attention to relations) and neuro-cognitive factors (executive functions and brain injury). I then note implications for mathematics learning, reviewing experimental findings showing that analogy can improve learning, but also that both individual differences in EFs and environmental factors that reduce available EFs, such as performance pressure, can predict student learning.

SeminarNeuroscience

MidsummerBrains - computational neuroscience from my point of view

Christian Leibold
LMU Munich
Jul 21, 2020

Computational neuroscience is a highly interdisciplinary field, ranging from mathematics, physics, and engineering to biology, medicine, and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaboration is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used, and stored? In this webinar series, several experts describe their view of computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscienceRecording

MidsummerBrains - computational neuroscience from my point of view

Julijana Gjorgjieva
MPI brain research
Jul 14, 2020

Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscienceRecording

MidsummerBrains - computational neuroscience from my point of view

Katharina Wilmes
University of Bern
Jul 7, 2020

Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscienceRecording

MidsummerBrains - computational neuroscience from my point of view

Jutta Kretzberg
University of Oldenburg
Jun 30, 2020

Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscience

MidsummerBrains - computational neuroscience from my point of view

Hermann Cuntz
Ernst Strüngmann Institute & Frankfurt Institute for Advanced Studies
Jun 29, 2020

Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscienceRecording

MidsummerBrains - computational neuroscience from my point of view

Constantin Rothkopf
TU Darmstadt
Jun 23, 2020

Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they have had with interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.

SeminarNeuroscienceRecording

Relational Reasoning in Curricular Knowledge Components

Priya B. Kalra
University of Wisconsin-Madison
Jun 3, 2020

It is a truth universally acknowledged that relational reasoning is important for learning in Science, Technology, Engineering, and Mathematics (STEM) disciplines. However, much research on relational reasoning uses examples unrelated to STEM concepts (understandably, often to control for prior knowledge). In this talk I will discuss how real STEM concepts can be profitably used in relational reasoning research, using fraction concepts in mathematics as an example.

ePoster

Parallels between Intuitionistic Mathematics and Neurophenomenology

Brian McCorkle

Neuromatch 5