Mathematics
Eero Simoncelli, Ph.D.
The Center for Neural Science at New York University (NYU), jointly with the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation, invites applications for an open-rank joint position, with a preference for junior or mid-career candidates. We seek exceptional candidates who use computational frameworks to develop concepts, models, and tools for understanding brain function. Areas of interest include sensory representation and perception, memory, decision-making, adaptation and learning, and motor control. A Ph.D. in a relevant field, such as neuroscience, engineering, physics, or applied mathematics, is required. Review of applications will begin 28 March 2021. Further information: * Joint position: https://apply.interfolio.com/83845 * NYU Center for Neural Science: https://www.cns.nyu.edu/ * Flatiron Institute Center for Computational Neuroscience: https://www.simonsfoundation.org/flatiron/center-for-computational-neuroscience/
Mitra Baratchi
We are looking for an excellent candidate with an MSc in Artificial Intelligence, Computer Science, Mathematics, Statistics, or a closely related field to join a project focused on developing an advanced transparent machine learning framework with applications to movement behaviour analysis. Smartwatches and other wearable technologies allow us to continuously collect data on our daily movement behaviour patterns. We would like to understand how machine learning techniques can be used to learn causal effects from time-series data to identify and recommend effective changes in daily activities (i.e., possible behavioural interventions) that are expected to result in concrete health improvements (e.g., improving cardiorespiratory fitness). This research, at the intersection of machine learning and causality, aims to develop algorithms for finding causal relations between behavioural indicators learned from the time-series data and associated health outcomes.
Xavier Alameda-Pineda
The internship aims to explore the usefulness of the Fisher-Rao metric combined with deep probabilistic models. The main question is whether or not this metric has some relationship with the training of deep generative models. Put plainly, we would like to understand if the training and/or fine-tuning of such probabilistic models follows optimal paths on the manifold of probability distributions. Your task will be to design and implement an experimental framework allowing us to measure what kind of paths are followed on the manifold of probability distributions when such deep probabilistic models are trained. To that aim, one must first be able to measure distances on this manifold, and this is where the Fisher-Rao metric comes into play. The candidate does not need to be familiar with the Fisher-Rao metric specifically, but needs to be open to learning new mathematical concepts. The implementation of these experiments will require knowledge of Python and PyTorch.
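As a concrete illustration of measuring distances on the manifold of probability distributions (a sketch for orientation, not part of the internship description): for univariate Gaussians the Fisher-Rao geodesic distance has a closed form, because the Fisher information metric on this family is a rescaled version of the hyperbolic metric on the upper half-plane.

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between the univariate
    Gaussians N(mu1, sigma1^2) and N(mu2, sigma2^2).

    The Fisher metric ds^2 = (dmu^2 + 2*dsigma^2) / sigma^2 maps, via
    u = mu / sqrt(2), onto sqrt(2) times the Poincare half-plane metric,
    which yields the arccosh formula below.
    """
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * sigma1 * sigma2))

# Identical distributions are at distance zero:
print(fisher_rao_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
# With equal means the distance reduces to sqrt(2) * |ln(sigma2 / sigma1)|:
print(fisher_rao_gaussian(0.0, 1.0, 0.0, math.e))  # ~1.4142 (= sqrt(2))
```

In the envisioned experiments one would evaluate such distances (or numerical counterparts for richer model families) along the sequence of checkpoints produced during training, and compare the accumulated path length with the geodesic distance between the initial and final models.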
Yashar Ahmadian
The postdoc will work on a collaborative project between the labs of Yashar Ahmadian at the Computational and Biological Learning Lab (CBL), and Zoe Kourtzi at the Psychology Department, both at the University of Cambridge. The project investigates the computational principles and circuit mechanisms underlying human visual perceptual learning, particularly the role of adaptive changes in the balance of cortical excitation and inhibition resulting from perceptual learning. The postdoc will be based in CBL, with free access to the Kourtzi lab in the Psychology department.
Felipe Tobar
The Initiative for Data & Artificial Intelligence at Universidad de Chile is looking for Postdoctoral Researchers to join a collaborative team of PIs working on theoretical and applied aspects of Data Science. The role of the postholder(s) is twofold: first, they will engage and collaborate in current projects at the Initiative related to statistical machine learning, natural language processing, and deep learning, with applications to time series analysis, health informatics, and astroinformatics. Second, they are expected to bring novel research lines aligned with those currently featured at the Initiative, possibly in the form of theoretical work or applications to real-world problems of general interest. These positions are offered on a fixed-term basis for up to one year with a possibility of a further year's extension.
Max Garagnani
The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning. Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour.
N/A
IIT welcomes applicants with an outstanding track-record in Computational Neuroscience. Appropriate research areas include computational and modelling approaches for understanding the function of the nervous system. Investigators with expertise in mathematics, physics, statistics, and machine learning for neuroscience are also encouraged to apply. The position can be either tenured or tenure-track, depending on seniority and expertise. If tenure-track, the position is for an initial period of 5 years with renewal depending on evaluation. We provide generous support for salary, start-up budget, and annual running costs.
Vinita Samarasinghe
The position is part of the Collaborative Research Center “Extinction Learning” (SFB 1280) and studies the principles underlying spatial learning and its extinction with reinforcement learning models. A particular focus is the role of episodic-like memory in learning and extinction processes. The research group is highly dynamic and uses diverse computational modeling approaches including biological neural networks, cognitive modeling, and machine learning to investigate learning and memory in humans and animals.
Rainer Stiefelhagen
The Cooperative Graduate School Accessibility through AI-based Assistive Technology (KATE - www.kate.kit.edu) is a new cooperative and interdisciplinary graduate school between the Karlsruhe Institute of Technology (KIT) and the Hochschule Karlsruhe (HKA). The program revolves around investigating state-of-the-art artificial intelligence methods in order to improve the autonomy and participation of persons with special needs. Dissertation topics will be offered ranging from AI-based methods for text, audio, and multimedia document processing, through AI methods for interactive training and assistance systems, to investigating the consequences and ethical, legal, social, and societal implications of AI systems for people with disabilities. Within the framework of their doctorates, the sponsored candidates will work in scientific depth on a selected topic and, through exchange within the doctoral college, will gain an overall view of all relevant topics, including medical causes and their effects, the needs of the target groups, AI-based approaches, ethics, technology assessment, and societal aspects.
N/A
We are announcing one or more 2-year postdoc positions in the identification and analysis of lexical semantic change using computational models applied to diachronic texts. Our languages change over time. As a consequence, words may look the same but have different meanings at different points in time, a phenomenon called lexical semantic change (LSC). To facilitate interpretation, search, and analysis of old texts, we build computational methods for automatic detection and characterization of LSC from large amounts of text. Our outputs will be used by the lexicographic R&D unit that compiles the Swedish Academy dictionaries, as well as by researchers from the humanities and social sciences who include textual analysis as a central methodological component. The Change is Key! program and the Towards Computational Lexical Semantic Change Detection research project offer a vibrant research environment for this exciting and rapidly growing cutting-edge research field in NLP. There is a unique opportunity to contribute to the field of LSC, but also to the humanities and social sciences through our active collaboration with international researchers in historical linguistics, analytical sociology, gender studies, conceptual history, and literary studies.
N/A
New York University is seeking exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.
Geoffrey J Goodhill
An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.
KongFatt Wong-Lin
The successful candidate will develop and apply computational modelling and theoretical and analytical techniques to understand brain and behavioural data across primate species, and will apply biologically based neural network modelling to elucidate mechanisms underlying perceptual decision-making. The duration of the position is 24 months, from January 2024 until the end of 2025. The postholder will be based at the ISRC in Ulster University, working with Prof. KongFatt Wong-Lin and his team, while collaborating closely with international collaborators in the USA and the Republic of Ireland.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is currently recruiting investigators with the passion to create knowledge, pursue bold visions, and challenge canonical thinking as we expand into our new 600,000 sq ft purpose-built neurosciences research building. We are now seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, and cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents, and non-human primates. We are particularly interested in outstanding researchers who are both creative and collaborative.
Carl Rasmussen, Bernhard Schölkopf
The University of Cambridge Machine Learning Group and the Max Planck Institute for Intelligent Systems Empirical Inference Department in Tübingen are two of the world’s leading centres for machine learning research. In 2014, we launched a new and exciting initiative whereby a small group of select PhD candidates are jointly supervised at both institutions. The principal supervisors are Carl Rasmussen, Neil Lawrence, Ferenc Huszar, Jose Miguel Hernandez-Lobato, David Krueger, Adrian Weller, and Rika Antonova at Cambridge University, and Bernhard Schölkopf and other research group leaders at the Max Planck Institute in Tübingen. This program is specifically for candidates whose research interests are well matched to both the principal supervisors in Cambridge and the MPI for Intelligent Systems in Tübingen. The overall duration of the PhD will be four years, with roughly three years spent at one location and one year at the other, including initial coursework at the University of Cambridge. The PhD degree will be officially granted by the University of Cambridge.
Alessio Martino
A 2-year funded postdoc position in the scientific area of Mathematics and Computer Science on the topic 'Algorithmic Approaches for Complex Network Analysis'.
Prof. Sacha Jennifer van Albada
PhD and postdoc opportunities with a focus on the simulation of large-scale biological neural networks are available in the Theoretical Neuroanatomy group at Jülich Research Center, Germany. The projects will advance a research program that centers on the full-scale simulation of thalamocortical networks using the simulator NEST. The postdoc position is available in the context of the Henriette Herz Scouting Program of the Humboldt Foundation, and will be offered to a female candidate. The program is particularly aimed at candidates from countries underrepresented in the Humboldt Foundation. We will jointly define a research project, and the selected candidate will receive a Humboldt Research Fellowship. The position is available for 24 months for postdocs up to 4 years after the PhD defense and for 18 months for experienced researchers 4-12 years after the PhD defense. The PhD defense should not have been more than 12 years ago, and candidates should not have previous or existing links to Germany in terms of study, research stays, or citizenship. Due consideration will be given to any gaps in the CV due to family care or other personal circumstances. The PhD position is open to candidates regardless of gender. The candidate should have a background in physics, mathematics, computer science, biology (or specifically neuroscience), or engineering. Excellent quantitative and analytical skills are highly valued. We offer a structured program guiding doctoral researchers through the PhD work and plenty of opportunities for local and international collaboration. The researchers will be embedded in a vibrant research institute and have links to the University of Cologne, so that candidates can gain teaching/tutoring experience.
Włodzisław Duch
Grants are available for young researchers (no more than 5 years after the PhD) who have not stayed in Poland for longer than 6 months during the last 3 years. The grants are for 6 or 12 months at the University Centre of Excellence “Dynamics, Mathematical Analysis and Artificial Intelligence” - Nicolaus Copernicus University. There is also a second contest for experienced researchers (at least 6 years after the PhD) with outstanding scientific achievements.
Dr. Amir Aly
The University of Plymouth has several available positions in Computer Science.
Md Sahidullah
Ph.D. fellowships in broad areas of Computer Science and Mathematics with a special focus on Cryptography and Security, Quantum Information and Quantum Cryptography, Mathematics and its Applications, Artificial Intelligence and Machine Learning. The Ph.D. degree will be awarded in collaboration with AcSIR (Academy of Scientific and Innovative Research).
Dr. Udo Ernst
In this project we want to study organization and optimization of flexible information processing in neural networks, with specific focus on the visual system. You will use network modelling, numerical simulation, and mathematical analysis to investigate fundamental aspects of flexible computation such as task-dependent coordination of multiple brain areas for efficient information processing, as well as the emergence of flexible circuits originating from learning schemes which simultaneously optimize for function and flexibility. These studies will be complemented by biophysically realistic modelling and data analysis in collaboration with experimental work done in the lab of Prof. Dr. Andreas Kreiter, also at the University of Bremen. Here we will investigate selective attention as a central aspect of flexibility in the visual system, involving task-dependent coordination of multiple visual areas.
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims at answering a nearly century-old question in Neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
Zoran Tiganj, PhD
The College of Arts and Sciences and the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington invite applications for multiple open-rank, tenured or tenure-track faculty positions in one or more of the following areas: artificial intelligence, human intelligence, and machine learning to begin in Fall 2025 or after. Appointments will be in one or more departments, including Cognitive Science, Computer Science, Informatics, Intelligent Systems Engineering, Mathematics, and Psychological and Brain Sciences. We encourage applications from scholars who apply interdisciplinary perspectives across these fields to a variety of domains, including cognitive science, computational social sciences, computer vision, education, engineering, healthcare, mathematics, natural language processing, neuroscience, psychology, robotics, virtual reality, and beyond. Reflecting IU’s strong tradition of interdisciplinary research, we encourage diverse perspectives and innovative research that may intersect with or extend beyond these areas. The positions are part of a new university-wide initiative that aims to transform our understanding of human and artificial intelligence, involving multiple departments and schools, as well as the new Luddy Artificial Intelligence Center.
N/A
New York University is home to a thriving interdisciplinary community of researchers using computational and theoretical approaches in neuroscience. We are interested in exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. A listing of faculty, sorted by their primary departmental affiliation, is given below. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Nevertheless, admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation is normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and therefore theoretical physics is the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which is inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts that are 'outside' physics are typically used to describe results and help guide experimentation, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Computational modelling of ocular pharmacokinetics
Pharmacokinetics in the eye is an important factor in the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration and the dosing levels and intervals, and they influence eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, per oral) administration. Mathematical models integrate the joint impact of various barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and for probing disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).
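As a minimal illustration of the compartmental approach mentioned in the abstract (a sketch with made-up, non-drug-specific numbers): a one-compartment model with first-order elimination reduces to a single exponential decay.

```python
import math

def concentration(t, dose_mg, volume_ml, k_el):
    """Drug concentration (mg/mL) at time t (hours) in a one-compartment
    model with first-order elimination: dC/dt = -k_el * C, C(0) = dose / V."""
    return (dose_mg / volume_ml) * math.exp(-k_el * t)

# Illustrative intravitreal bolus: 0.5 mg dose, 4 mL distribution volume,
# elimination rate constant 0.1 / h (hypothetical values, not drug data).
k_el = 0.1
half_life = math.log(2.0) / k_el                    # ~6.93 h
c0 = concentration(0.0, 0.5, 4.0, k_el)             # 0.125 mg/mL at t = 0
c_half = concentration(half_life, 0.5, 4.0, k_el)   # exactly half of c0
```

Physiologically based and finite element models extend this idea by coupling many such compartments, or spatially resolved tissue domains, to capture the barriers listed above.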
Why age-related macular degeneration is a mathematically tractable disease
Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
• Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves.
• These steep gradients give rise to huge, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
• Widely used clinical imaging technology provides cellular and subcellular level information.
• Data are now available at all timelines: clinical, lifespan, evolutionary.
• Snapshots are available from tissues (histology, analytic chemistry, gene expression).
• A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
• The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there have been decades of public health success.
• Animal and cell model systems are emerging to test these ideas.
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow the light to go through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-and-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings their relevance beyond theoretical explorations into question. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning, and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
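The shortest-path assumption questioned in the Review can be made concrete with a toy sketch (not taken from the Review itself): under shortest-path routing, a signal between two regions is assumed to follow the minimum-length route through the connectome, which can be computed with a standard all-pairs algorithm.

```python
import math

def floyd_warshall(adj):
    """All-pairs shortest path lengths for a weighted adjacency matrix,
    where adj[i][j] is the 'length' of the connection (math.inf if absent)."""
    n = len(adj)
    dist = [row[:] for row in adj]
    for i in range(n):
        dist[i][i] = 0.0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = math.inf
# Toy 4-region network: the direct 0-3 edge is long, but a polysynaptic
# route through regions 1 and 2 is shorter.
adj = [
    [0.0, 1.0, INF, 4.0],
    [1.0, 0.0, 1.0, INF],
    [INF, 1.0, 0.0, 1.0],
    [4.0, INF, 1.0, 0.0],
]
dist = floyd_warshall(adj)
print(dist[0][3])  # 3.0: the 0 -> 1 -> 2 -> 3 route beats the direct edge
```

Alternative communication models replace this minimization with, for example, diffusion or navigation dynamics; that design space is exactly what the taxonomy in the Review organizes.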
Computational and mathematical approaches to myopigenesis
Myopia is predicted to affect 50% of all people worldwide by 2050, and is a risk factor for significant, potentially blinding ocular pathologies, such as retinal detachment and glaucoma. Thus, there is significant motivation to better understand the process of myopigenesis and to develop effective anti-myopigenic treatments. In nearly all cases of human myopia, scleral remodeling is an obligate step in the axial elongation that characterizes the condition. Here I will describe the development of a biomechanical assay based on transient unconfined compression of scleral samples. By treating the sclera as a poroelastic material, one can determine scleral biomechanical properties from extremely small samples, such as those obtained from the mouse eye. These properties provide proxy measures of scleral remodeling, and have allowed us to identify all-trans retinoic acid (atRA) as a myopigenic stimulus in mice. I will also describe nascent collaborative work on modeling the transport of atRA in the eye.
Computational models and experimental methods for the human cornea
The eye is a multi-component biological system, where mechanics, optics, transport phenomena, and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external action is patient-specific, and it can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of forefront computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.
How Children Design by Analogy: The Role of Spatial Thinking
Analogical reasoning is a common reasoning tool for learning and problem-solving. Existing research has extensively studied children’s reasoning when comparing, or choosing from, ready-made analogies. Relatively less is known about how children come up with analogies in authentic learning environments. Design education provides a suitable context to investigate how children generate analogies for creative learning purposes. Meanwhile, the frequent use of visual analogies in design provides an additional opportunity to understand the role of spatial reasoning in design-by-analogy. Spatial reasoning is one of the most studied human cognitive factors and is critical to the learning of science, technology, engineering, arts, and mathematics (STEAM). There is growing interest in exploring the interplay between analogical reasoning and spatial reasoning. In this talk, I will share qualitative findings from a case study, where a class of 11-to-12-year-olds in the Netherlands participated in a biomimicry design project. These findings illustrate (1) practical ways to support children’s analogical reasoning in the ideation process and (2) the potential role of spatial reasoning as seen in children mapping form-function relationships in nature analogically and adaptively to those in human designs.
Cognitive supports for analogical reasoning in rational number understanding
In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism for children to learn novel, abstract concepts from only limited input. However, retrieving the proper analogy requires cognitive supports. In this talk, I seek to propose and examine number lines as a mathematical schema of the number system to facilitate both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention of spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies shed light on helpful practices in STEM education curricula and instruction.
Analogical inference in mathematics: from epistemology to the classroom (and back)
In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the presentation, we will present some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.
Unique features of oxygen delivery to the mammalian retina
Like all neural tissue, the retina has a high metabolic demand, and requires a constant supply of oxygen. Second- and third-order neurons are supplied by the retinal circulation, whose characteristics are similar to the brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular, and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as from the retinal circulation. By fitting diffusion models to oxygen measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.
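The diffusion-model fitting mentioned above can be illustrated with a minimal one-dimensional steady-state sketch. This is not the speaker's actual model: the consumption-to-diffusivity ratio, layer thickness, and boundary oxygen tensions below are invented for illustration.

```python
import numpy as np

def oxygen_profile(x, p_choroid, p_retinal, q_over_d, thickness):
    """Steady-state 1D diffusion with uniform consumption:
    D * P''(x) = Q, so P(x) is a parabola pinned to the oxygen
    tensions at the choroidal (x = 0) and retinal (x = thickness)
    boundaries. Units are illustrative (mmHg, micrometres)."""
    a = q_over_d / 2.0
    c = p_choroid
    b = (p_retinal - p_choroid - a * thickness**2) / thickness
    return a * x**2 + b * x + c

# Oxygen tension across a 100-um avascular photoreceptor layer.
x = np.linspace(0.0, 100.0, 101)
p = oxygen_profile(x, p_choroid=80.0, p_retinal=20.0,
                   q_over_d=0.004, thickness=100.0)
```

Fitting the consumption parameter to microelectrode depth profiles of this kind is how the relative contributions of the two circulations can be estimated; occluding the retinal circulation corresponds to changing the boundary condition at x = thickness.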
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
The impact of analogical learning approaches on mathematics education
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
How Children Discover Mathematical Structure through Relational Mapping
A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.
It’s not over our heads: Why human language needs a body
In the ‘orthodox’ view, cognition has been seen as the manipulation of symbolic, mental representations, separate from the body. This dualist Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has for a long time been treated across scientific domains as a system operating largely independently from perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one into believing that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which will be presented while addressing the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.
Population coding in the cerebellum: a machine learning perspective
The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus complex spikes can act not only as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
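The synchrony argument can be caricatured numerically. In this toy sketch (not the speaker's model; the population size, complex-spike probability, and synchrony threshold are all invented), a DCN "pause" is registered only when most of a DCN neuron's P-cells fire complex spikes in the same time bin, so a population sharing an error preference transmits the error while an unsynchronized one does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def dcn_pause_rate(cs_events):
    """Fraction of time bins in which at least 80% of the P-cells
    converging on one DCN neuron fire a complex spike together,
    i.e. in which their synchronized pause reaches the DCN."""
    n_pcells = cs_events.shape[0]
    return (cs_events.sum(axis=0) >= 0.8 * n_pcells).mean()

n_pcells, n_bins, p_cs = 10, 1000, 0.05

# Shared error preference: the same error events drive every P-cell.
shared_errors = rng.random(n_bins) < p_cs
synchronized = np.tile(shared_errors, (n_pcells, 1))

# Mixed preferences: each P-cell responds to unrelated events.
unsynchronized = rng.random((n_pcells, n_bins)) < p_cs
```

With these numbers the synchronized population pauses the DCN on roughly the 5% of bins containing an error, while for the unsynchronized population near-simultaneous spikes in 8 of 10 cells are vanishingly rare, so no error information reaches the output layer.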
4D Chromosome Organization: Combining Polymer Physics, Knot Theory and High Performance Computing
Self-organization is a universal concept spanning numerous disciplines including mathematics, physics and biology. Chromosomes are self-organizing polymers that fold into orderly, hierarchical and yet dynamic structures. In the past decade, advances in experimental biology have provided a means to reveal information about chromosome connectivity, allowing us to directly use this information from experiments to generate 3D models of individual genes, chromosomes and even genomes. In this talk I will present a novel data-driven modeling approach and discuss a number of possibilities that this method holds. I will discuss a detailed study of the time-evolution of X chromosome inactivation, highlighting both global and local properties of chromosomes that result in topology-driven dynamical arrest and present and characterize a novel type of motion we discovered in knots that may have applications to nanoscale materials and machines.
Neural correlates of temporal processing in humans
Estimating intervals is essential for adaptive behavior and decision-making. Although several theoretical models have been proposed to explain how the brain keeps track of time, there is still no decisive evidence favoring a single one. It is often hard to compare different models due to their overlap in behavioral predictions. For this reason, several studies have looked for neural signatures of temporal processing using methods such as electrophysiological recordings (EEG). However, for this strategy to work, it is essential to have consistent EEG markers of temporal processing. In this talk, I'll present results from several studies investigating how temporal information is encoded in the EEG signal. Specifically, across different experiments, we have investigated whether different neural signatures of temporal processing (such as the CNV, the LPC, and early ERPs): 1. Depend on the task to be executed (whether or not it is a temporal task or different types of temporal tasks); 2. Are encoding the physical duration of an interval or how much longer/shorter an interval is relative to a reference. Lastly, I will discuss how these results are consistent with recent proposals that approximate temporal processing with decisional models.
Maths, AI and Neuroscience meeting
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; on explainable AI; on how AI can help neuroscience; and on the extent to which the brain may be using algorithms similar to the ones used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.
When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?
There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general computing or information. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
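A standard way to make "dimension of the collective dynamics" concrete (one of several measures the field uses; sizes and data here are arbitrary) is the participation ratio of the activity covariance spectrum:

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of population activity.
    For covariance eigenvalues lambda_i of a (time x neurons)
    matrix: PR = (sum lambda_i)^2 / sum(lambda_i^2), ranging from
    1 (one dominant mode) to the neuron count (equal variance)."""
    lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # clip tiny negative round-off
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(1)
n_time, n_neurons, n_latents = 2000, 50, 3

# Low-rank latent dynamics read out by many neurons: low PR.
low_dim = rng.standard_normal((n_time, n_latents)) @ \
          rng.standard_normal((n_latents, n_neurons))

# Independent noise in every neuron: PR near the neuron count.
high_dim = rng.standard_normal((n_time, n_neurons))
```

The same statistic can be applied to recorded spike counts or to the hidden-unit activations of a trained network, which puts biological and artificial systems on a common footing for the comparison described above.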
3 Reasons Why You Should Care About Category Theory
Category theory is a branch of mathematics that has been used to organize various regions of mathematics and related sciences from a radical “relation-first” point of view. Why should consciousness researchers care about category theory? There are (at least) 3 reasons: (1) everything is relational; (2) everything is relation; (3) relation is everything. In this talk we explain these reasons more concretely and introduce ideas for utilizing basic concepts of category theory in consciousness studies.
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
The quest for the cortical algorithm
The cortical algorithm hypothesis states that there is one common computational framework to solve diverse cognitive problems such as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and present a few ideas on what some of its components might look like. I'll explain why a highly interdisciplinary approach drawing on neuroscience, computer science, mathematics and physics is needed to make further progress on this important question.
Comparing Multiple Strategies to Improve Mathematics Learning and Teaching
Comparison is a powerful learning process that improves learning in many domains. For over 10 years, my colleagues and I have researched how we can use comparison to support better learning of school mathematics within classroom settings. In 5 short-term experimental, classroom-based studies, we evaluated comparison of solution methods for supporting mathematics knowledge and tested whether prior knowledge impacted effectiveness. We next developed supplemental Algebra I curriculum and professional development for teachers to integrate Comparison and Explanation of Multiple Strategies (CEMS) in their classrooms and tested the promise of the approach when implemented by teachers in two studies. Benefits and challenges emerged in these studies. I will conclude with evidence-based guidelines for effectively supporting comparison and explanation in the classroom. Overall, this program of research illustrates how cognitive science research can guide the design of effective educational materials as well as challenges that occur when bridging from cognitive science research to classroom instruction.
Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖
Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? — The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories. — At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience. — Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. 
These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.
Structure-mapping in Human Learning
Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.
One Instructional Sequence Fits all? A Conceptual Analysis of the Applicability of Concreteness Fading
According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.
Cross Domain Generalisation in Humans and Machines
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
European University for Brain and Technology Virtual Opening
The European University for Brain and Technology, NeurotechEU, is opening its doors on the 16th of December. From health & healthcare to learning & education, Neuroscience has a key role in addressing some of the most pressing challenges that we face in Europe today. Whether the challenge is the translation of fundamental research to advance the state of the art in prevention, diagnosis or treatment of brain disorders, or explaining the complex interactions between the brain, individuals and their environments to design novel practices in cities, schools, hospitals, or companies, brain research is already providing solutions for society at large. There has never been a branch of study that is as inter- and multi-disciplinary as Neuroscience. From the humanities, social sciences and law to natural sciences, engineering and mathematics, all traditional disciplines in modern universities have an interest in brain and behaviour as a subject matter. Neuroscience holds great promise to become an applied science, to provide brain-centred or brain-inspired solutions that could benefit society and kindle a new economy in Europe. The European University of Brain and Technology (NeurotechEU) aims to be the backbone of this new vision by bringing together eight leading universities, 250+ partner research institutions, companies, societal stakeholders, cities, and non-governmental organizations to shape education and training for all segments of society and in all regions of Europe. We will educate students across all levels (bachelor’s, master’s, doctoral as well as life-long learners), train the next generation of multidisciplinary scientists, scholars and graduates, and provide them with direct access to cutting-edge infrastructure for fundamental, translational and applied research to help Europe address this unmet challenge.
The impact of elongation on transport in shear flow
I shall present two recent pieces of work investigating how shape affects the transport of active particles in shear. Firstly, we will consider the sedimentation of particles in 2D laminar flow fields of increasing complexity, and how insights from this can help explain why turbulence can enhance the sedimentation of negatively buoyant diatoms [1]. Secondly, we will consider the 3D transport of elongated active particles under the action of an aligning force (e.g. gyrotactic swimmers) in some simple flow fields, and will see how shape can influence the vertical distribution, for example changing the structure of thin layers [2]. [1] Enhanced sedimentation of elongated plankton in simple flows (2018). IMA Journal of Applied Mathematics. W Clifton, RN Bearon, & MA Bees. [2] Elongation enhances migration through hydrodynamic shear (in prep). RN Bearon & WM Durham.
Analogies, Games and the Learning of Mathematics
Research on analogical processing and reasoning has provided strong evidence that the use of adequate educational analogies has strong and positive effects on the learning of mathematics. In this talk I will show some experimental results suggesting that analogies based on spatial representations might be particularly effective to improve mathematics learning. Since fostering mathematics learning also involves addressing psychosocial factors such as the development of mathematical anxiety, providing social incentives to learn, and fostering engagement and motivation, I will argue that one area to explore with great potential to improve math learning is applying analogical research in the development of learning games aimed to improve math learning. Finally, I will show some early prototypes of an educational project devoted to developing games designed to foster the learning of early mathematics in kindergarten children.
Abstraction and Analogy in Natural and Artificial Intelligence
Learning by analogy is a powerful tool in children’s developmental repertoire, as well as in educational contexts such as mathematics, where the key knowledge base involves building flexible schemas. However, noticing and learning from analogies develops over time and is cognitively resource intensive. I review studies that provide insight into the mechanisms driving children’s developing analogy skills, highlighting environmental inputs (parent talk and prior experiences priming attention to relations) and neuro-cognitive factors (Executive Functions and brain injury). I then note implications for mathematics learning, reviewing experimental findings that show analogy can improve learning, but also that both individual differences in EFs and environmental factors that reduce available EFs, such as performance pressure, can predict student learning.
MidsummerBrains - computational neuroscience from my point of view
Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations both in the research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences they had with interdisciplinary projects. This webinar is open for all interested students and researchers. If you are interested to participate live, please send a short message to smartstart@fz-juelich.de Please note, these lectures will be recorded for subsequent publishing as online lecture material.
Relational Reasoning in Curricular Knowledge Components
It is a truth universally acknowledged that relational reasoning is important for learning in Science, Technology, Engineering, and Mathematics (STEM) disciplines. However, much research on relational reasoning uses examples unrelated to STEM concepts (understandably, often to control for prior knowledge). In this talk I will discuss how real STEM concepts can be profitably used in relational reasoning research, using fraction concepts in mathematics as an example.
Parallels between Intuitionistic Mathematics and Neurophenomenology
Neuromatch 5