What is the main question?:
What are the caveats of real-time neuroimaging research practices? Can we eventually arrive at a set of guidelines and recommendations for quality control, to be employed by all researchers across the field of real-time fMRI neurofeedback?
Why is this an important question to neurofeedback researchers:
With the recent exponential increase in rt-fMRI studies, especially from new research groups, it is easy for tacit quality assurance practices to be neglected.
A lack of commonly accepted standard recommendations means that a new neurofeedback researcher has no precise guidelines to adhere to in order to benefit from the accumulated experience of the field and to ensure that even one's first experiment is not compromised by technological challenges that other researchers have already resolved in established labs. In the long run, this problem can become damaging to the entire field of real-time neurofeedback.
In this special session, we have invited a few experienced rt-fMRI researchers to share their experiences and approaches to overcoming common caveats of real-time practices, in an attempt to eventually integrate various approaches to real-time quality control into a coherent set of recommendations for best practices.
Is there extant literature to support different sides of the debate:
This is a special topic in which the different sides are complementary rather than opposed to one another: different labs have placed differential emphasis on particular aspects of quality control, often depending on the particularities of the research questions addressed and the technological apparatus available.
The literature on this topic is relatively limited, which is precisely why the functional neurofeedback community must collaborate on establishing more precise criteria. The topic remains unsettled: most researchers agree on the need for quality assurance, yet there is no fixed set of recommendations for best practices.
Some of the resulting issues, on which our speakers have placed differential emphasis, include: optimizing filtering procedures, controlling timing accuracy, maximizing SNR and CNR, controlling for physiological noise, ensuring data consistency, discarding outliers, selecting reproducible ROIs, correcting for field distortions, and working in real time at 7T.
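Several of these quality-control checks lend themselves to simple online computation. As an illustrative sketch (not any particular lab's pipeline), temporal SNR of the incoming volume stream can be tracked incrementally with Welford's online algorithm, so a drop in tSNR can be flagged mid-run; the `RunningTSNR` class and the simulated volumes below are hypothetical:

```python
import numpy as np

class RunningTSNR:
    """Incrementally track temporal SNR (temporal mean / temporal std) per
    voxel using Welford's online algorithm, so QC can run during acquisition
    without storing the whole time series."""
    def __init__(self):
        self.n = 0
        self.mean = None
        self.m2 = None  # running sum of squared deviations

    def update(self, volume):
        vol = np.asarray(volume, dtype=float)
        if self.mean is None:
            self.mean = np.zeros_like(vol)
            self.m2 = np.zeros_like(vol)
        self.n += 1
        delta = vol - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (vol - self.mean)

    def tsnr(self):
        if self.n < 2:
            return None
        std = np.sqrt(self.m2 / (self.n - 1))
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(std > 0, self.mean / std, 0.0)

# Simulated run: constant signal of 100 plus unit-variance noise,
# so the per-voxel tSNR should come out near 100.
rng = np.random.default_rng(0)
qc = RunningTSNR()
for _ in range(200):
    qc.update(100.0 + rng.standard_normal((4, 4, 4)))
print(qc.tsnr().mean())
```

In practice the same running statistics could feed a display that warns the experimenter when tSNR in the feedback ROI falls below a pre-registered threshold.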
1. Klaus Mathiak (Uniklinik RWTH Aachen)
2. Lydia Hellrung (University of Zürich)
3. Johan van der Meer (Otto-von-Guericke University / University of Amsterdam)
What is the main question?:
Historically, the general lack of double-blinded placebo controls in neurofeedback studies has been a source of concern. This issue is regularly brought up in high-profile reviews as a caveat of the method. And yet, it remains unclear whether genuine double-blinded studies are feasible for most applications. Others may question whether placebo control is truly necessary or simply too conservative: one could argue that if a treatment is better than the passage of time or some other existing treatment, it does not have to be shown to be more effective than placebo-controlled sham treatments. On the other hand, the double-blinded clinical trial is the gold standard of modern medicine. Should neurofeedback studies attempt to meet this standard? What would be the best way for our community to establish common guidelines for future studies?
Why is this an important question to neurofeedback researchers?:
The issue of double-blinded control concerns more than just acceptance for clinical trials. It is a matter of experimental rigor that can affect the broad image of the field as a legitimate scientific community of the highest standard. It can affect the publishability of our papers in high-profile journals, funding, and so on, and may have far-reaching implications for the growth and prosperity of the field.
Is there extant literature to support different sides of the debate?:
The following are some example articles on the relevant controversies, which appeared recently in the literature and generated considerable impact:
Sitaram R, Ros T, Stoeckel L, Haller S, Scharnowski F, Lewis-Peacock J, Weiskopf N, Blefari ML, Rana M, Oblak E, Birbaumer N, Sulzer J. Closed-loop brain training: the science of neurofeedback. Nat Rev Neurosci. 2017 Feb;18(2):86-100.
Lofthouse N, Arnold LE, Hurt E. Current status of neurofeedback for attention-deficit/hyperactivity disorder. Curr Psychiatry Rep. 2012 Oct;14(5):536-42.
Young KD, Siegle GJ, Zotev V, Phillips R, Misaki M, Yuan H, Drevets WC, Bodurka J. Randomized Clinical Trial of Real-Time fMRI Amygdala Neurofeedback for Major Depressive Disorder: Effects on Symptoms and Autobiographical Memory Recall. Am J Psychiatry. 2017 Apr 14:appiajp201716060637.
Schabus M, Griessenberger H, Gnjezda MT, Heib DPJ, Wislowska M, Hoedlmoser K. Better than sham? A double-blind placebo-controlled neurofeedback study in primary insomnia. Brain. 2017 Apr 1;140(4):1041-1052.
1. Tor Wager (University of Colorado, Boulder)
2. Vincent Taschereau-Dumouchel (UCLA)
3. Kymberly Young (University of Pittsburgh)
Co-chair / Co-submitter:
Mitsuo Kawato (ATR, Kyoto)
What are the various control groups/conditions being implemented in rtfMRI studies, and what are the pros and cons of each?
To establish that behavioral/cognitive changes are directly caused by rtfMRI neurofeedback training, studies must implement control conditions. This session will provide an overview of the control conditions currently employed in rtfMRI neurofeedback designs, as well as how to select among them to meet particular study goals.
There are many different control conditions being employed in rtfMRI-nf research. There is no consensus as to which condition is best, and the answer likely depends on what aspects of the task design one is trying to control for. This can range from determining whether participants can learn to control regional hemodynamic activity via rtfMRI-nf to determining whether the feedback signal is necessary for learning to regulate hemodynamic activity. This session is designed to present participants with the most commonly employed control conditions in rtfMRI neurofeedback designs and to discuss the pros and cons of using each. Conditions to be covered include strategy only, bidirectional control, sham, and feedback from a different region.
1. Frank Scharnowski (Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland)
2. Bettina Sorger (Departments of Vision and Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, The Netherlands)
3. Kymberly Young (Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, USA)
4. Mariela Rance (Department of Radiology and Biomedical Imaging & Magnetic Resonance Research Center, Yale University School of Medicine, New Haven, CT, USA)
How can we define successful neurofeedback learning and how can we measure it?
This session will focus on a major issue in neurofeedback (NF) research: the neural measure of the learning effect. This matter has not been settled at either the group or the individual level, leading to inconsistencies between studies and difficulties in establishing a standard for comparison. The significance of this topic relates to the well-known fact that NF learning progression differs among individuals, creating a need to better characterize and predict who is most likely to learn and benefit from an intervention. However, it remains unclear when and why to use the various possible indices of learning, such as the mean signal difference between conditions, the progression across trials, or the change in resting-state signal fluctuations. Moreover, it is debatable whether there is one correct measure or whether different measurements should be combined.
In the proposed session we will discuss different methods of measuring NF learning and point out the advantages and disadvantages of each method. Our goal is to open a discussion that may lead towards a consensus on this matter, thus enhancing research reliability and comparability and, through that, improving the standard of care.
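To make the candidate indices concrete, here is a minimal sketch (hypothetical function and data, not a proposed standard) of two of the measures mentioned above: the overall mean regulation-versus-baseline difference, and its linear trend across runs:

```python
import numpy as np

def learning_indices(reg_means, base_means):
    """Two candidate NF learning indices:
    (1) overall mean regulation-vs-baseline signal difference, and
    (2) the linear slope of that difference across successive runs.
    reg_means / base_means: per-run mean ROI signal during regulation
    and baseline blocks, respectively."""
    diff = np.asarray(reg_means, dtype=float) - np.asarray(base_means, dtype=float)
    overall = diff.mean()
    runs = np.arange(len(diff))
    slope = np.polyfit(runs, diff, 1)[0]  # change in the effect per run
    return overall, slope

# Hypothetical subject whose regulation effect grows over 5 runs
overall, slope = learning_indices([0.1, 0.2, 0.3, 0.4, 0.5],
                                  [0.0, 0.0, 0.0, 0.0, 0.0])
print(overall, slope)  # 0.3 overall effect, 0.1 increase per run
```

A subject can score high on one index and low on the other (e.g. a large but flat effect), which is precisely why the choice, or combination, of measures matters.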
1. Mr. Noam Goldway, MA (Tel-Aviv Sourasky Medical Center, Affiliated to Tel-Aviv University. and Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel)
2. Prof. David Linden, MD, PhD (Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands. Netherlands Institute for Neuroscience, Amsterdam, The Netherlands)
3. Rainer Goebel, PhD (Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands. Netherlands Institute for Neuroscience, Amsterdam, The Netherlands)
4. Ayumu Yamashita (Advanced Telecommunications Research Institute (ATR), Department of Cognitive Neuroscience, Kyoto, Japan)
Neurofeedback training induces neuroplastic changes underlying behavior, but it is unclear by what mechanism it operates. Our lack of a mechanistic model for neurofeedback learning (Sitaram et al. 2016) prevents us from understanding why some of the most promising pilot studies fail to translate into clinical trials (Hawkinson et al. 2012). It is unclear how such self-modulated brain activation results in neuroplasticity underlying behavioral changes. The prevailing hypothesis is that self-regulation acts as endogenous stimulation, analogous to exogenous transcranial magnetic stimulation (TMS) (Ros et al. 2010) or transcranial direct current stimulation (tDCS). According to this mechanism, participants should maximize activity in a neural circuit as much as possible. Recent examples showing behavioral changes following such neurofeedback training include self-regulation of alpha waves to improve visual sensitivity (Okazaki et al. 2015), up-regulation of motor cortical BOLD to enhance fine motor control (Blefari et al. 2015), sensorimotor rhythm training to enhance surgical skills (Ros et al. 2009), and up-regulation of DLPFC BOLD to improve working memory (Sherwood et al. 2016). Likewise, down-regulation of activation in the presence of a stimulus (e.g. Zilverstand et al. 2015; Nicholson et al. 2017) is assumed to be a type of inhibitory command. Nevertheless, we still lack sufficient empirical evidence to fully embrace the endogenous stimulation model. One confounding factor is emerging evidence from brain stimulation studies indicating high intra-individual variability of induced plasticity following stimulation (e.g. LTP-like vs. LTD-like) (Ridding and Ziemann 2010), suggesting that Hebbian-like plasticity following neurofeedback can additionally be influenced by homeostatic mechanisms (Kluetsch et al. 2014).
Hence, instead of maximizing the activity of a given substrate, another model of the efficacious mechanism of neurofeedback may be matching a set point of activation, for example, that of an expert or healthy control. This is most visible in studies that employ multivariate pattern analysis (MVPA)-based neurofeedback, such as matching a voxel-wise pattern of activity in visual cortex to manipulate visual perception (Shibata et al. 2011) or adaptive neurofeedback to manipulate attentional vigilance (deBettencourt et al. 2015). Non-MVPA-based studies that differentially regulate neural circuitry, such as motor cortical laterality (Neyedli et al. 2017), aim to normalize brain activity in those with neurological dysfunction relative to healthy controls, in order to restore more balanced network function (e.g. excitation/inhibition ratio, neural synchronization). In this Special Topics discussion, we will cover the evidence supporting these competing, and probably coexisting, accounts of neuroplasticity in neurofeedback and offer suggestions on how to test these hypotheses.
1. James Sulzer, Ph.D. (University of Texas at Austin)
2. Tomas Ros, Ph.D. (University of Geneva)
3. Jarrod Lewis-Peacock Ph.D. (University of Texas at Austin)
Methods of cortical information flow analysis, based on non-invasive EEG/MEG recordings, make a unique contribution to the human connectome, in that they span the whole cortex and give a directional, high-time-resolution account of information transactions. First, an inverse solution is used to estimate cortical signals of electric neuronal activity. Second, measures of "connectivity" (e.g. derived from autoregressive models or from cross-frequency coupling models) are applied to these signals for connectome inference.
However, these signals have low spatial resolution, such that any estimated signal is an instantaneous mixture of the true, unobserved signals across the cortex. Low spatial resolution produces false positive connections, and thus false connectomes, even for connectivity measures thought to be immune to low resolution, such as cross-frequency phase-amplitude coupling and isolated effective coherence. One recent approach in the literature to solving this problem, termed "leakage correction", is based on the assumption that the unobserved signals have zero cross-correlation at lag zero. We contend that this is a baseless assumption and show how it leads to false connections under a very broad range of electrophysiological conditions.
We solve this problem with the method of “innovations orthogonalization”, which is based on the assumption that the multivariate autoregressive innovations are orthogonal. It is shown that under very broad conditions, the new method produces proper human connectomes, even when the signals are not generated by an autoregressive model.
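The leakage problem itself is easy to demonstrate numerically. The toy example below (our illustration, not the authors' method) mixes two independent sources with a hypothetical instantaneous mixing matrix and shows that the mixed signals acquire a strong zero-lag correlation, which a naive connectivity measure would report as a connection:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Two independent "true" cortical sources: zero correlation by construction
s = rng.standard_normal((2, n))
# Instantaneous mixing, as produced by the low spatial resolution of an
# inverse solution (the mixing coefficients here are hypothetical)
A = np.array([[1.0, 0.4],
              [0.4, 1.0]])
x = A @ s  # "estimated" signals: each is a mixture of both true sources
r_true = np.corrcoef(s)[0, 1]
r_mixed = np.corrcoef(x)[0, 1]
print(r_true, r_mixed)  # near zero vs. clearly nonzero
```

Analytically, the mixed correlation here is 0.8/1.16 ≈ 0.69 despite the sources being independent, so any method that interprets zero-lag correlation as connectivity would infer a false connection.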
Recent developments in resting-state functional connectivity and diffusion MRI allow us to investigate how the brain is organized as a network. These methods have revealed the macro-scale network organization of the human brain, including small-world topology, the existence of functional subnetworks (default mode, attention, sensory, and motor systems), and individual differences in functional connectivity. However, investigating dynamics on the brain network is challenging, partly because of a lack of measurement methodology. To address this issue, we have been developing an fMRI-informed MEG/EEG source reconstruction method to visualize brain activity at millisecond temporal resolution (Sato et al. 2004, NeuroImage; MATLAB toolbox available from http://vbmeg.atr.jp/?lang=en). The effectiveness of the method has been demonstrated in basic neuroscience experiments (Yoshioka et al. 2008, NeuroImage; Shibata et al. 2008, Cerebral Cortex) as well as in BMI applications (Toda et al. 2011, NeuroImage; Yoshimura et al. 2012, NeuroImage; Yanagisawa et al. 2016, Nature Communications). Recently we further developed an extension of VBMEG to describe event-related brain network dynamics (Fukushima et al. 2015, NeuroImage). In this method, the dynamical generative process of MEG is modeled via a current-source network dynamical model whose network structure is constrained by diffusion MRI. An algorithm is proposed to infer the current sources and the dynamics model parameters from MEG and fMRI data. This method allows us to visualize how brain activities are generated by interactions on the structural network. In this talk I will summarize our attempts to understand event-related brain dynamics using a multimodal integration approach.
One broad objective in neuroscience is to comprehend the mechanisms of large-scale, oscillatory neural dynamics: how they enable function by shaping communication in brain networks, and how the earliest detection of their alterations in disease can contribute to improved prevention and intervention in healthcare. We will review how the ubiquitous polyrhythmic activity of the brain has been approached empirically so far, with underlying mechanisms that remain poorly understood. This hinders our comprehension of how 1) perception and behaviour emerge from brain network activity, and 2) the pathophysiological developments of brain and mental-health disorders, increasingly studied as network diseases, affect large-scale neural communication.
I will introduce how these difficult questions can benefit from a bottom-up approach: We aim to understand how basic physiological factors of neural integrity and function shape the dynamical structure of oscillatory brain rhythms, such as their interdependence across multiple frequencies through cross-frequency coupling. These phenomena represent a deep source of uncharted markers of neural excitability, activity and connectivity. I will illustrate these principles with our latest results concerning the resting brain, multimodal perception and pathophysiological markers of epilepsy and neurodegenerative syndromes.
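Cross-frequency phase-amplitude coupling can be quantified in several ways; the sketch below implements one common choice, a Canolty-style mean-vector modulation index, on synthetic data (the signal parameters are hypothetical, and an FFT-based analytic signal stands in for a full filtering pipeline):

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def pac_mi(slow, fast):
    """Mean-vector phase-amplitude coupling index (Canolty-style),
    normalized by mean amplitude: ~0 = no coupling, larger = stronger."""
    phase = np.angle(analytic(slow))   # phase of the slow rhythm
    amp = np.abs(analytic(fast))       # amplitude envelope of the fast rhythm
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp))

fs, dur = 500.0, 4.0
t = np.arange(int(fs * dur)) / fs
slow = np.sin(2 * np.pi * 8 * t)                      # theta-band rhythm
env = 1.0 + 0.8 * np.sin(2 * np.pi * 8 * t)           # gamma amp locked to theta
fast_coupled = env * np.sin(2 * np.pi * 80 * t)       # coupled gamma
fast_flat = np.sin(2 * np.pi * 80 * t)                # uncoupled gamma
print(pac_mi(slow, fast_coupled), pac_mi(slow, fast_flat))
```

The coupled signal yields a clearly larger index than the uncoupled one; on real data, band-pass filtering and surrogate-based significance testing would precede this computation.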
My talk will focus on the intentions and hurdles in using neurofeedback (NF) as a non-invasive neuromodulation technique in neuropsychiatry by addressing three open issues. First, the possible therapeutic mechanisms of NF, considering the neural systems of reward, salience, and control processes, and their dependence on the training target. Second, the scope and limitations of existing clinical trials with NF aimed at alleviating depression, trauma-related anxiety, and emotion dysregulation. Third, ways and directions for improving the scientific rigor and, through that, the clinical effectiveness of NF training. Across these issues, the use of personalized neural indications for training protocols, context-specific immersive brain-computer interfaces, and home-based NF platforms will be pondered. Altogether, this talk will provide a critical overview of current and future perspectives on the role of NF in harnessing the brain to heal itself.
Real-time fMRI neurofeedback can change local brain activity and its related behavior. Since the advent of fMRI neurofeedback, remarkable progress has been achieved through three developments: implicit protocols, external reward, and multivariate analysis. In this talk, I will first summarize these three aspects. Second, I will describe how they have been integrated into a new technology, decoded neurofeedback (DecNef), and how DecNef has advanced basic and clinical brain research. Third, I will discuss potential problems of DecNef, such as the one-to-many relationship from a voxel pattern to neuronal activity patterns and the curse of dimensionality, and propose theoretical solutions to these problems. Finally, I will present the results of our meta-analyses and simulations based on recent DecNef studies in order to assess the validity of these theoretical solutions.
Learning has been studied at multiple levels, including behavior, brain regions, individual neurons, and synapses. However, little is known about how populations of neurons change their activity in concert during learning. Are there network constraints on the types of new neural activity patterns that can be achieved? We studied this question using a brain-computer interface (BCI), which allows us to specify which population activity patterns lead to task success. We identified a simple network principle that can predict which types of activity patterns are easier or harder for the subject to learn to generate. This work provides a network-level explanation for why learning some tasks may be easier than others.
Resective brain surgery is often performed in people with intractable epilepsy, congenital structural lesions, vascular anomalies, or neoplasms. Surgical planning of the resection procedure depends substantially on the delineation of abnormal tissue and on the creation of a functional map of eloquent cortex, i.e., cortex involved in motor or language function. Traditionally, different methodologies have been used to produce this functional map, most notably electrical cortical stimulation (ECS) and functional magnetic resonance imaging (fMRI), but each of these methods has important shortcomings (including increased morbidity, time consumption, expense, or practicality).
Patients undergoing invasive brain surgery would benefit greatly from a mapping methodology that is safe, can be rapidly applied, is comparatively inexpensive, is procedurally simple, and is also congruent with existing techniques (in particular ECS mapping). Task-related changes detected in electrocorticographic (ECoG) recordings could provide the basis for a technique with those desirable characteristics. This approach seems particularly attractive because existing surgical protocols often already include the placement of subdural electrodes, and because a number of recent studies have shown that ECoG activity in the broadband gamma (70-170 Hz) band is directly reflective of the activity of neuronal populations directly underneath the electrodes and can also be directly linked to the BOLD response detected using fMRI.
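As a rough illustration of the kind of task-related broadband gamma change such mapping relies on, the sketch below (hypothetical function and synthetic data, not the cortiQ implementation) compares periodogram power in the 70-170 Hz band between a simulated "task" epoch and a "rest" epoch from one electrode:

```python
import numpy as np

def band_power(x, fs, lo=70.0, hi=170.0):
    """Mean periodogram power (|FFT|^2 / N) over frequency bins that fall
    inside the broadband gamma (70-170 Hz) range."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Hypothetical contrast: the "task" epoch carries an extra in-band component
rng = np.random.default_rng(2)
fs, n = 1000.0, 2000
t = np.arange(n) / fs
rest = rng.standard_normal(n)                     # baseline broadband noise
task = rest + 2.0 * np.sin(2 * np.pi * 100 * t)   # added 100 Hz (in-band) signal
print(band_power(task, fs) > band_power(rest, fs))  # True: task epoch stands out
```

Repeating this contrast per electrode, with proper statistics across trials, is the basic ingredient of a task-based functional map.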
Over the past decade, we have been using and extending this understanding and applying it to develop a robust and practical ECoG-based procedure for presurgical functional mapping of eloquent cortex. This procedure is now readily available to others (cortiQ, www.cortiq.eu). We and others have shown that this procedure can produce a functional map of motor, language, or cognitive function within a few minutes, and that the results of this ECoG-based mapping are in strong congruence with the results derived using ECS mapping.
In this talk, I will describe the neurophysiological and technical principles of this technique and give examples of its clinical utility in the context of different types of invasive brain surgery. I will also discuss the practical clinical relevance of this technique compared to ECS and fMRI.
A fundamental challenge of modern society is the development of effective approaches to enhance brain function and cognition in both the healthy and the impaired. For the healthy, this should be a core mission of our educational system; for the cognitively impaired, it is the primary goal of our medical system. Unfortunately, neither of these systems has effectively met this challenge. I will describe a novel approach from our center at UCSF, Neuroscape, that uses custom-designed video games to achieve meaningful and sustainable cognitive enhancement via personalized closed-loop systems (Nature 2013; Neuron 2014). I will also share the next stage of our research program, which integrates our video games with the latest technological innovations in software (e.g., brain-computer interface algorithms, GPU computing, cloud-based analytics) and hardware (e.g., virtual reality, mobile EEG, motion capture, physiological recording devices such as watches, transcranial brain stimulation) to further enhance our brain's information processing systems, with the ultimate aim of improving quality of life.