{"id":1157,"date":"2018-06-19T13:58:12","date_gmt":"2018-06-19T11:58:12","guid":{"rendered":"http:\/\/cuttingeeg.org\/?page_id=1157"},"modified":"2018-06-19T13:58:12","modified_gmt":"2018-06-19T11:58:12","slug":"tutorials","status":"publish","type":"page","link":"https:\/\/cuttingeeg2018.org\/?page_id=1157","title":{"rendered":"tutorials"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; background_color=&#8221;#00659d&#8221; _builder_version=&#8221;3.0.97&#8243; background_color_gradient_direction=&#8221;152deg&#8221; custom_css_after=&#8221;display: block;||position: absolute;||content: &#8221;;||width: 100px;||height: 100px;||bottom: -50px;||left: 50%;||margin-left: -50px;||background-color: #00659d; \/** Change This Value ***\/||-ms-transform: rotate(45deg);||-webkit-transform: rotate(45deg);||transform: rotate(45deg);||z-index: 1;||&#8221;][et_pb_row custom_padding=&#8221;0px||0px|&#8221; custom_margin=&#8221;||-50px|&#8221; _builder_version=&#8221;3.0.92&#8243;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.0.47&#8243; parallax=&#8221;off&#8221; parallax_method=&#8221;on&#8221;][et_pb_text background_layout=&#8221;dark&#8221; _builder_version=&#8221;3.0.97&#8243; header_font=&#8221;Comfortaa|700||on|||||&#8221; custom_padding=&#8221;20px|||&#8221;]<\/p>\n<h1 style=\"text-align: center;\">Tutorials instructions<span style=\"color: #ffffff;\"><br \/>\n<\/span><\/h1>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; _builder_version=&#8221;3.0.92&#8243;][et_pb_fullwidth_code _builder_version=&#8221;3.0.92&#8243;]&lt;svg id=&quot;curveDownColor&quot; xmlns=&quot;http:\/\/www.w3.org\/2000\/svg&quot; version=&quot;1.1&quot; width=&quot;100%&quot; height=&quot;100&quot; style=&quot;position:absolute; padding-top:0; margin-top:0;fill: #00659d; stroke: #00659d; top:0px;&quot; viewBox=&quot;0 0 100 100&quot; preserveAspectRatio=&quot;none&quot;&gt;&lt;path d=&quot;M0 0 C 50 100 80 100 100 0 Z&quot;&gt;&lt;\/path&gt;&lt;\/svg&gt;[\/et_pb_fullwidth_code][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;3.0.47&#8243;][et_pb_row custom_margin=&#8221;5%|||&#8221; _builder_version=&#8221;3.0.97&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;3.0.47&#8243; parallax=&#8221;off&#8221; parallax_method=&#8221;on&#8221;][et_pb_text _builder_version=&#8221;3.0.97&#8243;]<\/p>\n<p style=\"text-align: justify;\">All documents we have received in advance are in the following online folder. You can download all the ones you want, install them beforehand and come over with your laptop or you can just discover everything at T\u00e9l\u00e9com ParisTech using the PC on site. The password will be provided by email. 
<p><a href="https://owncloud.icm-institute.org/index.php/s/5zD4t4hr9uHAKBA">Tutorial software, data &amp; instructions</a></p>

<h2>Session 1: Monday morning, 9:00 AM to 12:00 PM</h2>

<h3>Hierarchical General Linear Modelling and Robust Statistics for EEG</h3>
<h4>Cyril Pernet, Arnaud Delorme</h4>
<p>During the workshop, we will analyze the full data space of a publicly available dataset using the open-source LIMO EEG toolbox (in the time domain, although the approach works the same way in the frequency domain). The LInear MOdeling of EEG (LIMO) toolbox is an EEGLAB plug-in that integrates seamlessly with Studies and provides all the tools needed to analyze any experimental design, including all sorts of covariates at the subject or group level. It analyzes all electrodes and all time and/or frequency frames, and implements robust statistical methods together with several multiple-comparisons procedures. Depending on the time available (i.e., the pace of the group), the various options of the toolbox will be explored. By the end of the session, attendees should have learned enough to use the toolbox on their own data.</p>
<p>Teaching material available</p>
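<p>To make the mass-univariate idea concrete, here is a minimal sketch in Python (LIMO itself is a MATLAB/EEGLAB toolbox; the data shapes and regressors below are invented for illustration): the same general linear model is fit independently at every electrode and time frame, which reduces to a single least-squares call.</p>
<pre><code># Minimal mass-univariate GLM sketch (illustrative, not LIMO itself):
# fit y = X @ beta at every electrode and time frame independently.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_elec, n_times = 200, 64, 150
eeg = rng.standard_normal((n_trials, n_elec, n_times))

# Hypothetical design: intercept, condition code, continuous covariate.
X = np.column_stack([np.ones(n_trials),
                     rng.integers(0, 2, n_trials),
                     rng.standard_normal(n_trials)])

# Solve all electrode/time GLMs at once with one least-squares call.
Y = eeg.reshape(n_trials, -1)                      # trials x (elec*time)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
beta = beta.reshape(X.shape[1], n_elec, n_times)   # regressors x elec x time
print(beta.shape)
</code></pre>
<p>LIMO builds on this skeleton with robust estimators and with multiple-comparisons procedures that operate over the whole electrode-time space.</p>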
<h3>Decoding EEG signals</h3>
<h4>Alexandre Gramfort</h4>
<p>Over the last decade, multivariate analyses have played a major role in interpreting complex neural time series such as EEG and MEG recordings. Here, we will combine introductory lectures and hands-on exercises to introduce the audience to multivariate decoding. Using MNE-Python and scikit-learn, we will first show how users can decode EEG and MEG signals in fewer than 10 lines of code. We will then cover the motivation for, and interpretability of, linear decoders in temporally resolved neuroimaging. Finally, we will review a series of common analytical methods implemented in MNE, ranging from temporal generalization to common spatial patterns and receptive fields. The tutorial requires basic knowledge of Python and will be taught through online exercises.</p>
<p>Teaching material available</p>
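<p>As a flavor of what decoding "in fewer than 10 lines" can look like, here is a minimal sliding-window decoding sketch using MNE-Python and scikit-learn; the arrays are simulated stand-ins for real epoched data.</p>
<pre><code># Minimal time-resolved decoding sketch with MNE-Python + scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32, 50))   # trials x channels x time points
y = rng.integers(0, 2, 100)              # binary condition labels
X[y == 1, :, 25:] += 0.5                 # inject a late class difference

clf = make_pipeline(StandardScaler(), LogisticRegression())
time_decod = SlidingEstimator(clf, scoring="roc_auc")
scores = cross_val_multiscore(time_decod, X, y, cv=5).mean(axis=0)
print(scores.shape)  # one cross-validated AUC per time point
</code></pre>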
<h3>Simulating EEG data</h3>
<h4>Mike X Cohen</h4>
<p>This is the "data era" of neuroscience. But there are countless ways to analyze data, and not all of them are appropriate for your data. The purpose of this tutorial is to teach you how to simulate EEG data in order to evaluate analysis methods. This will allow you to (1) test the accuracy of analysis/reconstruction methods, (2) understand how analysis parameters affect results, and (3) understand the assumptions that your analysis methods make. The outcome of this workshop will be MATLAB code that simulates single- and multichannel EEG data, including 1/f noise, phase-locked and non-phase-locked activity, and sinusoidal and non-stationary features.</p>
<p>Teaching material available</p>
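<p>The workshop code is in MATLAB; purely as an illustration of the ingredients listed above, here is a rough NumPy translation that mixes 1/f noise with phase-locked and non-phase-locked 10 Hz activity (all parameters invented):</p>
<pre><code># NumPy sketch of the simulation ingredients described above
# (the workshop itself uses MATLAB; this is an illustrative translation).
import numpy as np

rng = np.random.default_rng(2)
srate, n_times, n_trials = 250, 1000, 100
t = np.arange(n_times) / srate

def pink_noise(n):
    """1/f noise: shape a flat amplitude spectrum by 1/f, randomize phases."""
    freqs = np.fft.rfftfreq(n, d=1 / srate)
    amp = np.zeros_like(freqs)
    amp[1:] = 1 / freqs[1:]
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    return np.fft.irfft(amp * np.exp(1j * phases), n)

trials = np.empty((n_trials, n_times))
for k in range(n_trials):
    phase_locked = np.sin(2 * np.pi * 10 * t)   # same phase on every trial
    non_locked = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
    trials[k] = phase_locked + non_locked + pink_noise(n_times)

erp = trials.mean(axis=0)  # averaging keeps only the phase-locked part
</code></pre>
<p>Averaging across trials preserves the phase-locked component while the non-phase-locked one cancels out, which is exactly the kind of property such simulations let you verify before trusting an analysis.</p>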
<h3>SEEG/ECoG analysis with Brainstorm</h3>
<h4>François Tadel, Anne-Sophie Dubarry</h4>
<p>Participants will learn how to import and process SEEG epilepsy recordings with the Brainstorm interface: co-registration of pre- and post-implantation anatomical images, manual placement of the SEEG electrodes with the MRI viewer, display and pre-processing of the signals, and computation of epileptogenicity maps. The example dataset will be the same as in the online Brainstorm SEEG tutorial: <a href="http://neuroimage.usc.edu/brainstorm/Tutorials/Epileptogenicity">http://neuroimage.usc.edu/brainstorm/Tutorials/Epileptogenicity</a>.</p>
<p>Teaching material available</p>

<h2>Session 2: Monday afternoon, 2:00 to 5:00 PM</h2>

<h3>Frequency Tagging (steady-state analysis) in EEG</h3>
<h4>Molly Henry</h4>
<p>"Frequency tagging", or steady-state analysis, traditionally refers to the practice of presenting a repetitive sensory stimulus (a flashing light or a tone sequence, for example) and then analyzing the brain's representation of that stimulus in the frequency domain. The technique has been fruitful for determining hearing thresholds, especially in individuals who cannot provide a behavioral response (e.g., infants), and for measuring a person's ability to selectively attend to one stimulus at the expense of another. More recently, it has been applied in the domain of neural entrainment, whereby neural oscillations become synchronized with rhythmic environmental stimuli. I will give a theoretical background comparing these approaches. Then I will demonstrate how frequency-tagging analysis can be taken further, relating high-dimensional neural dynamics to moment-to-moment variations in perception and providing a powerful tool for inferring the neural "states" that underlie human perception.</p>
<p>Teaching material available</p>
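<p>A common way to quantify a tagged response is the signal-to-noise ratio at the stimulation frequency relative to neighboring frequency bins. A minimal sketch, with an invented 12 Hz tag:</p>
<pre><code># Hedged sketch: quantify a frequency-tagged response as signal-to-noise
# at the stimulation frequency, relative to neighboring FFT bins.
import numpy as np

rng = np.random.default_rng(3)
srate, dur, f_tag = 500, 20.0, 12.0          # Hz, seconds, tag frequency
t = np.arange(0, dur, 1 / srate)
eeg = 0.3 * np.sin(2 * np.pi * f_tag * t) + rng.standard_normal(t.size)

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / srate)

tag_bin = np.argmin(np.abs(freqs - f_tag))
# Noise estimate: surrounding bins, skipping the tag's immediate neighbors.
neighbors = np.r_[tag_bin - 10:tag_bin - 1, tag_bin + 2:tag_bin + 11]
snr = amp[tag_bin] / amp[neighbors].mean()
print(f"SNR at {f_tag} Hz: {snr:.1f}")
</code></pre>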
<h3>Combining eye-tracking &amp; EEG</h3>
<h4>Olaf Dimigen</h4>
<p>During every waking hour, we move our eyes about 10,000 times. Combining EEG recordings with eye-tracking is a promising approach to study visual cognition in such natural situations. This workshop will introduce students and researchers to this relatively new technique and its advantages, with a focus on data analysis. It will cover the following topics: properties of saccade- and fixation-related brain potentials, building a suitable laboratory setup, data synchronization and integration, optimal strategies for removing eye-movement artifacts from the data, and the use of advanced linear deconvolution models to control for overlapping potentials and other confounds during natural vision. In hands-on exercises, we will analyze a combined dataset using the EYE-EEG toolbox and the brand-new unfold toolbox.</p>
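<p>To illustrate the deconvolution idea, here is a minimal sketch, assuming a single event type and a fixed FIR window (the unfold toolbox does this in MATLAB with far richer model specifications): overlapping responses are disentangled by regressing the continuous EEG onto a design matrix with one column per post-event time lag.</p>
<pre><code># Hedged sketch of linear deconvolution for overlapping potentials.
import numpy as np

rng = np.random.default_rng(4)
srate, n_samples, n_lags = 100, 6000, 60      # 100 Hz; FIR window = 600 ms
onsets = np.sort(rng.choice(n_samples - n_lags, size=150, replace=False))

true_kernel = np.hanning(n_lags)              # the "ERP" to recover
eeg = np.zeros(n_samples)
for o in onsets:                              # overlapping responses add up
    eeg[o:o + n_lags] += true_kernel
eeg += rng.standard_normal(n_samples)

# FIR design matrix: one column per time lag after each event onset.
X = np.zeros((n_samples, n_lags))
for lag in range(n_lags):
    X[onsets + lag, lag] = 1.0

kernel_hat, *_ = np.linalg.lstsq(X, eeg, rcond=None)
# kernel_hat approximates true_kernel despite the temporal overlap,
# which plain event-locked averaging would smear together.
</code></pre>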
<h3>Human Neocortical Neurosolver: A New Tool for Cellular and Circuit Level Interpretation of EEG</h3>
<h4>Stephanie Jones, Sam Neymotin, Dylan Daniels</h4>
<p>We developed the Human Neocortical Neurosolver (HNN), an open-source modeling tool designed to help researchers interpret the cellular and circuit origins of EEG/MEG. HNN presents a user-friendly GUI to a biophysically principled model of a neocortical circuit, under thalamic and cortical drive, that simulates the primary electrical currents underlying EEG/MEG recordings. We will describe the model and teach participants how to study the origins of commonly measured signals, including event-related potentials and low-frequency rhythms (alpha/beta/gamma). Participants will learn how to compare model results to recorded data and how to adjust parameters to develop and test hypotheses about circuit-level mechanisms.</p>
<p>Teaching material available</p>

<h3>Inverted encoding models of EEG signals</h3>
<h4>Thomas C Sprague</h4>
<p>In cognitive neuroscience, we are often interested in understanding how cognitive operations impact mental representations. For example, how are neural representations transformed by visual attention, or how are they updated when the contents of working memory are manipulated? I will walk through a recently developed analysis procedure I call an "inverted encoding model", which enables us to reconstruct representations of feature values (such as spatial position or visual orientation) from single neural activity patterns, including fMRI activation from individual regions of interest and evoked and induced scalp potentials measured with EEG.</p>
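<p>A minimal sketch of the two-step logic, with invented shapes and an invented tuning basis: first fit the weights that map modeled feature-selective "channels" to sensors on training data, then invert that mapping on held-out data to reconstruct channel responses.</p>
<pre><code># Hedged sketch of the inverted-encoding-model logic (names/shapes made up):
# (1) fit weights mapping modeled "channel" responses to sensors,
# (2) invert on held-out trials to reconstruct feature representations.
import numpy as np

rng = np.random.default_rng(5)
n_chan_model, n_sensors, n_train, n_test = 8, 32, 400, 50
centers = np.linspace(0, 180, n_chan_model, endpoint=False)

def channel_responses(ori):
    """Even-power cosine tuning (0 and 180 deg are the same orientation)."""
    d = np.deg2rad(ori[:, None] - centers[None, :])
    return np.cos(d) ** 6

ori_train = rng.uniform(0, 180, n_train)
C_train = channel_responses(ori_train)            # trials x channels
W = rng.standard_normal((n_sensors, n_chan_model))
B_train = C_train @ W.T + 0.5 * rng.standard_normal((n_train, n_sensors))

# Step 1: estimate the weights by least squares (B = C W^T).
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert on test data to reconstruct channel responses.
B_test = channel_responses(rng.uniform(0, 180, n_test)) @ W.T
C_hat = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)[0].T
print(C_hat.shape)  # trials x channels: reconstructed tuning profiles
</code></pre>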
<h2>Session 3: Wednesday morning, 9:00 AM to 12:00 PM</h2>

<h3>Modeling EEG-behavior relationships</h3>
<h4>Valentin Wyart, Aurélien Weiss</h4>
<p>This hands-on tutorial will present methods and code for relating EEG activity to behavior, a key analysis for characterizing the role of a neural signal of interest in cognition. We will first present the parametric modeling framework on which these brain-behavior analyses rest, and then apply it to EEG datasets collected in different perceptual decision-making contexts. The goal of the tutorial is to outline the explanatory power of these methods and to show how they generalize to various fields of research in cognition (perception, decision-making, learning, memory). Participants are expected to have at least minimal experience with programming and a basic knowledge of statistics.</p>

<h3>Separating different alpha sources</h3>
<h4>Rasa Gulbinaite</h4>
<p>Although the M/EEG community tends to agree that the alpha-band (7-13 Hz) rhythm has multiple generators, there is little agreement on how to separate them. In this tutorial, you will learn analytical and experimental approaches for isolating different alpha sources using (1) independent component analysis (ICA), a multivariate source-separation technique that combines information from all electrodes and identifies independent sources of activity based on the statistical structure of the data, and (2) resonance responses to rhythmic visual stimulation. You will also learn how to characterize the spectral properties of multiple alpha generators (peak frequency, width of the alpha band, etc.).</p>

<h3>Temporal response functions – extraction of the neural response to continuous stimuli</h3>
<h4>Lorenz Fiedler</h4>
<p>Going beyond conventional ERP designs (i.e., multi-trial averaging), encoding models allow the extraction of the neural response to continuously varying stimulus features, such as luminance or the speech envelope. In this tutorial, we will implement and discuss the extraction of the neural response to continuous stimulus features. First, we will obtain the relevant stimulus features. Second, we will discuss how to preprocess the EEG data. Third, we will extract the neural response using several methods. Finally, we will test how well the extracted response predicts unseen data. I will prepare data and code, but participants are welcome to bring their own datasets.</p>
<p>Teaching material available</p>
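<p>One standard way to estimate such a temporal response function is ridge regression of the EEG onto time-lagged copies of the stimulus feature. A minimal sketch with an invented "envelope" and a fixed regularization parameter (real analyses cross-validate it and test on unseen data):</p>
<pre><code># Hedged sketch: estimate a temporal response function (TRF) by ridge
# regression of EEG on time-lagged copies of a continuous stimulus feature.
import numpy as np

rng = np.random.default_rng(6)
srate, n_samples, n_lags = 100, 8000, 40      # lags span 0-400 ms
envelope = np.convolve(rng.standard_normal(n_samples),
                       np.ones(10) / 10, mode="same")  # smooth "envelope"

true_trf = np.sin(np.linspace(0, np.pi, n_lags))       # response to recover
X = np.zeros((n_samples, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = envelope[:n_samples - lag]          # lagged design matrix
eeg = X @ true_trf + rng.standard_normal(n_samples)

lam = 10.0                                             # ridge parameter
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# Evaluate by correlating predicted and recorded EEG.
r = np.corrcoef(X @ trf_hat, eeg)[0, 1]
print(f"in-sample prediction r = {r:.2f}")  # proper testing uses unseen data
</code></pre>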
<h3>Making sense of (large amounts of) human intracranial EEG data</h3>
<h4>Jean-Philippe Lachaux</h4>
<p>In this workshop, we will learn how to apprehend the large-scale cortical dynamics supporting cognitive functions using intracranial EEG data from epilepsy patients. I will rely on a novel iEEG visualization software (HiBoP) and a large iEEG dataset that will be freely distributed within the Human Brain Project (including visual and auditory perception, attention, language and memory tasks). The demonstration will mostly focus on comparative timing (relative latencies of activation across cortical sites) and on functional connectivity, as revealed by amplitude-amplitude co-fluctuations.</p>
<p>Teaching material available soon (we are debugging)</p>
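<p>A minimal sketch of the amplitude-amplitude co-fluctuation measure, assuming a band-pass plus Hilbert-envelope approach (the high-gamma band chosen here is an illustrative assumption, not taken from the abstract):</p>
<pre><code># Hedged sketch of amplitude-amplitude co-fluctuation between two sites:
# band-pass both signals, take Hilbert envelopes, and correlate them.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(7)
srate, n_samples = 512, 10240
common = rng.standard_normal(n_samples)       # shared broadband drive
site_a = common + 0.5 * rng.standard_normal(n_samples)
site_b = common + 0.5 * rng.standard_normal(n_samples)

b, a = butter(4, [70, 150], btype="bandpass", fs=srate)  # assumed band
env_a = np.abs(hilbert(filtfilt(b, a, site_a)))
env_b = np.abs(hilbert(filtfilt(b, a, site_b)))

r = np.corrcoef(env_a, env_b)[0, 1]
print(f"amplitude-envelope correlation: r = {r:.2f}")
</code></pre>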