DECODING THE VISUAL ATTENTION OF AN INDIVIDUAL FROM ELECTROENCEPHALOGRAPHIC SIGNALS
Patent abstract:
A method of determining the focus of an individual's visual attention from electroencephalographic signals. At least one visual stimulus to be displayed is generated (411) from at least one graphic object, a visual stimulus being an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal. A modulation signal is reconstructed (414) from a plurality of electroencephalographic signals produced by the individual focusing his visual attention on one of the visual stimuli. A visual stimulus is identified (415) as the one whose modulation signal has a degree of statistical dependence with the reconstructed modulation signal greater than a first threshold.

Publication number: FR3070852A1
Application number: FR1758305
Filing date: 2017-09-08
Publication date: 2019-03-15
Inventors: Sid Kouider; Jean-Maurice Leonetti; Nicolas Barascud; Robin Zerafa
Applicants: Ecole Des Hautes Etudes En Sciences Sociales; Centre National de la Recherche Scientifique CNRS; Ecole Normale Superieure de Paris
IPC main class:
Patent description:
DECODING THE VISUAL ATTENTION OF AN INDIVIDUAL FROM ELECTROENCEPHALOGRAPHIC SIGNALS

TECHNICAL FIELD

[001] The present description relates to a method and system for determining the focus of the visual attention of an individual from electroencephalographic signals.

STATE OF THE ART

[002] Various portable systems dedicated to the recording and exploitation of electroencephalographic signals (EEG signals) are emerging in multiple applications. In particular, the miniaturization of EEG signal recording systems and significant advances in the real-time decoding of EEG signals now make it possible to envisage new applications that are both fast and reliable in use.

[003] Certain decoding techniques are based on extracting from the EEG signals electro-physiological descriptors that make it possible to predict in real time the relationships between brain activity and visual stimuli in the environment. The difficulty here consists in identifying, in the EEG signal and in real time, the descriptors specific to the particular visual stimulus to which an individual pays attention among a multitude of other visual contents. Such decoding must be robust, that is to say it must allow the specific content on which the visual attention of the individual is focused to be determined quickly and precisely, in order to be able to trigger a command corresponding to the visual stimulus focused on by the individual. The patent document US8391966B2 describes a technique for analyzing EEG signals produced by an individual observing stimuli, each stimulus being constituted by a light source flashing at a given frequency. Different kinds of descriptors are generated to allow classification of the EEG signals into classes corresponding to the different stimuli, for the purpose of identifying the visual stimulus observed at a given time.
EEG signals are for example cut into successive segments, and correlation coefficients between pairs of segments of the same signal are calculated to produce a first type of descriptor. An average correlation coefficient is calculated and then compared to a threshold to determine whether or not the user is observing the stimulus. Furthermore, the correlation between an EEG signal and the stimulus can be analyzed to generate a second type of descriptor: the degree of correlation with a stimulus is higher if the individual actually observes this stimulus. The coefficients of an autoregressive model can be calculated from an average EEG signal, the coefficients of the model constituting a third type of descriptor. This technique presupposes a prior classification of the EEG signals by means of several types of descriptors, in association with a discrimination method based on thresholding, nearest-neighbor search, neural networks, etc. The technique is therefore dependent on the relevance of the descriptors used and on the discrimination method chosen. In addition, the technique is limited to stimuli in the form of flashing lights, which greatly limits its scope.

[008] There therefore appears a need for a technique which allows reliable decoding of EEG signals in real time and which is applicable to human-machine interfaces of software applications, comprising for example text, images, logos and/or menus.

SUMMARY

The present description relates, according to a first aspect, to a method of determining the focus of the visual attention of an individual from electroencephalographic signals.
The method comprises: generating a set of at least one visual stimulus to be displayed from at least one graphic object, at least one elementary transformation and at least one modulation signal, a visual stimulus being an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal; reconstructing a modulation signal from a plurality of electroencephalographic signals produced by the individual, to obtain a reconstructed modulation signal; calculating a degree of statistical dependence between the reconstructed modulation signal and each modulation signal of said set; and identifying at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than a first threshold. The method according to the first aspect is a hybrid decoding method which combines an approach by reconstruction of stimuli from electroencephalographic (EEG) signals with an excitation method using one or more stimuli having the temporal characteristics mentioned above, capable of being reproduced in the brain of an individual observing these stimuli and therefore identifiable in the EEG signals. Such a combination makes it possible both to increase the sensitivity (that is to say the signal-to-noise ratio) of the EEG signal to the generated stimuli and to have a robust analysis method applicable in real time. The method is applicable to one or more visual stimuli. In addition, the use of elementary transformations (for example, a variation in light intensity, a variation in contrast, a colorimetric transformation, a geometric deformation, a rotation, an oscillation, a displacement along a trajectory, etc.
) provides a wide range of visual stimuli in the form of graphic objects, which opens the door to many applications requiring a distinction to be made between numerous graphic objects presented simultaneously to a user. The technique is thus adaptable, for example, to a displayed keyboard of alphanumeric characters for the identification of the alphanumeric character on which the attention of a user is focused, or more generally to a display of a plurality of logos, menus, graphic elements, etc. The modulation provided by the modulation signal does not affect viewing comfort as long as the frequency components of this modulation signal are below about 25 Hz. A modulation signal has for example a periodic pattern, repeating with a frequency between 2 and 20 Hz, this modulation signal being sampled at a sampling frequency corresponding to the refresh frequency (in general, greater than 60 Hz) of the screen on which the stimuli are displayed.

[0013] The reconstruction and the search for statistical dependence can be carried out in real time, which makes it possible to identify in real time the graphic object on which the individual focuses. In particular, no prior classification of the EEG signals into classes corresponding respectively to the different stimuli is required for the reconstruction of the modulation signal.
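Such a periodic modulation signal can be sketched as follows (a minimal illustration with numpy, not part of the patent; the function name and the 0-100 amplitude scale are assumptions, the latter matching the degree-of-transformation scale used later in the description):

```python
import numpy as np

def modulation_signal(pattern_hz, refresh_hz, duration_s, phase=0.0):
    """Sample a sinusoidal periodic pattern at the screen refresh rate.

    The returned amplitude lies in [0, 100] (per cent), matching the
    degree-of-transformation scale used later in the description.
    """
    n = int(round(duration_s * refresh_hz))
    t = np.arange(n) / refresh_hz
    return t, 50.0 * (1.0 + np.sin(2.0 * np.pi * pattern_hz * t + phase))

# Example: a 12 Hz pattern (inside the 2-20 Hz comfort range) sampled
# at a 60 Hz screen refresh rate over 4 seconds.
t, sm = modulation_signal(12.0, 60.0, 4.0)
```

Each sample of `sm` then drives the degree of transformation of one display frame.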
According to one or more embodiments of the method according to the first aspect, the set of at least one modulation signal comprises a plurality of modulation signals, and the method further comprises a search, among the plurality of modulation signals, for the modulation signal for which the degree of statistical dependence on the reconstructed modulation signal is maximum, and an identification of the visual stimulus corresponding to that modulation signal; the modulation signals being composed so that a global degree of statistical dependence, determined in the time and/or frequency domain, over all the pairs of modulation signals corresponding to two distinct visual stimuli, is less than a second threshold. According to one or more embodiments of the method according to the first aspect, the reconstruction is carried out by applying a reconstruction model to the plurality of electroencephalographic signals. The reconstruction model establishes the mathematical relationship between a modulation signal used for the generation of a visual stimulus and the electroencephalographic response of the individual focusing his attention on this visual stimulus. The reconstruction model is thus used to extract from the EEG signals the information relevant to a type of stimulus. This relationship depends mainly on the position, on the skull of an individual (also referred to as a user), of the electrodes of the EEG signal acquisition equipment. The visual characteristics of the stimuli (type, size, position, color, etc.) have a reduced impact on this relationship. The reconstruction model can rely on various mathematical models, linear or not, making it possible to combine the EEG signals to reconstruct a modulation signal.
Because a modulation signal is reconstructed, and not the visual stimulus as such (i.e., the animated graphic object that is displayed on a screen), reconstruction is possible simply and with precision, for example by a simple linear combination of the EEG signals, and this without restriction as to the nature or the semantic content of the animated graphic object usable as a visual stimulus. The search for statistical dependence between the reconstructed modulation signal and the different modulation signal(s) is also facilitated. In addition, the visual stimulus or stimuli can be displayed on all commonly available screens, such as a computer screen, tablet screen, telephone terminal screen, etc. It is therefore not necessary to have a dedicated stimulus production system. Thus the use of modulation signals for the generation of visual stimuli not only allows an easy reconstruction and search for statistical dependence, but also makes the method flexible and adaptable to any type of visual stimulus in the form of an animated graphic object. According to one or more embodiments of the method according to the first aspect, the reconstruction model comprises a plurality of electroencephalographic signal combination parameters, and the method comprises determining values of these combination parameters during an initial learning phase.
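A linear reconstruction model of the kind described can be sketched as follows (a hedged illustration, not the patent's implementation; the weight vector `weights` stands for the combination parameters P1, ..., PK, assumed already learned, and the toy data are synthetic):

```python
import numpy as np

def reconstruct(eeg, weights):
    """Linear reconstruction model MR: the reconstructed modulation
    signal SMR is a weighted sum of the X EEG channels.

    eeg     : array of shape (X, T) -- one row per EEG signal E1..EX
    weights : array of shape (X,)   -- combination parameters P1..PK
    """
    return weights @ eeg  # shape (T,)

# Toy usage: 4 channels, 100 samples of synthetic EEG.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(4, 100))
weights = np.array([0.5, -0.2, 0.1, 0.3])
smr = reconstruct(eeg, weights)
```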
According to one or more embodiments of the method according to the first aspect, the method further comprises, during an initial learning phase applied to a set of at least one visual stimulus among the plurality of visual stimuli: obtaining, for each visual stimulus of said set, test electroencephalographic signals produced by the individual focusing his attention on the visual stimulus considered; and determining optimal values of the plurality of electroencephalographic signal combination parameters for which application of the reconstruction model to the test electroencephalographic signals recorded for a visual stimulus generates a reconstructed modulation signal that best approximates the modulation signal corresponding to the visual stimulus considered. Thus, by determining the parameters for combining the EEG signals for a given acquisition device, it is possible to reliably generate a reconstructed modulation signal and compare it with those used for the generation of the visual stimuli. This relationship being stable over time, and not very dependent on the visual stimuli and graphic objects, these EEG signal combination parameters are reusable for all subsequent visual stimuli likely to be presented to an individual, even if these stimuli are different from those used in the learning phase for determining the reconstruction model. The reconstruction model finally makes it possible to manage a possible variability from one individual to another, in that the parameters of the reconstruction model can be adjusted for each individual. The present description relates, according to a second aspect, to a computer program comprising code instructions for the execution of the steps of a method according to the first aspect, when this computer program is executed by a data processor.
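The learning phase can be sketched as an ordinary least-squares fit (an assumption: the description does not prescribe a specific estimator; synthetic data stand in for the recorded test EEG signals):

```python
import numpy as np

rng = np.random.default_rng(0)
T, X = 600, 8                                           # samples, EEG channels
sm_true = np.sin(2 * np.pi * 5 * np.arange(T) / 60.0)   # training modulation signal
w_true = rng.normal(size=X)                             # unknown mixing of the stimulus
eeg_test = np.outer(w_true, sm_true) + 0.1 * rng.normal(size=(X, T))  # synthetic test EEG

# Optimal combination parameters: minimize ||eeg_test.T @ w - sm_true||^2
w_hat, *_ = np.linalg.lstsq(eeg_test.T, sm_true, rcond=None)
sm_rec = w_hat @ eeg_test                               # reconstructed modulation signal
```

On this synthetic data the reconstructed signal closely follows the training modulation signal, which is the criterion ("best approximates") stated above.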
The present description relates, according to a third aspect, to a computer system comprising at least one memory for storing code instructions of a computer program configured for the execution of a method according to the first aspect, and at least one data processor configured to execute such a computer program. The present description relates, according to a fourth aspect, to a system for determining the focus of the visual attention of an individual from electroencephalographic signals, the system comprising: a display signal generation device configured to generate a set of at least one visual stimulus to be displayed from at least one graphic object, at least one elementary transformation and at least one modulation signal, a visual stimulus being an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal; and a signal processing device configured to obtain a plurality of electroencephalographic signals produced by the individual, obtain a reconstructed modulation signal by reconstructing a modulation signal from the plurality of electroencephalographic signals, calculate a degree of statistical dependence between the reconstructed modulation signal and each modulation signal of said set, and identify at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than a first threshold.
According to one or more embodiments of the system according to the fourth aspect, the set of at least one modulation signal comprises a plurality of modulation signals, and the signal processing device is further configured to search, among the plurality of modulation signals, for the modulation signal for which the degree of statistical dependence on the reconstructed modulation signal is maximum, and to identify the visual stimulus corresponding to that modulation signal; the modulation signals being composed so that a global degree of statistical dependence, determined in the time and/or frequency domain, over all the pairs of modulation signals corresponding to two distinct visual stimuli, is less than a second threshold.

BRIEF DESCRIPTION OF THE FIGURES

Other advantages and characteristics of the technique presented above will appear on reading the detailed description below, made with reference to the figures, in which:

FIG. 1A schematically represents a system for determining the focus of the visual attention of an individual from EEG signals according to an exemplary embodiment;
FIG. 1B schematically represents a computer device according to an exemplary embodiment;
FIG. 2A schematically represents the data and signals used in a system and method for determining the focus of visual attention according to an exemplary embodiment;
FIGS. 2B-2E each schematically represent examples of modulation signals usable in a method or system for determining the focus of visual attention;
FIG. 3 schematically represents aspects of a method and system for determining the focus of visual attention;
FIG. 4A is a flow diagram of a method for generating an EEG signal reconstruction model according to an exemplary embodiment;
FIG. 4B is a flow diagram of a method for determining the focus of the visual attention of an individual according to an exemplary embodiment;
FIG. 5 illustrates an example of an animated graphic object;
FIG. 6 shows examples of visual stimuli;
FIG. 7 illustrates an example of the application of a system and method for determining the focus of visual attention.

In the various embodiments which will be described with reference to the figures, similar or identical elements bear the same references.

DETAILED DESCRIPTION

The present description is made with reference to functions, functional units, entities, block diagrams and flowcharts which describe different embodiments of methods, systems and programs. Each function, functional unit, entity or flowchart step can be implemented by software, hardware, firmware, microcode or any suitable combination of these technologies. When software is used, functions, functional units, entities or steps can be implemented by computer program instructions or software code. These instructions can be stored on or transmitted to a storage medium readable by a computer and/or be executed by a computer in order to implement these functions, functional units, entities or steps.

The various embodiments and aspects described below can be combined or simplified in many ways. In particular, the steps of the different methods can be repeated for each set of graphic objects concerned and/or each user concerned; the steps can be reversed, executed in parallel, or executed by different computing entities. Only certain example embodiments are described in detail to ensure the clarity of the description, but these examples are not intended to limit the general scope of the principles emerging from this description considered as a whole.

FIG. 1A schematically represents an exemplary embodiment of a system 100 for determining the focus of the visual attention of an individual (also referred to below as the user) 101 from electroencephalographic signals.
In one or more embodiments, the system 100 includes a display screen 105 configured to display animated graphic objects, a device 110 for generating display signals, a device 120 for processing signals, equipment 130 for acquiring EEG signals and a device 140 for controlling the equipment 130. In one or more embodiments, the device 110 for generating display signals is configured to generate display signals to be displayed by the display screen 105. These display signals encode a plurality of visual stimuli intended to be presented to the user 101 by means of the display screen 105. In one or more embodiments, the equipment 130 is configured for the acquisition of EEG signals. This equipment is for example produced in the form of a headset, provided with electrodes intended to come into contact with the skull of the user 101. Such a headset is, for example, a headset manufactured by the company Biosemi®, provided with 64 electrodes. Other types of equipment can be used: for example the EEG Geodesy™ devices from the company Electrical Geodesics Inc. (EGI)®, or those from the company Compumedics NeuroScan®, usually having between 16 and 256 electrodes. In the following description, it is assumed by way of example that the equipment 130 is produced in the form of a headset. In one or more embodiments, the signal processing device 120 is configured to process the EEG signals acquired by means of the headset 130. In one or more embodiments, the device 140 for controlling the equipment 130 serves as an interface between the headset 130 and the signal processing device 120. The control device 140 is configured to control the acquisition of EEG signals and to obtain the EEG signals acquired by the headset 130. In particular, the device 140 for controlling the headset 130 is configured to send a command to trigger the acquisition of EEG signals.
All or part of the functions described here for the device 110 for generating display signals, the signal processing device 120 and the control device 140 can be performed by software and/or hardware and implemented in at least one computer device comprising a data processor and at least one data storage memory. In one or more embodiments, each device 110, 120, 140 and each of the steps of the methods described are implemented by one or more physically separate computer devices. Conversely, the various devices 110, 120, 140 can be integrated into a single computer device. Similarly, the steps of the methods described here can be implemented by a single computer device. The equipment 130 for acquiring EEG signals may also include a computer device and be configured (data processor, memory, etc.) to implement all or part of the steps of the methods described in this document. Each computer device presents the overall architecture of a computer, including the components of such an architecture: data memory(ies), processor(s), communication bus, user interface(s), hardware interface(s) for connecting this computing device to a network or other equipment, etc. An exemplary embodiment of such an architecture is illustrated in FIG. 1B. This architecture comprises a processing unit 180 including at least one data processor, at least one memory 181, one or more data storage media 182, hardware interfaces 183 such as network interfaces and interfaces for the connection of peripherals, and at least one user interface 184 including one or more input/output devices such as mouse, keyboard, display, etc. The data storage medium 182 includes code instructions for a computer program 186.
Such a storage medium 182 can be an optical storage device such as a compact disc (CD, CD-R or CD-RW), a DVD (DVD-ROM or DVD-RW) or a Blu-ray disc, a magnetic medium such as a hard disk, flash memory disk, magnetic tape or floppy disk, or a removable storage medium such as a USB key or an SD or micro SD memory card, etc. The memory 181 can be a random access memory (RAM), a read-only memory (ROM), a cache memory, a non-volatile memory, a backup memory (for example programmable or flash memories), or any combination of these types of memory. The processing unit 180 can be any microprocessor, integrated circuit or central unit comprising at least one hardware-based processing processor. FIG. 2A schematically represents the data and signals used in a system and method for determining the focus of visual attention. The corresponding notations are introduced. In one or more embodiments, a plurality of graphic objects O1, O2, ..., ON intended to be presented to a user 101 is used. Each of these graphic objects can be an alphanumeric character (number, letter or other character), a logo, an image, a text, a user interface menu item, a user interface button, an avatar, a 3D object, etc. Each of these graphic objects can be coded by a bitmap or vector image. In one or more embodiments, one or more elementary transformations T1, T2, ..., TP are defined to be applied to the graphic objects O1, O2, ..., ON. An elementary transformation can be: a variation of light intensity, a variation of contrast, a colorimetric transformation, a geometric deformation, a rotation, an oscillation, a displacement along a plane or three-dimensional trajectory, a change of shape, or even a change of graphic object, etc. A change of graphic object can for example correspond to a transformation which replaces a graphic object by another graphic object of the same category, for example the replacement of a letter (for example A) by another letter (for example
B), of one number by another number, of one logo by another logo, etc. An elementary transformation can also be a combination of several of the aforementioned elementary transformations. Each of these elementary transformations is configurable by at least one application parameter. In one or more embodiments, an application parameter defines a degree of transformation of the elementary transformation on a predetermined scale. For example, a scale from 0 to 100 or from -100 to +100 can be used. For example, when the elementary transformation is a variation in light intensity, this variation in intensity can be applied with a degree of transformation varying between 0 and 100, a degree of transformation equal to 0 signifying that the image coding the graphic object is not modified, a degree of transformation equal to 100 indicating that the image becomes completely white or, on the contrary, black. By varying the degree of transformation between 0 and 100, an image flashing effect is obtained. In another example, when the elementary transformation is a contrast variation, this contrast variation can be applied with a degree of transformation varying between 0 and 100, a degree of transformation equal to 0 meaning that the image coding the graphic object is not modified, a degree of transformation equal to 100 indicating that the contrast of the image becomes maximum (the image becomes a black and white image, if it is coded in gray levels). Similarly, for a geometric deformation of the morphing type, the degree of transformation can correspond to the degree of morphing. For a rotation, the degree of transformation can correspond to an angle of rotation. For an oscillation, the degree of transformation can correspond to a speed and/or amplitude of oscillation. For a displacement on a trajectory, the degree of transformation can correspond to a distance traveled and/or a speed of displacement on the trajectory.
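The light-intensity example above can be sketched as a simple blend toward white (a hypothetical implementation, not prescribed by the description; images are assumed grayscale with values in [0, 1], and the function name is illustrative):

```python
import numpy as np

def apply_intensity(img, degree):
    """Elementary transformation: variation of light intensity.

    degree = 0   -> image unchanged
    degree = 100 -> image completely white
    """
    return img + (degree / 100.0) * (1.0 - img)

img = np.full((4, 4), 0.5)        # toy 4x4 gray image
half_white = apply_intensity(img, 50)
```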
For a change of shape (respectively of category of object), the degree of transformation can correspond to a speed and/or amplitude reflecting the passage from one shape to another (respectively from one category to another). For each of the graphic objects O1, O2, ..., ON, a corresponding modulation signal SM1, SM2, ..., SMN is generated. A modulation signal is used to define the variations, as a function of time, of one or more application parameters of the elementary transformation applied to the graphic object under consideration. For example, the degree of transformation di(t) at time t is defined by the amplitude SMi(t) of the modulation signal SMi at time t. In one or more embodiments, an animated graphic object OA1, OA2, ..., OAN is generated for each corresponding graphic object O1, O2, ..., ON from one or more corresponding elementary transformations and a corresponding modulation signal. The animated graphic object OA1, OA2, ..., OAN thus generated is presented on a display screen 105. In one or more embodiments, a visual stimulus is an animated graphic object OAi (i an integer between 1 and N) obtained by applying to a corresponding graphic object Oi a temporal sequence STi of elementary transformations parameterized in time by a corresponding modulation signal SMi. Thus, at each instant tz of a discrete sequence of instants t0, t1, ..., tz, ... in a time interval [tmin, tmax], a modified graphic object OAi(tz) is generated by applying the elementary transformation Ti corresponding to the graphic object Oi with a degree of transformation di(tz) corresponding to the amplitude SMi(tz), at the instant tz, of the modulation signal SMi corresponding to the graphic object Oi. The animated graphic object thus corresponds to the temporal succession of the modified graphic objects OAi(tz) when tz varies in the time interval [tmin, tmax].

[0052] In one or more embodiments, EEG signals, noted E1, E2, ..., EX, are acquired by means of equipment 130 for acquiring EEG signals.
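The frame generation just described, OAi(tz) = Ti(Oi, di(tz)) with di(tz) = SMi(tz), can be sketched as follows (a self-contained illustration; the intensity blend toward white stands in for one possible elementary transformation Ti, and all names are illustrative):

```python
import numpy as np

def animate(img, sm):
    """Return the sequence of modified objects OAi(tz): at each instant
    tz, the elementary transformation (here an intensity blend toward
    white) is applied with degree di(tz) = SMi(tz)."""
    return [img + (d / 100.0) * (1.0 - img) for d in sm]

img = np.full((2, 2), 0.2)                            # toy graphic object Oi
t = np.arange(60) / 60.0                              # 1 s at a 60 Hz refresh rate
sm = 50.0 * (1.0 + np.sin(2.0 * np.pi * 8.0 * t))     # 8 Hz modulation signal SMi
frames = animate(img, sm)                             # animated graphic object OAi
```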
From the EEG signals E1, E2, ..., EX, a reconstructed modulation signal SMR is generated. In one or more embodiments, the reconstructed modulation signal SMR is generated by applying a reconstruction model MR to the signals E1, E2, ..., EX. The parameters of the reconstruction model MR are noted P1, P2, ..., PK. The reconstruction model MR can be a linear model, the reconstructed modulation signal being a linear combination of the signals E1, E2, ..., EX. Other more elaborate models can be used, in particular neural network models. In one or more embodiments, a modulation signal is composed of elementary signals. These elementary signals can be rectangular, triangular, sinusoidal signals, etc. The modulation signals can have different durations. In one or more embodiments, the modulation signals are periodic, a time pattern being reproduced periodically by each modulation signal. A modulation signal has for example a periodic time pattern, repeating with a frequency between 2 and 20 Hz, this modulation signal being sampled at a sampling frequency corresponding to the refresh frequency (in general, greater than 60 Hz) of the screen on which the visual stimuli (i.e. animated graphic objects) generated from the modulation signals are displayed. The amplitude of the modulation signal SMi is used to define a degree of transformation. The relationship between the amplitude of the modulation signal SMi and the degree of transformation may or may not be linear. The amplitude of a modulation signal SMi can vary between a minimum value (corresponding to a first degree of transformation) and a maximum value (corresponding to a second degree of transformation). In one or more embodiments, the modulation signals are pairwise independent: the modulation signals are composed so that the dependence, measured in the time and/or frequency domain, between any two distinct modulation signals is either minimal (e.g. zero) or below a given threshold SC1.
The dependence between two signals can be quantified by a degree of statistical dependence. The degree of statistical dependence between two modulation signals can be calculated, in the time domain, for example by a time correlation coefficient and/or, in the frequency domain, for example by the spectral coherence rate. In one or more embodiments, for each pair of modulation signals corresponding to distinct visual stimuli, the modulation signals are temporally decorrelated two by two. In one or more embodiments, a degree of statistical dependence can be calculated for each pair of modulation signals corresponding to distinct visual stimuli, then a global degree of statistical dependence (for example, the average degree of dependence, the maximum degree of dependence or the cumulative degree of dependence) can be calculated over all the pairs of modulation signals. The modulation signals are determined by looking for modulation signals which minimize this global degree of statistical dependence or make it possible to obtain a global degree of statistical dependence below a predetermined threshold SC1. In one or more embodiments, the degree of statistical dependence calculated for each pair of modulation signals corresponding to distinct visual stimuli is zero or less than a threshold SC1. In one or more embodiments, the degree of statistical dependence between two modulation signals can be calculated as the time correlation coefficient between these signals, where the correlation coefficient p(X, Y) between two signals X and Y can be obtained by the Pearson formula:

p(X, Y) = E[(X - E(X))(Y - E(Y))] / (σX σY) = (E[XY] - E[X]E[Y]) / (σX σY)

where E denotes the mathematical expectation of a signal and σ its standard deviation. The time correlation coefficient is between 0 and 1, the value 0 corresponding to temporally decorrelated signals.
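The correlation-and-threshold decision of steps 414-415 can then be sketched as follows (hedged: the Pearson coefficient is only one of the dependence measures mentioned, and `identify` is a hypothetical helper name, not the patent's code):

```python
import numpy as np

def pearson(x, y):
    """Time correlation coefficient (the Pearson formula above)."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def identify(sm_rec, modulation_signals, threshold):
    """Return the index of the modulation signal whose degree of
    statistical dependence on the reconstructed signal SMR is maximal,
    or None if even that maximum stays below the first threshold."""
    scores = [abs(pearson(sm_rec, sm)) for sm in modulation_signals]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None

# Toy decoding over a 4 s window at 60 Hz: the reconstructed signal is
# a noisy copy of the second modulation signal.
t = np.arange(240) / 60.0
signals = [np.sin(2.0 * np.pi * f * t) for f in (1.0, 1.4, 1.8)]
rng = np.random.default_rng(2)
sm_rec = signals[1] + 0.5 * rng.normal(size=t.size)
```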
The degree of statistical dependence between the modulation signals can be determined by other mathematical criteria, such as the Spearman correlation or Kendall's tau, or replaced by statistical dependence measures such as mutual information. The spectral coherence rate between two signals x(t) and y(t) is a real-valued function which can be defined for example by the ratio |Gxy(f)|² / (Gxx(f) · Gyy(f)), where Gxy(f) is the cross-spectral density between x and y, and Gxx(f) and Gyy(f) are the autospectral densities of x and y respectively. In one or more embodiments, the degree of statistical dependence is calculated over a reference period, corresponding for example to the duration of the reconstruction window (see step 414) and/or of the decoding window (see step 415). Effective discrimination of the modulation signals is possible when the global degree of statistical dependence (calculated for example as the average degree of dependence, the maximum degree of dependence or a cumulative degree of dependence), computed over all the pairs of modulation signals corresponding to distinct visual stimuli, is zero (for example for temporally decorrelated signals) or less than a threshold SC1, chosen for example equal to 0.2 (or 20% if this degree is expressed as a percentage). The lower the global statistical dependence, the easier and more effective the identification of the visual stimulus observed by an individual attentive to this stimulus, and the lower the probability of discrimination error (corresponding to the percentage of cases in which the visual stimulus identified in step 415 is not the one on which the individual is effectively focusing his visual attention, i.e. in which the modulation signal selected among all the modulation signals is not the one which served to generate the visual stimulus observed by the subject). The threshold SC1 can depend on the choice of the type of modulation signals.
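The spectral coherence rate can be estimated as sketched below (a minimal Welch-style estimate written with numpy only, as an assumption about one reasonable implementation; scipy.signal.coherence provides an equivalent, more complete routine):

```python
import numpy as np

def coherence(x, y, fs, nperseg):
    """Magnitude-squared coherence |Gxy(f)|^2 / (Gxx(f) * Gyy(f)),
    estimated by averaging windowed periodograms over non-overlapping
    segments (averaging is required: on a single segment the ratio is
    identically 1)."""
    win = np.hanning(nperseg)
    Gxx = Gyy = 0.0
    Gxy = 0.0
    for k in range(len(x) // nperseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        X = np.fft.rfft(x[seg] * win)
        Y = np.fft.rfft(y[seg] * win)
        Gxx = Gxx + (X * np.conj(X)).real
        Gyy = Gyy + (Y * np.conj(Y)).real
        Gxy = Gxy + X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Gxy) ** 2 / (Gxx * Gyy)

# Toy check: two signals sharing a 5 Hz component, with independent noise.
fs, n = 64.0, 64
t = np.arange(16 * n) / fs
rng = np.random.default_rng(3)
x = np.sin(2.0 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)
y = np.sin(2.0 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)
f, coh = coherence(x, y, fs, nperseg=n)
```

The coherence is high at the shared 5 Hz component and low elsewhere, which is the behavior exploited for discriminating modulation signals in the frequency domain.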
In practice, one can set a maximum probability of discrimination error (for example, a probability acceptable for a given application) and adjust the modulation signals so as to remain below this maximum discrimination error rate. It should be understood here that, even in the case where the reconstruction (see step 414) is ideal (that is to say, the reconstructed signal is equal at each instant to one of the modulation signals SMi), the quality of the decoding (see step 415) depends on the degree of statistical dependence between the modulation signals: if the modulation signals SMi are entirely dependent on one another, it will be impossible to select one modulation signal SMi rather than another. Each of FIGS. 2B, 2C, 2D and 2E schematically represents a set of modulation signals usable for the generation of visual stimuli in a system for determining the focus of visual attention. FIG. 2B shows a first example set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are sinusoidal, periodic signals having distinct frequencies (and therefore periods) between 1 Hz and 2 Hz in 0.2 Hz steps, so that no two signals taken from this first set have the same frequency. The phases of these signals can be arbitrary. In the example shown in FIG. 2B, the amplitude of these signals varies between 0% and 100%, meaning that the corresponding degree of transformation varies (with an adapted proportionality coefficient) between a minimum value and a maximum value. Over all the pairs of modulation signals of this set of 10 signals, the spectral overlap rate is zero (i.e. there is no common frequency component) and the maximum time correlation coefficient is 0.2, this time correlation coefficient being computed on a correlation window of 4 seconds.
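A minimal sketch (illustrative, not from the patent) of building such a bank of sinusoidal modulation signals, rescaled to the 0–100% range used as the degree of transformation. The refresh rate is an assumption, and the frequency list uses the six values that a 1–2 Hz range in 0.2 Hz steps actually yields:

```python
import numpy as np

def sinusoidal_bank(freqs_hz, duration_s=4.0, fs=60.0):
    """One modulation signal per frequency, rescaled to 0..100 (%).

    fs is taken as the display refresh rate here, since the signals
    drive on-screen transformations (an assumption of this sketch)."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    bank = np.array([50.0 + 50.0 * np.sin(2 * np.pi * f * t)
                     for f in freqs_hz])
    return t, bank

t, bank = sinusoidal_bank([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
```

Each row of `bank` can then parameterize one elementary transformation (brightness, size, rotation, etc.) frame by frame.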
FIG. 2C shows a second example set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are rectangular, periodic signals having distinct frequencies (and therefore periods) between 1 Hz and 2 Hz in 0.2 Hz steps, so that no two signals taken from this second set have the same frequency. The phases of these signals can be arbitrary. As in FIG. 2B, the amplitude of these signals varies between 0% and 100%, meaning that the corresponding degree of transformation varies (with an adapted proportionality coefficient) between a minimum value and a maximum value. Over all the pairs of modulation signals of this set of 10 signals, the spectral overlap rate can be non-zero if certain harmonic components are shared, but the maximum time correlation coefficient is 0.17, this time correlation coefficient being computed on a correlation window of 4 seconds. FIG. 2D shows a third example set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are periodic signals having the same period (called the reference period in FIG. 2D) and are composed of elementary rectangular signals, so that the time patterns of any two signals taken from this third set are distinct over the reference period. In this case, the phase of each signal matters, in that it must be adjusted so as to limit the time correlation coefficient, and therefore the degree of statistical dependence, to a maximum value for each pair of distinct modulation signals selected from this set of 10 modulation signals. As in FIG. 2B, the amplitude of these signals varies between 0% and 100%, meaning that the corresponding degree of transformation varies (with an adapted proportionality coefficient) between a minimum value and a maximum value. FIG. 2E shows a fourth example set of 9 modulation signals, signal 1 to signal 9, which are pairwise temporally decorrelated. These signals are periodic signals having the same period (called the reference period in FIG. 2E). Each modulation signal comprises a time pattern composed of a short rectangular pulse of 100% amplitude followed by a longer segment of 0% amplitude, the rectangular pulses of the different modulation signals being shifted in time relative to each other so that, at any given instant, a single modulation signal has an amplitude of 100% while the others have an amplitude of 0%. Since these modulation signals all have the same temporal pattern (with different phase shifts), it is by adjusting the phase of each signal that one can control the time correlation coefficient, and therefore the degree of statistical dependence, between two signals. When the transformation function used is a brightness change function, the graphic object being visible (unchanged brightness) when the modulation signal is at 100% and invisible (zero brightness) when the modulation signal is at 0%, the animated graphic objects obtained from these modulation signals and this elementary transformation blink, appearing and disappearing in a given order, a single visual stimulus being visible at any given instant. Over all the pairs of modulation signals in this set of 9 signals, the time correlation coefficient is zero. It is therefore possible to obtain temporally decorrelated modulation signals by using distinct frequencies, phases or temporal patterns for each pair of modulation signals, for example: with signals composed of the same periodic temporal pattern but having different frequencies, and therefore different periods (case of FIGS. 2B and 2C), regardless of phase; with signals composed of distinct periodic temporal patterns, whether or not they have the same duration (i.e.
the period of the signal), with specific phases adapted to each temporal pattern (case of FIG. 2D); with signals composed of the same periodic temporal pattern, having the same period, but with the patterns out of phase with respect to each other (case of FIG. 2E). FIG. 3 schematically represents aspects of a method and system for determining the focus of visual attention. In one or more embodiments, the reconstructed modulation signal SMR is compared to each of the modulation signals SM1, SM2, ..., SMN to search for the modulation signal for which the degree of statistical dependence is maximum. For example, if the degree of statistical dependence is maximum for the modulation signal SM4, this means that the individual's visual attention is focused on the visual stimulus OA4 generated from this modulation signal SM4. In the example shown in FIG. 3, the visual stimuli are the numbers 0 to 9, and the visual attention of the individual is focused on the number 4, corresponding to the visual stimulus OA4; the maximum degree of statistical dependence is found with the corresponding modulation signal SM4. An exemplary embodiment of a method for generating a reconstruction model MR is illustrated schematically in FIG. 4A. Although the steps of this method are presented sequentially, at least some of them can be omitted, executed in a different order, performed in parallel, or combined to form a single step. In a step 401, a test i (i ∈ [1; N]) is carried out with a visual stimulus generated from a modulation signal SMi: the visual stimulus is presented on a display screen and an individual is invited to observe it, that is to say, to focus his visual attention on this visual stimulus. Each visual stimulus is an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal.
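The decoding principle illustrated in FIG. 3 — pick the modulation signal whose statistical dependence on the reconstructed signal is largest — can be sketched as follows (illustrative code, not from the patent; absolute Pearson correlation stands in for the degree of statistical dependence, and the toy signals are assumptions):

```python
import numpy as np

def decode_stimulus(smr, modulation_signals):
    """Return the index of the modulation signal most statistically
    dependent (here: absolute Pearson correlation) on the
    reconstructed modulation signal SMR, plus all the scores."""
    scores = [abs(np.corrcoef(smr, sm)[0, 1]) for sm in modulation_signals]
    return int(np.argmax(scores)), scores

t = np.linspace(0.0, 4.0, 1000, endpoint=False)
bank = [np.sin(2 * np.pi * f * t) for f in (1.0, 1.2, 1.4, 1.6)]
rng = np.random.default_rng(1)
smr = bank[2] + 0.8 * rng.standard_normal(t.size)  # noisy reconstruction of signal 3
idx, scores = decode_stimulus(smr, bank)           # idx identifies bank[2]
```

The winning index designates the visual stimulus OAi on which attention is, a priori, focused.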
EEG test signals Ei,j are recorded while the individual focuses on the visual stimulus considered, where i is the index identifying the test and the corresponding modulation signal, and j the index identifying the EEG channel considered. Each of these EEG signals Ei,j is composed of a plurality of EEG segments Ei,j,k, where k is the index identifying the segment. In an exemplary implementation of step 401, ten visual stimuli in the form of flashing numbers (numbers ranging from 0 to 9) are displayed on a screen, each flashing at a slightly different frequency. The individual is equipped with an EEG headset and views a display screen on which the ten digits flash at different frequencies. A succession of tests is carried out. Each test lasts, for example, ten seconds, the interval between two tests being, for example, 1 or 2 seconds. On each test, the individual is instructed to focus on one of the numbers and to ignore the others until the next test. The individual thus switches his attention from one stimulus to another and generates EEG signals E1, E2, ..., EX whose content differs depending on the focus of visual attention. In a first alternative embodiment, a time stamping of the EEG segments is carried out during step 401. A time stamping of the modulation signals is also carried out during step 401. The time stamping can be carried out by any method. In this first alternative embodiment, the clock of the acquisition equipment 130 is used to time stamp the EEG segments and produce a time stamp ti' (or "timecode") for each EEG segment. The clock of the control device 140 is used to time stamp the modulation signals and produce a time stamp each time a predetermined event takes place in the stimulation (corresponding, for example, to the appearance of a new visual stimulus on screen or to the start of a stimulus display).
In a second alternative embodiment, an additional EEG channel is used, over which short electrical pulses of known amplitude are transmitted each time a predetermined event takes place in the stimulation (corresponding, for example, to the appearance of a new visual stimulus on the screen or to the start of the display of a stimulus). This additional EEG channel containing the short pulses is saved with the EEG data segments. Step 401 is repeated several times, for each visual stimulus of a plurality of visual stimuli, so as to record the corresponding EEG signals produced by the individual when he brings his visual attention to the visual stimulus considered. In a step 402, the time alignment (or synchronization) between the EEG segments Ei,j,k and the modulation signals SMi is performed. This synchronization can be carried out by any method. It can use the double time stamping of the EEG segments and of the modulation signals, or else the additional EEG channel. When the double time stamping is used and the time stamps are produced by two different clocks, it is necessary to correct these values so as to obtain time stamps virtually produced by the same reference clock, in order to compensate for possible temporal drifts between the clocks. For example, when the clock of the control device is used as the reference clock, the time stamps ti' of the time-stamped EEG segments Ei,j,k, produced by the clock of the acquisition equipment, are resynchronized with respect to the reference clock to obtain time stamps ti. By associating these corrected time stamps of the EEG segments with those produced for the modulation signals, one can achieve alignment between the EEG segments Ei,j,k and the signals SMi. The difference between the reference clock (t) of the control device 140 and the clock (t') of the equipment 130 for acquiring EEG data is modeled by a linear equation:

diff = a * (t' − t0) + b = t' − t

where a represents the drift between the two clocks and b the offset at t' = t0.
To estimate these coefficients a and b, a series of x points (t', diff(t')) is acquired before step 401, then the coefficients a and b are estimated by the method of least squares. In order to compensate for the random variations in instruction execution times and in data transmission between the control device 140 and the acquisition equipment 130, each point (t', diff(t')) is obtained from the successive sending of n time stamps tk by the control device 140 to the acquisition equipment 130, which produces a stamp tk' at each reception. The point (t', diff(t')) used for the calculation of the coefficients a and b corresponds to the pair (tk', diff = tk' − tk) having the minimum difference (tk' − tk). Once the coefficients a and b have been obtained, the time stamps ti' are corrected as follows:

ti = ti' − a * (ti' − t0) − b

The time stamping and time registration steps are however optional, and are in particular not necessary when the acquisition equipment 130 and the control device operate from the same clock. In a step 403, the EEG segments Ei,j,k are concatenated so as to generate the EEG signals Ei,j. In a step 404, preprocessing and denoising can be applied to the EEG signals so as to optimize the signal-to-noise ratio. EEG signals can in fact be considerably contaminated by artefacts of both intra- and extra-cerebral origin, for example electrical artefacts, such as the 50 Hz hum of the mains electrical network in Europe, or biological artefacts, such as eye movements, cardiac activity (ECG), muscle activity, etc. In one or more embodiments, the signals E1, E2, ..., EX are thus denoised before the generation of the reconstructed modulation signal SMR. This denoising can consist in simply filtering out the high frequencies in the signals E1, E2, ..., EX, for example all the frequencies above 40 Hz, to eliminate the electrical noise produced by the electrical network.
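The clock-drift correction of step 402 can be sketched as follows (illustrative; the least-squares fit follows the linear model diff = a·(t' − t0) + b given above, while the simulated drift values and function names are assumptions):

```python
import numpy as np

def estimate_drift(t_ref, t_acq, t0):
    """Least-squares fit of diff = a*(t' - t0) + b, where diff = t' - t
    is the offset between acquisition (t') and reference (t) clocks."""
    diff = t_acq - t_ref
    a, b = np.polyfit(t_acq - t0, diff, 1)
    return a, b

def correct(ti_prime, a, b, t0):
    """Map an acquisition-clock stamp back to the reference clock:
    ti = ti' - a*(ti' - t0) - b."""
    return ti_prime - a * (ti_prime - t0) - b

# Simulated clocks: the acquisition clock runs 50 ppm fast
# with a 3 ms initial offset.
t0 = 0.0
t = np.linspace(0.0, 600.0, 50)      # reference-clock stamps (s)
t_prime = t * 1.00005 + 0.003        # drifting acquisition stamps
a, b = estimate_drift(t, t_prime, t0)
corrected = correct(t_prime, a, b, t0)
```

After correction, the acquisition stamps coincide with the reference clock up to numerical precision, which is what allows the EEG segments to be aligned with the modulation signals.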
Multivariate statistical approaches can be used, including Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Canonical Correlation Analysis (CCA), to separate the useful EEG signal components (arising from the brain activity related to the cognitive task in progress) from irrelevant components. In a step 405, the parameters of the reconstruction model are determined. This determination can be made so as to minimize the reconstruction error. The reconstruction model comprises, for example, a plurality of parameters for combining EEG signals. These combination parameters are determined by a method of solving mathematical equations so as to obtain optimal combination parameter values, that is to say, the values for which the application of the reconstruction model to the plurality of test EEG signals Ei,j recorded for a visual stimulus generates a reconstructed modulation signal which best approximates the modulation signal corresponding to the visual stimulus considered; in other words, the values for which the reconstruction error is minimal. In one or more embodiments, the values αj of the combination parameters can be fixed (independent of time). In other embodiments, these values can be adjusted in real time in order to take into account a possible adaptation of the brain activity of the user 101, or a change in the signal-to-noise ratio of the EEG signal during a recording session. The reconstruction model MR can be a linear model, producing a modulation signal by linear combination of the signals Ei,j. In this case, the combination parameters are linear combination parameters αj and the mathematical equations are linear equations of the form:

SMi = Σj αj · Ei,j for i ∈ [1; N]

Other, more elaborate models can be used, in particular neural network models, for which the modulation signal is obtained by applying a cascade of non-linear mathematical operations to the signals Ei,j.
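The linear reconstruction model above can be fitted by ordinary least squares; the patent does not prescribe a specific solver, so the following is a minimal sketch on toy data (channel count, noise level and names are assumptions):

```python
import numpy as np

def fit_reconstruction(eeg, sm):
    """Least-squares estimate of the channel weights alpha such that
    eeg.T @ alpha approximates the modulation signal sm.

    eeg: array (n_channels, n_samples); sm: array (n_samples,)."""
    alpha, *_ = np.linalg.lstsq(eeg.T, sm, rcond=None)
    return alpha

def reconstruct(eeg, alpha):
    """Apply the linear reconstruction model: SMR = sum_j alpha_j * E_j."""
    return eeg.T @ alpha

# Toy check: each "EEG channel" carries a scaled copy of the
# modulation signal plus noise; the fitted weights recover it.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 1000, endpoint=False)
sm = np.sin(2 * np.pi * 1.2 * t)
mix = rng.standard_normal(8)                       # per-channel gains
eeg = np.outer(mix, sm) + 0.3 * rng.standard_normal((8, 1000))
alpha = fit_reconstruction(eeg, sm)
smr = reconstruct(eeg, alpha)
```

In a real calibration, `sm` would be the known modulation signal of the attended stimulus during a test of step 401, and `eeg` the synchronized recording.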
For example, a Siamese network can be used, in which a neural network is trained (from calibration data) to associate with any EEG signal E a one-dimensional time signal R (in this case, a modulation signal), in such a way that two EEG signals E1 and E2 recorded at different times produce one-dimensional signals R1 and R2 (in this case, modulation signals) which are similar when the attention of the individual is focused on the same animated graphic object, and dissimilar when the attention of the individual is focused on two distinct animated graphic objects. The concept of similarity between two signals is defined in a mathematical sense (it can, for example, be a simple correlation) and corresponds to a function which quantifies the degree of resemblance between two objects (see for example the page: https://en.wikipedia.org/wiki/Similarity_measure). Several mathematical definitions of similarity can be used, such as the negative of the Euclidean distance, or cosine similarity (see for example the page: https://fr.wikipedia.org/wiki/Similarité_cosinus). The reconstructed modulation signal is then the one-dimensional signal R generated by the neural network from a newly acquired EEG sample E. In one or more embodiments, the steps of the method for generating a reconstruction model are implemented by a system 100 according to FIG. 1A, for example by the signal processing device 120. An exemplary embodiment of a method for determining the focus of an individual's visual attention is illustrated schematically in FIG. 4B. Although the steps of this method are presented sequentially, at least some of them can be omitted, executed in a different order, performed in parallel, or combined to form a single step. In one or more embodiments, the steps of the method for determining the focus of visual attention are implemented by a system 100 according to FIG.
1A, for example by the signal processing device 120 and the display signal generation device 110. In a step 411, the display signal generation device 110 is configured to generate a plurality of visual stimuli from a plurality of graphic objects O1, O2, ..., ON, a plurality of elementary transformations T1, T2, ..., TP and a plurality of modulation signals SM1, SM2, ..., SMN. A visual stimulus is an animated graphic object OAi (i between 1 and N) obtained by applying to a corresponding graphic object Oi a temporal sequence STi of elementary transformations parameterized temporally by a corresponding modulation signal SMi. In one or more embodiments, the number N of visual stimuli, modulation signals and graphic objects is equal to 1. In one or more embodiments, the number P of elementary transformations is equal to 1. Each elementary transformation of the temporal sequence STi of elementary transformations can thus correspond to the same elementary transformation whose application parameter varies over time. Over time, the individual can shift the focus of his visual attention from one animated graphic object to another. Meanwhile, in a step 412, the electroencephalographic signals E1, E2, ..., Ej, ..., EX produced by the individual are recorded by the acquisition equipment 130. In one or more embodiments, the signal processing device 120 is configured to obtain a plurality of electroencephalographic signals E1, E2, ..., Ej, ..., EX produced by the individual focusing his attention on one of the visual stimuli OAi. In a step 413, the electroencephalographic signals E1, E2, ..., Ej, ..., EX are preprocessed and denoised so as to improve the reliability of the method for determining the focus of visual attention. The preprocessing can consist in synchronizing the segments of electroencephalographic signals E1, E2, ..., Ej, ...,
EX with respect to a reference clock, as explained above for step 402, in concatenating the segments of electroencephalographic signals, as explained above for step 403, and/or in denoising the electroencephalographic signals, as explained above for step 404. In one or more embodiments, the signal processing device 120 is configured to, during a step 414, obtain a reconstructed modulation signal SMR by reconstruction of a modulation signal from a plurality of electroencephalographic signals E1, E2, ..., Ej, ..., EX. In one or more embodiments, the signal processing device 120 is configured to reconstruct a modulation signal and generate a reconstructed modulation signal SMR from the plurality of electroencephalographic signals obtained in step 413 or 412 (with or without preprocessing and/or denoising). In one or more embodiments, the reconstruction is carried out by applying a reconstruction model to the plurality of electroencephalographic signals obtained in step 413 or 412 (with or without preprocessing and/or denoising). This reconstruction can be performed on a given sliding time window, here called the reconstruction window, and repeated periodically for each time position of the reconstruction window. This is the case, for example, when the reconstruction model MR is a linear model producing a modulation signal by linear combination of the signals E1, E2, ..., Ej, ..., EX.
In this case, the combination parameters are the linear combination parameters αj obtained during step 405, and the reconstructed modulation signal SMR is calculated by linear combination of the signals E1, E2, ..., Ej, ..., EX:

SMR = Σj αj · Ej

[00107] In one or more embodiments, the signal processing device 120 is configured to, during a step 415 (called the decoding step), calculate a degree of statistical dependence between the reconstructed modulation signal and each modulation signal of the set of modulation signals, and identify at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than a threshold SC2, of value for example between 0.2 and 0.3. Identifying at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than the threshold SC2 means that the visual attention of the individual has, a priori, come to focus on this visual stimulus, and/or that this or these visual stimuli have appeared in an area of the display screen observed by the individual. This identification can therefore be used to detect that a display change has taken place and/or that a change in the focus of visual attention has occurred. The degree of statistical dependence can be determined as described earlier in this document. It is, for example, a time correlation coefficient between the reconstructed modulation signal and a modulation signal of the set of modulation signals.
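Steps 414 and 415 on a sliding window can be sketched together as follows (illustrative; the window length, step size, SC2 value and toy data are assumptions within the ranges given in the text):

```python
import numpy as np

def sliding_decode(eeg, alpha, bank, fs, win_s=4.0, step_s=0.5, sc2=0.25):
    """For each position of a sliding window: reconstruct SMR by the
    linear model SMR = sum_j alpha_j * E_j (step 414), then report
    every stimulus whose |correlation| with SMR exceeds SC2 (step 415)."""
    win, step = int(win_s * fs), int(step_s * fs)
    detections = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        smr = alpha @ eeg[:, start:start + win]              # step 414
        rho = [abs(np.corrcoef(smr, sm[start:start + win])[0, 1])
               for sm in bank]                                # step 415
        detections.append([i for i, r in enumerate(rho) if r > sc2])
    return detections

# Toy check: 4 channels carrying the 1.5 Hz modulation signal.
rng = np.random.default_rng(3)
fs = 100.0
t = np.arange(0.0, 12.0, 1.0 / fs)
bank = [np.sin(2 * np.pi * f * t) for f in (1.0, 1.5, 2.0)]
mix = rng.standard_normal(4)
eeg = np.outer(mix, bank[1]) + 0.1 * rng.standard_normal((4, t.size))
alpha = mix / (mix @ mix)     # ideal weights for this toy mixture
detected = sliding_decode(eeg, alpha, bank, fs)
```

Repeating the computation for each window position is what allows the system to track changes of attentional focus over time.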
In one or more embodiments, the number N of visual stimuli, modulation signals and graphic objects is strictly greater than 1, and the signal processing device 120 is further configured to, during step 415, search, among the plurality of modulation signals SM1, SM2, ..., SMN, for the modulation signal SMi for which the degree of statistical dependence on the reconstructed modulation signal SMR is maximum, and identify the visual stimulus OAi corresponding to this modulation signal SMi. The visual attention is a priori focused on the identified visual stimulus OAi. The search is carried out, for example, by calculating a degree of statistical dependence between the reconstructed modulation signal SMR and each signal of the plurality of modulation signals SM1, SM2, ..., SMN. This decoding step can be performed on a given sliding time window, here called the decoding window, and repeated periodically for each time position of the decoding window. The duration of the decoding window can be identical to that of the reconstruction window. In one or more embodiments, when the number N of visual stimuli, modulation signals and graphic objects is strictly greater than 1, one or more of the visual stimuli can be displayed at a given time on the display screen 105. The decoding step 415 can nevertheless remain identical regardless of the number of visual stimuli displayed at a given instant, the statistical dependence being sought with any of the modulation signals SM1, SM2, ..., SMN corresponding to the visual stimuli likely to be displayed. This eliminates the need for dynamic modification and synchronization of the processing operations carried out during the decoding step 415 with respect to variations in the content actually displayed.
This can be very useful when the visual stimuli are integrated into a video, or when the user interface in which the visual stimuli are integrated is dynamically modified as the user interacts with it. In an exemplary embodiment, ten visual stimuli in the form of flashing figures (figures ranging from 0 to 9) are displayed on a screen, each flashing at a slightly different frequency, or with the same frequency but appearing alternately on the screen. The individual is equipped with an EEG headset and views a display screen on which the ten digits flash at different frequencies. In one embodiment, when determining the focus of visual attention, in the event of ambiguity between two or more visual stimuli, or in the event of disturbances and/or artefacts in the recorded EEG signals resulting, for example, from user movements, the modulation signals of the visual stimuli can be temporarily modified (for the time necessary to remove the ambiguity or to obtain less disturbed signals). This modification can be carried out, for example, so as to display only the visual stimuli for which there is ambiguity, and/or to modify the modulation signals of the visual stimuli for which there is ambiguity. The modification of the modulation signals can consist in modifying the frequency or the temporal pattern of the modulation signals, so as to increase the frequency and/or the total duration of visibility and/or the degree of transformation of the visual stimuli for which there is ambiguity, and to reduce the frequency and/or the total duration of visibility and/or the degree of transformation of the other visual stimuli. The modification can also consist in permuting the modulation signals among themselves, without changing their temporal pattern or their frequency. This permutation can be random.
Such a permutation amounts, when the modulation signals and the elementary transformation are defined so that the visual stimuli blink by appearing and disappearing on the display screen in a given order (see the example of FIG. 2E), to modifying, for example randomly, the order of appearance of the visual stimuli, so that the stimuli for which there is ambiguity are visible more frequently. The permutation can be combined with a modification of the modulation signals aimed at increasing the frequency and/or the total duration of visibility and/or the degree of transformation of the visual stimuli for which there is ambiguity. The signal processing device 120 makes it possible to automatically identify, without any information other than the EEG, the visual stimulus on which attention is focused. A reconstruction model enables the reconstructed modulation signal to be generated from the raw EEG. The reconstructed modulation signal is then correlated with the modulation signals corresponding to the various animated graphic objects, the visual stimulus observed being the one corresponding to the modulation signal for which the degree of statistical dependence is maximum. Tests obtained by combining the results of several individuals show that the method for determining the visual stimulus observed is very robust (error rate less than 10% for signals E1, E2, ..., EX recorded over a period of 1 second), even with a very short learning phase of a few minutes (for example, less than 5 minutes) or even a few seconds (for example, less than 5 seconds) carried out over a few stimulus types. The method for determining the focus of visual attention is applicable not only to numbers but to numerous human-machine interfaces, for example a full alphanumeric keyboard with 26 characters or more. [00116] FIG. 5 illustrates another example of a human-machine interface to which this method is applicable.
In this example, the dynamic stimuli consist of logos or icons which are animated, not by elementary transformations acting on the light intensity of the logo, but by the application of movements so as to move the logo on itself, in a plane or in three-dimensional space. These movements are, for example, oscillations or rotations at different frequencies decodable in real time. In this case, the amplitude of the corresponding modulation signal indicates the degree of transformation at a given instant, that is to say, the angle of rotation to be applied at that instant. These movements are, for example, periodic. In FIG. 5, 12 logos APP1 to APP12 are presented, arranged in a grid of 4 × 3 logos. Although this figure is presented in black and white, the logos can also be in color. In the example of FIG. 5, the logos rotate on themselves at different frequencies, as illustrated in FIG. 5 for the logo APP8. Each of these logos is animated by an oscillation in rotation on itself at a frequency distinct from that of the other logos. The periodic rotation applied to the icons induces brain responses detectable in the EEG signals at the natural frequency of rotation, which can be decoded in real time using the techniques described above. This type of interface is highly flexible and can in particular be adapted for smartphones or tablets. Such a human-machine interface makes it possible, for example, to produce graphical interfaces for any type of computer device, for example for software applications and/or operating systems on a mobile terminal or computer, whether or not the display screen is a touch screen. [00118] FIG. 6 illustrates another example of a usable human-machine interface. In order to facilitate the focusing of an individual's visual attention on a visual stimulus and to minimize the influence of neighboring visual stimuli on the EEG signals, it is possible to use a technique called "crowding".
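As an illustration (not from the patent), mapping a modulation signal to a rotation angle is a direct rescaling of the amplitude; the proportionality coefficient, angle range and refresh rate below are assumptions:

```python
import math

def rotation_angle(sm_percent, max_angle_deg=15.0):
    """Map a modulation amplitude in 0..100 (%) to a rotation angle.

    The icon oscillates between -max_angle_deg and +max_angle_deg;
    50 % corresponds to the neutral (unrotated) position."""
    return (sm_percent - 50.0) / 50.0 * max_angle_deg

# A 1.2 Hz sinusoidal modulation signal sampled at a 60 Hz refresh rate
fs, f = 60.0, 1.2
angles = [rotation_angle(50.0 + 50.0 * math.sin(2 * math.pi * f * n / fs))
          for n in range(240)]   # 4 s of animation frames
```

Each frame of the animation then applies `angles[n]` to the icon, producing the periodic oscillation whose frequency tags the stimulus in the EEG.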
This technique consists in surrounding each visual stimulus with lateral masks, animated or not, which reduce the visual disturbances linked to the animation of neighboring visual stimuli and allow the individual to focus his attention more effectively on a particular stimulus, and consequently improve its decoding. [00119] FIG. 7 illustrates another example of a human-machine interface in which feedback is given to the user on the visual stimulus which has been identified as being observed by the user. The human-machine interface includes the numbers 0 to 9. In the example of FIG. 7, the user observes the number 6, and the feedback consists in enlarging the number identified as being observed by applying a method for determining the focus of visual attention according to the present description. In general, the feedback given to the user can consist in highlighting the identified visual stimulus, for example by highlighting, blinking, zooming, a change of position, a change of size or a change of color, etc. [00120] In one or more embodiments, the visual stimuli are part of a human-machine interface of a software application or of a computer device, and, following the identification of the visual stimulus observed by the implementation of a method for determining the focus of visual attention according to the present description, a command is sent to trigger the execution of one or more operations associated with the identified visual stimulus. In one or more embodiments, the different steps of the method or methods described in this document are implemented by software or a computer program. The present description thus relates to software or a computer program comprising software instructions or program code instructions, readable and/or executable by a computer or by a data processor, these instructions being configured to control the execution of
steps of one or more methods described in this document when this computer program is executed by a computer or data processor. These instructions can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or any other desirable form. These instructions are intended to be stored in a memory of a computer device or computer system, then loaded and executed by a processing unit or data processor of this computer device or computer system in order to implement the steps of the method or methods described in this document. These instructions can be stored, temporarily or permanently, in whole or in part, on a computer-readable, non-transitory medium of a local or remote storage device comprising one or more storage media. The present description also relates to a data medium readable by a data processor, comprising instructions of software or a computer program as mentioned above. The data medium can be any entity or device capable of storing such instructions. Embodiments of computer-readable media include, but are not limited to, both computer storage media and communication media, including any medium that facilitates the transfer of a computer program from one place to another. Such a storage medium can be an optical storage device such as a compact disc (CD, CD-R or CD-RW), a DVD (DVD-ROM or DVD-RW) or a Blu-ray disc, a magnetic medium such as a hard disk, magnetic tape or floppy disk, a removable storage medium such as a USB key or an SD or micro SD memory card, or a memory, such as random access memory (RAM), read-only memory (ROM), cache memory, or non-volatile backup memory (for example programmable or flash memories), etc. The present description also relates to a computer device or computer system comprising means for implementing the steps of the method or methods described in this document.
These means are software and/or hardware means for implementing the steps of the method or methods described in this document. The present description also relates to a computer device or computer system comprising at least one memory for storing code instructions of a computer program for the execution of all or part of the steps of one or more methods described in this document, and at least one data processor configured to execute such a computer program.
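As an illustration of the stimulus-generation step described above, the following sketch animates each visual stimulus with a smooth random modulation signal and keeps the pairwise Pearson correlation between candidate signals below a threshold (playing the role of the "second threshold" of the method). This is a minimal sketch, not the patent's reference implementation: the function names, the moving-average smoothing, and the choice of luminous-intensity modulation as the elementary transformation are all assumptions made here for concreteness.

```python
import numpy as np

def make_modulation_signals(n_stimuli, n_samples, fs, cutoff_hz=5.0,
                            max_pairwise_corr=0.2, seed=0):
    """Draw smooth random modulation signals, one per visual stimulus,
    keeping pairwise Pearson correlation below a threshold."""
    rng = np.random.default_rng(seed)
    win = max(1, int(fs / cutoff_hz))          # moving-average window length
    signals = []
    while len(signals) < n_stimuli:
        # Low-pass filtered white noise -> slowly varying modulation
        x = np.convolve(rng.standard_normal(n_samples + win),
                        np.ones(win) / win, mode="valid")[:n_samples]
        x = (x - x.mean()) / x.std()           # zero mean, unit variance
        # Reject candidates too correlated with already accepted signals
        if all(abs(np.corrcoef(x, s)[0, 1]) < max_pairwise_corr
               for s in signals):
            signals.append(x)
    return np.stack(signals)

def modulate_intensity(base_intensity, m, depth=0.5):
    """Example of an elementary transformation: temporal modulation of the
    luminous intensity of a graphic object, clipped to the valid range."""
    return np.clip(base_intensity * (1.0 + depth * m), 0.0, 1.0)

mods = make_modulation_signals(n_stimuli=4, n_samples=1000, fs=100.0)
intensity = modulate_intensity(0.8, mods[0])   # per-frame intensity of stimulus 0
```

The rejection-sampling loop is one simple way to enforce a low global degree of statistical dependence between modulation signals; any other decorrelating construction (e.g. orthogonal or frequency-separated signals) would serve the same purpose.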
Claims (12)

1. Method for determining the focus of an individual's visual attention from electroencephalographic signals, the method comprising: a generation (411) of a set of at least one visual stimulus to be displayed from at least one graphic object, at least one elementary transformation and at least one modulation signal, a visual stimulus being an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal; a reconstruction (414) of a modulation signal from a plurality of electroencephalographic signals produced by the individual, to obtain a reconstructed modulation signal; a calculation (415) of a degree of statistical dependence between the reconstructed modulation signal and each modulation signal of said set; and an identification (415) of at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than a first threshold.

2. Method according to claim 1, in which said set comprises a plurality of modulation signals, the method comprising: a search (415), among the plurality of modulation signals, for a modulation signal for which the degree of statistical dependence with the reconstructed modulation signal is maximal; and an identification (415) of the visual stimulus corresponding to the modulation signal for which the degree of statistical dependence is maximal; the modulation signals being composed so that a global degree of statistical dependence, determined in the time and/or frequency domain, for each pair of modulation signals corresponding to two distinct visual stimuli, is less than a second threshold.

3. Method according to claim 1 or 2, in which the reconstruction is carried out by applying a reconstruction model to the plurality of electroencephalographic signals.

4. Method according to claim 3, in which the reconstruction model comprises a plurality of electroencephalographic-signal combination parameters, the method comprising determining values of the plurality of electroencephalographic-signal combination parameters during an initial learning phase.

5. Method according to claim 4, further comprising, during the initial learning phase applied to a set of at least one visual stimulus among the plurality of visual stimuli: obtaining, for each visual stimulus of said set, test electroencephalographic signals produced by the individual focusing his attention on the visual stimulus considered; and determining optimal values of the plurality of electroencephalographic-signal combination parameters for which the application of the reconstruction model to the test electroencephalographic signals recorded for a visual stimulus generates a reconstructed modulation signal that best approximates the modulation signal corresponding to the visual stimulus considered.

6. Method according to any one of the preceding claims, in which each modulation signal defines variations, as a function of time, of at least one parameter for applying an elementary transformation to a graphic object.

7. Method according to claim 6, in which an application parameter is a degree of transformation or a speed of application of an elementary transformation.

8. Method according to any one of the preceding claims, in which an elementary transformation is a transformation of the set constituted by a variation of light intensity, a variation of contrast, a colorimetric transformation, a geometric deformation, a rotation, an oscillation, a displacement along a trajectory, a change of shape and a change of graphic object, or a combination of transformations chosen from said set.

9. Computer program comprising program code instructions for the execution of the steps of a method according to any one of the preceding claims when said computer program is executed by a data processor.

10. Computer system comprising at least one memory for storing code instructions of a computer program configured for the execution of a method according to any one of claims 1 to 7 and at least one data processor configured to run said computer program.

11. System for determining the focus of an individual's visual attention from electroencephalographic signals, the system comprising: a device (110) for generating display signals, configured to generate (410) a set of at least one visual stimulus to be displayed from at least one graphic object, at least one elementary transformation and at least one modulation signal, a visual stimulus being an animated graphic object obtained by applying to a graphic object a temporal sequence of elementary transformations parameterized temporally by a corresponding modulation signal; and a signal processing device (120) configured to: obtain (411) a plurality of electroencephalographic signals produced by the individual; obtain (414) a reconstructed modulation signal by reconstructing a modulation signal from the plurality of electroencephalographic signals; calculate (415) a degree of statistical dependence between the reconstructed modulation signal and each modulation signal of said set; and identify (415) at least one visual stimulus corresponding to a modulation signal for which the degree of statistical dependence is greater than a first threshold.

12. System according to claim 11, in which said set comprises a plurality of modulation signals and the signal processing device (120) is further configured to: search (415), among the plurality of modulation signals, for the modulation signal for which the degree of statistical dependence with the reconstructed modulation signal is maximal; and identify (415) the visual stimulus corresponding to the modulation signal for which the degree of statistical dependence is maximal; the modulation signals being composed so that a global degree of statistical dependence, determined in the time and/or frequency domain, for each pair of modulation signals corresponding to two distinct visual stimuli, is less than a second threshold.
Patent family (publication number, publication date):
CN111511269A (2020-08-07); WO2019048525A1 (2019-03-14); US20200297263A1 (2020-09-24); EP3678532A1 (2020-07-15); CA3075210A1 (2019-03-14); JP2020537583A (2020-12-24); FR3070852B1 (2019-09-20); KR20200097681A (2020-08-19)
Cited and citing documents:
US8391966B2 (filed 2009-03-16, published 2013-03-05, Neurosky, Inc.): Sensory-evoked potential classification/detection in the time domain
EP3397146A4 (filed 2015-12-29, published 2019-08-07)
WO2021121766A1 (filed 2019-12-18, published 2021-06-24, Nextmind Sas): Brain-computer interface
CN113080998A (filed 2021-03-16, published 2021-07-09): Electroencephalogram-based concentration state grade assessment method and system
Legal status:
2018-09-28: PLFP, fee payment (2nd year)
2019-03-15: PLSC, search report ready
2019-09-30: PLFP, fee payment (3rd year)
2020-09-30: PLFP, fee payment (4th year)
2021-09-07: PLFP, fee payment (5th year)
Priority:
FR1758305, filed 2017-09-08 (published as FR3070852B1): Decoding the visual attention of an individual from electroencephalographic signals
Applications claiming this priority, all filed 2018-09-06 unless noted:
KR1020207009509 (KR20200097681A); CA3075210 (CA3075210A1); CN201880072508.9 (CN111511269A); PCT/EP2018/073961 (WO2019048525A1); EP18778371.7 (EP3678532A1); US16/645,294 (US20200297263A1); JP2020535297 (JP2020537583A)
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|