Facilitating the generation of standardized tests for evaluating gestures on a touch screen based on data from computer-generated models
Patent abstract:
It is proposed to facilitate the generation of standardized tests for the evaluation of gestures on a touchscreen based on data from computer-generated models. A system includes a memory 110 which stores executable components and a processor 112, operatively coupled to the memory, which executes the executable components. The executable components may include a mapping component 102 that correlates a set of operational instructions with a set of gestures on a touch screen and a sensor component 104 that receives sensor data from a plurality of sensors. The sensor data can be linked to the implementation of the series of gestures on a touch screen. The series of gestures on a touch screen can be implemented in an environment subject to vibrations or turbulence. Further, the executable components may include an analysis component 106 which analyzes the sensor data and evaluates performance data and respective usability data of the series of gestures on a touch screen relative to the respective operational instructions. Figure for the abstract: Fig. 1
Publication number: FR3076642A1
Application number: FR1872578
Filing date: 2018-12-10
Publication date: 2019-07-12
Inventors: Mark Smith; George Henderson; Luke Bolton
Applicant: GE Aviation Systems Ltd
IPC main classification:
Patent description:
Description Title of the invention: Facilitation of the generation of standardized tests for the evaluation of gestures on a touch screen as a function of data from computer-generated models

[0001] The present invention generally relates to the evaluation of gestures on a touch screen and the facilitation of the generation of standardized tests for the evaluation of gestures on a touch screen according to data from computer-generated models. Human machine interfaces can be designed to allow an entity to interact with a computing device through one or more gestures. For example, the one or more gestures can be detected by the computing device and, as a function of respective functions associated with the one or more gestures, an action can be implemented by the computing device. Such gestures are useful in situations where the computing device and the user sit still with little, if any, movement. However, in situations where there are unpredictable, constant movements, such as unstable situations associated with air travel, gestures may not be made precisely and/or may not be detected precisely by the computing device. Therefore, gestures may not be used effectively with computing devices in an unstable environment.

One or more examples propose a system which may include a memory which stores executable components and a processor, operatively coupled to the memory, which executes the executable components. The executable components can include a mapping component that correlates a set of operational instructions to a set of gestures on a touch screen. The operational instructions may include at least one defined task performed in relation to a touch screen of a computing device. The executable components can also include a sensor component that receives sensor data from a plurality of sensors. The sensor data can be linked to the implementation of the series of gestures on a touch screen. The series of gestures on a touch screen can be implemented in an environment subject to vibrations or turbulence, depending on certain implementations. In addition, the executable components may include an analysis component which analyzes the sensor data and evaluates performance score data and/or usability score data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions. The performance score data and/or the usability score data can be a function of the appropriateness of the series of gestures on a touch screen in the defined environment (for example, an environment subject to vibrations or turbulence).

[0004] Also, in one or more examples, a computer-implemented method is provided. The computer-implemented method may include mapping, by a system comprising a processor, a set of operational instructions to a set of gestures on a touch screen. The operational instructions may include a defined set of related tasks performed in relation to a touch screen of a computing device. The computer-implemented method can also include obtaining, by the system, sensor data which is linked to the implementation of the series of gestures on a touch screen. In addition, the computer-implemented method may include evaluating, by the system, performance score data and/or usability score data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions based on an analysis of the sensor data.
In some implementations, the series of gestures on a touch screen can be implemented in a controlled non-stationary environment. In addition, according to one or more examples, there is provided here a computer-readable storage device comprising executable instructions which, in response to execution, cause operations to be performed on a system comprising a processor. Operations may include matching a set of operational instructions to a set of gestures on a touch screen and obtaining sensor data that is related to the implementation of the series of gestures on a touch screen in an unstable environment. Operations may also include learning a model based on the series of operational instructions, the series of gestures on a touch screen, and the sensor data. Furthermore, the operations may also include analyzing performance score data and/or usability score data of the series of gestures on a touch screen relative to the respective operational instructions of the series of operational instructions based on an analysis of the sensor data and the model.

For the accomplishment of the preceding and related objectives, the invention described comprises one or more of the elements described more fully below, and the accompanying drawings present in detail certain illustrative aspects of the present invention. However, these aspects are indicative of only a few of the ways in which the principles of the present invention can be employed. Other aspects, advantages, and new elements of the present invention will become apparent from the following detailed description taken in conjunction with the drawings. It will also be appreciated that the detailed description may include additional examples or variations in addition to those described in this summary.

Various non-limiting embodiments are further described with reference to the accompanying drawings, in which:

[0008] [Fig. 1] illustrates a non-limiting example of a system for facilitating a test of control gestures according to one or more embodiments described here;

[Fig. 2] illustrates another non-limiting example of a system for the evaluation of functional gestures according to one or more embodiments described here;

[Fig. 3] illustrates a non-limiting example of the implementation of a standardized test for a pan/move function test according to one or more embodiments described here;

[Fig. 4] illustrates a non-limiting example of a first embodiment of the pan/move function test of FIG. 3 according to one or more embodiments described here;

[Fig. 5] illustrates a non-limiting example of a second embodiment of the pan/move function test of FIG. 3 according to one or more embodiments described here;

[Fig. 6] illustrates a non-limiting example of a third embodiment of the pan/move function test of FIG. 3 according to one or more embodiments described here;

[Fig. 7] illustrates a non-limiting example of a fourth embodiment of the pan/move function test of FIG. 3 according to one or more embodiments described here;

[Fig. 8] illustrates a non-limiting example of a first embodiment of an enlarge/shrink function test according to one or more embodiments described here;

[Fig. 9] illustrates a non-limiting example of a second embodiment of the enlarge/shrink function test of FIG. 8 according to one or more embodiments described here;

[Fig. 10] illustrates a non-limiting example of a third embodiment of the enlarge/shrink function test of FIG. 8 according to one or more embodiments described here;
[Fig. 11] illustrates a non-limiting example of a fourth embodiment of the enlarge/shrink function test of FIG. 8 according to one or more embodiments described here;

[Fig. 12] illustrates a non-limiting example of a first embodiment of an enlarge/shrink function test according to one or more embodiments described here;

[Fig. 13] illustrates a non-limiting example of a second embodiment of the enlarge/shrink function test of FIG. 12 according to one or more embodiments described here;

[Fig. 14] illustrates a non-limiting example of a third embodiment of the enlarge/shrink function test of FIG. 12 according to one or more embodiments described here;

[Fig. 15] illustrates a non-limiting example of a fourth embodiment of the enlarge/shrink function test of FIG. 12 according to one or more embodiments described here;

[Fig. 16] illustrates a representation of a non-limiting example of the "go to" function task which can be implemented according to one or more embodiments described here;

[Fig. 17] illustrates another non-limiting example of a system for evaluating a function gesture according to one or more embodiments described here;

[Fig. 18] illustrates a non-limiting example of a computer-implemented method to facilitate touch screen evaluation tasks intended to evaluate the usability of gestures for touch screen functions according to one or more embodiments described here;

[Fig. 19] illustrates a non-limiting example of a computer-implemented process to generate standardized tests for the evaluation of gestures on a touch screen in an unstable environment according to one or more embodiments described here;

[Fig. 20] illustrates a non-limiting example of a computer-implemented method to evaluate a risk-benefit analysis associated with the evaluation of gestures on a touch screen in an unstable environment according to one or more embodiments described here;

[Fig. 21] illustrates a non-limiting example of a computing environment in which one or more embodiments described here can be facilitated; and

[Fig. 22] illustrates a non-limiting example of a networked working environment in which one or more embodiments described here can be facilitated.

One or more embodiments are now described more fully below with reference to the accompanying drawings in which examples of embodiments are shown. In the following description, for explanatory purposes, many specific details are presented in order to allow a deeper understanding of the various embodiments. However, the various embodiments can be practiced without these specific details. In other examples, well-known structures and devices are shown in the form of a schematic diagram in order to facilitate the description of the various embodiments.

Various aspects presented here relate to determining the effectiveness of a gesture-based command in a volatile environment before the implementation of the gestures in that environment. Specifically, the various aspects relate to a series of computer-based assessment tasks designed to assess the usability of gestures for touch screen functions (for example, an action of a touch screen, an operation of a touch screen). A "gesture" is a touch screen interaction that is used to express an intention (for example, selecting an item on the touch screen, facilitating movement on the touch screen, or having a defined action performed according to the touch screen interaction).
As presented here, the various aspects can assess the usability of gestures for a defined function and a defined environment. Usability can be determined by the time taken to complete the tasks, the precision with which the tasks were completed, or a combination of precision and completion time. Human machine interfaces (HMIs) designed for cockpits or other implementations that are subject to vibration and/or turbulence should be developed with usability in mind. For example, for aviation, this may involve considering scenarios such as turbulence, vibration, and the positioning of interfaces in the cockpit or other defined environment. There is a growing interest in using touch screens in the cockpit and, as touch screens are becoming ubiquitous on the market, there are now a number of common gestures that can be used to express simple intent to the system. However, these simple gestures are not suitable in environments that are unstable. Therefore, embodiments are presented here that can determine the usability of various gestures and the suitability of gestures in non-stationary environments. For example, unstable or non-stationary environments may include, but are not limited to, environments encountered during land navigation, marine navigation, aeronautical navigation, and/or space navigation. Although the various aspects are presented in relation to an unstable environment, the various aspects can also be used in a stable environment. The various aspects can provide objective evaluations (rather than subjective evaluations) of gestures on a touch screen. Objective ratings can be collected and used in conjunction with various subjective usability scales to more reliably determine the usability of a system with dedicated gestures for a single user intention.

FIG. 1 illustrates a non-limiting example of a system 100 to facilitate the testing of control gestures according to one or more embodiments described here. The system 100 can be configured to perform touch screen evaluation tasks to evaluate the usability of gestures for touch screen functions. The evaluation of the usability of gestures can be for touch screen functions which are performed in a non-stationary or non-stable environment, depending on certain implementations. For example, the assessment can be performed for environments that are subject to vibration and/or turbulence. Such environments may include, but are not limited to, nautical environments, nautical applications, aeronautical environments, and aeronautical applications.

The system 100 may include a mapping component 102, a sensor component 104, an analysis component 106, an interface component 108, at least one memory 110, and at least one processor 112. The mapping component 102 can correlate a set of operational instructions to a set of gestures on a touch screen. The operational instructions may include at least one defined task performed in relation to a touch screen of a computing device. In some implementations, the operational instructions may include a set of related tasks to be performed relative to the touch screen of the computing device. For example, the set of operational instructions may include instructions for an entity to interact, via an associated computing device, with a touch screen of the interface component 108. According to some implementations, the interface component 108 can be a component of the system 100.
However, according to some implementations, the interface component 108 can be separate from the system 100, but in communication with the system 100. For example, the interface component 108 can be associated with a device located at the same location as the system (for example, in a flight simulator) and/or a device located at a distance from the system (for example, a mobile telephone, tablet, laptop, or other computing device). The instructions may include detailed instructions, which may be visual instructions and/or audible instructions. According to some implementations, the instructions can advise the entity to perform various functions by interaction with an associated computing device. Various functions may include "pan/move," "enlarge/shrink," "go to next/go to previous" (for example, "go to"), and/or "erase/remove/delete." The pan/move function can include dragging an item (for example, a finger, a pen device) on the screen and/or dragging two items (for example, two fingers) on the screen. The sliding movement of the item(s) can be according to a defined path. In addition, non-limiting details related to an example of the pan/move function will be presented below with reference to Figures 3-7. Enlarge/shrink can include dragging an object up, down, right, and/or left on the screen. Another enlarge/shrink function can include clockwise and/or counterclockwise rotation. Yet another enlarge/shrink function can include pinching and/or enlarging a defined item on the screen. In addition, non-limiting details of an example of an enlarge/shrink function will be provided below with reference to Figures 8-15. The "go to" function can include swiping (or "sending") an object to the left, right, up, and/or down on the screen.

The sensor component 104 can receive sensor data from one or more sensors 114. The one or more sensors 114 can be included, at least in part, in the interface component 108. The one or more sensors can include touch sensors which are located in the interface component 108 and associated with the display. According to one implementation, the sensor data can be linked to the implementation of the series of gestures on a touch screen. For example, the series of gestures on a touch screen can be implemented in an environment which is subjected to vibrations or turbulence, is a non-stationary environment, and/or is an unstable environment. In some implementations, gestures on a touch screen can be tested in an environment subject to little, if any, vibration or turbulence.

The analysis component 106 can analyze the sensor data. For example, the analysis component 106 can evaluate whether a gesture conforms to a defined gesture path or to an expected movement. In addition, the analysis component 106 can evaluate performance score data and/or usability score data of the series of gestures on a touch screen relative to the respective operational instructions of the series of operational instructions. The performance score data and/or the usability score data may be a function of an appropriateness of gestures on a touch screen in the test environment (for example, a stable environment, an environment subject to vibrations or turbulence, etc.). For example, if a gesture on a touch screen is not suitable for the environment, a high percentage of errors may be detected.
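As a rough illustration of the mapping and evaluation just described, the following Python sketch correlates the functions listed above with candidate touch-screen gestures and combines completion time and precision into a single usability figure. All names, weights, and the time budget are hypothetical assumptions and are not taken from the embodiments.

```python
from dataclasses import dataclass

# Hypothetical gesture vocabulary mirroring the functions described above.
GESTURE_MAP = {
    "pan/move": ["one_finger_drag", "two_finger_drag"],
    "enlarge/shrink": ["pinch", "spread", "rotate_clockwise", "rotate_counterclockwise"],
    "go to": ["swipe_left", "swipe_right", "swipe_up", "swipe_down"],
    "erase/remove/delete": ["swipe_off_screen"],
}

@dataclass
class TrialResult:
    completion_time_s: float  # time from the first touch to reaching the stop position
    accuracy: float           # fraction of the gesture spent on the defined path (0..1)

def usability_score(result: TrialResult, time_budget_s: float = 10.0) -> float:
    """Combine completion time and precision into a single usability figure.

    The equal weighting and the 10-second budget are illustrative assumptions;
    the embodiments leave the exact combination of time and precision open.
    """
    time_score = max(0.0, 1.0 - result.completion_time_s / time_budget_s)
    return 0.5 * time_score + 0.5 * result.accuracy
```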
In an implementation, the performance score data may relate to a number of times that a gesture has deviated from the defined gesture path, positions in the defined gesture path where one or more deviations have occurred, the inability to perform the gesture, and/or the inability to complete a gesture (for example, from a defined starting position to a defined finishing position).

The at least one memory 110 can be functionally coupled to the at least one processor 112. The at least one memory 110 can store computer-executable components and/or computer-executable instructions. The at least one processor 112 may facilitate the execution of the computer-executable components and/or the computer-executable instructions stored in the at least one memory 110. The term "coupled" or variants thereof may include various communications including, but not limited to, direct communications, indirect communications, wired communications, and/or wireless communications. In addition, the at least one memory 110 can store protocols associated with the facilitation of standardized tests for the evaluation of gestures on a touch screen in an environment, which can be a stable environment or an unstable environment, as shown here. In addition, the at least one memory 110 can facilitate action to control communication between the system 100, other systems, and/or other devices, such that the system 100 can employ stored protocols and/or algorithms to obtain an improved evaluation of a gesture on a touch screen as described here.

Note that although the one or more computer-executable components and/or computer-executable instructions can be illustrated and described here as components and/or instructions separate from the at least one memory 110 (for example, functionally connected to the at least one memory 110), the various aspects are not limited to this implementation. Rather, according to various implementations, the one or more computer-executable components and/or the one or more computer-executable instructions can be stored in (or integrated in) the at least one memory 110. In addition, while various components and/or instructions have been illustrated as separate components and/or as separate instructions, in some implementations, multiple components and/or multiple instructions may be implemented as a single component or as a single instruction. In addition, a single component and/or a single instruction can be implemented as multiple components and/or as multiple instructions without departing from the exemplary embodiments.

It will be appreciated that the data storage components (for example, memories) described here can be either a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memory. As an example and not a limitation, a non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), or a flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache memory. As an example and not a limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory for the aspects described is intended to include, but not be limited to, these and other types of memory.
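Purely as an illustration, the performance score data enumerated at the start of this passage (deviation counts, deviation positions, and completion information for a single gesture trial) could be held in a simple record such as the following; the field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PerformanceScore:
    """Illustrative record for the performance score data of a single gesture trial."""
    deviation_count: int = 0                     # times the gesture left the defined gesture path
    deviation_positions: List[Tuple[float, float]] = field(default_factory=list)  # where deviations occurred
    completed: bool = False                      # whether the gesture reached the defined finishing position
    completion_time_s: Optional[float] = None    # None when the gesture could not be completed
```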
The at least one processor 112 can facilitate the respective analysis of information related to the evaluation of gestures on a touch screen. The at least one processor 112 may be a processor dedicated to determining the appropriateness of one or more gestures as a function of received data and/or as a function of a generated model, a processor which controls one or more components of the system 100, and/or a processor which both analyzes and generates models based on received data and controls one or more components of the system 100.

According to certain implementations, the various systems can include interface components (for example, the interface component 108) or respective display units which can facilitate the input and/or output of information to one or more display units. For example, a graphical user interface can be output to one or more display units and/or mobile devices as presented here, which can be facilitated by the interface component. A mobile device can also be called, and may contain some or all of the functionality of, a system, subscriber unit, subscriber station, mobile station, mobile, mobile device, device, wireless terminal, remote station, remote terminal, access terminal, user terminal, terminal, wireless communication device, user agent, user device, or user equipment (UE). A mobile device can be a cell phone, cordless phone, session initiation protocol (SIP) phone, smartphone, digital phone, wireless local loop (WLL) station, personal digital assistant (PDA), laptop, portable communication device, portable computing device, ultra-portable device, tablet, satellite radio, data card, wireless modem card, and/or other processing device for communicating over a wireless system. Furthermore, although presented in relation to wireless devices, the aspects described can also be implemented with wired devices, or with both wired and wireless devices.

FIG. 2 illustrates another non-limiting example of a system 200 for function gesture evaluation according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. The system 200 may include one or more of the components and/or functionality of the system 100, and vice versa. The system 200 may include a gesture model generation component 202 which can generate a gesture model 204 based on operational data received from a multitude of computing devices, which can be located in the system 200 and/or located remotely from the system 200. In some implementations, the gesture model 204 can be trained and normalized as a function of data from one or more devices. The data can be operational data and/or test data which can be collected by the sensor component 104. According to certain implementations, the gesture model 204 can learn gestures on a touch screen relating to the respective operational instructions of the set of operational instructions. For example, the set of operational instructions may include one or more gestures and one or more tasks (for example, instructions) which should be performed in relation to the one or more gestures. According to some implementations, the gesture model generation component 202 can train the gesture model 204 by cloud-based sharing between a multitude of models. The multitude of models can be based on operational data received from the multitude of computing devices.
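A deliberately simple, purely illustrative sketch of such sharing is shown below; the embodiments do not specify how shared models are combined, so the function simply averages parameter arrays from models trained at different test locations, and all names are hypothetical.

```python
import numpy as np

def aggregate_gesture_models(local_models):
    """Average parameter arrays from gesture models trained at different locations.

    local_models: list of dicts mapping parameter names to numpy arrays, one dict
    per test location. A simple stand-in for the cloud-based sharing described
    above, not the combination method used by the embodiments.
    """
    aggregated = {}
    for name in local_models[0]:
        aggregated[name] = np.mean([model[name] for model in local_models], axis=0)
    return aggregated
```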
For example, a test based on multiple gestures can be performed in different locations. Data and analysis can be gathered and analyzed in different locations. In addition, respective models can be trained in the different locations. The models trained in different locations can be aggregated by cloud-based sharing between one or more models. By sharing models and information from different locations (for example, testing centers), robust training and analysis of gestures can be facilitated, as shown here.

The system 200 may also include a scaling component 206 which performs gesture analysis on a touch screen as a function of the dimensions of the touch screen of the computing device. For example, various devices can be used to interact with the system 200. The various devices can be mobile devices, which can include different footprints and, therefore, display screens which can be of different sizes. In one example, a test can be performed on a large screen and the gesture model 204 can be trained on the large screen. However, a similar test may have to be performed on a smaller screen and, therefore, the scaling component 206 can use the gesture model 204 to rescale the test as a function of what is actually available (for example, the display size). In this way, the tests can remain the same regardless of the device on which the tests are performed. Thus, the one or more tests can be standardized between a variety of devices. In some implementations, the scaling component 206 can perform gesture analysis on a touch screen as a function of the respective sizes of one or more objects (for example, fingers, thumbs, or parts thereof) detected by the touch screen of the computing device. For example, if fingers are used to interact with the touch screen, the fingers may be too large for the screen area and, therefore, errors may be encountered depending on the size of the fingers. In another example, the fingers may be smaller than average and, therefore, the amount of time taken to complete one or more tasks may be longer due to the additional distance which must be covered on the screen due to the small size of the finger. Note that although various dimensions, screen ratios, and/or other numerical definitions can be described here, these details are provided simply to explain the aspects described. In various implementations, other dimensions, screen ratios, and/or other numerical definitions can be used with the aspects described.

In some implementations, a stopwatch component 208 can measure various amounts of time spent to complete a task and/or parts of the task. For example, the stopwatch component 208 may begin measuring an amount of time when a test is selected (for example, when a test start selector is activated). In another example, the stopwatch component 208 can begin measuring time upon receipt of a first gesture (for example, as determined by one or more sensors and/or the sensor component 104). In addition, or alternatively, the gesture analysis may include a series of tests or tasks which are output. When or after the test is started, a time to successfully complete a first gesture can be measured by the stopwatch component 208. In addition, an amount of time that elapses between the completion of the first task and a start of a second task can be measured by the stopwatch component 208. The start of the second task can be determined according to the reception of a next gesture by the sensor component 104 after the completion of the first task.
In another example, the start of the second task can be determined based on an interaction with one or more objects associated with the second task. An amount of time to complete the second task, another amount of time between the second task and a third task, etc., can be measured by the stopwatch component 208. According to certain implementations, one or more errors can be measured by the stopwatch component 208 as a function of the respective time spent deviating from a target path associated with the at least one defined path. For example, a task may indicate that a gesture should be performed and a target path should be followed when performing the gesture. However, according to some implementations, as the gesture can be performed in an environment which is unstable (for example, which undergoes vibrations, turbulence, or other disturbances), a pointing item (for example, a finger) can deviate from the target path (for example, lose contact with the touch screen) due to displacement. In some implementations, a defined amount of deviation can be expected due to the instability of the environment in which the gesture is performed. However, if the amount of deviation is above the defined amount, it may indicate an error and, therefore, the gesture may not be suitable for the environment tested. For example, the environment may experience too much vibration or movement, making the gesture inappropriate.

FIG. 3 illustrates a non-limiting example of the implementation of a standardized test for a pan/move function test 300 according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. Note that although particular standardized tests are illustrated and described here, the aspects described are not limited to these implementations. Rather, the non-limiting examples of the standardized tests are illustrated and described to facilitate the description of the one or more aspects provided here. Thus, other standardized tests can be used with the aspects described. The pan/move function test 300 can be used to simulate a sliding and/or movement of an object on a touch screen of the device. For example, a test channel 302 which has a defined width can be displayed. In some implementations, the test channel 302 may have a similar width over its entire length. However, in certain implementations, different zones of the test channel 302 may have different widths. A defined path 304 can be used by the analysis component 106 to determine if one or more errors have occurred during the gesture. For example, the one or more errors can be measured as a function of the time spent deviating from the defined path 304. A test object 306 can also be displayed, which is the object with which the entity can interact (for example, by multitouch). For example, the test object 306 can be selected and moved during the test. According to certain implementations, a ghost object 308 can also be displayed. The ghost object 308 is an object whose path the entity can try to mimic with the test object. For example, the ghost object 308 can be placed along the path in a position to which the test object 306 should be moved. According to some implementations, the test object 306 and the ghost object 308 may be approximately the same size and/or shape. However, according to other implementations, the test object 306 and the ghost object 308 can be of different sizes and/or shapes.
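The deviation-based error measure just described could be sketched roughly as follows; the sample format, the channel tolerance, and the vibration allowance below are assumptions rather than values from the embodiments.

```python
def off_path_time(samples, distance_to_path, channel_half_width_px, allowance_s=0.25):
    """Estimate the deviation-based error for one pan/move trial.

    samples: iterable of (timestamp_s, x, y) touch samples from the sensor component.
    distance_to_path: function (x, y) -> distance in pixels from the defined path 304.
    channel_half_width_px: tolerated distance from the path (half-width of the test channel 302).
    allowance_s: deviation time tolerated because of vibration or turbulence (assumed value).
    Returns (total_time_off_path_s, error_flag).
    """
    time_off_path = 0.0
    previous_t = None
    for t, x, y in samples:
        # Attribute the whole interval since the previous sample to "off path"
        # whenever the current sample lies outside the tolerated channel.
        if previous_t is not None and distance_to_path(x, y) > channel_half_width_px:
            time_off_path += t - previous_t
        previous_t = t
    return time_off_path, time_off_path > allowance_s
```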
In addition, in some implementations, the test object 306 and the ghost object 308 may be displayed in different colors or in other ways to distinguish between the objects. The defined path 304 can be designed to allow the sensor component 104 and/or one or more sensors to evaluate a displacement along the vertical axis (for example, a Y direction 310), a displacement along the horizontal axis (for example, an X direction 312), and a displacement on both the horizontal axis and the vertical axis (for example, a combined XY direction 314). In the example illustrated, the pan/move function test 300 can start in a first position (for example, a starting position 316) and can end in a second position (for example, a stop position 318). During the test procedure, the test object 306 may be located in various positions along the defined path 304, or in a position located in the test channel 302 but not on the defined path 304 (for example, the test channel 302 and/or the test object 306 can be dimensioned in such a way that a movement inside the test channel 302 can deviate from the defined path 304), or outside the test channel 302. In some implementations, if an object (for example, a finger) is removed from the test object, the test object remains where it is located and does not return to the starting position. In addition, there is no return when the channel limits have been crossed. The test object can move freely anywhere on the screen and is not limited by the channel. In addition, timing may start when the test object is touched and may end when the end line (for example, a stop position) is touched.

Figures 4-7 illustrate non-limiting examples of implementation of the pan/move function test 300 of FIG. 3 according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. When or after a start of the pan/move function test 300 is requested (for example, by selecting the test via the touch screen of the interface component 108, via an auditory selection, or by any other way of selecting the pan/move function test 300), an embodiment 400 of the pan/move function test 300 can be displayed as shown in FIG. 4. As indicated, the test object 306 is displayed; however, the ghost object 308 is not displayed. In some implementations, the ghost object 308, at the start of the pan/move function test 300, may be in substantially the same location as the test object 306 and, therefore, cannot be seen. However, when or after the start of the pan/move function test 300, the ghost object 308 can be displayed to give an indication of how the test object 306 should be moved on the screen. When or after the test object 306 is moved from the starting position 316 to the stop position 318, a second embodiment 500 of the pan/move function test 300 can be automatically displayed as shown in FIG. 5. In the second embodiment 500, the test channel 302 can be rotated and inverted so that the starting position 316 is located in a different location on the display screen. When or after the second embodiment 500 of the pan/move function test 300 is completed (for example, when the test object 306 has been moved from the starting position 316 to the stop position 318), a third embodiment 600 of the pan/move function test 300 can be automatically displayed. As illustrated by the third embodiment 600, the starting position 316 is again in a different location on the screen.
In addition, upon or after the completion of the third embodiment 600 (for example, when the test object 306 has been moved from the starting position 316 to the stop position 318), a fourth embodiment 700 can be automatically displayed as illustrated in FIG. 7. Upon or after completion of the fourth embodiment 700, the pan/move function test 300 may be terminated. Therefore, as illustrated in Figures 4-7, the pan/move function test 300 can progress in different directions (for example, four directions in this example). In addition, switching between different measurement embodiments can be used to average out various problems that may arise during the measurement in different directions. For example, depending on whether an object (for example, a finger) is placed on the screen from a left direction or a right direction, at least part of the screen may be obscured. For example, for Figures 4 and 6, if the object is placed on the screen from the right, when the test object 306 is moved from the starting position 316, the starting position 316 can be obstructed during part of the pan/move function test 300. Similarly, for Figures 5 and 7, if the object is placed on the screen from the right, the starting position 316 may be obstructed during part of the pan/move function test 300.

Figures 8-11 illustrate non-limiting examples of the implementation of an enlarge/shrink function test according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. The enlarge/shrink function test can be designed to test the enlarge and/or shrink functions with different gestures. Similar to the pan/move function test 300 of FIG. 3, the enlarge/shrink function test may include the test object 306. In addition, during or after the movement of the test object 306 (or an advance movement of the test object 306), the ghost object 308 can be displayed. An objective of the enlarge/shrink function test may be to determine which gesture(s) may be the most appropriate gestures to accomplish a desired function or intention.

FIG. 8 illustrates an embodiment 800 of an enlarge/shrink function test according to one or more embodiments described here. The enlarge/shrink tests, as well as other tests presented here, can be multi-touch tests where more than one part of the touch screen can be touched at about the same time. A first cursor path 802 and a second cursor path 804 are illustrated. For the first cursor path 802, the test object 306 can be configured to move up from the starting position 316 to the stop position 318. In addition, the second cursor path 804 can be configured for testing the movement of the test object 306 from the starting position 316 down to the stop position 318. Therefore, the embodiment 800 can test the movement up and down for accuracy and/or speed. During or after the completion of embodiment 800 of the enlarge/shrink function test, a second embodiment 900 of the enlarge/shrink function test can be displayed. The second embodiment 900 includes a first cursor path 902 which can be used to test a gesture which moves the test object 306 from the starting position 316 (on the left) to the stop position 318 (on the right).
In addition, a second cursor path 904 can be used to test a gesture that moves the test object 306 from the starting position 316 (on the right) to the stop position 318 (on the left). Thus, the second embodiment 900 can test a horizontal displacement in the left and right directions. In some implementations, the first cursor path 902 and the second cursor path 904 may be centered in the horizontal direction on the display screen. However, other locations can be used for the first cursor path 902 and the second cursor path 904. A third embodiment 1000 of the enlarge/shrink function test, as illustrated in FIG. 10, can be displayed during or after the completion of the second embodiment. The third embodiment 1000 can test the rotational movement of one or more gestures. Thus, as illustrated by a first rotational measurement 1002, an attempt can be made to move the test object 306 from the starting position 316 clockwise to the stop position 318. In addition, as illustrated by a second rotational measurement 1004, an attempt can be made to move the test object 306 from the starting position 316 counterclockwise to the stop position 318. As illustrated, parts of the respective bottoms of the first rotational measurement 1002 and the second rotational measurement 1004 can be removed such that a full circle is not measured during the third embodiment 1000. In some implementations, the first rotational measurement 1002 and the second rotational measurement 1004 may be centered on the display in a vertical direction (for example, the Y direction).

In addition, during or after the completion of the third embodiment 1000, a fourth embodiment 1100 of the enlarge/shrink function test can be displayed as illustrated in FIG. 11. A first implementation 1102 of the fourth embodiment 1100 is illustrated on the left side of FIG. 11. In the first implementation 1102, the starting position 316 is located approximately in the middle of a circular shape. The first implementation 1102 can be used to test a zoom-out function which can be carried out by moving two objects (for example, two fingers) away from each other and outward towards the outside of the circle, which can be the stop position 318. A second implementation 1104 of the fourth embodiment 1100 is illustrated on the right side of FIG. 11. In the second implementation 1104, the starting position 316 is located at the outermost part of a circular shape. The second implementation 1104 can be used to test a pinch function which can be achieved by moving two objects (for example, two fingers) towards each other and inward towards the middle of the circle, which can be the stop position 318.

In some implementations, the enlarge/shrink function task can be used to test the enlarge and shrink functions with different gestures. Timing can start when the test object is touched. In addition, a performance measure may be the time it takes to reach 50% (or another percentage), which can be determined when: Force = 0 and Value = 50 (for two seconds), for example. A reading may appear near the test object to show the current value/position of the test object. According to one implementation, if the test object is touched and held, the cursor path can still be active if the user removes their fingers from the object while maintaining contact with the screen (this is similar to what the user expects from common touchscreen devices). If the user removes their fingers from the test object, the test object may remain where it was left and not be reset.
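A rough sketch of that timing measure (waiting for the reading to reach the target value and hold it with no applied force) might look like the following; the sample format and the tolerance are assumptions rather than values from the embodiments.

```python
def time_to_reach_and_hold(samples, target=50.0, hold_s=2.0, tolerance=0.5):
    """Time for the test object reading to reach the target value and hold it.

    samples: iterable of (timestamp_s, value, force) readings, where value is the
    0-100 figure displayed near the test object. Mirrors the measure described
    above (Force = 0 and Value = 50 held for two seconds); the tolerance is an
    assumption. Returns the elapsed time in seconds, or None if never achieved.
    """
    start_t = None
    hold_start = None
    for t, value, force in samples:
        if start_t is None:
            start_t = t  # timing starts when the test object is first touched
        if abs(value - target) <= tolerance and force == 0:
            hold_start = hold_start if hold_start is not None else t
            if t - hold_start >= hold_s:
                return t - start_t
        else:
            hold_start = None
    return None
```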
Figures 12-15 illustrate non-limiting examples of the implementation of another test of the enlarge/shrink function according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. The enlarge/shrink function tests of Figures 12-15 are similar to the enlarge/shrink function tests of Figures 8-11. However, in this example, the gesture is performed up to a certain percentage of a complete displacement (as presented relative to Figures 8-11). In addition, the enlarge/shrink function tests in Figures 12-15 can be multi-touch tests. For example, in an embodiment 1200 of FIG. 12, a first reading 1202 and a second reading 1204 can be displayed as floating on respective sides of the test object 306. Although illustrated to the left of the test object 306, the first reading 1202 and the second reading 1204 can be to the right of the test object 306, or located in another position relative to the test object 306. According to certain implementations, the first reading 1202 and/or the second reading 1204 can be located inside the test object 306. Thus, the first cursor path 802 can be used to move the test object from 0% to another percentage (for example, 50%). The second cursor path 804 can be used to move the cursor path from 100% to a lower percentage (for example, 50%). A value from the first reading 1202 and another value from the second reading 1204 can change automatically when the test object 306 is moved. The error observed in embodiment 1200 can be determined according to the precision with which the gesture stops at the desired percentage (for example, 50% in this example). During or after the completion of embodiment 1200, a second embodiment 1300 can be automatically displayed. The second embodiment 1300 is similar to the second embodiment 900 of FIG. 9. As illustrated, the first reading 1202 and the second reading 1204 can be placed above the test object 306. Nevertheless, the aspects described are not limited to this implementation, and the first reading 1202 and the second reading 1204 can be positioned in various other locations.

FIG. 14 illustrates a third embodiment 1400 which can be displayed during or after the completion of the second embodiment 1300. The test object 306 can be moved in a similar manner to that presented in relation to FIG. 10. However, in the third embodiment 1400 the ability to rotate the test object 306 only a certain percentage can be tested. Upon or after completion of the third embodiment 1400, a fourth embodiment 1500, as illustrated in FIG. 15, can be displayed. The fourth embodiment 1500 is similar to the test carried out relative to FIG. 11; however, only a certain percentage of displacement is tested.

FIG. 16 illustrates a representation of a non-limiting example of a "go to" function task 1600 which can be implemented according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. The task for this test may be to perform swipe gestures in multiple different directions (for example, four or more separate directions). As an example and not a limitation, a first swipe gesture can be to swipe, or quickly move, an object in the direction of a first arrow 1602.
For example, the gesture can be in one direction from the side of the screen to the middle of the screen; however, other directions for the swipe gesture can be used with the aspects described. Depending on these or other implementations, the one or more arrows (for example, swipe direction arrows) may indicate the direction of the swipe. As illustrated in FIG. 16, the first swipe gesture has been completed and instructions for a second swipe gesture can be provided automatically. For example, a second arrow 1604 can be output in connection with a numerical indication (or another type of indication) of the swipe number (for example, 2 in this example, which is the second swipe gesture). In some implementations, the swipe direction arrows (for example, the first arrow 1602, the second arrow 1604, and subsequent arrows) may be centered on the horizontal direction and/or the vertical direction depending on the location on the screen. According to other implementations, the direction arrows can be located anywhere on the screen. When or after performing the second swipe gesture, a third swipe gesture instruction can be output automatically. This process can continue until the swipe gesture test has been successfully completed, or until a time limit for completion of the test has elapsed. According to certain implementations, the timing of the task can start when the first touch is detected on the first swipe cursor path. Task timing may end when the last swipe is completed successfully. Performance can be measured by completion time. In addition, the amount of time between completing each task and starting the next task can be collected. For example, after performing the first swipe gesture, it may take some time to move to a start position for the second swipe gesture. In addition, after performing the second swipe gesture, time may elapse before the third swipe gesture, etc., until the completion of the "go to" function task. In addition, a number of touches which are received, but which are not a swipe, can be measured for analysis and to train the model.

FIG. 17 illustrates another non-limiting example of a system 1700 for evaluating a function gesture according to one or more embodiments described here. The repeated description of identical elements used in other embodiments described here is omitted for the sake of brevity. The system 1700 can comprise one or more of the components and/or functionality of the system 100 and/or of the system 200, and vice versa. According to some implementations, the analysis component 106 can perform a utility-based analysis weighing the benefit of a precise determination of a gesture intention against the cost of an imprecise determination of a gesture intention. In addition, a risk component 1702 can regulate acceptable error rates as a function of the acceptable risk associated with a defined task. Thus, the benefit of a specific gesture intention against the cost of an imprecise gesture intention can be weighed and taken into account for the gesture model 204. For example, if there is an imprecise prediction made with respect to the change of a radio station, there may be a negligible cost associated with this imprecise prediction. However, if the prediction (and the associated task) is associated with the navigation of an aircraft or an automobile, a level of confidence associated with the accuracy of the prediction must be very high (for example, 99% confidence), otherwise an accident could happen due to the imprecise prediction.
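A minimal sketch of such risk-regulated acceptance, gating a predicted gesture intention on a per-task confidence threshold, could look like the following; the task names and threshold values are illustrative assumptions only.

```python
# Hypothetical per-task confidence thresholds: higher-risk tasks tolerate fewer errors.
RISK_THRESHOLDS = {
    "change_radio_station": 0.70,  # negligible cost if the intention is misread
    "aircraft_navigation": 0.99,   # safety-critical, as discussed above
}

def accept_gesture_intention(confidence, task):
    """Accept a predicted gesture intention only if its confidence meets the
    acceptable-risk threshold regulated for the task (all values illustrative)."""
    return confidence >= RISK_THRESHOLDS.get(task, 0.95)  # conservative default
```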
The system 1700 can also include a reasoning and machine learning component 1704, which can employ automated reasoning and machine learning procedures (for example, the use of statistical classifiers trained explicitly and/or implicitly) in connection with the realization of inference and/or probabilistic determinations and/or statistics-based determinations according to one or more aspects described here. For example, the reasoning and machine learning component 1704 may employ principles of probabilistic and decision-theoretic inference. In addition, or alternatively, the reasoning and machine learning component 1704 can rely on predictive models constructed using automated learning and/or machine learning procedures. A logic-centered inference can also be used separately or in conjunction with probabilistic processes.

The reasoning and machine learning component 1704 can infer a gesture intention as a function of one or more received gestures. According to a specific implementation, the system 1700 can be implemented for on-board avionics of an aircraft. Therefore, the gesture intent can relate to various aspects related to the navigation of the aircraft. Based on this knowledge, the reasoning and machine learning component 1704 can train a model (for example, the gesture model 204) to make an inference based on whether one or more gestures were actually received and/or on one or more actions to be performed according to the one or more gestures.

As used here, the term "inference" generally refers to the process of reasoning about or inferring states of the system, of a component, of a module, of the environment, and/or of assets from a series of observations captured from events, reports, data, and/or any other form of communication. Inference can be used to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic, for example, the calculation of a probability distribution over states of interest based on a consideration of the data and/or events. Inference can also refer to techniques used to compose higher-level events from a series of events and/or data. Such an inference can result in the construction of new events and/or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and/or data come from one or more sources of events and/or data. Various classification schemes and/or systems (for example, support vector machines, neural networks, logic-centered production systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be used in connection with the realization of an automatic and/or inferred action in connection with the aspects described.

The various aspects (for example, in connection with standardized tests for the evaluation of gestures on a touch screen, standardized tests for the evaluation of gestures on a touch screen in an unstable environment, etc.) can employ various schemes based on artificial intelligence to achieve various aspects. For example, a process to evaluate one or more gestures received at a display unit can be used to predict an action that should be performed and/or a risk associated with the implementation of the action, which can be triggered by an automatic classifier system and process. A classifier is a function which maps an input attribute vector, x = (x1, x2, x3, x4, ..., xn), to a confidence that the input belongs to a class.
In other words, f(x) = confidence(class). Such a classification can use a probabilistic and/or statistics-based analysis (for example, taking into account the analysis utilities and costs) to predict or infer an action that should be implemented according to a received gesture, whether the gesture was correctly performed, whether to selectively ignore a gesture, etc. In the case of gestures on a touch screen, for example, the attributes can be the identification of a known gesture pattern based on historical information (for example, the gesture model 204) and the classes can be criteria for how to interpret and implement one or more actions depending on the gesture.

A support vector machine (SVM) is an example of a classifier that can be used. It works by finding a hypersurface in the space of possible inputs, which hypersurface seeks to separate the trigger criterion from non-trigger events. Intuitively, this makes the classification correct for testing data that may be similar, but not necessarily identical, to training data. Other directed and non-directed model classification approaches (for example, naive Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models) providing different patterns of independence can be used. A classification as used here can include a statistical regression which is used to develop priority models.

One or more aspects may employ classifiers which are explicitly trained (for example, by generic training data) as well as classifiers which are implicitly trained (for example, by observing and recording gestural behavior in an unstable environment, by receiving extrinsic information (for example, cloud sharing), etc.). For example, an SVM can be configured through a learning or training phase in a feature selection and classifier construction module. Thus, one or more classifiers can be used to automatically learn and perform a number of functions, including but not limited to determining, according to predetermined criteria, how to interpret a gesture, whether a gesture can be performed in a stable or unstable environment, changes to a gesture that cannot be performed successfully in the environment, etc. Criteria may include, but are not limited to, similar gestures, historical information, aggregated information, etc.

In addition, or alternatively, an implementation scheme (for example, a rule, a policy, etc.) can be applied to control and/or regulate the performance and/or the interpretation of one or more gestures. In some implementations, based on a predefined criterion, the rule-based implementation can automatically and/or dynamically interpret how to respond to a particular gesture. In response to this, the rule-based implementation can automatically interpret and perform gesture-related functions based on a cost-benefit analysis and/or a risk analysis using one or more predefined and/or programmed rules based on any desired criteria.

Computer-implemented processes which can be implemented according to the present invention will be better appreciated with reference to the following flowcharts. While, for reasons of simplicity of explanation, the methods are shown and described as a series of blocks, it must be understood and appreciated that the aspects described are not limited by the number or the order of the blocks, since certain blocks can appear in different orders and/or approximately at the same time as other blocks compared to what is represented and described here.
Computer-implemented processes that can be implemented in accordance with the present invention will be better appreciated with reference to the following flowcharts. While, for reasons of simplicity of explanation, the methods are shown and described as a series of blocks, it is to be understood and appreciated that the described aspects are not limited by the number or the order of the blocks, since some blocks can occur in different orders and/or at approximately the same time as other blocks relative to what is depicted and described here. Moreover, not all of the illustrated blocks are required to implement the described methods. It will be appreciated that the functionality associated with the blocks can be implemented by software, hardware, a combination thereof, or any other suitable means (for example, a device, a system, a process, a component, and the like). In addition, it will further be appreciated that the described methods can be stored on an article of manufacture to facilitate the transport and transfer of such methods to various devices. Those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states or events, such as in a state diagram. According to certain implementations, the methods can be performed by a system comprising a processor. In addition, or alternatively, the methods can be performed by a machine-readable storage medium and/or a non-transitory computer-readable medium comprising executable instructions that, when executed by a processor, facilitate performance of the processes. FIG. 18 illustrates a non-limiting example of a computer-implemented method 1800 for facilitating touch screen evaluation tasks intended to evaluate the usability of gestures for touch screen functions in accordance with one or more embodiments described here. Repeated description of identical elements employed in other embodiments described here is omitted for the sake of brevity. The computer-implemented method 1800 begins at 1802, when a test is initialized. For example, the test can be initialized based on received input indicating that the test should be performed. To initiate the test, gesture instructions can be output or provided on a display screen. According to some implementations, a stopwatch can be started substantially at the same time as the instructions are provided, or after a first gesture is detected. In addition, during the test, a defined environment (for example, a stable environment, an unstable environment, a moving environment, a turbulent environment, etc.) can be simulated. At 1804 of the computer-implemented method 1800, a time to complete each step of the test can be measured. According to some implementations, a total time to complete the test can be specified. During or after the successful completion of the test, or after a time has expired, information related to the test can be entered into a model at 1806 of the computer-implemented method 1800. For example, the test instruction set, a test result, and other information associated with the test (for example, simulated environment information) can be entered into the model. The model can aggregate the test data with other historical test data. In one example, the data can be aggregated with other data received via a cloud-based sharing platform. At 1808 of the computer-implemented method 1800, a determination can be made as to whether the test has been completed within a defined amount of time. For example, the determination can be made on a gesture-by-gesture basis (for example, during individual steps of the test) or for the total time for completion of the test. If the gesture is not successfully completed within the defined amount of time ("NO"), at 1810 of the computer-implemented method 1800 one or more parameters of the test can be modified and a next test can be initiated at 1802.
If the gesture was completed within the defined amount of time ("YES"), at 1812 of the computer-implemented method 1800 a determination is made as to whether a number of errors associated with the gesture was below a defined number of errors. For example, if the environment is unstable, one or more errors (for example, a finger lifting from the display screen, an unintended movement) can be expected. If the number of errors is not below the defined quantity ("NO"), at 1812 of the computer-implemented method 1800 at least one parameter of the test can be modified and the modified test can be initialized at 1802. According to certain implementations, the one or more parameters modified at 1810 and the at least one parameter modified at 1812 can be the same parameter or can be different parameters. If the determination at 1812 is that the number of errors is below the defined quantity ("YES"), at 1816 the model can be used to evaluate the test across different platforms and conditions. For example, the test can be performed using different input devices (for example, mobile devices), which can include different display screen sizes, different operating systems, and so forth. Accordingly, a multitude of tests can be carried out to determine whether the gesture is suitable across a multitude of devices. If the gesture is suitable across the multitude of devices, at 1818 the gesture associated with the test can be indicated as usable in the tested environment. Over time, the gesture can be retested for other input devices and/or other operating conditions. [0101] FIG. 19 illustrates a non-limiting example of a computer-implemented method 1900 for generating standardized tests for the evaluation of gestures on a touch screen in accordance with one or more embodiments described here. Repeated description of identical elements employed in other embodiments described here is omitted for the sake of brevity. At 1902 of the computer-implemented method 1900, a set of operational instructions can be mapped to a set of gestures on a touch screen (for example, via the mapping component 102). The operational instructions can include a defined set of related tasks performed in relation to a touch screen of a computing device. For example, a set of operational instructions can be defined, and expected gestures associated with the operational instructions can be defined. In some implementations, mapping the gestures to the operational instructions can include learning the gestures on a touch screen relative to the respective operational instructions of the set of operational instructions. For example, the learning can be based on a gesture model trained on the set of gestures. Sensor data related to the implementation of the series of gestures on a touch screen can be collected at 1904 of the computer-implemented method 1900 (for example, via the sensor component 104). According to certain implementations, the series of gestures on a touch screen can be implemented in a non-stationary environment. The non-stationary environment can be an environment that is subjected to vertical displacement, which can produce unexpected vibrations and/or turbulence. According to various implementations, the non-stationary environment can be a simulated environment (for example, a controlled non-stationary environment) configured to mimic conditions of a target test environment.
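By way of non-limiting illustration, the following is a minimal sketch, assuming a Python/NumPy environment, of how a controlled non-stationary environment could be simulated by perturbing an ideal swipe path with vibration-like noise, and how the resulting samples could later be scored against a target path; the sampling rate, noise model, and tolerance are hypothetical values, not values taken from this disclosure.

```python
# Hypothetical sketch: simulating touch samples for a swipe gesture in a controlled
# non-stationary environment by adding vibration-like noise to an ideal target path.
import numpy as np

def simulate_swipe(start, end, duration_s=1.0, rate_hz=60.0, vibration_px=4.0, seed=0):
    """Return (timestamps, samples): noisy touch coordinates along a straight target path."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    frac = (t / duration_s)[:, None]
    ideal = (1.0 - frac) * np.asarray(start, float) + frac * np.asarray(end, float)
    noise = rng.normal(0.0, vibration_px, size=ideal.shape)  # vibration/turbulence term
    return t, ideal + noise

def time_off_path(t, samples, start, end, tolerance_px=10.0):
    """Approximate time spent deviating more than tolerance_px from the target segment."""
    p, a, b = samples, np.asarray(start, float), np.asarray(end, float)
    ab = b - a
    proj = a + np.clip(((p - a) @ ab) / (ab @ ab), 0.0, 1.0)[:, None] * ab
    off = np.linalg.norm(p - proj, axis=1) > tolerance_px
    return off.mean() * (t[-1] - t[0])

t, samples = simulate_swipe(start=(100, 500), end=(700, 500), vibration_px=12.0)
print(f"time off target path: {time_off_path(t, samples, (100, 500), (700, 500)):.2f} s")
```

Under these assumptions, the returned samples stand in for the sensor data collected at 1904, and the time-off-path figure anticipates the error measurement described below for 1906.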
At 1906 of the computer-implemented method 1900, performance score/data and/or usability score/data for the series of gestures on a touch screen can be evaluated relative to the respective operational instructions of the set of operational instructions based on an analysis of the sensor data. One or more errors can be measured as a function of the respective time spent deviating from a defined target path for at least one gesture of the series of gestures on a touch screen. According to certain implementations, evaluating the performance score/data and/or the usability score/data can include performing the gesture analysis on a touch screen as a function of the respective sizes of one or more objects (for example, fingers) detected by the touch screen of the computing device. For example, the object can be one or more fingers or another item that can be used to interact with a touch screen display. In some implementations, evaluating the performance score/data and/or the usability score/data can include performing the gesture analysis on a touch screen as a function of the dimensions of the touch screen of the computing device. [0106] FIG. 20 illustrates a non-limiting example of a computer-implemented method 2000 for evaluating a benefit-risk analysis associated with the evaluation of gestures on a touch screen in accordance with one or more embodiments described here. Repeated description of identical elements employed in other embodiments described here is omitted for the sake of brevity. The computer-implemented method 2000 begins at 2002, where operational instructions can be mapped to gestures on a touch screen (for example, via the mapping component 102). Sensor data associated with the series of gestures on a touch screen can be collected at 2004 of the computer-implemented method 2000 (for example, via the sensor component 104). For example, the sensor data can be collected from one or more sensors associated with a touch screen device. At 2006 of the computer-implemented method 2000, a model can be trained (for example, via the gesture model generation component 202). For example, the model can be trained based on the operational instructions, the series of gestures on a touch screen, and the sensor data. At 2008 of the computer-implemented method 2000, performance score/data and usability score/data for the gestures on a touch screen can be evaluated relative to the respective operational instructions based on an analysis of the sensor data (for example, via the analysis component 106). At 2010 of the computer-implemented method 2000, a utility-based analysis can be carried out. The utility-based analysis can weigh the benefit of an accurate determination of a gesture intention against the cost of an inaccurate determination of a gesture intention (for example, via the analysis component 106). In addition, at 2012 of the computer-implemented method 2000, acceptable error rates can be regulated as a function of the risk associated with a defined task (for example, via the risk component 1702). For example, a cost associated with an inaccurate prediction of a first intention associated with a first gesture can be low (for example, a small amount of risk is involved), while a second cost associated with an inaccurate prediction of a second intention associated with a second gesture can be high (for example, a large amount of risk is involved).
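As a non-limiting illustration of the utility-based analysis and risk-regulated error rates just described, the following Python sketch weighs the benefit of an accurate gesture-intent determination against the cost of an inaccurate one and derives a risk-dependent acceptable error rate; the benefit and cost figures and the example tasks are hypothetical assumptions rather than values from the disclosure.

```python
# Hypothetical sketch of a utility-based analysis: weigh the benefit of acting on an
# accurate gesture-intent inference against the cost of acting on an inaccurate one,
# and derive a risk-dependent acceptable error rate. Figures are illustrative only.
def expected_utility(confidence, benefit_correct, cost_incorrect):
    """Expected utility of acting on an inferred gesture intention."""
    return confidence * benefit_correct - (1.0 - confidence) * cost_incorrect

def acceptable_error_rate(benefit_correct, cost_incorrect):
    """Maximum error rate at which acting on the inference still breaks even."""
    return benefit_correct / (benefit_correct + cost_incorrect)

# A low-risk task (e.g., scrolling a map) versus a high-risk task (e.g., changing a setting).
low_risk = dict(benefit_correct=1.0, cost_incorrect=1.0)
high_risk = dict(benefit_correct=1.0, cost_incorrect=20.0)

for name, task in (("low-risk", low_risk), ("high-risk", high_risk)):
    print(name,
          "max acceptable error rate:", round(acceptable_error_rate(**task), 3),
          "| act at 90% confidence?", expected_utility(0.9, **task) > 0.0)
```

Under these assumed figures, the same 90% confidence is sufficient to act for the low-risk task but not for the high-risk task, which mirrors the low-cost/high-cost example given above.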
According to certain implementations, the computer-implemented method 2000 can include generating a gesture model as a function of operational data received from a plurality of entities. In addition to these implementations, the computer-implemented method 2000 can include training the gesture model through cloud-based sharing among a plurality of models. The plurality of models can be based on the operational data received from the plurality of computing devices. [0112] As presented here, there is provided a series of computer-based assessment tasks designed to assess the usability of gestures for touch screen functions. The various aspects can assess the usability of gestures for a given function. For example, usability can be determined by the time spent completing tasks, the precision with which tasks have been completed, or a combination of precision and completion time. As presented here, a system can include a memory that stores executable components and a processor, operatively coupled to the memory, that executes the executable components. The executable components can include a mapping component that correlates a set of operational instructions to a set of gestures on a touch screen. The operational instructions can include at least one defined task performed in relation to a touch screen of a computing device. The executable components can also include a sensor component that receives sensor data from a plurality of sensors. The sensor data can be related to the implementation of the series of gestures on a touch screen. The series of gestures on a touch screen can be implemented in an environment subject to vibrations or turbulence, or in a more stable environment. In addition, the executable components can include an analysis component that analyzes the sensor data and evaluates respective performance score/data and usability score/data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions. The respective performance score/data and usability score/data can be a function of an appropriateness of the gestures on a touch screen in the test environment. In one implementation, the executable components can include a gesture model that learns gestures on a touch screen relating to the respective operational instructions of the set of operational instructions. The operational instructions can include a defined set of related tasks performed in relation to a touch screen of a computing device. In some implementations, one or more errors can be measured as a function of the respective time spent deviating from a target path associated with the at least one defined path. In another implementation, the executable components can include a scaling component that performs gesture analysis on a touch screen as a function of the dimensions of the touch screen of the computing device. In addition to this implementation, the scaling component can perform the gesture analysis on a touch screen as a function of the respective sizes of one or more objects detected by the touch screen of the computing device. In some implementations, the executable components can include a gesture model generation component that can generate a gesture model based on operational data received from a plurality of entities. In addition to this implementation, the gesture model can be trained through cloud-based sharing among a plurality of models.
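By way of non-limiting illustration of combining completion time and precision into a single figure, the following Python sketch computes a simple usability score from a gesture's completion time and error count; the target time, error budget, and weighting are hypothetical assumptions and not values from the disclosure.

```python
# Hypothetical sketch: combining completion time and accuracy into a single usability
# figure for a gesture. Target time, error budget, and weighting are illustrative only.
def usability_score(completion_time_s, error_count,
                    target_time_s=2.0, max_errors=5, time_weight=0.5):
    """Return a 0..1 usability score from completion time and number of errors."""
    time_component = max(0.0, 1.0 - max(0.0, completion_time_s - target_time_s) / target_time_s)
    accuracy_component = max(0.0, 1.0 - error_count / max_errors)
    return time_weight * time_component + (1.0 - time_weight) * accuracy_component

# A gesture completed quickly with one error versus slowly with several errors.
print(usability_score(1.8, 1))   # ~0.90: fast and mostly accurate
print(usability_score(3.5, 4))   # ~0.23: slow and error-prone
```

Weighting time against accuracy in this way is only one possible choice; the relative weight could itself be regulated by the risk component for higher-risk tasks.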
According to some implementations, the analysis component can carry out a utility-based analysis that weighs the benefit of an accurate determination of a gesture intention against the cost of an inaccurate determination of a gesture intention. In addition to these implementations, the executable components can include a risk component that can regulate acceptable error rates as a function of the acceptable risk associated with a defined task. A computer-implemented method can include mapping, by a system comprising a processor, a set of operational instructions to a set of gestures on a touch screen. The computer-implemented method can also include obtaining, by the system, sensor data that is related to the implementation of the series of gestures on a touch screen. The series of gestures on a touch screen can be implemented in a controlled non-stationary environment. In addition, the computer-implemented method can include evaluating, by the system, respective performance score/data and usability score/data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions based on an analysis of the sensor data. In one implementation, the computer-implemented method can include learning, by the system, gestures on a touch screen relating to the respective operational instructions of the set of operational instructions. According to certain implementations, the computer-implemented method can include measuring, by the system, one or more errors as a function of the respective time spent deviating from a defined target path for at least one gesture of the series of gestures on a touch screen. According to certain implementations, the computer-implemented method can include performing, by the system, an analysis of gestures on a touch screen as a function of the dimensions of the touch screen of the computing device. In addition to these implementations, the computer-implemented method can include performing, by the system, the analysis of gestures on a touch screen as a function of the respective sizes of one or more objects detected by the touch screen of the computing device. The computer-implemented method can also include, according to certain implementations, generating, by the system, a gesture model as a function of operational data received from a plurality of computing devices. In addition to these implementations, the computer-implemented method can include training, by the system, the gesture model through cloud-based sharing among a plurality of models. The plurality of models can be a function of the operational data received from the plurality of computing devices. In a variant or in another implementation, the computer-implemented method can include carrying out, by the system, a utility-based analysis that weighs the benefit of accurately correlating a gesture intention against the cost of an inaccurate correlation of a gesture intention. In addition to this implementation, the computer-implemented method can include regulating, by the system, acceptable error rates as a function of the acceptable risk associated with a defined task. In addition, there is proposed here a computer-readable storage device comprising executable instructions that, in response to execution, cause a system comprising a processor to perform operations.
The operations can include mapping a set of operational instructions to a set of gestures on a touch screen and obtaining sensor data that is related to the implementation of the series of gestures on a touch screen in an unstable environment. The operations can also include training a model based on the set of operational instructions, the series of gestures on a touch screen, and the sensor data. Furthermore, the operations can also include analyzing respective performance score/data and/or usability score/data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions based on an analysis of the sensor data. According to certain implementations, the operations can include carrying out a utility-based analysis that weighs the benefit of an accurate determination of a gesture intention against the cost of an inaccurate determination of a gesture intention. In addition to these implementations, the operations can include regulating, via a risk component, acceptable error rates as a function of the acceptable risk associated with a defined task. In order to provide a context for the various aspects of the present invention, FIG. 21 and FIG. 22 as well as the following discussion are intended to provide a brief, general description of a suitable environment in which the various aspects of the present invention can be implemented. Referring to FIG. 21, an example of an environment 2110 for implementing various aspects of the present invention includes a computer 2112. The computer 2112 includes a processing unit 2114, a system memory 2116, and a system bus 2118. The system bus 2118 couples the system components as shown in FIG. 21. The processing unit 2114 can be any of a variety of available processors. Multi-core microprocessors and other multiprocessor architectures can also be employed as the processing unit 2114. The system bus 2118 can be any of several types of bus structure(s), including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, an 8-bit bus, Industry Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), PC Card bus (PCMCIA), and Small Computer Systems Interface (SCSI). The system memory 2116 includes volatile memory 2120 and non-volatile memory 2122. The basic input/output system (BIOS), containing the basic routines for transferring information between elements within the computer 2112, such as during start-up, is stored in the non-volatile memory 2122. By way of illustration, and not limitation, the non-volatile memory 2122 can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory 2120 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
The computer 2112 also includes removable/non-removable, volatile/non-volatile computer storage media. FIG. 21 illustrates, for example, disk storage 2124. The disk storage 2124 includes, but is not limited to, devices such as a magnetic disk drive, a floppy disk drive, a tape drive, a Jaz drive, a Zip drive, an LS-100 drive, a flash memory card, or a memory stick. In addition, the disk storage 2124 can also include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), a CD recordable drive (CD-R Drive), a CD rewritable drive (CD-RW Drive), or a digital versatile disk ROM drive (DVD-ROM). To facilitate the connection of the disk storage 2124 to the system bus 2118, a removable or non-removable interface is usually used, such as the interface 2126. It will be appreciated that FIG. 21 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 2110. Such software includes an operating system 2128. The operating system 2128, which can be stored on the disk storage 2124, acts to control and allocate resources of the computer 2112. System applications 2130 take advantage of the management of resources by the operating system 2128 through program modules 2132 and program data 2134, stored either in the system memory 2116 or on the disk storage 2124. It will be appreciated that one or more embodiments of the present invention can be implemented with various operating systems or combinations of operating systems. A user enters commands or information into the computer 2112 via one or more input devices 2136. The input devices 2136 include, but are not limited to, pointing devices such as a mouse, a trackball, a stylus, or a touch pad, as well as a keyboard, a microphone, a joystick, a game pad, a satellite dish, a scanner, a TV tuner card, a digital camera, a digital video camera, a web camera, and the like. These and other input devices are connected to the processing unit 2114 through the system bus 2118 via one or more interface ports 2138. The interface port(s) 2138 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). The output device(s) 2140 use some of the same types of ports as the input device(s) 2136. Thus, for example, a USB port can be used to provide input to the computer 2112 and to output information from the computer 2112 to an output device 2140. An output adapter 2142 is provided to illustrate that there are some output devices 2140, such as monitors, speakers, and printers, among other output devices 2140, that require special adapters. The output adapters 2142 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 2140 and the system bus 2118. It should be noted that other devices and/or systems of devices provide both input and output capabilities, such as the remote computer(s) 2144. The computer 2112 can operate in a networked environment using logical connections to one or more remote computers, such as the remote computer(s) 2144. The remote computer(s) 2144 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor-based appliance, a peer device or other common network node and the like, and usually includes many or all of the elements described relative to the computer 2112. For reasons of brevity, only a memory storage device 2146 is illustrated with the remote computer(s) 2144.
The remote computer(s) 2144 are logically connected to the computer 2112 through a network interface 2148 and then physically connected via a communication connection 2150. The network interface 2148 encompasses communication networks such as local area networks (LAN) and wide area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, and others. WAN technologies include, but are not limited to, point-to-point links, circuit-switched networks such as Integrated Services Digital Networks (ISDN) and variations thereon, packet-switched networks, and Digital Subscriber Lines (DSL). The communication connection(s) 2150 refers to the hardware/software employed to connect the network interface 2148 to the system bus 2118. While the communication connection 2150 is shown for illustrative clarity inside the computer 2112, it can also be external to the computer 2112. The hardware/software necessary for connection to the network interface 2148 includes, for exemplary purposes only, internal and external technologies such as modems, including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards. [0131] FIG. 22 is a schematic diagram of a sample computing environment 2200 with which the present invention can interact. The sample computing environment 2200 includes one or more client(s) 2202. The client(s) 2202 can be hardware and/or software (for example, threads, processes, computing devices). The sample computing environment 2200 also includes one or more server(s) 2204. The server(s) 2204 can also be hardware and/or software (for example, threads, processes, computing devices). The servers 2204 can house threads to perform transformations by employing one or more embodiments as described here, for example. The possible communication between a client 2202 and the servers 2204 can be in the form of data packets adapted to be transmitted between two or more computer processes. The sample computing environment 2200 includes a communication framework 2206 that can be employed to facilitate communications between the client(s) 2202 and the server(s) 2204. The client(s) 2202 are operatively connected to one or more client data store(s) 2208 that can be employed to store information local to the client(s) 2202. Similarly, the server(s) 2204 are operatively connected to one or more server data store(s) 2222 that can be employed to store information local to the servers 2204. As used in this description, in some embodiments, the terms "component," "system," "interface," "manager," and the like are intended to refer to, or comprise, a computer-related entity or an entity related to an operational machine with one or more specific functionalities, wherein the entity can be either hardware, a combination of hardware and software, software, software in execution, and/or firmware. As an example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, computer-executable instructions, a program, and/or a computer. By way of illustration and not limitation, both an application running on a server and the server can be a component. [0133] One or more components can reside within a process and/or a thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components can communicate via local and/or remote processes, such as in accordance with a signal having one or more data packets (for example, data from one component interacting with another component in a local system, a distributed system, and/or across a network such as the Internet with other systems via the signal). As another example, a component can be a device with specific functionality provided by mechanical parts operated by electric or electronic circuitry, which is controlled by a software or firmware application executed by one or more processors, wherein the processor can be internal or external to the device and can execute at least a part of the software or firmware application. As yet another example, a component can be a device that provides specific functionality through electronic components without mechanical parts, and the electronic components can include a processor to execute software or firmware that at least partially confers the functionality of the electronic components. In one aspect, a component can emulate an electronic component via a virtual machine, for example, in a cloud computing system. While various components have been illustrated as separate components, it will be appreciated that multiple components can be implemented as a single component, or a single component can be implemented as multiple components, without departing from the example embodiments. [0134] Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference allows the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several sources of events or data. Various classification schemes and/or systems (for example, support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, and data fusion engines) can be employed in connection with performing automatic and/or inferred actions in connection with the present invention. In addition, the various embodiments can be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the present invention. The term "article of manufacture" as used here is intended to encompass a computer program accessible from any computer-readable device, machine-readable device, computer-readable carrier, computer-readable media, machine-readable media, or computer-readable (or machine-readable) storage/communication media. For example, computer-readable media can include, but are not limited to, a magnetic storage device, for example, a hard disk; a floppy disk; one or more magnetic tapes; an optical disk (for example, a compact disk (CD), a digital versatile disk (DVD), a Blu-ray™ disc (BD)); a smart card; a flash memory device (for example, a card, a stick, a key drive); and/or a device that emulates a storage device and/or any of the above computer-readable media. Of course, those skilled in the art will recognize that many modifications can be made to this configuration without departing from the scope or spirit of the various embodiments.
LIST OF COMPONENTS
[0136] mapping component (102)
[0137] sensor component (104)
[0138] analysis component (106)
[0139] interface component (108)
[0140] memory (110)
[0141] processor (112)
[0142] sensors (114)
[0143] gesture model generation component (202)
[0144] scaling component (206)
[0145] stopwatch component (208)
[0146] risk component (1702)
[0147] reasoning and machine learning component (1704)
Claims
[Claim 1] A system comprising: a memory which stores executable components; and a processor, operatively coupled to the memory, which executes the executable components, the executable components comprising: a mapping component which correlates a set of operational instructions to a set of gestures on a touch screen, wherein the operational instructions include at least one defined task performed with respect to a touch screen of a computing device; a sensor component which receives sensor data from a plurality of sensors, wherein the sensor data is related to the implementation of the series of gestures on a touch screen; and an analysis component which analyzes the sensor data and evaluates respective performance data and usability data of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions, wherein the respective performance data and usability data are a function of the appropriateness of the series of gestures on a touch screen.
[Claim 2] The system of claim 1, further comprising a gesture model which learns gestures on a touch screen relating to the respective operational instructions of the set of operational instructions.
[Claim 3] The system of claim 1 or 2, wherein one or more errors are measured as a function of the respective time spent deviating from a target path associated with the at least one defined path.
[Claim 4] The system of any one of the preceding claims, further comprising a scaling component which performs gesture analysis on a touch screen as a function of the dimensions of the touch screen of the computing device.
[Claim 5] The system of claim 4, wherein the scaling component performs the gesture analysis on a touch screen as a function of respective sizes of one or more objects detected by the touch screen of the computing device.
[Claim 6] The system of any one of the preceding claims, further comprising a gesture model generation component which generates a gesture model based on operational data received from a plurality of computing devices, wherein the gesture model is trained through cloud-based sharing among a plurality of models, and wherein the plurality of models are based on the operational data received from the plurality of computing devices.
[Claim 7] The system of any one of the preceding claims, wherein the analysis component performs a utility-based analysis as a function of a benefit of an accurate determination of a gesture intention against a cost of an inaccurate determination of a gesture intention.
[Claim 8] The system of claim 7, further comprising a risk component which regulates acceptable error rates as a function of the acceptable risk associated with a defined task, wherein the series of gestures on a touch screen are implemented in an environment subject to vibrations or turbulence.
[Claim 9] A computer-implemented method comprising: mapping, by a system comprising a processor, a set of operational instructions to a set of gestures on a touch screen, wherein the operational instructions include a defined set of related tasks performed in relation to a touch screen of a computing device; obtaining, by the system, sensor data which is related to the implementation of the series of gestures on a touch screen; and evaluating, by the system, respective performance scores and usability scores of the series of gestures on a touch screen relative to the respective operational instructions of the set of operational instructions based on an analysis of the sensor data.
[Claim 10] The computer-implemented method of claim 9, further comprising: learning, by the system, gestures on a touch screen relating to the respective operational instructions of the set of operational instructions.
[Claim 11] The computer-implemented method of claim 9 or 10, further comprising: measuring, by the system, one or more errors as a function of the respective time spent deviating from a defined target path for at least one gesture of the series of gestures on a touch screen, wherein the series of gestures on a touch screen are implemented in a controlled non-stationary environment.
[Claim 12] The computer-implemented method of any one of claims 9 to 11, further comprising: performing, by the system, an analysis of gestures on a touch screen as a function of the dimensions of the touch screen of the computing device.
[Claim 13] The computer-implemented method of claim 12, further comprising: performing, by the system, the analysis of gestures on a touch screen as a function of respective sizes of one or more objects detected by the touch screen of the computing device.
[Claim 14] The computer-implemented method of any one of claims 9 to 13, further comprising: generating, by the system, a gesture model as a function of operational data received from a plurality of computing devices; and training, by the system, the gesture model through cloud-based sharing among a plurality of models, wherein the plurality of models are based on the operational data received from the plurality of computing devices.
[Claim 15] The computer-implemented method of any one of claims 9 to 14, further comprising: carrying out, by the system, a utility-based analysis which weighs a benefit of accurately correlating a gesture intention against a cost of inaccurately correlating a gesture intention; and regulating, by the system, via a risk component, acceptable error rates as a function of the risk associated with a defined task.