METHOD AND APPARATUS FOR GESTURE DETECTION IN AN ELECTRONIC DEVICE
Patent abstract:
A method and apparatus for detecting a gesture in an electronic device. An electronic device (200) includes one or more processors (216), a motion detector (242), another motion detector, such as a gyroscope (253) or one or more accelerometers (252), and one or more proximity detectors (208). An amount of movement of the electronic device, an amount of rotation about a geometric axis (254), and whether an object (706) is located near a main face (707) of the electronic device can be determined. The occurrence of a gesture (804) that raises the electronic device can be confirmed when the amount of movement exceeds a first predetermined threshold (506), the amount of rotation exceeds a second predetermined threshold (605), and the object is located close to the electronic device. Other factors, such as whether the movement was against a gravity direction (504) and a final orientation of the electronic device, can also be considered. A control operation can occur in response to confirmation of the gesture.
Publication number: BR102016004022A2
Application number: R102016004022-1
Filing date: 2016-02-24
Publication date: 2020-12-08
Inventors: Mark Elkins; John Gorsica
Applicant: Motorola Mobility Llc
Patent description:
[0001] [001] This disclosure generally refers to electronic devices and corresponding methods and, more particularly, to electronic devices with motion sensors. BACKGROUND OF THE TECHNIQUE [0002] [002] Mobile electronic communication devices such as mobile phones, smart phones, gaming devices and the like are used by billions of people. These owners use mobile communication devices for a number of different purposes that include, but are not limited to: voice communications and data communications for text messaging, Internet browsing, commerce, such as online banking and social networking. [0003] [003] Just as the technology of these devices has advanced, so has its set of resources. For example, not too long ago, all electronic devices had physical numeric keypads. Today, touch screens are most often seen as user interface devices. Similarly, touch used to be the only way to deliver user input to a device, via a key pad or touchscreen. Currently, some devices are equipped with speech recognition that allows a user to speak commands to a device instead of typing them. [0004] [004] Some electronic devices even include motion sensors that can recognize certain movements by a user. For example, accelerometers in some electronic devices can detect a rhythmic movement from side to side and from top to bottom in order to be used as a pedometer. These electronic devices can count these rhythmic movements to determine the number of steps a user has taken. [0005] [005] Although these motion sensors are useful for simple tasks such as counting a user's steps, their use has been limited due to the fact that it is often very difficult for such devices to identify which gesture a user is making. This is true even when motion sensors detect that a gesture is taking place. [0006] [006] It would be advantageous to have additional solutions, in the form of an improved device, an improved method, or both, to be able to more accurately identify the gestures that result in the movement of electronic devices. BRIEF DESCRIPTION OF THE DRAWINGS [0007] [007] Figure 1 illustrates a user who interacts with an electronic device of the prior art. [0008] [008] Figure 2 illustrates a schematic block diagram of an explanatory electronic device according to one or more disclosure modalities. [0009] [009] Figure 3 illustrates an explanatory method according to one or more disclosure modalities. [0010] [010] Figure 4 illustrates one or more explanatory method steps for gesture detection according to one or more disclosure modalities. [0011] [011] Figure 5 illustrates one or more explanatory method steps for gesture detection according to one or more disclosure modalities. [0012] [012] Figure 6 illustrates one or more exemplary method steps for gesture detection according to one or more of the disclosure modalities. [0013] [013] Figure 7 illustrates one or more exemplary method steps for gesture detection according to one or more of the disclosure modalities. [0014] [014] Figure 8 illustrates one or more explanatory method steps for gesture detection according to one or more disclosure modalities. [0015] [015] Figure 9 illustrates another explanatory method according to one or more disclosure modalities. [0016] [016] Experienced technicians will verify that the elements in the Figures are illustrated for simplicity and clarity and were not necessarily drawn to scale. 
For example, the dimensions of some of the elements in the Figures may be exaggerated relative to other elements to help improve understanding of the embodiments of the present disclosure. DETAILED DESCRIPTION OF THE DRAWINGS [0017] [017] Before describing in detail the embodiments that are in accordance with the present disclosure, it should be noted that the embodiments reside primarily in combinations of method steps and device components related to detecting an elevation gesture that moves a device from a first elevation to a second elevation adjacent to a user's head or face. The blocks or process descriptions in a flowchart can be modules, segments or portions of code that implement specific logical functions of a machine or steps of a process, or, alternatively, that transition specific hardware components to different states or operating modes. Alternative implementations are included, and it will be evident that the functions can be performed in an order different from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. [0018] [018] It will be appreciated that the embodiments of the disclosure described in this document can be composed of one or more conventional processors and unique stored program instructions that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most or all of the functions to detect a gesture that raises an electronic device and, optionally, to perform one or more control operations in response, as described in this document. The non-processor circuits may include, but are not limited to: microphones, speakers, acoustic amplifiers, digital-to-analog converters, signal conditioners, clock circuits, power source circuits and user input devices. As such, these functions can be interpreted as steps of a method for detecting the gesture that elevates the device. Alternatively, some or all of the functions could be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or some combination of certain functions is implemented as custom logic. Of course, a combination of the two approaches could be used. Accordingly, methods and means for these functions have been described in this document. In addition, it is expected that an individual of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology and economic considerations, when guided by the concepts and principles disclosed in this document, will be readily capable of generating such software instructions, programs and ICs with minimal experimentation. [0019] [019] The embodiments of the disclosure are now described in detail. With reference to the drawings, like numerals indicate like parts throughout the views. As used in the description in this document and throughout the claims, the following terms take the meanings explicitly associated in this document, unless the context clearly indicates otherwise: the meaning of "a", "an" and "the" includes plural reference, and the meaning of "in" includes "in" and "on". Relational terms, such as first and second, top and bottom and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
As used in this document, components can be "operatively coupled" when information can be sent between such components, even though there may be one or more intervening or intermediary components between or along the connection path. In addition, reference designators shown in parentheses in this document indicate components shown in a figure other than the one under discussion. For example, a reference to a device (10) while discussing Figure A refers to an element 10 shown in a figure other than Figure A. [0020] [020] Embodiments of the disclosure provide a repeatable and precise method and an apparatus for detecting that a gesture raising an electronic device has occurred. For example, when a user lifts a portable electronic device from his waist to his ear, as if making a phone call, embodiments of the disclosure can quickly and precisely determine that movement so that the device can perform one or more control operations in response to detecting the motion. Embodiments of the disclosure are additionally able to distinguish this lifting movement from other movements, such as placing a portable electronic device into a pocket, which can have very similar motion signatures. Advantageously, embodiments of the disclosure provide an intuitive, immediate and natural way to control an electronic device without the need to deliver voice commands or touch input to a user interface. With embodiments of the disclosure, a user can trigger, activate, actuate or initiate control functions of an electronic device with a simple gesture movement. [0021] [021] In one embodiment, a method for controlling an electronic device comprises detecting, with one or more motion sensors, that a gesture raising the electronic device has occurred. This determination can be a function of one or more factors. Explanatory factors include a distance that the electronic device moves during a user-initiated movement, an amount of rotation of the electronic device about a geometric axis during the movement, and whether a surface of the electronic device ends up located next to another object, such as a user's face, ear or head. Other explanatory factors include whether a part of the movement was against a gravity direction, an orientation of the electronic device at the end of the movement, and an acceleration occurring during the movement. These factors can be used individually or in combination. [0022] [022] In one embodiment, once a gesture that raises an electronic device to a user's head, face or ear is detected, one or more control circuits operable with the one or more motion sensors can perform a control operation. For example, for illustrative purposes, an explanatory use case will be transitioning a portable electronic device from a normal operating mode to a discreet operating mode. However, this use case is provided only to explain one or more embodiments. Other control operations could be substituted for the mode change, as numerous control operations will be obvious to those of ordinary skill in the art having the benefit of this disclosure. [0023] [023] In the explanatory use case, the control operation will be transitioning a voice control interface mechanism operating on an electronic device between a normal operating mode and a different operating mode. In this example, a voice control interface mechanism is operable to receive voice commands and deliver audible responses to a user.
For example, the voice control interface engine may receive a speech command in which a user asks a question. The electronic device can then search the Internet for the answer and, in response to receiving the speech command, deliver an audible output to the user with the answer. [0024] [024] Embodiments of the disclosure contemplate that an unexpected consequence of this explanatory voice recognition system is that a user may not want passersby to hear the audible output. This is especially true when the audible output includes a statement of personal information. With this unexpected problem in mind, methods and devices for detecting the gesture that raises the electronic device can be used to trigger or activate a control operation, such as having the voice control interface mechanism enter a second, "discreet" mode of operation. This explanatory use case, which is one suitable application for one or more embodiments of the disclosure, will be illustrated in the following Figures. [0025] [025] Referring now to Figure 1, illustrated in the present document is a prior art electronic device 100 configured with a voice controlled user interface. An example of such a prior art electronic device 100 is described in US Published Patent Application 2014/0278443 by Gunn et al., which is incorporated by reference into this document. Essentially, the prior art electronic device 100 includes a voice controlled user interface for receiving a speech command phrase, identifying a speech command phrase segment and performing a control operation in response to the segment. In one embodiment, the control operation is the delivery of an audible response. [0026] [026] Figure 1 illustrates a use case that highlights an unexpected problem associated with the otherwise incredibly convenient functionality offered by the voice-controlled user interface. A user 101 delivers, in a normal conversational tone, a voice command 102 that asks, "How tall is the Sears Tower?" The prior art electronic device 100, using its voice-controlled user interface and one or more other applications, retrieves the response from a remote source and announces the response with an audible output 103. In this case, the prior art electronic device announces, at a volume level sufficient for user 101 to hear it over a distance of many meters, "442.26 m (1,451 feet)". [0027] [027] Two points should be noted in Figure 1. First, due to the convenience offered by the voice-controlled user interface, user 101 was able to determine a trivia fact simply by speaking. User 101 did not have to consult a book, a computer or anyone else. The prior art electronic device 100 simply found the answer and delivered it. [0028] [028] Second, the audible output 103 was delivered at an output level that was sufficient for user 101 to hear it over a distance. It is interesting to note that if user 101 was able to hear it over a distance of many meters, a passerby or onlooker would have heard it too. Embodiments of the disclosure contemplate that user 101 may not care whether a third party hears the answer to the question "How tall is the Sears Tower?" However, if the user's voice command was "play my voicemail", user 101 may not want a third party to hear a medical diagnosis provided by his physician. Similarly, user 101 may not want a third party to hear an outburst from his significant other, or the use of expletives, after he has forgotten a birthday.
[0029] [029] Advantageously, one or more embodiments of the disclosure may allow user 101 to conveniently cause an electronic device configured in accordance with embodiments of the disclosure to perform a control operation, such as entering a discreet operating mode, without drawing attention to himself and without delivering touch-sensitive input or voice input. Consequently, user 101 can use the elevation detection apparatus and method described in this document to transition a voice control interface mechanism to a second, discreet operating mode in which medical diagnoses, outbursts or expletives are heard only by the person for whom they are intended. As noted above, this is only one explanatory application of embodiments of the disclosure. Others will be readily obvious to those of ordinary skill in the art having the benefit of this disclosure. [0030] [030] Turning now to Figure 2, illustrated in the present document is an exemplary electronic device 200 configured according to one or more embodiments of the disclosure. The electronic device 200 of Figure 2 is a portable electronic device and is shown as a smart phone for illustrative purposes. However, it should be obvious to those of ordinary skill in the art having the benefit of this disclosure that other electronic devices may be substituted for the explanatory smart phone of Figure 2. For example, electronic device 200 could also be a handheld computer, a tablet computer, a gaming device, a media player or another device. [0031] [031] This illustrative electronic device 200 includes a display 202, which can optionally be touch sensitive. In an embodiment in which the display 202 is touch sensitive, the display 202 can serve as a primary user interface 211 of the electronic device 200. Users can deliver user input to the display 202 of such an embodiment by delivering touch input from a finger, stylus or other objects placed close to the display. In one embodiment, the display 202 is configured as an active matrix organic LED (AMOLED) display. However, it should be noted that other types of displays, including liquid crystal displays, would be obvious to those of ordinary skill in the art having the benefit of this disclosure. [0032] [032] The exemplary electronic device 200 of Figure 2 includes a housing 201. In one embodiment, housing 201 includes two housing members. A front housing member 227 is disposed around the periphery of the display 202 in one embodiment. The front housing member 227 and the display 202 collectively define a first main face of the electronic device 200. A rear housing member 228 forms the rear side of the electronic device 200 in this illustrative embodiment and defines a second, rear main face of the electronic device. [0033] [033] Features can be incorporated into the housing members 227, 228. Examples of such features include an optional camera 229 or an optional speaker port 232 disposed over a loudspeaker. These features are shown disposed on the rear main face of the electronic device 200 in this embodiment, but could be located anywhere. In this illustrative embodiment, a user interface component 214, which may be a button or touch-sensitive surface, may also be disposed along the rear housing member 228.
In this illustrative embodiment, connector 212 is an analog connector disposed at a first end 250, that is, at the top end, of the electronic device 200, while connector 213 is a digital connector disposed at a second end 251 opposite the first end 250 , which is the bottom end in this modality. [0035] [035] A schematic block diagram 215 of electronic device 200 is also shown in Figure 2. In one embodiment, electronic device 200 includes one or more processors 216. In one embodiment, one or more processors 216 can include a processor application and, optionally, one or more auxiliary processors. One or both of the application processor or auxiliary processor (s) may include one or more processors. One or both of the application processor or auxiliary processor (s) may be a microprocessor, a group of processing components, one or more ASICs, programmable logic or other type of processing device. [0036] [036] The application processor and the auxiliary processor (s) can be operable with the various components of the electronic device 200. Each of the application processor and the auxiliary processor (s) ( and s) can be configured to process and execute the executable software code to perform various functions of the electronic device 200. A storage device, such as memory 218, can optionally store the executable software code used by one or more processors 216 during operation. [0037] [037] In this illustrative embodiment, the electronic device 200 also includes a communication circuit 225 that can be configured for wired or wireless communications with one or more other devices or networks. Networks can include a wide area network, a local area network and / or a personal area network. Examples of wide area networks include GSM, CDMA, W-CDMA, CDMA-2000, iDEN, TDMA, Generation 2.5 3GPP GSM networks, 3rd Generation 3GPP WCDMA networks, 3GPP short term evolution networks ( LTE) and 3GPP2 CDMA communication networks, UMTS networks, E-UTRA networks, GPRS networks, iDEN networks and other networks. [0038] [038] Communication circuit 225 can also use wireless technology for communication, such as, but not limited to, point-to-point or ad hoc communications such as HomeRF, Bluetooth and IEEE 802.11 (a, b, g or n) and other forms of wireless communication such as infrared technology. Communication circuit 225 may include a set of wireless communication circuits, one of a receiver, a transmitter or transceiver and one or more antennas 226. [0039] [039] In one embodiment, the one or more processors 216 may be responsible for performing the primary functions of the electronic device 200. For example, in one embodiment, the one or more processors 216 comprise one or more circuits operable with one or more devices user interface 211, which may include display 202, to present presentation information to a user. The executable software code used by one or more 216 processors can be configured as one or more modules 220 that are operable with one or more processors 216. Such modules 220 can store instructions, control algorithms, logic steps and so on . [0040] [040] In one embodiment, one or more proximity sensors 208 may be operable with one or more processors 216. In one embodiment, the one or more proximity sensors 208 include one or more signal receivers and signal transmitters. Signal transmitters emit infrared or electromagnetic signals that reflect off objects to signal receivers, thereby detecting an object located close to electronic device 200. 
It should be noted that each of the proximity sensors 208 can be any one of the various types of proximity sensors, such as, but not limited to: capacitive, magnetic, inductive, optical / photoelectric, laser, acoustic / sonic proximity sensors, based on radar, based on Doppler, thermals and based in radiation. Other types of sensors will be obvious to individuals of ordinary skill in the art. [0041] [041] In one embodiment, one or more proximity sensors 208 can be infrared proximity sensors that transmit an infrared light beam that reflects from a nearby object and is received by a corresponding signal receiver. Proximity sensors 208 can be used, for example, to compute the distance to any nearby object from the characteristics associated with the reflected signals. The reflected signals are detected by the corresponding signal receiver, which can be an infrared photodiode used to detect reflected light emitting diode (LED) light, respond to modulated infrared signals and / or perform triangulation of received infrared signals. The reflected signals can also be used to receive user input from a user who delivers gesture or touch input to electronic device 200. [0042] [042] In one embodiment, one or more 216 processors can generate commands or perform control operations based on information received from one or more 208 proximity sensors. One or more 216 processors can also generate commands or execute control operations based on information received from a combination of one or more proximity sensors 208 and one or more other sensors 209. Alternatively, one or more processors 216 can generate commands or perform control operations based on in the information received from one or more other 209 sensors alone. In addition, the one or more processors 216 can process the information received alone or in combination with other data, such as information stored in memory 218. [0043] [043] One or more other 209 sensors may include a microphone 240, an earpiece 241, a second speaker (arranged under speaker door 232) and a mechanical input component 214, such as a button. The one or more other 209 sensors may also include key selection sensors, a touch sensitive keyboard sensor, a touch screen sensor, a capacitive sensor and one or more switches. Touch sensors can be used to indicate whether any of the user activation targets 204, 205, 206, 207 currently on display 202 are being activated. Alternatively, touch sensors in housing 201 can be used to determine whether the electronic device 200 is being touched at the side edges, thereby indicating whether certain orientations or movements of the electronic device 200 are being performed by a user. The other 209 sensors can also include capacitive housing / surface sensors, audio sensors and video sensors (such as a camera). [0044] [044] The other sensors 209 may also include motion detectors 242, such as one or more accelerometers 252 or gyroscopes 253. For example, an accelerometer 252 can be incorporated into the electronic circuitry of electronic device 200 to show vertical orientation, constant inclination and / or if the device is stationary. A 253 gyroscope can be used in a similar project. [0045] [045] In one or more embodiments, motion detectors 242 can also include a barometer 255. A barometer 255 can capture changes in air pressure due to changes in elevation. 
Consequently, during a gesture that raises the electronic device 200, a barometer can be used to estimate the distance that the electronic device 200 has moved by detecting pressure changes from a starting position to a stopped position. In one embodiment, the barometer includes a cantilever mechanism produced from a piezoelectric material and disposed within a chamber. The cantilever mechanism works as a pressure sensitive valve, which flexes as the pressure differential between the chamber and the environment changes. The cantilever deviation stops when the pressure differential between the chamber and the environment is zero. Since the cantilever material is piezoelectric, the material deviation can be measured with an electric current. [0046] [046] Without taking into account the type of motion detectors 242 that are used, in one embodiment, motion detectors 242 are also operable to detect the movement of electronic device 200 by a user. In one or more embodiments, the other sensors 209 and motion detectors 242 can each be used as a gesture detection system. [0047] [047] To illustrate by way of example, in a modality, a user can deliver the gesture input by moving a hand or arm in the predefined movements in close proximity to the electronic device 200. Such movement can be detected by one or more sensors. proximity 208. In another mode, the user can deliver the gesture input by touching the display 202. This user input can be detected by touch sensors operable with the display. In yet another modality, a user can deliver the gesture input by lifting, stirring or otherwise deliberately moving the electronic device 200. This motion can be detected by one or more accelerometers 252. In still other modes , the user can deliver the gesture input by rotating or changing the orientation of the electronic device 200, which can be detected by multiple accelerometers 252 or a gyroscope 253. Other modes of gesture input delivery will be obvious to that individual of ordinary skill in technique that have the benefit of that revelation. [0048] [048] Other components operable with one or more 216 processors may include output components 243 such as video outputs, audio outputs 244 and / or mechanical outputs. Examples of output components include audio outputs 244 such as speaker port 232, earphone 241 or other alarms and / or vibrators and / or a mechanical output component such as movement or vibration based mechanisms. [0049] [049] In one embodiment, the one or more 216 processors are operable to alter a gain in microphone 240 so that a user's voice input can be received from different distances. For example, in one embodiment the one or more 216 processors are operable to operate microphone 240 in a first mode at a first gain sensitivity so that a user's voice commands can be received from more than 0.3 m away from the device. If the electronic device 200 is a smart phone, for example, the one or more 216 processors can operate the microphone 240 in a first mode at a first gain sensitivity to receive a user's voice input when operating in a live mode -voice, for example. Similarly, when electronic device 200 is configured with a control module 245, which is a voice control interface mechanism in this illustrative example, the one or more processors 216 can operate microphone 240 in a first mode in a first gain sensitivity to receive a user's voice input over a distance of many meters. 
This would cause microphone 240 to function as the microphone of the prior art electronic device (100) of Figure 1, in which voice commands (102) could be received over a distance of many meters. [0050] [050] In one embodiment, the one or more processors 216 may additionally operate microphone 240 in a second mode at a second gain sensitivity to receive a user's voice input. In one embodiment, the second gain sensitivity is less than the first gain sensitivity. This results in voice input being received from closer distances and at lower levels. If the electronic device 200 is a smart phone, for example, the one or more processors 216 can operate the microphone 240 in a second mode at a second gain sensitivity to receive a user's voice input when the electronic device 200 is placed against the user's face. Since microphone 240 is very close to the user's mouth, this second, lower gain sensitivity can be used to capture the user's lower-volume input. Similarly, when electronic device 200 is configured with a control module 245 as a voice control interface mechanism, the one or more processors 216 can operate microphone 240 in a second mode at a second gain sensitivity to receive voice input from a user's mouth that can be just 2.5 cm (one inch) (or less) from the microphone 240. [0051] [051] In similar fashion, the one or more processors 216 can operate one or both of the earpiece 241 and/or the speaker under speaker port 232 in either a first mode or a second mode. In one embodiment, the one or more processors 216 are operable to alter a gain of either speaker so that the audible output of the electronic device 200 can be heard by a user at different distances. For example, in one embodiment, the one or more processors 216 are operable to operate one or both of the earpiece 241 and/or the speaker under speaker port 232 in a first mode at a first gain so that the audible output is produced at a first output level. In one embodiment, the first output level is a volume sufficient for the audible output to be heard from more than 0.3 m (one foot) away from the device. If the electronic device 200 is a smart phone, for example, the one or more processors 216 can operate one or both of the earpiece 241 and/or the speaker under the speaker port 232 in a first mode at a first gain to produce output at a higher volume when operating in a speakerphone mode, for example. Similarly, when electronic device 200 is configured with a control module 245 as a voice control interface mechanism, the one or more processors 216 can operate one or both of the earpiece 241 and/or the speaker under the speaker port 232 in a first mode at a first gain to produce the audible output at a first output level so that a user located a distance of many meters away can hear the audible output. This would cause one or both of the earpiece 241 and/or the speaker under the speaker port 232 to function as the speaker of the prior art electronic device (100) of Figure 1 did, in which the audible output (103) could be heard over a distance of many meters. [0052] [052] In one embodiment, the one or more processors 216 can additionally operate one or both of the earpiece 241 and/or the speaker under the speaker port 232 in a second mode at a second gain to produce the audible output at a second output level. In one embodiment, the second gain is less than the first gain so that the second output level is at a lower volume than the first output level.
This results in audible output that is only audible from closer distances due to the lower output level. If the electronic device 200 is a smart phone, for example, the one or more processors 216 can operate one or both of the earpiece 241 and/or the speaker under the speaker port 232 in a second mode at a second gain to deliver audible output to a user when electronic device 200 is placed against the user's face. Since the earpiece 241 is very close to the user's ear, this lower second gain can be used to deliver the audible output at a lower level so as not to overload the user's eardrums. Similarly, when electronic device 200 is configured with a control module 245 such as the illustrative voice control interface mechanism, the one or more processors 216 can operate one or both of the earpiece 241 and/or the speaker under the speaker port 232 in a second mode at a second gain to deliver audible output to a user's ear when the earpiece 241 is just 2.5 cm (one inch) (or less) from the user's ear. In one embodiment, this second mode of operation, that is, when the second output level is less than the first output level, is known as the "discreet mode" of operation. [0053] [053] In one embodiment, the one or more processors 216 are to switch between the earpiece 241 and the speaker under speaker port 232 when operating in the first mode and the second mode. For example, the earpiece 241 may comprise a small driver to deliver audible output over just a few millimeters. In contrast, the speaker under the speaker port 232 can be a large driver for delivering audible output over greater distances. Where this is the case, when operating in the first mode, the one or more processors 216 can deliver all audible output from the speaker under the speaker port 232. When operating in the second mode, the one or more processors 216 can deliver all audible output from the earpiece 241. Consequently, in one or more embodiments, a control operation comprises switching a control module 245 between a first mode of operation and a second mode of operation. When the control module 245 is the voice control interface mechanism used in this document as an illustration, that voice control interface mechanism can be operative in the second mode to deliver the audible output from a second speaker, for example, earpiece 241, which is different from the speaker operable in the first mode, for example, the speaker under speaker port 232. [0054] [054] In one embodiment, output components 243 can include analog-to-digital converters (ADCs), digital-to-analog converters (DACs), echo cancellation, high-pass filters, low-pass filters, band-pass filters, tunable band filters, noise reduction filtering, automatic gain control (AGC) and other audio processing that can be applied to filter audio noise. For example, these devices can be used to filter noise received from microphone 240. Output components 243 can be a single component, as shown in Figure 2, or can be implemented partially in hardware and partially in software or firmware run by the one or more processors 216. In some embodiments, output components 243 can be implemented using various hardware components and can also use one or more software or firmware components in various combinations. The output components 243 can be operative to control one or both of the earpiece 241 and/or the speaker under the speaker port 232 and/or to selectively turn those output devices on or off.
In addition, the output components 243 can adjust the filtering or gain of one or both of the earpiece 241 and / or the speaker under the earphone port 232 for the purposes of various applications described below. [0055] [055] In one or more embodiments, the one or more processors 216 are operable to detect a gesture that raises the electronic device 200. In one embodiment, the accelerometer 252 serves as a motion detector operable with one or more processors 216. In this way, gyroscope 253 serves as another motion detector operable with one or more 216 processors. When a gyroscope 253 is not included with electronic device 200, multiple accelerometers can replace gyroscope 253 to determine the rotation of electronic device 200 in around the geometric axis. In such an embodiment, accelerometer 252 would serve as the motion detector while accelerometer 252 and another accelerometer replace gyroscope 253. This results in the other motion detector that has an accelerometer 252 in common with the motion detector. [0056] [056] In one embodiment, when a user lifts or otherwise moves the electronic device 200, the one or more processors 216 are operable to determine an amount of motion from the electronic device 200 of the motion detector, which is the accelerometer 252 in one mode. The one or more processors 216 are further configured to determine an amount of rotation of the electronic device 200 about a geometric axis 254 of the other motion detector, which is the gyroscope 253 in one embodiment. In one embodiment, the geometric axis 254 runs perpendicularly to the front main face of the electronic device 200 defined by the display 202 and the front housing member 227 in Figure 2. Put differently, in this mode, the geometric axis 254 travels orthogonally within and off the page as shown in Figure 2. The geometric axis 254 is then shown as a point to represent this orientation. [0057] [057] In one embodiment, the one or more 216 processors are additionally operable to determine whether an object is located near the electronic device 200. In one embodiment, this determination is whether the object is located near a main face of the electronic device 200 , such as the front main face of the electronic device 200 defined by the display 202 and the front housing member 227 in Figure 2. This determination can be made when the one or more processors 216 receive signals from the one or more proximity sensors 208 that indicate that an object, such as the user's ear, head or face, is located near the main face of the electronic device 200. In one or more embodiments, the one or more processors 216 are additionally operable to determine a direction of gravity in relation to the electronic device 200. This can be done with the accelerometer 252 in one mode. [0058] [058] In one or more embodiments, the one or more processors 216 are additionally operable to determine an orientation of the electronic device 200 once a gesture that raises the electronic device 200 has been detected. For example, an illustrative gesture is the action to raise the electronic device 200 from a first location, such as in the user's hand at the level of the waist or torso to his ear. Testing has shown that it can be difficult to distinguish this movement, for example, from placing the electronic device 200 in a pocket. 
Consequently, in one or more embodiments, to confirm that a gesture raising the electronic device 200 has occurred, embodiments of the disclosure confirm that at least one component of the direction of gravity runs from a first end 250 of the electronic device 200 to a second end 251 of the electronic device 200. In this document, the first end 250 is the end with the earpiece 241, while the second end 251 is the end with the microphone 240. Embodiments of the disclosure contemplate that if a user is holding the electronic device 200 adjacent to his head in order to hear the earpiece 241 and speak into the microphone 240, the earpiece 241 will be higher than the microphone 240. Consequently, in one embodiment, the one or more processors 216 check for this to detect that a gesture that raises the electronic device 200 is taking place. [0059] [059] In one or more embodiments, the one or more processors 216 are operable to further determine, from information received from the accelerometer 252, an acceleration that occurs during movement of the electronic device 200 by a user. This acceleration determination can be used in multiple ways. First, it can be used to confirm that the movement that moves the electronic device 200 occurred against the direction of gravity, as would be the case when lifting the electronic device 200, but not when placing the electronic device 200 in a pocket. Second, by comparing the acceleration with a predetermined threshold, the acceleration can be used to confirm that a user is actually lifting the electronic device 200 rather than performing some other operation, such as swinging the electronic device 200. [0060] [060] In one or more embodiments, the one or more processors 216 confirm that a gesture that raises the electronic device 200 has occurred as a function of one or more factors. For example, in one embodiment, the one or more processors 216 are to confirm that a gesture that raises the electronic device 200 has occurred when an amount of movement exceeds a first predetermined threshold. In one embodiment, this first predetermined threshold is about 20 centimeters. The term "about" is used to refer to an amount that does not have to be absolute, but can include some tolerance. For example, 19.378 centimeters or 20.125 centimeters could be "about" 20 centimeters when tolerances of sensors and electrical and mechanical systems are included. [0061] [061] In one embodiment, the one or more processors 216 are additionally to confirm that a gesture that raises the electronic device 200 has occurred when the amount of rotation of the electronic device 200 about the geometric axis 254 exceeds a second predetermined threshold. In one embodiment, the second predetermined threshold is about forty-five degrees. [0062] [062] In one embodiment, the one or more processors 216 are additionally to confirm that a gesture that raises the electronic device 200 has occurred when, at the end of a movement, an object is located near a main face of the electronic device 200, as previously described. In one embodiment, the one or more processors 216 are additionally to confirm that a gesture that raises the electronic device 200 has occurred when all or some amount of the movement occurred against the direction of gravity. As noted above, this can help distinguish an elevation gesture from a gesture that puts the electronic device 200 in a pocket.
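By way of illustration only, the following is a minimal sketch of the orientation and direction-of-gravity checks just described. It is not code from this disclosure: the device-frame axis convention (a Y axis assumed to point from the second end 251 toward the first end 250) and the helper names are assumptions made for the example.

```python
# Minimal illustrative sketch; the axis convention and names are assumed, not taken
# from this disclosure. Device frame assumption: +Y points from the second end
# (251, microphone end) toward the first end (250, earpiece end).

def first_end_is_up(accel_xyz):
    """True when a component of gravity runs from the first end 250 toward the
    second end 251, i.e. the earpiece end is held higher than the microphone end.
    At rest an accelerometer reports the reaction to gravity, so a positive Y
    reading means gravity points toward -Y, from end 250 down to end 251."""
    _, ay, _ = accel_xyz
    return ay > 0.0

def moved_against_gravity(displacement_xyz, gravity_xyz):
    """True when the displacement during the movement has a component opposite the
    gravity direction, as expected for a lift rather than a drop into a pocket."""
    dot = sum(d * g for d, g in zip(displacement_xyz, gravity_xyz))
    return dot < 0.0

# Example: device held roughly upright, displaced upward by 0.25 m.
print(first_end_is_up((0.1, 9.7, 0.5)))                           # True
print(moved_against_gravity((0.0, 0.25, 0.0), (0.0, -9.8, 0.0)))  # True
```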
[0063] [063] In one embodiment, the one or more processors 216 are additionally to confirm that the gesture that raises the electronic device 200 has occurred when an acceleration of the electronic device 200 during the movement exceeds a third predetermined threshold. In one embodiment, this predetermined threshold is 0.5 meters per second squared, net after deducting any acceleration due to gravity. [0064] [064] The factors listed above can be used, alone or in combination, in the function that determines whether the gesture that raises the electronic device 200 has occurred. For example, the function can consider one, two, three or all of the factors. Considering more factors helps prevent false detection of the gesture. Embodiments of the disclosure contemplate that a user should be minimally affected by false detections. Consequently, in one embodiment, the one or more processors 216 consider all of the factors, namely: an amount of movement, an amount of rotation, whether an object is located near a main face of the electronic device 200, a direction of gravity, whether the movement is against the gravity direction, and what final orientation of the electronic device 200 results after the movement. [0065] [065] This is explained in more detail through an illustrative example. In one embodiment, a moving average of acceleration values, as measured by accelerometer 252 and denoted α_zero, is maintained in memory 218 by the one or more processors 216. For example, a moving average of 64 acceleration values can be maintained in one embodiment. At any time, an instantaneous acceleration value can be measured by taking the square root of the sum of the instantaneous acceleration along the X geometric axis squared, the instantaneous acceleration along the Y geometric axis squared and the instantaneous acceleration along the Z geometric axis squared, according to the following formula: α_total = SQRT(α_x^2 + α_y^2 + α_z^2) EQ. 1. [0066] [066] This value can be calculated at periodic intervals, such as five or ten times per second, with each value being added to the set of moving-average values on a first-in, first-out basis. [0067] [067] At any time, the significant acceleration can be determined by subtracting the moving-average acceleration value from an instantaneous acceleration value according to the following formula: α_current = α_total - α_zero EQ. 2. [0068] [068] When that value, α_current, is below a predetermined acceleration threshold, such as 0.5 meters per second squared, the one or more processors 216 may conclude that electronic device 200 is not in motion. When this significant acceleration value is zero for a predetermined number of cycles, the accumulated values for acceleration due to gravity, denoted Δ_gravity, speed of the electronic device 200, denoted v_current, distance traveled by the electronic device 200, denoted d_total, and rotation of the electronic device 200 about the geometric axis 254, denoted Θ_total, can all be set to zero in memory 218. [0069] [069] The difference between the moving average of acceleration and the acceleration due to gravity can then be calculated to determine the effect of gravity. In one embodiment, this includes subtracting the acceleration due to gravity, that is, 9.8 meters per second squared, from the moving average of acceleration values and adding this to a running total of the same subtraction according to the following equation: Δ_gravity = Δ_gravity + (α_zero - 9.8 m/s^2) EQ. 3.
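As a brief illustration only, the running computation of EQ. 1 through EQ. 3 can be expressed as follows. The buffer handling and naming are assumptions made for this sketch, not details taken from this disclosure.

```python
from collections import deque
import math

G = 9.8  # acceleration due to gravity, m/s^2

class AccelerationBaseline:
    """Illustrative sketch of EQ. 1 through EQ. 3; not code from this disclosure."""

    def __init__(self, window=64):
        self.window = deque(maxlen=window)  # moving-average buffer, first-in, first-out
        self.delta_gravity = 0.0            # running Δ_gravity total (EQ. 3)

    def update(self, ax, ay, az):
        """Process one accelerometer sample and return the significant acceleration."""
        a_total = math.sqrt(ax * ax + ay * ay + az * az)  # EQ. 1
        self.window.append(a_total)
        a_zero = sum(self.window) / len(self.window)      # moving average α_zero
        a_current = a_total - a_zero                      # EQ. 2
        self.delta_gravity += (a_zero - G)                # EQ. 3
        # The reset-to-zero behavior of paragraph [0068] is omitted here; a fuller
        # sketch appears after the remaining equations below.
        return a_current

# Example: an upright, stationary device reads roughly (0, 9.8, 0).
baseline = AccelerationBaseline()
print(baseline.update(0.0, 9.8, 0.0))  # 0.0 for the first sample
```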
[0070] [070] This provides a sense of the direction of movement that allows the one or more processors 216 to determine whether the movement of the electronic device 200 is with or against gravity. [0071] [071] The current speed of the electronic device 200 can be calculated with the following equation: v_current = v_current + (α_current * t_samplerate) EQ. 4. [0072] [072] The total distance that the electronic device 200 moves during a movement or gesture can be calculated with the following equation: d_total = d_total + (v_current * t_samplerate) + 0.5 * (α_current * t_samplerate^2) EQ. 5. [0073] [073] This distance can also be measured directly when motion detector 242 is a barometer 255. The use of a barometer 255 relieves the need to perform a double integration to determine the distance using EQ. 5. [0074] [074] The total rotation of the electronic device 200 about the geometric axis 254 can be calculated with the following equation: Θ_total = Θ_total + Θ_current EQ. 6. [0075] [075] With these factors, in addition to the input from the one or more proximity sensors 208, whether a gesture that raises the electronic device 200 has occurred can be confirmed. In one embodiment, if the one or more proximity sensors 208 indicate that an object is located near a main face of electronic device 200, but the object was not located near a main face of electronic device 200 during a previous proximity sensor detection cycle, the values d_total, Θ_total and Δ_gravity can be compared to their respective limits. In one embodiment, when all limits have been exceeded and the orientation of the electronic device 200 has the first end 250 above the second end 251, that is, the bottom of the electronic device 200 is oriented no more than ninety degrees from perpendicular to the gravity direction (down), the one or more processors 216 confirm that a gesture that raises the electronic device 200 has occurred. Consequently, in one embodiment, the one or more processors 216 may perform a control operation in response to confirming that the gesture that raises the electronic device 200 has occurred. In an illustrative embodiment, the control operation includes transitioning a voice control interface mechanism to a different operating mode. This will be shown in more detail in Figure 3 below.
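To tie the computation together, here is a condensed, purely illustrative sketch of how the quantities from EQ. 1 through EQ. 6 and the proximity signal might be combined into the confirmation just described in paragraph [0075]. The threshold values are the ones named in this disclosure; the sampling interval, idle-cycle count, sign conventions and class structure are assumptions made for the example.

```python
from collections import deque
import math

G = 9.8                                   # m/s^2
SAMPLE_PERIOD = 0.1                       # assumed interval (5 to 10 samples per second)
ACCEL_THRESHOLD = 0.5                     # m/s^2, third predetermined threshold
DISTANCE_THRESHOLD = 0.20                 # m, about 20 centimeters
ROTATION_THRESHOLD = math.radians(45.0)   # second predetermined threshold

class LiftGestureDetector:
    """Illustrative sketch of the lift-gesture confirmation; not code from this disclosure."""

    def __init__(self):
        self.avg_window = deque(maxlen=64)
        self.was_near = False  # proximity state from the previous detection cycle
        self.reset_integrators()

    def reset_integrators(self):
        self.delta_gravity = 0.0   # EQ. 3 accumulator
        self.v_current = 0.0       # EQ. 4 accumulator
        self.d_total = 0.0         # EQ. 5 accumulator
        self.theta_total = 0.0     # EQ. 6 accumulator
        self.idle_cycles = 0

    def update(self, ax, ay, az, theta_current, object_near, first_end_up):
        """Feed one accelerometer sample (m/s^2), the rotation about axis 254 during
        this interval (rad), the proximity state and the orientation flag.
        Returns True when a gesture that raises the device is confirmed."""
        a_total = math.sqrt(ax * ax + ay * ay + az * az)   # EQ. 1
        self.avg_window.append(a_total)
        a_zero = sum(self.avg_window) / len(self.avg_window)
        a_current = a_total - a_zero                       # EQ. 2

        if abs(a_current) < ACCEL_THRESHOLD:
            self.idle_cycles += 1
            if self.idle_cycles >= 5:  # assumed number of idle cycles before the reset
                self.reset_integrators()
        else:
            self.idle_cycles = 0
            self.delta_gravity += (a_zero - G)                         # EQ. 3
            self.v_current += a_current * SAMPLE_PERIOD                # EQ. 4
            self.d_total += (self.v_current * SAMPLE_PERIOD
                             + 0.5 * a_current * SAMPLE_PERIOD ** 2)   # EQ. 5
            self.theta_total += theta_current                          # EQ. 6

        confirmed = False
        if object_near and not self.was_near:
            # Object has just arrived near the main face: compare accumulated values to limits.
            moved_enough = self.d_total > DISTANCE_THRESHOLD
            rotated_enough = abs(self.theta_total) > ROTATION_THRESHOLD
            rose_against_gravity = self.delta_gravity > 0.0  # assumed sign convention
            confirmed = (moved_enough and rotated_enough
                         and rose_against_gravity and first_end_up)
        self.was_near = object_near
        return confirmed
```

Note that, consistent with paragraph [0075], the accumulated values are only compared to their limits on the rising edge of the proximity signal, i.e., when an object first appears near the main face of the device.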
[0076] [076] In one or more embodiments, the control module 245 of the electronic device 200 is a voice control interface mechanism. The voice control interface mechanism can include hardware, executable code and speech monitor executable code in one embodiment. The voice control interface mechanism may include, stored in memory 218, basic speech models, trained speech models or other modules that are used by the voice control interface mechanism to receive and identify voice commands. In one embodiment, the voice control interface mechanism may include a speech recognition engine. Regardless of the specific implementation used in the various embodiments, the voice control interface mechanism can access various speech models to identify speech commands. [0077] [077] In one embodiment, the voice control interface mechanism is configured to implement a voice control feature that allows a user to speak a specific trigger phrase, followed by a command, to cause the one or more processors 216 to perform an operation. For example, the user may say, as a trigger phrase, "Okay, Telephone, Ready, now go!". After that, the user can speak a command, such as "How tall is the Sears Tower?" This combination of trigger and command phrases can cause the one or more processors 216 to access an application module 247, such as a web browser, to fetch the response and then deliver the response as an audible output via an output component 243. For example, when operating in the first mode, the one or more processors 216 can deliver the response as an audible output through the speaker port 232 at a first output level. When operating in the discreet mode, the one or more processors 216 can deliver the response as an audible output through the earpiece 241 at a second, softer output level. In short, in one embodiment, the voice control interface mechanism listens for voice commands, processes the commands and, together with the one or more processors 216, returns an audible output that is the result of the user's intent. [0078] [078] In one or more embodiments, the one or more processors 216 are operable to transition the voice control interface mechanism between the first mode and the second mode, or discreet mode, in response to detecting a predefined user input. In one embodiment, the predefined user input is a gesture input that raises the electronic device 200, as previously described. [0079] [079] In one or more embodiments, the voice control interface mechanism is operative in a first mode to receive a speech command through the microphone 240 from a first distance and, in response to the speech command, produce an audible output at a first output level. In one embodiment, this audible output is delivered to a user via the speaker port 232. [0080] [080] The one or more processors 216 are then operable to detect the gesture that raises the electronic device 200. When the one or more processors 216 detect such a predetermined characteristic, it can be used to control the voice control interface mechanism and to switch it between a first operating mode and a second operating mode. [0081] [081] When detection of the predefined user input occurs, in one embodiment, the one or more processors 216 are operable to transition the voice control interface mechanism to a second operating mode, which is the discreet mode in one embodiment. When operating in the discreet mode, the voice control interface mechanism is operable to receive speech commands from a second distance that is less than the first distance associated with the first mode. In addition, the voice control interface mechanism may be operable to produce, in response to received speech commands, an audible output at a second output level that is less than the first output level. In one embodiment, these softer outputs are delivered to a user via the earpiece 241. [0082] [082] Advantageously, by delivering the predefined user input to make the voice control interface mechanism transition from the first mode to the discreet mode, the user can take advantage of voice-controlled operation without third parties or onlookers hearing the information delivered in the form of audible output. This solves the unexpected problem illustrated in Figure 1, where onlookers can overhear the audible response. Thus, if a user plans to listen to a voicemail that may be of a sensitive nature, the user simply delivers the predefined user input to the electronic device 200 to cause the one or more processors 216 to transition the voice control interface mechanism to the discreet operating mode, as shown in Figure 3 below.
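As a rough illustration of this mode transition only (the gain values, names and structure below are assumptions made for the sketch, not details from this disclosure), the control operation can be pictured as swapping one bundle of audio settings for another and back again:

```python
from dataclasses import dataclass

@dataclass
class AudioMode:
    """Illustrative bundle of per-mode audio settings; the values are placeholders."""
    name: str
    output_device: str            # which driver produces the audible output
    output_gain: float            # relative loudness of the audible output
    mic_gain_sensitivity: float   # relative microphone gain

NORMAL_MODE = AudioMode("normal", output_device="loudspeaker under speaker port 232",
                        output_gain=1.0, mic_gain_sensitivity=1.0)
DISCREET_MODE = AudioMode("discreet", output_device="earpiece 241",
                          output_gain=0.2, mic_gain_sensitivity=0.3)

def select_mode(lift_gesture_confirmed, reverse_motion_or_timeout, current):
    """Enter the discreet mode on a confirmed lift gesture; return to the normal
    mode on a reverse motion or timer expiry (discussed later with Figure 9)."""
    if lift_gesture_confirmed:
        return DISCREET_MODE
    if reverse_motion_or_timeout:
        return NORMAL_MODE
    return current
```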
[0083] [083] It should be understood that the electronic device 200 and the architecture of Figure 2 are provided for illustrative purposes only and to illustrate components of one electronic device 200 in accordance with embodiments of the disclosure, and are not intended to be a complete schematic diagram of the various components required for an electronic device. Therefore, other electronic devices according to embodiments of the disclosure may include various other components not shown in Figure 2, or may include a combination of two or more components, or a division of a particular component into two or more separate components, and still be within the scope of the present disclosure. [0084] [084] Turning now to Figure 3, illustrated in the present document is a method 300 of using an electronic device 200 according to one or more embodiments of the disclosure. In step 301, a user 305 is shown holding electronic device 200. In step 301, electronic device 200 is operating in a standard operating mode, which is the first operating mode in which the voice control interface mechanism is operative to receive a speech command from a first distance and produce, in response to the speech command, an audible output at a first output level. In this way, electronic device 200 would function exactly like the prior art electronic device (100) of Figure 1 when operating in the first mode. User 305 could deliver, in a normal conversational tone, a voice command that asks "How tall is the Sears Tower?" and electronic device 200 would announce the response with an audible output that user 305 could hear over a distance of many meters. [0085] [085] However, in Figure 3, user 305 is interested in receiving personal information that he does not want third parties to hear. Additionally, he does not want third parties to see him manipulate his device to put it in a privacy mode, since this "tip-off" could provoke the curiosity of those third parties, making them want to listen even more closely. Advantageously, embodiments of the disclosure allow user 305 to make a simple gesture to cause the one or more processors (216) of the electronic device 200 to change operating modes. [0086] [086] Consequently, in step 301, the user delivers a predefined user input 306 by raising the electronic device 200 from his waist 307 to his head 308. Accordingly, the predefined user input 306 of this explanatory step 301 comprises raising the electronic device 200 from a first elevation 309 to a second elevation 310, wherein the second elevation 310 is greater than the first elevation 309. The difference in elevation can be measured in a number of ways, including using EQ. 5 above or using a barometer (255) as the motion detector (242). Other techniques for measuring the distance will be obvious to those of ordinary skill in the art having the benefit of this disclosure. [0087] [087] In step 302, the one or more processors (216) of the electronic device detect 311 the predefined user input 306 as previously described. In step 303, the one or more processors (216) cause 312 the control module (245), which is the voice control interface mechanism in this illustrative use case, to transition to a second mode. As shown in step 304, in the second mode, the voice control interface mechanism is operative to receive a speech command 313 from a second distance and produce, in response to the speech command, audible output 314 at a second output level.
In this document, user 305 is asking "What time is my meeting with Buster?" Audible output 314, which no onlooker can hear due to its lower volume, says "Seven p.m. at Restaurante do Mac". [0088] [088] In one embodiment, the second distance is less than the first distance of the first mode. In addition, the second output level is less than the first output level. This is illustrated in the drawings by comparing Figure 1 and Figure 3. In Figure 1, user 101 is a first distance 104 from the prior art electronic device 100. In Figure 3, user 305 is a second distance 315 from the electronic device 200 that is less than the first distance (104). The second output level is indicated by the smaller text of the audible output 314 in Figure 3 compared to the larger text of the audible output (103) in Figure 1. [0089] [089] Turning now to Figures 4 to 8, illustrated in this document are several method steps for detecting the predefined user input (306) that raises the electronic device 200 as described above with reference to Figure 2. In one embodiment, the method illustrated by the various steps of Figures 4 to 8 includes detecting, with one or more motion detectors (242), a gesture that raises the electronic device 200 as a function of at least a distance that the electronic device 200 moves during a movement, an amount of rotation of the electronic device 200 about a geometric axis, and whether an object is located close to a surface of the electronic device 200. The method may additionally include detecting the gesture as a function of whether at least part of the movement was against a direction of gravity and of an orientation of the electronic device 200 at the end of the movement. Once such a gesture is detected, in one embodiment, the one or more processors (216) of the electronic device 200 can perform a control operation in response to the occurrence of the gesture that raises the electronic device. [0090] [090] Starting with Figure 4, in step 401, the one or more processors (216) of the electronic device 200 detect 403, with one or more motion detectors (242) of the electronic device 200, the movement 404 of the electronic device 200. In this step 401, the one or more processors (216) can also detect a distance 405 that the electronic device 200 travels during movement 404. The acceleration of movement 404 can be determined, as well as the speed. In step 402, the one or more processors (216) of the electronic device 200 detect 406 the rotation 407 of the electronic device 200 about a geometric axis 254 with one or more other sensors. As shown in Figure 5, in step 501, the one or more processors (216) can detect 503, with a motion detector (242), a direction of gravity 504. [0091] [091] Once these parameters are measured, they can be compared with different limits. For example, in step 502, the one or more processors (216) may compare 505 the distance 405 with a predetermined distance threshold 506 to determine whether distance 405 exceeds the predetermined distance threshold 506. In one embodiment, the predetermined distance threshold 506 is about twenty centimeters. Similarly, in step 601 of Figure 6, the one or more processors (216) can compare a rotation amount 604 with a predetermined rotation threshold 605 to determine whether the rotation amount 604 exceeds the predetermined rotation threshold 605. In one embodiment, the predetermined rotation threshold 605 is about forty-five degrees.
[0092] [092] In step 602, in one embodiment, the one or more processors (216) can determine 606 whether the acceleration 607 that occurs during movement 404 has exceeded a predetermined acceleration threshold 608. In one embodiment, the predetermined acceleration threshold is about 0.5 meters per second squared, net after deducting gravity.

[0093] [093] In step 701, the one or more processors (216) can compare movement 404 with the direction of gravity 504. For example, in one embodiment, the one or more processors can determine 703 whether at least part of the movement 404 was against the direction of gravity 504. Similarly, in one embodiment, the one or more processors (216) can determine 704 whether a component of the direction of gravity 504 travels from a first predefined end 250 of the electronic device 200 to a second predefined end 251 of the electronic device 200. As noted above, this step can ensure that the earpiece (241) is above the microphone (240), which is indicative of user 305 holding the electronic device 200 to his ear rather than putting it in a pocket. In step 702, the one or more processors (216) can determine 705 whether an object 706, such as the user's face, ear or head, is located near a main surface 707 of the electronic device 200.

[0094] [094] When this occurs, as shown in Figure 8, the one or more processors (216) can confirm that a gesture 804 that raises the electronic device 200 has occurred. In one embodiment, in response to confirmation of the gesture 804 that raises the electronic device 200, the one or more processors (216) can cause a control operation 802 to occur. In one embodiment, the control operation comprises transitioning a voice control interface mechanism from a first operating mode to a second operating mode, as shown in step 803. The control operation can also optionally include redirecting the audible output 314 of the electronic device 200 to the earpiece (241) of the electronic device 200.

[0095] [095] The disclosure modalities contemplate that user 305 may wish to reverse the process shown in Figures 4 to 8. Using the mode transition use case of a voice control interface mechanism, the disclosure modalities contemplate that, once the voice control interface mechanism is in the second mode, it may be desirable to transition the electronic device 200 back to the first mode of operation so that it can be used as shown in Figure 1. Advantageously, the disclosure modalities provide a mechanism for doing just that. Turning now to Figure 9, illustrated therein is such a modality.

[0096] [096] In step 901, the electronic device 200 is operating in the second mode, in which speech commands 313 are received at a softer volume and audible responses are delivered at the second, softer output level. In this example, user 305 continues the conversation from step (304) of Figure 3, since the voice command serves to remind the user about the meeting with Buster at 6 pm. The generated audible output 314 says "Setting reminder".

[0097] [097] User 305 is now done with the discreet operating mode. Consequently, in one embodiment, user 305 can return the voice control interface mechanism to the first mode of operation when a predefined condition is identified. In Figure 9, the predefined condition is a reverse movement 905 of the electronic device 200, which is identified in step 902 by the proximity sensors (208), the other sensors (209), or both. When this occurs, in one embodiment, in step 903, the one or more processors (216) are operable to return the voice control interface mechanism 906 to the first mode of operation.
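Taken together, paragraphs [0091] to [0094] describe a set of conditions that are all checked before the gesture 804 is confirmed. The Python sketch below simply bundles those checks into one predicate; it is an illustrative assumption of how the factors could be combined, not the claimed implementation, and the field names and example values are hypothetical.

```python
# Minimal sketch combining the factors from the preceding paragraphs into a
# single confirmation check; field names are hypothetical, thresholds follow
# the approximate values given in the text.
from dataclasses import dataclass

DISTANCE_THRESHOLD_M = 0.20        # ~20 cm
ROTATION_THRESHOLD_DEG = 45.0      # ~45 degrees
ACCEL_THRESHOLD_MS2 = 0.5          # ~0.5 m/s^2 net of gravity


@dataclass
class MotionSummary:
    distance_m: float              # distance moved during the movement
    rotation_deg: float            # rotation about the geometric axis
    peak_net_accel_ms2: float      # peak acceleration after deducting gravity
    moved_against_gravity: bool    # at least part of the movement opposed gravity
    gravity_earpiece_to_mic: bool  # gravity component runs from earpiece end to microphone end
    object_near_main_face: bool    # proximity detector output at the end of the movement


def confirm_raise_gesture(m: MotionSummary) -> bool:
    """True only when every factor indicates the device was raised to the ear."""
    return (m.distance_m > DISTANCE_THRESHOLD_M
            and m.rotation_deg > ROTATION_THRESHOLD_DEG
            and m.peak_net_accel_ms2 > ACCEL_THRESHOLD_MS2
            and m.moved_against_gravity
            and m.gravity_earpiece_to_mic
            and m.object_near_main_face)


if __name__ == "__main__":
    raised_to_ear = MotionSummary(0.35, 60.0, 1.2, True, True, True)
    left_on_table = MotionSummary(0.05, 5.0, 0.1, False, False, False)
    print(confirm_raise_gesture(raised_to_ear))   # True -> perform control operation
    print(confirm_raise_gesture(left_on_table))   # False
```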
[0098] [098] In other modalities, the operation of Figures 4 to 8 can be performed in other ways. For example, the electronic device 200 may include the timer (246). Once the user interaction is completed, for example when user 305 finishes delivering speech command 313 at step 901, the one or more processors (216) can start the timer (246). When the timer (246) expires, the one or more processors (216) can return the voice control interface mechanism to the first mode of operation by identifying the expiration of the timer (246) as the predefined condition. Other predefined conditions will be obvious to those of ordinary skill in the art having the benefit of this disclosure.

[0099] [099] In the foregoing specification, specific embodiments of the present disclosure have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, although preferred forms of the disclosure have been illustrated and described, it is clear that the disclosure is not so limited. Numerous modifications, alterations, variations, substitutions and equivalents will occur to those skilled in the art without departing from the spirit and scope of the present disclosure as defined by the following claims. Consequently, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage or solution to occur or become more pronounced are not to be construed as critical, required or essential features or elements of any or all of the claims.
Claims (20)

[0001] Method for controlling an electronic device, CHARACTERIZED by the fact that it comprises: detecting, with one or more motion sensors, a gesture that raises the electronic device as a function of at least: movement of the electronic device; rotation of the electronic device about a geometric axis; and whether an object is located near a surface of the electronic device at the end of the movement; and performing, through one or more processors operable with the one or more motion sensors, a control operation in response to the occurrence of the gesture that raises the electronic device.

[0002] Method, according to claim 1, CHARACTERIZED by the fact that it additionally comprises: detecting, with a motion sensor, the movement of the electronic device; determining, with the one or more processors, a distance by which the electronic device moves during the movement; detecting, with another motion sensor operable with the one or more processors, the rotation of the electronic device about the geometric axis; determining, with the one or more processors, an amount of rotation of the electronic device about the geometric axis; detecting, with one or more proximity sensors operable with the one or more processors, whether the object is located close to the electronic device; and determining, with the one or more processors, when the object is located close to the electronic device, that the gesture that raises the electronic device occurred by confirming both that the distance exceeds a predetermined distance threshold and that the amount of rotation exceeds a predetermined rotation threshold.

[0003] Method, according to claim 2, CHARACTERIZED by the fact that the function of at least additionally comprises: whether at least part of the movement was against a direction of gravity; and an orientation of the electronic device at an end of the movement.

[0004] Method, according to claim 3, CHARACTERIZED by the fact that it additionally comprises detecting, by the motion sensor, the direction of gravity, and wherein determining the occurrence of the gesture that raises the device additionally comprises confirming that at least part of the movement was against the direction of gravity.

[0005] Method, according to claim 4, CHARACTERIZED by the fact that determining the occurrence of the gesture that raises the device further comprises confirming that a component of the direction of gravity travels from a first predefined end of the electronic device to a second predefined end of the electronic device.

[0006] Method, according to claim 5, CHARACTERIZED by the fact that the first predefined end comprises an earpiece and the second predefined end comprises a microphone.

[0007] Method, according to claim 3, CHARACTERIZED by the fact that it additionally comprises determining, with the motion sensor, an acceleration that occurs during the movement, wherein determining the occurrence of the gesture that raises the device additionally comprises confirming that the acceleration exceeds a predetermined acceleration threshold.

[0008] Method, according to claim 7, CHARACTERIZED by the fact that the predetermined acceleration threshold is at least 0.5 meters per second squared.

[0009] Method, according to claim 1, CHARACTERIZED by the fact that the predetermined distance threshold is at least twenty centimeters.

[0010] Method, according to claim 1, CHARACTERIZED by the fact that the predetermined rotation threshold is at least forty-five degrees.
[0011] Method, according to claim 1, CHARACTERIZED by the fact that the control operation comprises transitioning a voice control interface mechanism from a first mode of operation to a second mode of operation.

[0012] Method, according to claim 1, CHARACTERIZED by the fact that it additionally comprises redirecting the audible output of the electronic device to an earpiece of the electronic device.

[0013] Apparatus, CHARACTERIZED by the fact that it comprises: one or more processors; a motion detector operable with the one or more processors; another motion detector operable with the one or more processors; and one or more proximity detectors operable with the one or more processors; the one or more processors to determine: movement of the apparatus from the motion detector; rotation of the apparatus about a geometric axis from the other motion detector; and whether an object is located close to the apparatus, from the one or more proximity detectors, at the end of the movement and rotation; the one or more processors to confirm that a gesture that raises the apparatus has occurred when: the movement exceeds a first predetermined threshold; the rotation exceeds a second predetermined threshold; and the object is located close to the apparatus; the one or more processors to perform a control operation in response to confirmation that the gesture that raises the apparatus has occurred.

[0014] Apparatus, according to claim 13, CHARACTERIZED by the fact that the motion detector comprises an accelerometer and the other motion detector comprises a gyroscope.

[0015] Apparatus, according to claim 13, CHARACTERIZED by the fact that the motion detector and the other motion detector comprise a common accelerometer.

[0016] Apparatus, according to claim 13, CHARACTERIZED by the fact that the one or more processors are to further determine a direction of gravity from the motion detector and to further confirm that the gesture that raises the apparatus has occurred when part or all of the movement occurred against the direction of gravity.

[0017] Apparatus, according to claim 16, CHARACTERIZED by the fact that the apparatus comprises an electronic device having an earpiece at a first end and a microphone at a second end, wherein the one or more processors further confirm that the gesture that raises the apparatus has occurred when a component of gravity is oriented from the first end to the second end.

[0018] Apparatus, according to claim 17, CHARACTERIZED by the fact that the one or more processors are to further determine an acceleration occurring during the movement and to further confirm that the gesture that raises the apparatus occurred when the acceleration exceeds a third predetermined threshold.

[0019] Apparatus, according to claim 18, CHARACTERIZED by the fact that: the first predetermined threshold is about twenty centimeters; the second predetermined threshold is about forty-five degrees; and the third predetermined threshold is about 0.5 meters per second squared.

[0020] Apparatus, according to claim 13, CHARACTERIZED by the fact that the control operation comprises transitioning a voice control interface mechanism to a different operating mode.