Method and apparatus for processing sound from a sound source
ABSTRACT
ACOUSTIC DISTANCE DETERMINATION SYSTEM USING ATMOSPHERIC DISPERSION. The present invention relates to a method and apparatus for processing sound (218) from a sound source (210). Sound (218) from the sound source (210) is detected. Harmonics (224) in the sound (218) from the sound source (210) are identified. A distance (222) to the sound source (210) is identified using harmonics (224) and a series of atmospheric conditions (226).
Publication number: BR102013012900B1
Application number: R102013012900-3
Filing date: 2013-05-24
Publication date: 2021-07-06
Inventors: Qin Jiang; Michael J. Daily; Richard Michael Kremer
Applicant: The Boeing Company
IPC main classification:
DESCRIPTION
BACKGROUND [0001] The present invention relates, in general, to acoustic systems and, in particular, to acoustic distance determination systems. Even more particularly, the present invention relates to a method and apparatus for identifying a distance to a sound source using passive acoustic sensors. [0002] In the operation of unmanned aerial vehicles, sensor systems are used to pilot the unmanned aerial vehicles during flight. These sensor systems can be used by an unmanned aerial vehicle to detect and avoid objects along the unmanned aerial vehicle's flight path. These objects can be, for example, another aircraft, structures, landmarks, terrain, or some other type of object. To detect these objects, the sensor system can be used to detect the distance of an object from the unmanned aerial vehicle over a period of time. [0003] Some sensor systems may include different types of sensors. Active sensor systems use actively transmitted signals to detect objects. The unmanned aerial vehicle may use an active sensor system such as a radar system. A radar system can use a pulse of electromagnetic radiation to identify the direction and distance of an object from the unmanned aerial vehicle. Distance is determined by the length of time the pulse takes to return from the object. [0004] This type of sensor system, however, may be undesirable for use in an unmanned aerial vehicle. For example, with a radar system, the emission of electromagnetic radiation makes it easier to identify the presence of the unmanned aerial vehicle. As a result, an unmanned aerial vehicle using a radar system can be detected by unwanted parties such as a hostile aircraft or a ground missile station. For example, a hostile aircraft or ground missile station can identify the presence of the unmanned aerial vehicle by detecting the pulse of electromagnetic radiation emitted by the unmanned aerial vehicle's radar system. This detection of the unmanned aerial vehicle may be undesirable when the unmanned aerial vehicle is used on a surveillance mission over hostile territory. [0005] In addition, a radar system may include more components than desired. For example, the components of a radar system may be larger and heavier than desired for an unmanned aerial vehicle. Additionally, a radar system can take up more space than desired in an unmanned aerial vehicle. Further, a radar system can use more energy than desired when transmitting electromagnetic radiation. [0006] Therefore, it would be desirable to have a method and apparatus that takes into account at least one of the problems discussed above as well as other possible problems. SUMMARY [0007] In an illustrative embodiment, a method for processing sound from a sound source is presented. Sound from the sound source is detected. Harmonics in the sound from the sound source are identified. A distance to the sound source is identified using the harmonics and a series of atmospheric conditions. [0008] In another illustrative embodiment, a method for managing the flight of a first unmanned aerial vehicle in relation to a second unmanned aerial vehicle is presented. A sound from the second unmanned aerial vehicle is detected. Harmonics in the sound from the second unmanned aerial vehicle are identified. A distance from the first unmanned aerial vehicle to the second unmanned aerial vehicle is identified using the harmonics and a series of atmospheric conditions. 
The flight of the first unmanned aerial vehicle is managed using the distance from the first unmanned aerial vehicle to the second unmanned aerial vehicle. [0009] In yet another illustrative mode, an apparatus comprises a sensor system and a sound processor. The sensor system is configured to detect sound from a sound source. The sound processor is configured to identify harmonics in the sound from the sound source. The sound processor is also configured to identify a distance to the sound source using harmonics and a range of atmospheric conditions. [00010] Aspects and functions can be obtained independently in various embodiments of the present invention or can be combined in other embodiments in which further details can be seen with reference to the description and drawings below. BRIEF DESCRIPTION OF THE DRAWINGS [00011] The new aspects considered as characteristic of the illustrative modalities are described in the attached claims. However, illustrative embodiments as well as a preferred mode of use, other objects and advantages thereof will be better understood by reference to the following detailed description of an illustrative embodiment of the present invention, when read in conjunction with the accompanying drawings, in the which: [00012] Figure 1 is an illustration of the environment of an aircraft according to an illustrative modality; [00013] Figure 2 is an illustration of a block diagram of a location system according to an illustrative embodiment; [00014] Figure 3 is an illustration of a block diagram of a distance identifier in a sound processor according to an illustrative embodiment; [00015] Figure 4 is an illustration of a surveillance environment according to an illustrative embodiment; [00016] Figure 5 is an illustration of a flowchart of a process for processing sound from a sound source according to an illustrative embodiment; [00017] Figure 6 is an illustration of a flowchart of a process to manage the flight of unmanned aerial vehicles according to an illustrative modality; and [00018] Figure 7 is an illustration of a data processing system according to an illustrative embodiment. DETAILED DESCRIPTION [00019] The illustrative modalities recognize and take into account one or more different considerations. For example, the illustrative modalities recognize and take into account that with an aircraft, identifying a distance to another aircraft can be useful in managing the aircraft's flight. For example, knowing the distance and direction from an unmanned aerial vehicle to another unmanned aerial vehicle can be useful to effect collision avoidance between unmanned aerial vehicles. This collision avoidance can be carried out automatically by a controller or an operator. [00020] The illustrative embodiments recognize and take into account that when an active sensor system such as a radar system is undesirable, the passive sensor system can be used. For example, a passive sensor system can be a passive acoustic sensor system. [00021] The illustrative modalities recognize and take into account that passive acoustic sensors, such as microphones, can be used in identifying the direction to a sound source from passive acoustic sensor systems. For example, the sound detected by an acoustic sensor system can be used to identify direction information such as a bearing, azimuth and elevation, or some combination thereof for the sound source. 
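The direction-estimation step itself relies on known processes, so the following is only a generic illustration of how a pair of passive microphones can yield a bearing from a time difference of arrival; it is not the specific technique of the present disclosure. The 0.5 m microphone spacing, 48 kHz sample rate, far-field assumption, and use of cross-correlation are all assumptions made only for this example.

```python
# Generic illustration only: bearing of a sound source from the time difference of
# arrival (TDOA) between two passive microphones. Spacing, sample rate, and the
# cross-correlation peak picking are assumptions for the example, not the
# direction-estimation technique of this disclosure.
import numpy as np

def estimate_bearing(mic_a: np.ndarray, mic_b: np.ndarray,
                     sample_rate: float, mic_spacing_m: float,
                     speed_of_sound: float = 343.0) -> float:
    """Bearing (degrees) of a far-field source relative to the microphone-pair axis."""
    # Cross-correlate the two channels to find the lag (in samples) of best alignment.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(mic_b) - 1)
    tdoa = lag_samples / sample_rate                       # time difference of arrival (s)
    # Far-field approximation: path difference = spacing * cos(theta).
    cos_theta = np.clip(tdoa * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

# Example with a synthetic broadband burst delayed by 0.5 ms at one microphone.
rng = np.random.default_rng(0)
src = rng.standard_normal(4800)
delay = 24                                                 # 0.5 ms at 48 kHz
print(estimate_bearing(src[delay:], src[:-delay], 48_000.0, mic_spacing_m=0.5))
```

In a deployed system, angle-of-arrival algorithms such as those mentioned later in the detailed description would take the place of this simple correlation.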
[00022] The illustrative modalities recognize and take into account that the acoustic sensor systems that are currently available are not able to identify a distance to a sound source when the acoustic sensor system is a passive acoustic sensor system. [00023] Furthermore, the illustrative modalities recognize and take into account that the problem with passive acoustic sensor systems that are currently available applies to other uses of these systems in other environments in addition to aircraft management. For example, the illustrative modalities recognize and take into account that problems with identifying a distance to a sound source are present with passive acoustic sensor systems that are configured to monitor sounds such as gun fire, explosions, or both. [00024] Currently, a passive acoustic sensor system is able to identify the presence of a sound source, such as a firearm or an explosive device from the comparison of sounds that are detected with acoustic signatures of gunshot and explosions. The passive acoustic sensing system is also capable of identifying direction from a passive acoustic sensing system to a sound source. However, the passive acoustic sensor system is not able to identify a distance. As a result, the location of a weapon fire, explosion, or both cannot be identified using a currently available passive acoustic sensing system. [00025] Thus, the illustrative modalities provide a method and apparatus to identify a sound source. In particular, the illustrative embodiments provide a method and apparatus for identifying a distance to a sound source with a passive acoustic sensing system. In an illustrative modality, a sound is detected from the sound source. Harmonics in the sound of the sound source are identified. The distance to the sound source is identified using harmonics and a range of atmospheric conditions. [00026] With reference now to the figures and, in particular, with reference to figure 1, an illustration of the environment of an aircraft is described according to an illustrative embodiment. Aircraft environment 100 is an example of an environment in which information about a distance from an aircraft to a sound source can be identified. Specifically, aircraft environment 100 is an example of an environment in which information about a distance from the aircraft to the other aircraft can be identified. [00027] In this illustrative example, the aircraft environment 100 includes an aircraft in the form of the unmanned aerial vehicle 102, the unmanned aerial vehicle 104, the unmanned aerial vehicle 106 and the manned airplane 108. In these illustrative examples, the aerial vehicle unmanned aerial vehicle 102 includes acoustic sensor system 110, unmanned aerial vehicle 104 includes acoustic sensor system 112, and unmanned aerial vehicle 106 includes acoustic sensor system 114. [00028] In these illustrative examples, these acoustic sensor systems can be deployed according to an illustrative modality. When deployed according to an illustrative modality, these sensor systems provide the ability to identify a distance to a sound source. [00029] For example, the unmanned aerial vehicle 104 can be a source of sound. The unmanned aerial vehicle 104 generates sound 116. The sound 116 may be generated by an engine of the unmanned aerial vehicle 104. The sound 116 is detected by the acoustic sensor system 110. The acoustic sensor system 110 is configured to identify harmonics in the sound 116 from the unmanned aerial vehicle 104. 
[00030] Additionally, the acoustic sensor system 110 is configured to identify the distance 118 to the unmanned aerial vehicle 104 from the unmanned aerial vehicle 102. The distance 118 is identified by the acoustic sensor system 110 using harmonics identified from the sound 116 of the unmanned aerial vehicle 104 and a series of atmospheric conditions. [00031] A number of atmospheric conditions can affect the speed at which harmonics travel in the air, in these illustrative examples. In particular, each harmonic in tone 116 travels at a different speed than the other harmonics in tone 116. [00032] Thereby, the distance 118 can be identified from the unmanned aerial vehicle 102 to the unmanned aerial vehicle 104. In a similar way, the unmanned aerial vehicle 104 can detect the sound 120 generated by the manned aircraft 108. The acoustic sensor system 112 can identify harmonics in the sound 120 and identify the distance 122 from the unmanned aerial vehicle 104 to the manned aircraft 108 using the harmonics and a series of atmospheric conditions. Similarly, the acoustic sensor system 114 of the unmanned aerial vehicle 106 may also use the sound 116 generated by the unmanned aerial vehicle 104 and the sound 120 generated by the manned airplane 108 to detect distances to such aircraft. This distance information can be especially useful for managing the operation of unmanned aerial vehicle 102, unmanned aerial vehicle 104, and unmanned aerial vehicle 106. [00033] For example, operator 124 may use controller 126 to control the operation of these unmanned aerial vehicles. In this illustrative example, operator 124 and controller 126 are located on floor 130 of building 128. [00034] In these illustrative examples, controller 126 is a hardware device and may include software. Controller 126 can be deployed using one or more computers and is configured to manage the operation of unmanned aerial vehicle 102, unmanned aerial vehicle 104, and unmanned aerial vehicle 106 in these illustrative examples. [00035] With information such as the distance 118 between the unmanned aerial vehicle 102 and the unmanned aerial vehicle 104, the operator 124 can use the distance 118 to perform collision avoidance. For example, operator 124 can determine whether the distance 118 between unmanned aerial vehicle 102 and unmanned aerial vehicle 104 is an unwanted distance. Collision avoidance can also be performed with respect to manned airplane 108. [00036] In another illustrative example, one or more of the unmanned aerial vehicle 102, the unmanned aerial vehicle 104 and the unmanned aerial vehicle 106 can automatically manage their flight path to avoid collision with each other or with another aircraft . In this example, operator 124 does not need to perform collision avoidance. [00037] In addition, the collision avoidance performance in this illustrative example can be effected with information such as azimuth and elevation to the sound source in addition to distance. With this additional information, the locations of a sound source can also be identified. [00038] In these illustrative examples, azimuth is an angular measurement in a spherical coordinate system on a horizontal reference plane. Elevation is an angular measurement of the distance between a sound source and a horizontal reference plane. Location can be identified in two dimensions or three dimensions depending on the amount of additional information that exists. 
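For illustration, the following minimal sketch shows how a distance, an azimuth, and an elevation of the kind described above can be combined into a three-dimensional location relative to the sensing platform. The axis convention (x to the east, y to the north, z up) is an assumption made only for this example and is not specified in the description.

```python
# Minimal sketch, assuming an east-north-up frame: combine a distance, an azimuth,
# and an elevation (as described for a sound source location) into a 3-D position
# relative to the platform.
import math

def location_from_range_azimuth_elevation(distance_m: float, azimuth_deg: float,
                                          elevation_deg: float) -> tuple[float, float, float]:
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = distance_m * math.cos(el)     # projection onto the horizontal reference plane
    x = horizontal * math.sin(az)              # east component
    y = horizontal * math.cos(az)              # north component
    z = distance_m * math.sin(el)              # height above the horizontal reference plane
    return (x, y, z)

# Example: a source 1200 m away at 30 degrees azimuth and 5 degrees elevation.
print(location_from_range_azimuth_elevation(1200.0, 30.0, 5.0))
```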
[00039] The illustration of the aircraft environment 100 in Figure 1 is provided as an example of an aircraft environment in which an acoustic sensor system can be deployed. Of course, the different illustrative embodiments can be implemented in other environments. For example, without limitation, acoustic sensor systems can be deployed in environments such as security environments, surveillance environments, and other suitable environments. In these environments, identifying gun fire and explosion locations can be helpful. In particular, identifying the location of a firearm or explosive device can be useful when conducting operations in these types of environments. With the ability to identify distances in addition to azimuth, elevation, or some combination thereof, sound source locations can be more easily identified using passive acoustic sensing systems. Additionally, any type of directional measurement system can be used in addition to or in place of a spherical system in which azimuth and elevation define the directions. [00040] Referring now to Figure 2, an illustration of a block diagram of a location system is described according to an illustrative embodiment. In this illustrative example, location system 200 may be deployed in various environments, which include aircraft environment 100 in Figure 1. Location system 200 may be an acoustic distance determination system that identifies distances to sources of sound. The location system 200 is comprised of a series of acoustic sensor systems 202 in this depicted example. As used here, a “series” when used with reference to items means one or more items. For example, a series of acoustic sensor systems 202 is one or more acoustic sensor systems. [00041] In this illustrative example, the acoustic sensor system 204 in the series of acoustic sensor systems 202 is associated with the platform 206. The platform 206 can be considered part of the location system 200 or a separate component of the location system 200 depending on the particular implementation. [00042] The platform 206 can take a number of different forms. For example, platform 206 may take the form of an aircraft such as an aircraft depicted in aircraft environment 100 in Figure 1. Additionally, platform 206 may be a mobile platform, a stationary platform, a land-based structure, an aquatic structure, or a space-based structure. More specifically, platform 206 can be a surface ship, a tank, a helicopter, an armored military transport vehicle, a train, a spacecraft, a space station, a satellite, a submarine, an automobile, a power plant, a bridge, a dam, a manufacturing facility, a building, or other suitable object. [00043] In this illustrative example, the acoustic sensor system 204 can be used to generate location information 208 for the sound source 210. As described, the acoustic sensor system 204 can comprise acoustic sensor group 212 and sound processor 214. [00044] In particular, the acoustic sensor group 212 comprises one or more acoustic sensors. In this illustrative example, the acoustic sensor group 212 takes the form of passive acoustic sensor group 216. In other words, the acoustic sensor group 212 does not transmit sound signals or other types of energy during detection of sound 218 generated by the sound source 210. With the use of passive acoustic sensor group 216, energy is not emitted by acoustic sensor group 212. As a result, the detectability of platform 206 may be reduced. 
[00045] In these illustrative examples, the acoustic sensor group 212 can be one or more microphones. These microphones can take many forms. For example, the microphones can be omnidirectional microphones, unidirectional microphones, cardioid microphones, and other suitable types of microphones. [00046] In these illustrative examples, sound source 210 can take a number of different forms. For example, sound source 210 may be an aircraft, an unmanned aerial vehicle, a firearm, or another suitable type of sound source that may be of interest. [00047] In these illustrative examples, sound processor 214 is hardware and may also include software. As described, sound processor 214 can be deployed in a number of different ways. For example, sound processor 214 may be deployed in computer system 219. Computer system 219 may be one or more computers. When more than one computer is present, those computers can be in communication with each other through a communication medium such as a network. Sound processor 214 is configured to detect and process sound 218 to identify location information for sound source 210. [00048] The operations performed by sound processor 214 may be implemented in software, hardware, or some combination thereof. When software is used, the operations performed by sound processor 214 may be implemented in program code configured to run on a processor. When hardware is used, the hardware may include circuits that operate to perform the operations for the sound processor 214. [00049] In the illustrative examples, the hardware may take the form of a circuit system, an integrated circuit, an application-specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a series of operations. With a programmable logic device, the device is configured to perform the series of operations. The device can be reconfigured later, or it can be permanently configured to perform the series of operations. Examples of programmable logic devices include, for example, a programmable logic array, programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. Additionally, the processes can be implemented in organic components integrated with inorganic components and/or can be composed entirely of organic components, excluding a human being. For example, the processes can be deployed as circuits in organic semiconductors. [00050] In this illustrative example, sound processor 214 includes distance identifier 220. Distance identifier 220 is hardware, software, or both, located within sound processor 214. Distance identifier 220 is configured to identify the distance 222 to sound source 210. In these illustrative examples, location information 208 comprises distance 222, but may also include other information depending on the deployment. [00051] In particular, the distance 222 to the sound source 210 is the distance from the acoustic sensor group 212 to the sound source 210. More specifically, the distance 222 can be a distance from an acoustic sensor in the acoustic sensor group 212 to the sound source 210. In these illustrative examples, with the acoustic sensor system 204 being associated with the platform 206, the distance 222 can also be considered to be the distance between the platform 206 and the sound source 210. 
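As a structural sketch only, the following mirrors the components named above for acoustic sensor system 204: sound data from acoustic sensor group 212, atmospheric conditions from atmospheric condition sensor system 225, and location information 208 produced by distance identifier 220 within sound processor 214. The field names, units, and types are assumptions chosen to make the relationships explicit; they are not taken from this description.

```python
# Structural sketch of the Figure 2 components; field names and units are assumptions.
from dataclasses import dataclass

@dataclass
class AtmosphericConditions:            # the series of atmospheric conditions 226
    temperature_k: float
    relative_humidity_pct: float
    pressure_kpa: float

@dataclass
class SoundData:                        # sound data 306 produced by acoustic sensor group 212
    samples: list[float]
    sample_rate_hz: float

@dataclass
class LocationInformation:              # location information 208
    distance_m: float                   # distance 222
    azimuth_deg: float | None = None    # azimuth 234, if a direction identifier is present
    elevation_deg: float | None = None  # elevation 236

class DistanceIdentifier:               # distance identifier 220 inside sound processor 214
    def identify(self, sound: SoundData,
                 conditions: AtmosphericConditions) -> LocationInformation:
        raise NotImplementedError("filled in by the harmonic and dispersion steps described below")
```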
[00052] The atmospheric condition sensor system 225 is a hardware component of the acoustic sensor system 204 and is configured to detect a series of atmospheric conditions 226 in the environment around the acoustic sensor system 204. The series of atmospheric conditions 226 can be, for example, at least one of temperature, humidity, pressure, and other suitable atmospheric conditions. [00053] As used here, the expression “at least one of”, when used with a list of items, means that different combinations of one or more of the listed items may be used, and perhaps only one of each item in the list is needed. For example, “at least one of item A, item B, and item C” may include, without limitation, item A or item A and item B. This example may also include item A, item B, and item C, or item B and item C. [00054] In this illustrative example, sound processor 214 is configured to process sound 218 detected by acoustic sensor group 212. When sound 218 is detected by acoustic sensor group 212, sound 218 is detected by one or more acoustic sensors of the acoustic sensor group 212. [00055] Sound processor 214 is configured to identify harmonics 224 in sound 218 from sound source 210. In these illustrative examples, sound source 210 can be any sound source that generates harmonics 224 in sound 218. Further, the sound processor 214 is configured to identify the distance 222 to the sound source 210 using harmonics 224 and the series of atmospheric conditions 226. The series of atmospheric conditions 226 is one or more conditions in the atmosphere that can cause atmospheric dispersion in a way that affects the propagation of harmonics 224. [00056] A harmonic in harmonics 224 is one of the frequencies in sound 218. In particular, this frequency can be an integer multiple of a fundamental frequency. The series of atmospheric conditions 226 can affect the speed at which the harmonic travels, thereby causing atmospheric dispersion. As a result, sound processor 214 is able to identify the distance 222 to sound source 210 using harmonics 224 and taking into account the way in which the series of atmospheric conditions 226 affects harmonics 224. [00057] In these illustrative examples, sound processor 214 may also include other components to generate additional information for location information 208 in addition to distance 222. For example, sound processor 214 may also include direction identifier 229. Direction identifier 229 is configured to identify direction 230 to sound source 210 from acoustic sensor group 212. Direction 230 can be two-dimensional or three-dimensional depending on the particular deployment. [00058] In these illustrative examples, the direction identifier 229 can implement any series of known direction estimation processes. For example, direction identifier 229 may use one or more angle-of-arrival estimation algorithms with acoustic sensor group 212, such as multiple signal classification and emitter location parameter estimation algorithms. The acoustic sensor group 212 can be arranged in an array such as a planar array. [00059] Direction identifier 229 can also use other types of algorithms such as a minimum variance distortionless response algorithm. In yet another illustrative example, direction identifier 229 can estimate directions using rotational invariance techniques. [00060] In these illustrative examples, direction identifier 229 may comprise at least one of azimuth identifier 231 and elevation identifier 232. 
Azimuth identifier 231 is configured to identify azimuth 234 in location information 208. Azimuth 234 is the azimuth to sound source 210. Azimuth 234 is a direction relative to a horizontal plane in these illustrative examples. Azimuth 234 can be measured in degrees in these illustrative examples. [00061] Elevation identifier 232 is configured to identify elevation 236 in location information 208. Elevation 236 is an elevation to sound source 210. In particular, elevation 236 is an elevation from the perspective of platform 206 in these illustrative examples. Elevation 236 is an altitude or angular change from the horizontal plane for which azimuth 234 is identified in these illustrative examples. Elevation 236 can also be measured in degrees. [00062] Azimuth 234 and elevation 236 can form a bearing for direction 230. In some illustrative examples, the bearing may be in two dimensions, such as a compass direction with angle information. In other illustrative examples, azimuth 234 and elevation 236 can be measured using other units such as mils, gradians, or other suitable units. [00063] With at least one of azimuth 234 and elevation 236, along with distance 222, location 238 of sound source 210 can be identified using location information 208. If azimuth 234, elevation 236, and distance 222 are present, location 238 may be in three-dimensional space. If only one of azimuth 234 and elevation 236 is present with distance 222, then location 238 can be a two-dimensional location in a plane. [00064] If not all three of these measurement types are present in location information 208, other acoustic sensor systems from the series of acoustic sensor systems 202 can also detect sound 218 and generate locations that can be used to identify location 238 of the sound source 210 in three dimensions. Additionally, with the use of multiple acoustic sensor systems in location system 200, a surveillance system can be configured to identify movement of sound source 210. [00065] In an illustrative example, sound processor 214 can generate alert 239 based on distance 222. For example, when distance 222 is used for collision avoidance, alert 239 is generated when distance 222 is less than a desired distance between platform 206 and sound source 210. Alert 239 may be based solely on distance 222, but may also be based on other conditions. For example, alert 239 can be generated when distance 222 is less than a desired distance and when both platform 206 and sound source 210 are at the same altitude. This form of alert 239 can be useful for an operator to manage multiple unmanned aerial vehicles. [00066] In another illustrative example, sound processor 214 may generate map 240 with an identification of location 238 of sound source 210. Map 240 may be used by an operator of platform 206 or some other operator at another location to identify a location of sound source 210. Map 240 may be useful when the operator of platform 206 performs surveillance or other types of monitoring operations. [00067] In these illustrative examples, the acoustic sensor system 204 can identify the location 238 of the sound source 210 faster than other passive systems such as passive radar. As a result, the acoustic sensor system 204 can also be used to track motion. Acoustic sensor system 204 does not need as much time to identify a location of sound source 210 as other types of passive sensor systems. 
With other passive sensor systems, by the time a location of sound source 210 has been identified, sound source 210 may have traveled to another location. As a result, the acoustic sensor system 204 provides the ability to track the movement of the sound source 210 more accurately compared to passive sensor systems that are currently available, such as passive radar systems. Also, the cost can be lower with passive acoustic sensor systems, as passive acoustic sensors can cost less than passive radar sensors. [00068] Turning now to Figure 3, an illustration of a block diagram of a distance identifier in a sound processor is described according to an illustrative embodiment. In this illustrative example, examples of components that can be deployed in distance identifier 220 of sound processor 214 are shown. In this illustrative example, distance identifier 220 includes harmonic identifier 300, dispersion identifier 302, and distance estimator 304. [00069] Harmonic identifier 300 receives sound data 306 from acoustic sensor group 212. Sound data 306 is the data generated by acoustic sensor group 212 in response to detection of sound 218. [00070] Harmonic identifier 300 is configured to identify harmonics 224 in sound 218 using sound data 306. In these illustrative examples, harmonic identifier 300 performs a Fourier transform on sound data 306. This transform converts sound 218 from a time domain into a frequency domain. In the frequency domain, peaks in frequencies are identified. These peaks are used to identify harmonics 224 in sound data 306 in these illustrative examples. Harmonic identifier 300 generates harmonic data 307 for the harmonics 224 identified in sound data 306. [00071] In these illustrative examples, dispersion identifier 302 receives atmosphere data 308 from the atmospheric condition sensor system 225. Atmosphere data 308 is the data generated by the detection of the series of atmospheric conditions 226 carried out by the atmospheric condition sensor system 225. Atmosphere data 308 includes information about the series of atmospheric conditions 226 as detected by the atmospheric condition sensor system 225. [00072] As described, dispersion identifier 302 is configured to identify changes in the propagation speed of sound 218 using atmosphere data 308. The changes in the propagation speed of sound 218 are identified using the series of atmospheric conditions 226 in the atmosphere data 308. [00073] These atmospheric conditions can include at least one of temperature, humidity, pressure, and other suitable atmospheric conditions. In these illustrative examples, the speed of sound 218 changes based on the series of atmospheric conditions 226. Additionally, the speed of sound is also different for different frequencies and for different harmonics in harmonics 224. As a result, the speed of each harmonic in harmonics 224 is a function of the series of atmospheric conditions 226 and the frequency of the particular harmonic. Frequency information about the harmonics can be received from harmonic identifier 300. [00074] Dispersion identifier 302 generates a series of dispersion factors 310 from the series of atmospheric conditions 226 in the atmosphere data 308. The series of dispersion factors 310 is used by distance estimator 304 to make adjustments to the speeds for harmonics 224. These adjustments are designed to account for the series of atmospheric conditions 226 that causes the atmospheric dispersion of sound 218. 
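A minimal sketch of the harmonic-identification step performed by harmonic identifier 300 follows: transform the sampled sound to the frequency domain, keep local spectral peaks above a threshold, and report their frequencies. The Hann window and the relative peak threshold are assumptions added for the example and are not requirements of this description.

```python
# Sketch of harmonic identification: FFT, local spectral peaks, peak frequencies.
# The window and threshold are assumptions for the example.
import numpy as np

def identify_harmonics(samples: np.ndarray, sample_rate: float,
                       min_rel_magnitude: float = 0.05) -> list[float]:
    windowed = samples * np.hanning(len(samples))
    spectrum = np.fft.rfft(windowed)                       # F(k), frequency-domain signal
    magnitude = np.abs(spectrum)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    threshold = min_rel_magnitude * magnitude.max()
    peaks = []
    for k in range(1, len(magnitude) - 1):                 # local spectral peaks
        if magnitude[k] > magnitude[k - 1] and magnitude[k] > magnitude[k + 1] \
                and magnitude[k] >= threshold:
            peaks.append(float(freqs[k]))
    return peaks

# Example: a 100 Hz fundamental with two harmonics.
fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
sound = np.sin(2*np.pi*100*t) + 0.5*np.sin(2*np.pi*200*t) + 0.25*np.sin(2*np.pi*300*t)
print(identify_harmonics(sound, fs))                       # approximately [100.0, 200.0, 300.0]
```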
[00075] In these illustrative examples, distance estimator 304 uses harmonics 224 and the series of dispersion factors 310 to identify the distance 222 to the sound source 210. In identifying the distance 222, distance estimator 304 identifies a series of phase delays 312 for harmonics 224. A phase delay in the series of phase delays 312 is a phase delay between two harmonics in harmonics 224. Using the series of phase delays 312, distance estimator 304 identifies a series of time delays 314 for harmonics 224. [00076] Distance estimator 304 identifies a series of distances 316 using the series of time delays 314 and the series of dispersion factors 310. In these illustrative examples, a distance in the series of distances 316 can be identified for any pair of harmonics in harmonics 224 of the harmonic data 307. As a result, if more than one harmonic pair is selected from harmonics 224, the series of distances 316 will have more than one distance in this illustrative example. When there is more than one distance, the series of distances 316 can be averaged or analyzed in some other way to identify the distance 222 in these illustrative examples. [00077] For example, distance estimator 304 can identify the first harmonic 318 and the second harmonic 320 in harmonics 224 for processing. In these illustrative examples, the Fourier transform of a sound signal from a time domain to a frequency domain is as follows: F(k) = Σ_{n=0..N-1} s(n) e^(-j2πnk/N) (Equation (1)), where F(k) is the frequency-domain sound signal, s(n) is the time-domain sampled sound signal, N is the sampled length of the sampled sound signal, n is the time index of the sampled sound, and k is the frequency index of the harmonic frequencies. [00078] In the frequency domain, a series of spectral peaks PK is identified from the frequency-domain signal F(k). Harmonic frequencies are extracted from the series of spectral peaks. Each spectral peak is related to three Fourier coefficients. If F(k) is a spectral peak, the harmonic frequency is estimated from these three coefficients, where f_h is the harmonic frequency, f_k is the discrete frequency of frequency index k, f_s is the sampling frequency, Δf is the frequency difference between two adjacent discrete frequencies computed by the fast Fourier transform, and M is the number of harmonic frequencies extracted. In this example, Equation (3) is used to estimate the frequency between two discrete frequencies and provides a better resolution than the fast Fourier transform result in Equation (1). [00079] Distance estimator 304 also identifies phase delay 324. As described, phase delay 324 in the series of phase delays 312 is a phase delay between the first harmonic 318 and the second harmonic 320. [00080] In these illustrative examples, during the calculation of a phase delay, the phase of a harmonic frequency f_n is calculated from its complex Fourier transform coefficient F(k) = a + jb, giving Φ = arctan(b/a). [00081] The phase delay of the two frequencies is given by ΔΦ = Φ_n - Φ_1, where Φ is the phase of the complex Fourier transform coefficient F(k), a is the real part of F(k), b is the imaginary part of F(k), Φ_n is the phase of frequency f_n, Φ_1 is the phase of frequency f_1, f_1 is the reference frequency, and f_n is the selected harmonic frequency. With the phase delay, the time delay in propagation between the two frequencies can be computed.
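A short sketch of the phase-delay step follows: take the complex Fourier coefficients at a reference harmonic and at a selected harmonic and form the difference of their phases. Using a library angle function in place of an explicit arctan(b/a), and wrapping the result into (-π, π], are implementation conveniences assumed for the example.

```python
# Sketch of the phase delay between two harmonic frequencies of the same signal.
import numpy as np

def phase_delay(samples: np.ndarray, sample_rate: float,
                f_ref: float, f_harmonic: float) -> float:
    """Phase difference (radians) between a selected harmonic and a reference harmonic."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    k_ref = int(np.argmin(np.abs(freqs - f_ref)))          # nearest bin to each harmonic
    k_harm = int(np.argmin(np.abs(freqs - f_harmonic)))
    phi_ref = np.angle(spectrum[k_ref])                    # phase of F(k), i.e. arctan(b / a)
    phi_harm = np.angle(spectrum[k_harm])
    # Wrap into (-pi, pi] so the result is not off by whole cycles of 2*pi.
    return float((phi_harm - phi_ref + np.pi) % (2 * np.pi) - np.pi)

# Example: the 300 Hz component carries an extra 0.4 rad of phase.
fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
sound = np.sin(2*np.pi*100*t) + 0.5*np.sin(2*np.pi*300*t + 0.4)
print(phase_delay(sound, fs, 100.0, 300.0))                # approximately 0.4
```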
[00082] In these illustrative examples, f_1 and f_n are selected. At least two harmonic frequencies are used to calculate a phase difference and a dispersion. With both frequencies, one of the two can be used as the reference frequency for estimating the phase difference and the dispersion. Although this example uses two frequencies, other numbers of frequencies, such as 3, 6, 8, or some other number of frequencies, can be used. [00083] Time delay 326 is identified by distance estimator 304 from phase delay 324, and it represents the time delay between the first harmonic 318 and the second harmonic 320. Time delay 326 is the amount of time that passes between receiving the first harmonic 318 and receiving the second harmonic 320 in these illustrative examples. [00084] In identifying a time delay, the time delay of the two harmonic frequencies is computed from their phase delays, where Δt is the time delay between the two harmonic frequencies, t_1 is the propagation time for the reference frequency f_1, and t_n is the propagation time for the frequency f_n selected by the harmonic frequency index n. [00085] Distance estimator 304 identifies dispersion factor 322. Dispersion factor 322 in the series of dispersion factors 310 is a dispersion factor for the first harmonic 318 and the second harmonic 320. In these illustrative examples, dispersion factor 322 may be an atmospheric dispersion factor. [00086] In an illustrative example, in the identification of the series of dispersion factors 310, the atmospheric dispersion of harmonics 224 is identified. According to the principles of acoustic physics, the speed of sound propagation in the atmosphere changes with the series of atmospheric conditions 226 and with the frequency of the sound due to the relaxation of nitrogen and oxygen. The speed of sound is therefore a function of the sound frequencies for given atmospheric conditions. In this function, C(f_n) is the speed of sound for the different frequencies, f_n is any harmonic frequency of the sound signal, C(f_1) is the speed of sound for the reference frequency, DF(f_n, f_1) is a ratio of propagation velocities at two different harmonic frequencies as defined in equation (9), λ is the wavelength of the sound waves, and α is an atmospheric attenuation coefficient that depends on atmospheric temperature, humidity, and pressure. [00087] In these illustrative examples, the atmospheric attenuation coefficient is calculated using equations in which p_a is the atmospheric pressure, p_r is the atmospheric pressure reference value (1 atm), F = f/p_s, F_{r,O} = f_{r,O}/p_s, and F_{r,N} = f_{r,N}/p_s are the frequencies scaled by atmospheric pressure, f is the frequency in Hz, T is the atmospheric temperature, and T_r is the reference temperature. T_r is 293.15 degrees K in this illustrative example. [00088] The scaled relaxation frequencies for oxygen and nitrogen are calculated from the atmospheric pressure, the atmospheric temperature, and the absolute humidity h. Absolute humidity is computed from the relative humidity h_r and the saturation vapor pressure P_sat. [00089] The acoustic dispersion factor of the frequencies f_n and f_1 is defined in terms of C_n, the velocity of sound for the harmonic frequency identified by the harmonic frequency index n, C_1, the velocity of sound for the harmonic reference frequency f_1, and ΔC, the velocity difference between the harmonic frequency f_n and the reference frequency f_1. The acoustic dispersion factor is used to estimate the distance between the acoustic source and the sensor.
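The following sketch computes the atmospheric quantities used by dispersion identifier 302. The absolute-humidity and relaxation-frequency expressions follow the commonly used ISO 9613-1 forms; treating them as equivalent to the equations referenced above is an assumption, since those equations are not reproduced here. The dispersion factor is likewise sketched as the relative velocity difference (C_n - C_1)/C_1, which uses the quantities named in paragraph [00089] but is not necessarily the exact form of equation (9).

```python
# Sketch of atmospheric quantities for dispersion identification. The humidity and
# relaxation-frequency formulas follow ISO 9613-1 as an assumed stand-in for the
# equations referenced in the text; the dispersion factor form is also an assumption.
import math

T_REF = 293.15      # reference temperature, K (as stated in the text)
P_REF = 101.325     # reference pressure, kPa (1 atm)

def absolute_humidity(rel_humidity_pct: float, temp_k: float, pressure_kpa: float) -> float:
    """Molar concentration of water vapour (%), from relative humidity and saturation pressure."""
    p_sat_ratio = 10.0 ** (-6.8346 * (273.16 / temp_k) ** 1.261 + 4.6151)
    return rel_humidity_pct * p_sat_ratio * (P_REF / pressure_kpa)

def relaxation_frequencies(temp_k: float, pressure_kpa: float, h_abs: float) -> tuple[float, float]:
    """Pressure-scaled relaxation frequencies (Hz) for oxygen and nitrogen."""
    p = pressure_kpa / P_REF
    fr_o = p * (24.0 + 4.04e4 * h_abs * (0.02 + h_abs) / (0.391 + h_abs))
    fr_n = p * (temp_k / T_REF) ** -0.5 * (
        9.0 + 280.0 * h_abs * math.exp(-4.170 * ((temp_k / T_REF) ** (-1.0 / 3.0) - 1.0)))
    return fr_o, fr_n

def dispersion_factor(c_harmonic: float, c_reference: float) -> float:
    """Assumed form: DF = (C_n - C_1) / C_1, the velocity difference relative to the reference."""
    return (c_harmonic - c_reference) / c_reference

# Example at 20 degrees C, 50 % relative humidity, 1 atm.
h = absolute_humidity(50.0, 293.15, 101.325)
print(relaxation_frequencies(293.15, 101.325, h))   # roughly (3.5e4, 3.3e2) Hz
```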
[00090] Using this information, distance estimator 304 identifies the distance 328 in the series of distances 316. In the identification of the distance 328, the time delay of the frequencies f_1 and f_n can be expressed as a function of D, the distance between the acoustic source and the sensor, and of the propagation velocities of the two frequencies. Solving this relationship, the distance D can be computed from the time delay and the dispersion factor. [00091] The illustration of the location system 200 and the different components of the location system 200 in Figure 2 and in Figure 3 is not intended to impose physical or architectural limitations on the way in which an illustrative embodiment can be implemented. Other components in addition to or in place of those illustrated can be used. Some components may be unnecessary. Furthermore, blocks are presented to illustrate some functional components. One or more of these blocks can be combined, divided, or combined and divided into different blocks when deployed in an illustrative embodiment. [00092] For example, in some illustrative embodiments, the calculation of phase delays and time delays can be performed by harmonic identifier 300 instead of being performed by distance estimator 304. In addition, the identification of the series of distances 316 can be performed using operations other than those described in relation to Figure 3 that consider the velocity of harmonics 224 to be affected by the series of atmospheric conditions 226. [00093] Turning now to Figure 4, an illustration of a surveillance environment is described according to an illustrative embodiment. Surveillance environment 400 is an example of another environment in which location system 200 can be used. [00094] In this illustrative example, surveillance environment 400 is a top-down view of a portion of city 402. In this illustrative example, city 402 includes buildings 404, 406, 408, and 410. City 402 also includes park 412. Roads 414, 416, and 418 are also present in city 402. Vehicle 420 is located on road 414 between building 404 and building 406. [00095] In this illustrative example, vehicle 420 is an example of platform 206. Vehicle 420 may be a police van or other suitable type of vehicle. In this illustrative example, acoustic sensor system 422 is associated with vehicle 420. Acoustic sensor system 422 is another example of an acoustic sensor system in the series of acoustic sensor systems 202 of location system 200 in Figure 2. With acoustic sensor system 422, vehicle 420 can carry out surveillance of sound sources in city 402. [00096] As another illustrative example, the acoustic sensor system 424 is associated with the building 410. The acoustic sensor system 424 is another example of an acoustic sensor system in the series of acoustic sensor systems 202 of the location system 200 in Figure 2. [00097] For example, if weapon 426 generates sound 428, sound 428 can be detected by one or both of acoustic sensor system 422 and acoustic sensor system 424. In these illustrative examples, sound 428 can be multiple gunshots. Acoustic sensor system 422 can identify distance 430 from acoustic sensor system 422 to weapon 426. Acoustic sensor system 424 can identify distance 432 between acoustic sensor system 424 and weapon 426. [00098] The locations of the acoustic sensor system 422 and the acoustic sensor system 424, along with the distance 430 and the distance 432, can be used to identify the location of the weapon 426. This location can be identified in two dimensions and displayed on a map depending on the particular deployment. 
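A final sketch ties the pieces together. The distance function solves the relation Δt = D/C_1 - D/C_n for D, which is algebraically consistent with the description of the inter-harmonic time delay, although the closed form used in this disclosure is not reproduced here. The two-sensor localization mirrors the Figure 4 scenario, in which two measured distances constrain the source to the intersection of two circles; the sensor coordinates, speeds, and distances in the examples are invented for illustration.

```python
# Sketch: distance from an inter-harmonic time delay, and a 2-D location from two
# sensor-to-source distances (the Figure 4 scenario). Example numbers are illustrative.
import math

def distance_from_dispersion(time_delay_s: float, c_ref: float, c_harmonic: float) -> float:
    """Distance to the source from the inter-harmonic time delay and the two sound speeds."""
    # delta_t = D/c_ref - D/c_harmonic  ->  D = delta_t * c_ref * c_harmonic / (c_harmonic - c_ref)
    return time_delay_s * c_ref * c_harmonic / (c_harmonic - c_ref)

def locate_from_two_ranges(sensor_a: tuple[float, float], d_a: float,
                           sensor_b: tuple[float, float], d_b: float) -> list[tuple[float, float]]:
    """Candidate 2-D source positions from two sensors and their measured distances."""
    (xa, ya), (xb, yb) = sensor_a, sensor_b
    base = math.hypot(xb - xa, yb - ya)
    if base == 0 or base > d_a + d_b or base < abs(d_a - d_b):
        return []                                        # circles do not intersect
    along = (d_a**2 - d_b**2 + base**2) / (2 * base)     # distance from A along the baseline
    off = math.sqrt(max(d_a**2 - along**2, 0.0))         # offset perpendicular to the baseline
    ux, uy = (xb - xa) / base, (yb - ya) / base          # unit vector from A to B
    px, py = xa + along * ux, ya + along * uy
    return [(px - off * uy, py + off * ux), (px + off * uy, py - off * ux)]

# Example: a 0.8 ms inter-harmonic delay with illustrative speeds 343.20 and 343.35 m/s.
print(distance_from_dispersion(8e-4, 343.20, 343.35))             # about 630 m
# Example: sensors 400 m apart, each reporting a distance to the same gunshot.
print(locate_from_two_ranges((0.0, 0.0), 320.0, (400.0, 0.0), 250.0))
```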
[00099] If at least one of the acoustic sensor system 422 and the acoustic sensor system 424 is configured to provide additional location information in addition to distances, then a location in two or more dimensions can be identified with only one of the systems of acoustic sensor. [000100] Surveillance environment illustrations 400 in figure 4 and aircraft environment illustration 100 in figure 1 are not intended to impose limitations on the way in which a tracking system can be deployed. The location system 200 can be deployed in still other environments where identification of distance to sound sources is desirable. For example, location system 200 can be installed on or near a battlefield. In addition, the tracking system 200 can be arranged in a helicopter or unmanned aerial vehicle and used to detect gun fire, explosions, or both at various locations in city 402 in Figure 4. [000101] Referring now to Fig. 5, an illustration of a flowchart of a process for processing sound from a sound source is described according to an illustrative embodiment. This process can be implemented by the location system 200 in Figure 2. In particular, the process may be implemented by the sound processor 214 in Figure 2. [000102] The process starts by detecting a sound from a sound source (operation 500). The process identifies the harmonics in the sound from the sound source (operation 502). A distance to the sound source is identified using harmonics and a series of atmospheric conditions (operation 504) with the process ending shortly thereafter. [000103] Returning now to Figure 6, an illustration of a flowchart of a process to manage the flight of unmanned aerial vehicles is described according to an illustrative modality. The process illustrated in Figure 6 can be implemented by using the location system 200 of Figure 2 to manage the aircraft in the aircraft environment 100 of Figure 1. The process can be used to manage the flight of a first unmanned aerial vehicle in for a second unmanned aerial vehicle. [000104] The process starts by detecting a sound from the second unmanned aerial vehicle (operation 600). The process identifies harmonics in the sound from the second unmanned aerial vehicle (operation 602). A distance from the first unmanned aerial vehicle to the second unmanned aerial vehicle is identified using harmonics and a series of atmospheric conditions (operation 604). [000105] The flight of the first unmanned aerial vehicle is managed using the distance from the first unmanned aerial vehicle to the second unmanned aerial vehicle (operation 606) with the process ending shortly thereafter. In these illustrative examples, the distance may be relevant between the first unmanned aerial vehicle and the second unmanned aerial vehicle if they are at the same altitude. As a result, perhaps only distance is needed to manage the flight of unmanned aerial vehicles. Of course, when other information such as azimuth and elevation is present, a three-dimensional location of the second unmanned aerial vehicle can be identified. [000106] Additional types of unmanned aerial vehicle management can be performed when additional information is present. For example, unmanned aerial vehicle flight management can be used to identify routing operations such as surveillance or other types of data gathering. As a result, when additional information is present, operations other than collision avoidance can be performed in managing unmanned aerial vehicles. 
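For illustration only, the following sketch arranges the operations of Figures 5 and 6 into a simple pipeline. It reuses the illustrative helpers sketched earlier (identify_harmonics and distance_from_dispersion), and it leaves the time-delay and speed-of-sound computations as injected functions because the corresponding equations are not reproduced in this text; the 500 m separation threshold is an arbitrary example value.

```python
# Sketch of the flows in Figures 5 and 6. identify_harmonics and
# distance_from_dispersion refer to the illustrative helpers defined in the earlier
# sketches; time_delay_fn and speed_for_frequency are injected callables because the
# corresponding equations are not reproduced here.
import numpy as np

def process_sound(samples: np.ndarray, sample_rate: float,
                  time_delay_fn, speed_for_frequency) -> float:
    """Operations 500-504: detect sound, identify harmonics, identify a distance."""
    harmonics = identify_harmonics(samples, sample_rate)          # operation 502
    f_ref, f_sel = harmonics[0], harmonics[1]                     # choose a harmonic pair
    dt = time_delay_fn(samples, sample_rate, f_ref, f_sel)        # inter-harmonic time delay
    return distance_from_dispersion(dt, speed_for_frequency(f_ref),
                                    speed_for_frequency(f_sel))   # operation 504

def manage_flight(distance_m: float, minimum_separation_m: float = 500.0) -> str:
    """Operation 606: a collision-avoidance decision based only on distance."""
    return "alter course" if distance_m < minimum_separation_m else "maintain course"
```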
[000107] The flowcharts and block diagrams of the different described modalities illustrate the architecture, functionality and operation of some possible implementations for the device and methods in an illustrative modality. In this sense, each block of flowcharts or block diagrams can represent a module, segment, function, and/or part of an operation or step. For example, one or more blocks can be implemented as program code, in hardware, or a combination of program code and hardware. When deployed in hardware, the hardware can take, for example, the form of integrated circuits that are manufactured or configured to perform one or more operations on flowcharts or block diagrams. [000108] In some alternative implementations of an illustrative modality, the function or functions observed in the blocks may occur outside the order noted in the figures. For example, in some cases, two blocks shown in succession can be played substantially concurrently or sometimes the blocks can be played in reverse order depending on the functionality involved. Furthermore, other blocks can be added in addition to the blocks illustrated in a flowchart or block diagram. [000109] Turning now to figure 7, an illustration of a data processing system is described according to an illustrative embodiment. Data processing system 700 can be used to implement computer system 219 of Figure 2. In this illustrative example, data processing system 700 includes communications platform 702, which provides communications between processing unit 704, memory. 706, persistent store 708, communications unit 710, input/output (I/O) unit 712, and screen 714. In this example, the communication platform may take the form of a bus system. [000110] Processing unit 704 serves to execute software instructions which can be loaded into memory 706. Processing unit 704 can be a series of processors, a multiprocessor core, or some other type of processor, depending on the particular implementation . [000111] Memory 706 and persistent storage 708 are examples of storage devices 716. A storage device is any hardware that is capable of storing information, such as, for example, without limitation, data, program code in functional form , and/or other appropriate information on a temporary and/or permanent basis. Storage devices 716 may also be referred to as computer readable storage devices in these illustrative examples. Memory 706 in these examples can be, for example, random access memory or any other suitable volatile or non-volatile storage device. Persistent 708 storage can take various forms depending on the particular deployment. [000112] For example, persistent storage 708 can contain one or more components or devices. For example, persistent storage 708 can be a hard disk drive, flash memory, reusable optical disk, reusable magnetic tape, or some combination of the above. The medium used by persistent storage 708 may also be removable. For example, a removable hard disk drive can be used for persistent 708 storage. [000113] The communications unit 710, in these illustrative examples, provides communications with other data processing systems or devices. In these illustrative examples, the communications unit 710 is a network interface card. [000114] The input/output unit 712 allows input and output of data with other devices that can be connected to the data processing system 700. For example, the input/output unit 712 can provide a connection for user registration via a keyboard, mouse, and/or some other suitable recording device. 
In addition, the input/output unit 712 can send information to a printer. Screen 714 provides a mechanism for displaying information to a user. [000115] Instructions for the operating system, applications, and/or programs can be located in the storage devices 716, which are in communication with the processing unit 704 through the communications platform 702. The processes of the different embodiments can be performed by processing unit 704 using computer-implemented instructions, which may be located in a memory, such as memory 706. [000116] These instructions are referred to as program code, computer-usable program code, or computer-readable program code that can be read and executed by a processor in processing unit 704. Program code in the different embodiments can be embodied on different physical media or computer readable storage media, such as memory 706 or persistent storage 708. [000117] Program code 718 is located in a functional form on computer readable medium 720, which is selectively removable and can be loaded onto or transferred to data processing system 700 for execution by processing unit 704. Program code 718 and computer readable medium 720 form computer program product 722 in these illustrative examples. In one example, computer readable medium 720 may be computer readable storage medium 724 or computer readable signal medium 726. [000118] In these illustrative examples, the computer readable storage medium 724 is a physical or tangible storage device used to store the program code 718 rather than a medium that propagates or transmits the program code 718. [000119] Alternatively, the program code 718 may be transferred to the data processing system 700 using the computer readable signal medium 726. The computer readable signal medium 726 may be, for example, a propagated data signal containing program code 718. For example, the computer readable signal medium 726 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals can be transmitted over communications links, such as wireless communications links, fiber optic cable, coaxial cable, a wire, and/or any other suitable type of communications link. 
As a result, the operation of a location system deployed in accordance with an illustrative embodiment may be more difficult to detect compared to an active sensor system such as a radar system. The use of this type of locating system can be especially useful in unmanned aerial vehicles where detection of the presence of these vehicles may be undesirable. Furthermore, the use of a location system according to an illustrative embodiment can also be useful on any type of vehicle or platform where detection of platform operation is undesirable. [000123] Additionally, this type of system uses less energy than active location systems. As a result, reduced energy use can be useful in vehicles where the energy provided by the vehicle may be limited. [000124] In addition to using the location system 200 to identify distances to other vehicles, the location system 200 can also be used to identify a vehicle location in two-dimensional space or three-dimensional space. [000125] The illustrative modalities can also implement the 200 location system for uses other than vehicles to determine distances to other vehicles. For example, location system 200 can also be used in monitoring systems or surveillance systems. For example, the location system 200 can be used to identify a weapon firing location, explosions and other types of sounds. [000126] In the figures and text of an aspect, a method is described for processing sound 218 from a sound source 210, the method includes: detecting sound 218 from sound source 210; identify 224 harmonics in sound 218 from sound source 210; and identifying a distance 222 to sound source 210 using harmonics 224 and a series of atmospheric conditions 226. In one variant, the method further includes: identifying a location 238 of sound source 210 using distance 222 to the sound source 210 and at least one of an azimuth 234 to the sound source 210 and an elevation 236 to the sound source 210. In another variant, the method further includes: identifying at least one of an azimuth 234 and an elevation 236 to sound source 210 from sound 218. In a further variant, the method which includes identifying the distance 222 to sound source 210 using harmonics 224 and the atmospheric conditions series 226 comprises: identifying a time delay 326 between a first harmonic 318 and a second harmonic 320 at harmonics 224; and identify the distance 222 to the sound source 210 using the time delay 326 between the first harmonic 318 and the second harmonic 320 and the atmospheric dispersion caused by the series of atmospheric conditions 226. [000127] In yet another variant, the method that includes identifying the distance 222 to the sound source 210 using the time delay 326 between the first harmonic 318 and the second harmonic 320 and the atmospheric dispersion caused by the series of atmospheric conditions 226 comprises: identifying the distance 222 to the sound source 210 using the time delay 326 between the first harmonic 318 and the second harmonic 320 and an atmospheric dispersion factor, in which the atmospheric dispersion factor takes into account the dispersion caused by the series of atmospheric conditions 226 for the first harmonic 318 and the second harmonic 320. 
In one example, the method that includes identifying the time delay 326 between the first harmonic 318 and the second harmonic 320 at harmonics 224 includes identifying a delay of phase between a first frequency for the first harmonic 318 and a second frequency for the second harmonic 320; and identify the time delay 326 between the first harmonic 318 and the second harmonic 320 using the phase delay. In another example, the method includes in which sounds are received with sound data 306 from an acoustic sensor system 204 and in which identifying harmonics 224 in sound 218 from sound source 210 comprises: converting the data of sound 306 from a time domain to a frequency domain using a Fourier transform; identify spectral peaks at frequencies in sound data 306 in the frequency domain; and identify the 224 harmonics from the spectral peaks. [000128] In yet another example, the method that includes the distance 222 is from a first unmanned aerial vehicle 104 to a second unmanned aerial vehicle 106 and further comprising: managing the flight of the first unmanned aerial vehicle 104 to avoid collision with the second unmanned aerial vehicle 106 using distance 222. In yet another example, the method including distance 222 is used to identify a location 238 of sound source 210 that generates a plurality of shots that result in the harmonics 224 in sound 218. In yet another example, the method includes a series of atmospheric conditions 226 that is selected from at least one of humidity, pressure, and temperature. In one example, the method includes sound source 210 which is selected from one of an aircraft, an unmanned aerial vehicle 104, and a firearm 426. [000129] In one aspect, a method is described for managing the flight of a first unmanned aerial vehicle 104 relative to a second unmanned aerial vehicle 106, the method includes: detecting a sound 218 from the second unmanned aerial vehicle 106; identify harmonics 224 in sound 218 from the second unmanned aerial vehicle 106; identify a distance 222 from the first unmanned aerial vehicle 104 to the second unmanned aerial vehicle 106 using harmonics 224 and a series of atmospheric conditions226; and manage the flight of the first unmanned aerial vehicle 104 using the distance 222 from the first unmanned aerial vehicle 104 to the second unmanned aerial vehicle 106. In one variant, the method further includes: identifying at least one of an azimuth 234 in the second unmanned aerial vehicle 106 and an elevation 236 in the second unmanned aerial vehicle 106. In another variant, the method which includes managing the flight of the first unmanned aerial vehicle 104 using distance 222 from the first unmanned aerial vehicle manned aerial vehicle 104 to second unmanned aerial vehicle 106 comprises: managing the flight of first unmanned aerial vehicle 104 using distance 222 from first unmanned aerial vehicle 104 to second unmanned aerial vehicle 106 and at least one of azimuth 234 towards the second unmanned aerial vehicle 106 and elevation 236 on the second unmanned aerial vehicle 106. 
In yet another variant, managing the flight of the first unmanned aerial vehicle 104 using the distance 222 from the first unmanned aerial vehicle 104 to the second unmanned aerial vehicle 106 comprises: managing the flight of the first unmanned aerial vehicle 104 using the distance 222 from the first unmanned aerial vehicle 104 to the second unmanned aerial vehicle 106 to avoid an undesired distance 222 between the first unmanned aerial vehicle 104 and the second unmanned aerial vehicle 106. [000130] In one aspect, an apparatus is described that includes: a sensor system configured to detect a sound 218 from a sound source 210; and a sound processor 214 configured to identify harmonics 224 in the sound 218 from the sound source and to identify a distance 222 to the sound source 210 using the harmonics 224 and a series of atmospheric conditions 226. In one variant, the sound processor 214 comprises a distance identifier 220 configured to identify the harmonics 224 in the sound 218 from the sound source 210 and to identify the distance 222 to the sound source 210 using the harmonics 224 and the series of atmospheric conditions 226. In another variant, the sound processor 214 further comprises an elevation identifier 232 configured to identify an elevation 236 of the sound source 210. In yet another variant, the sound processor 214 further comprises an azimuth identifier 231 configured to identify an azimuth 234 to the sound source 210. In one example, the sound processor 214 is configured to identify a location 238 of the sound source 210 using the distance 222 to the sound source 210 and at least one of an azimuth 234 to the sound source 210 and an elevation 236 to the sound source 210. [000131] The description of the different illustrative embodiments has been presented for purposes of illustration and description only and is not intended to be exhaustive or to limit the embodiments to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. Further, different illustrative embodiments may provide different features as compared to other illustrative embodiments. The embodiment or embodiments selected were chosen and described in order to best explain the principles of the embodiments and their practical application, and to enable others of ordinary skill in the art to understand the description of the various embodiments with various modifications as are suited to the particular use contemplated.
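As a concrete illustration of identifying a location 238 from the distance 222, the azimuth 234, and the elevation 236, the sketch below converts the three values into a position relative to the acoustic sensor system 204. The coordinate convention (north-east-up offsets, azimuth measured clockwise from north) and the function name are assumptions made for this illustration only.

```python
import math


def locate_source(distance_m, azimuth_deg, elevation_deg):
    """Convert an identified distance, azimuth, and elevation into a position.

    Returns (north, east, up) offsets in meters from the acoustic sensor system,
    with azimuth measured clockwise from north and elevation measured upward
    from the horizontal plane.
    """
    azimuth = math.radians(azimuth_deg)
    elevation = math.radians(elevation_deg)
    horizontal = distance_m * math.cos(elevation)
    return (horizontal * math.cos(azimuth),    # north
            horizontal * math.sin(azimuth),    # east
            distance_m * math.sin(elevation))  # up


# A source 2,400 m away at an azimuth of 45 degrees and an elevation of 10 degrees.
print(locate_source(2400.0, 45.0, 10.0))
```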
Claims (17) [0001] 1. A method for processing sound (218) from a sound source (210), the method comprising: detecting the sound (218) from the sound source (210), wherein the sound is received as sound data (306) from an acoustic sensor system (204); identifying harmonics (224) in the sound (218) from the sound source (210), including converting the sound data (306) from a time domain to a frequency domain using a Fourier transform, identifying spectral peaks at frequencies in the sound data (306) in the frequency domain, and identifying the harmonics (224) from the spectral peaks; and identifying a series of atmospheric conditions (226); the method being characterized in that it further comprises: identifying an atmospheric dispersion factor for the harmonics; and identifying a distance (222) to the sound source (210) using the harmonics (224), the series of atmospheric conditions (226), and the atmospheric dispersion factor; wherein identifying the distance (222) to the sound source (210) using a time delay (326) between a first harmonic (318) and a second harmonic (320) and the atmospheric dispersion caused by the series of atmospheric conditions (226) comprises: identifying the distance (222) to the sound source (210) using the time delay (326) between the first harmonic (318) and the second harmonic (320) and the atmospheric dispersion factor, in which the atmospheric dispersion factor takes into account the dispersion caused by the series of atmospheric conditions (226) for the first harmonic (318) and the second harmonic (320); and wherein identifying the time delay (326) between the first harmonic (318) and the second harmonic (320) in the harmonics (224) comprises: identifying a phase delay between a first frequency for the first harmonic (318) and a second frequency for the second harmonic (320); and identifying the time delay (326) between the first harmonic (318) and the second harmonic (320) using the phase delay. [0002] 2. Method according to claim 1, characterized in that it further comprises: identifying a location (238) of the sound source (210) using the distance (222) to the sound source (210) and at least one of an azimuth (234) to the sound source (210) and an elevation (236) to the sound source (210). [0003] 3. Method according to claim 1, characterized in that it further comprises: identifying at least one of an azimuth (234) and an elevation (236) to the sound source (210) from the sound (218). [0004] 4. Method according to claim 1, characterized in that it further comprises the step of generating an alert based on the distance (222). [0005] 5. Method according to claim 1, characterized in that the distance (222) is between a first unmanned aerial vehicle (104) and a second unmanned aerial vehicle (106), and in that the method further comprises: managing the flight of the first unmanned aerial vehicle (104) to avoid a collision with the second unmanned aerial vehicle (106) using the distance (222). [0006] 6. Method according to claim 1, characterized in that the distance (222) is used to identify a location (238) of the sound source (210) that generates a plurality of shots that result in the harmonics (224) in the sound (218). [0007] 7. Method according to claim 1, characterized in that the series of atmospheric conditions (226) is selected from at least one of humidity, pressure, and temperature. [0008] 8. Method according to claim 1, characterized in that the sound source (210) is selected from one of an aircraft, an unmanned aerial vehicle (104), and a firearm (426). [0009] 9. 
Method according to claim 1, characterized in that the sound source (210) is a first unmanned aerial vehicle (104) and the vehicle is a second unmanned aerial vehicle (106), the method comprising managing the flight of the first unmanned aerial vehicle (104) using the distance (222) from the first unmanned aerial vehicle (104) to the second unmanned aerial vehicle (106). [0010] 10. Method according to claim 9, characterized in that it further comprises: identifying at least one of an azimuth (234) to the second unmanned aerial vehicle (106) and an elevation (236) to the second unmanned aerial vehicle (106). [0011] 11. Method according to claim 10, characterized in that managing the flight of the first unmanned aerial vehicle (104) using the distance (222) from the first unmanned aerial vehicle (104) to the second unmanned aerial vehicle (106) comprises: managing the flight of the first unmanned aerial vehicle (104) using the distance (222) from the first unmanned aerial vehicle (104) to the second unmanned aerial vehicle (106) and at least one of the azimuth (234) to the second unmanned aerial vehicle (106) and the elevation (236) to the second unmanned aerial vehicle (106). [0012] 12. Method according to claim 9, characterized in that managing the flight of the first unmanned aerial vehicle (104) using the distance (222) from the first unmanned aerial vehicle (104) to the second unmanned aerial vehicle (106) comprises: managing the flight of the first unmanned aerial vehicle (104) using the distance (222) from the first unmanned aerial vehicle (104) to the second unmanned aerial vehicle (106) to avoid an undesired distance (222) between the first unmanned aerial vehicle (104) and the second unmanned aerial vehicle (106). [0013] 13. Apparatus comprising: a sensor system configured to detect a sound (218) from a sound source (210), wherein the sound is received as sound data (306) from an acoustic sensor system (204); and a sound processor (214) configured to identify harmonics (224) in the sound (218) from the sound source (210); characterized in that the sound processor (214) is further configured to: identify the harmonics (224) by converting the sound data (306) from a time domain to a frequency domain using a Fourier transform, identifying spectral peaks at frequencies in the sound data (306) in the frequency domain, and identifying the harmonics (224) from the spectral peaks; identify a series of atmospheric conditions (226) and an atmospheric dispersion factor for the harmonics; and identify a distance (222) to the sound source (210) using the harmonics (224), the series of atmospheric conditions (226), and the atmospheric dispersion factor, wherein the sound processor (214) includes a distance identifier (220) comprising a harmonic identifier (300), a dispersion identifier (302), and a distance estimator (304), the distance identifier (220) being configured to identify the harmonics (224) in the sound (218) from the sound source (210) and to identify the distance (222) to the sound source (210) using the harmonics (224) and the series of atmospheric conditions (226). [0014] 14. Apparatus according to claim 13, characterized in that the sound processor (214) is configured to generate an alert based on the distance (222). [0015] 15. Apparatus according to claim 14, characterized in that the sound processor (214) further comprises: an elevation identifier (232) configured to identify an elevation (236) of the sound source (210). [0016] 16. 
Apparatus according to claim 14, characterized in that the sound processor (214) further comprises: an azimuth identifier (231) configured to identify an azimuth (234) to the sound source (210). [0017] 17. Apparatus according to claim 13, characterized in that the sound processor (214) is configured to identify a location (238) of the sound source (210) using the distance (222) to the sound source (210) and at least one of an azimuth (234) to the sound source (210) and an elevation (236) to the sound source (210).
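The phase-delay step recited in claim 1 can be pictured with the rough sketch below, in which the phase of each harmonic's spectral peak is converted to a per-harmonic delay and the two delays are differenced. The neglect of the 2*pi phase ambiguity, the sign convention, and the function names are assumptions made for this illustration and do not reproduce the claimed computation itself.

```python
import numpy as np


def time_delay_from_phase(samples, sample_rate_hz, first_harmonic_hz, second_harmonic_hz):
    """Estimate the time delay between two harmonics from their spectral phases.

    The phase of each harmonic's spectral peak is converted to a per-harmonic
    delay of -phase / (2 * pi * frequency); the difference of the two delays is
    the inter-harmonic time delay (the 2*pi phase ambiguity is ignored here).
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)

    def delay_at(frequency_hz):
        peak_bin = int(np.argmin(np.abs(freqs - frequency_hz)))
        return -np.angle(spectrum[peak_bin]) / (2.0 * np.pi * frequency_hz)

    return delay_at(second_harmonic_hz) - delay_at(first_harmonic_hz)


# Synthetic check: a 100 Hz tone plus a 200 Hz tone delayed by 1 ms.
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.cos(2 * np.pi * 100 * t) + np.cos(2 * np.pi * 200 * (t - 1e-3))
print(time_delay_from_phase(tone, fs, 100.0, 200.0))   # prints approximately 0.001
```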
同族专利:
公开号 | 公开日 US9146295B2|2015-09-29| EP2667216A1|2013-11-27| CA2805081C|2016-04-19| CA2805081A1|2013-11-24| EP2667216B1|2017-07-12| US20130317669A1|2013-11-28| BR102013012900A2|2015-06-09| CN103424739B|2018-04-13| CN103424739A|2013-12-04|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题 DE3204874C2|1982-02-11|1994-07-14|Atlas Elektronik Gmbh|Passive method for obtaining target data from a sound source| NO307013B1|1995-09-26|2000-01-24|Arve Gustavsen|Procedure for Passive Automatic Position Determination of Sniper Weapons Using the Projectile Shockbar| US6178141B1|1996-11-20|2001-01-23|Gte Internetworking Incorporated|Acoustic counter-sniper system| US5930202A|1996-11-20|1999-07-27|Gte Internetworking Incorporated|Acoustic counter-sniper system| US6400647B1|2000-12-04|2002-06-04|The United States Of America As Represented By The Secretary Of The Navy|Remote detection system| US20080221793A1|2003-01-24|2008-09-11|Shotspotteer, Inc.|Systems and methods of tracking and/or avoiding harm to certain devices or humans| US7385932B2|2004-04-27|2008-06-10|Telecommunications Research Laboratory|Wideband frequency domain reflectometry to determine the nature and location of subscriber line faults| US7180580B2|2004-07-02|2007-02-20|Venkata Guruprasad|Passive distance measurement using spectral phase gradients| US7495998B1|2005-04-29|2009-02-24|Trustees Of Boston University|Biomimetic acoustic detection and localization system| US20100226210A1|2005-12-13|2010-09-09|Kordis Thomas F|Vigilante acoustic detection, location and response system| US20100121574A1|2006-09-05|2010-05-13|Honeywell International Inc.|Method for collision avoidance of unmanned aerial vehicle with other aircraft| US8325562B2|2007-02-09|2012-12-04|Shotspotter, Inc.|Acoustic survey methods in weapons location systems| JP5376777B2|2007-06-13|2013-12-25|三菱電機株式会社|Radar equipment| US7606115B1|2007-10-16|2009-10-20|Scientific Applications & Research Associates, Inc.|Acoustic airspace collision detection system| KR101349268B1|2007-10-16|2014-01-15|삼성전자주식회사|Method and apparatus for mesuring sound source distance using microphone array| US8461986B2|2007-12-14|2013-06-11|Wayne Harvey Snyder|Audible event detector and analyzer for annunciating to the hearing impaired| US7957225B2|2007-12-21|2011-06-07|Textron Systems Corporation|Alerting system for a facility| US20090308236A1|2008-06-11|2009-12-17|Vladimir Anatolievich Matveev|Missile system| ES2400708T3|2008-08-27|2013-04-11|Saab Ab|Use of an image sensor and a time tracking filter to avoid collisions in flight| US8279112B2|2008-11-03|2012-10-02|Trimble Navigation Limited|Methods and apparatuses for RFID tag range determination| US8338683B2|2009-08-14|2012-12-25|The Tc Group A/S|Polyphonic tuner| JP2015514202A|2012-04-26|2015-05-18|インテル コーポレイション|Determination of relative positioning information|US9135797B2|2006-12-28|2015-09-15|International Business Machines Corporation|Audio detection using distributed mobile computing| JP6003510B2|2012-10-11|2016-10-05|富士ゼロックス株式会社|Speech analysis apparatus, speech analysis system and program| US9310221B1|2014-05-12|2016-04-12|Unmanned Innovation, Inc.|Distributed unmanned aerial vehicle architecture| US9256225B2|2014-05-12|2016-02-09|Unmanned Innovation, Inc.|Unmanned aerial vehicle authorization and geofence envelope determination| WO2016093908A2|2014-09-05|2016-06-16|Precisionhawk Usa Inc.|Automated un-manned air traffic control system| US9761147B2|2014-12-12|2017-09-12|Amazon Technologies, Inc.|Commercial and general aircraft avoidance using light pattern detection| US9997079B2|2014-12-12|2018-06-12|Amazon Technologies, Inc.|Commercial and general aircraft avoidance using multi-spectral wave detection| US9601022B2|2015-01-29|2017-03-21|Qualcomm Incorporated|Systems and methods for restricting drone airspace 
access| US9552736B2|2015-01-29|2017-01-24|Qualcomm Incorporated|Systems and methods for restricting drone airspace access| RU2589290C1|2015-02-24|2016-07-10|Федеральное государственное казенное военное образовательное учреждение высшего профессионального образования "Военная академия воздушно-космической обороны имени Маршала Советского Союза Г.К. Жукова" Министерства обороны Российской Федерации |Method and apparatus for acoustic detection and identification of aircraft| CN105203999A|2015-10-20|2015-12-30|陈昊|Rotorcraft early-warning device and method| US10379534B2|2016-01-28|2019-08-13|Qualcomm Incorporated|Drone flight control| US10140877B2|2016-10-28|2018-11-27|Lockheed Martin Corporation|Collision avoidance systems| US10317915B2|2017-02-28|2019-06-11|Gopro, Inc.|Autonomous tracking based on radius| JP2021520730A|2018-04-06|2021-08-19|レオナルド・ソチエタ・ペル・アツィオーニLEONARDO S.p.A.|Acoustic systems and related positioning methods for detecting and positioning weak and low frequency sound sources| US11078762B2|2019-03-05|2021-08-03|Swm International, Llc|Downhole perforating gun tube and components| US10689955B1|2019-03-05|2020-06-23|SWM International Inc.|Intelligent downhole perforating gun tube and components| US11268376B1|2019-03-27|2022-03-08|Acuity Technical Designs, LLC|Downhole safety switch and communication protocol|
法律状态:
2015-06-09| B03A| Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]| 2018-12-04| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]| 2019-11-05| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]| 2020-10-13| B06A| Notification to applicant to reply to the report for non-patentability or inadequacy of the application [chapter 6.1 patent gazette]| 2021-05-25| B09A| Decision: intention to grant [chapter 9.1 patent gazette]| 2021-07-06| B16A| Patent or certificate of addition of invention granted|Free format text: PRAZO DE VALIDADE: 20 (VINTE) ANOS CONTADOS A PARTIR DE 24/05/2013, OBSERVADAS AS CONDICOES LEGAIS. |
优先权:
[返回顶部]
申请号 | 申请日 | 专利标题 US13/480,192|US9146295B2|2012-05-24|2012-05-24|Acoustic ranging system using atmospheric dispersion| US13/480,192|2012-05-24| 相关专利