Method and system of spatial localization by luminous markers for any environment (Machine-translation by Google Translate, not legally binding)
Patent abstract:
Method and system for the spatial localization of a target (10) in a three-dimensional environment (11) comprising at least one luminous marker, the system comprising: - a stereo camera (12) for capturing a first image frame at a current time instant and a second image frame at a previous time instant; - an angle measuring device (13) for obtaining an angle of rotation of the target (10); - a signal processor (14) with access to a memory (15) that stores, among other data, a radius of the at least one marker detected at a current time instant n and at a previous time instant n-1, configured to calculate coordinates (xi, yi) of the target (10) at a time instant i as follows: - if the angle of rotation at the current time instant and at the previous time instant are different, (xn, yn) = (xn-1, yn-1); - if the two image frames are equal, (xn, yn) = (xn-1, yn-1); - otherwise: - if the radii are equal and there are several markers, (xn, yn) are calculated by triangulation using both image frames; - if the radii are different and there are several markers, (xn, yn) are calculated by triangulation using a single image frame; - if the radii are different and there is a single marker, (xn, yn) are calculated by stereo geometry; - if the radii are equal and there is a single marker, (xn, yn) are calculated using the image coordinates of the marker at the current and the previous time instant. (Machine-translation by Google Translate, not legally binding) Publication number: ES2543038A1 Application number: ES201500011 Filing date: 2014-12-23 Publication date: 2015-08-13 Inventors: Eugenio Villar Bonet; Patricia Mª MARTÍNEZ MEDIAVILLA; Francisco José ALCALÁ GALÁN; Pablo Pedro SÁNCHEZ ESPESO; Víctor FERNÁNDEZ SOLORZANO Applicant: Universidad de Cantabria; IPC main class:
Patent description:
computation, in order to process the camera images in real time and with the least possible latency. When only one reference marker is visible, stereo cameras and epipolar geometry must be used. The characterization of a point in three-dimensional space requires knowledge of its coordinates (x, y, z) within the environment where it is located, with respect to a reference position. The most common technique is based on the use of two or more calibrated cameras, which provide a left and a right image of the same scene. To obtain the 3D coordinates of the point or object to be characterized, stereo correspondences are applied (looking for the same point in both images) and the projective or epipolar geometry is calculated (it describes the relationship between the image planes of the cameras and the point). In the case of having more than one marker or pattern available, other techniques can be applied to locate the object in the scene. Through triangulation, knowing the real distance between markers, it is possible with only two markers and a single camera to obtain the parameters needed to determine the depth to the target and position it in the environment. This practice reduces the computational cost, by not having to analyze two images and their correspondences, but it requires greater precision when detecting the markers. Despite this, some authors ("Optical tracking using projective invariant marker pattern properties", R. van Liere et al., Proceedings of the IEEE Virtual Reality Conference 2003, 2003) consider that, in order to better track the object, it is necessary that the object carry four or more markers and that stereo vision also be used, yielding a more precise but slower system. One of the problems found in other studies (US 7,231,063 B2, "Fiducial Detection System", L. Naimark et al.) when using markers is the brightness of the environment where the images will be taken.
The points to be detected may be lost in the scene due to lack of light. This limits applications using such systems to indoor spaces or environments with controlled brightness. In addition, the use of contrast enhancement algorithms and/or of specific markers stored in a database is necessary, which substantially increases the computation time of these systems, in addition to considerably limiting the distance between the printed markers and the system user, unless their size is large enough for the image sensor to capture them. In some cases (WO 2013/120041 A1, "Method and apparatus for 3D spatial localization and tracking of objects using active optical illumination and sensing") light sources with variable luminance or pulsed light have been proposed, which may cause synchronization failures. Even so, the use of light markers can pose problems, specifically in environments where there are light sources with a luminance much greater than that of the marker itself (in the worst case, sunlight) or sources that emit radiation in the same direction; in these situations the image sensor is not able to differentiate one light source from another, which forces, as before, the use of this technology in bright environments without large light sources in them. Detecting and positioning the markers serves to characterize the objects or individuals in the environment, so that they are located in space. In the article "Wide area optical tracking in unconstrained indoor environments" (by A. Mossel et al., 23rd International Conference on Artificial Reality and Telexistence (ICAT), 2013) it is proposed to incorporate infrared light markers in a band located on the head of the user. To do this, two independent cameras are placed in the scene, which require a synchronization process to shoot simultaneously, located at a distance equal to the length of the wall of the room where the test is to take place.
The algorithm used to estimate the position is based on the search for stereo correspondences. One of its disadvantages is that it cannot be implemented for augmented or simulated reality systems, because the cameras do not show what the user sees, in addition to being restricted to indoor environments with limited dimensions. Other studies, such as "Tracking of user position and orientation by stereo measurement of infrared markers and orientation sensing" (by M. Maeda et al., Proceedings of the 8th International Symposium on Wearable Computers (ISWC'04), 2004), propose the use of infrared markers located on the wall of a room to locate the user. Specifically, they propose two types of markers: active and passive. The active markers are formed by a set of three infrared LEDs and a signal emitter, which sends data on their real position to a signal decoder carried by the user, so once they are detected their absolute position is known. Passive markers are only an infrared light source, from which the relative position of the user is obtained. In addition to relying on the reception of signals from active markers, they calculate the relative distance to the marker from stereo vision. The use of this technique, as in the cases explained above, is limited to interior spaces. There are other methods for user tracking systems which do not require direct vision between one or more cameras and reference markers. Radio frequency techniques consist of measuring distances of static or mobile objects from the emission of electromagnetic pulses that are reflected back to a receiver. These electromagnetic waves are reflected when there are significant changes in atomic density between the environment and the object, so they work particularly well in the case of conductive materials (metals).
They are able to detect objects at a greater distance than other systems based on light or sound; however, they are quite sensitive to interference or noise. It is also difficult to measure objects that lie at different distances from the emitter, because the pulse frequency varies (slower the farther, and vice versa). Even so, there are experimental studies such as "RADAR: an in-building RF-based user location and tracking system" (by P. Bahl et al., Proceedings of IEEE INFOCOM 2000, Tel-Aviv, 2000) that demonstrate its use to estimate the location of the user with a high level of precision. This technique is not appropriate for augmented reality applications. Another example of existing solutions is LIDAR systems, which calculate distance from the time it takes a light pulse to be reflected by an object or surface, using a device with a pulsed laser as light emitter and a photodetector as receiver of the reflected signal. The advantage of these systems is the precision they achieve over long distances (using lasers with a wavelength > 1000 nm) and the possibility of mapping large areas by sweeping light pulses. Their drawbacks are the need to perform the analysis and processing of each point, as well as the difficulty of automatically reconstructing three-dimensional images. The objective technical problem that arises is thus to provide a system for detecting the position and orientation of an individual or object in any type of environment, interior or exterior, whatever its lighting conditions. DESCRIPTION OF THE INVENTION The present invention serves to solve the aforementioned problems, overcoming the drawbacks of the solutions mentioned in the prior art, by providing a system that, from the use of one or more reference light markers and a single stereo camera, makes it possible to spatially locate objects or individuals in a scenario under any environmental condition and with greater distances between the user and the marker.
The system is mainly based on the use of luminous markers to calculate relative positions of the object or individual, a stereo camera to visualize those markers in the scene image, and an electronic angle measuring device, such as a gyroscope or electronic compass, to provide the angles of rotation of the target user (object, person or animal). The present invention makes it possible to detect reference luminous markers in any type of environment, regardless of the light sources that determine the environmental conditions. One aspect of the invention relates to a method for positioning or locating a target by using reference markers in any 3D environment that, from a first image frame at a current time instant and a second image frame at a previous time instant captured by a stereo camera, images in which at least one marker is detected, obtains the coordinates (xn, yn) of the target at the current time instant n, performing the following steps: - obtain an angle of rotation of the target at the current time instant and at the previous time instant; - if the angle of rotation at the current time instant and the angle of rotation at the previous time instant are different, calculate the coordinates (xn, yn) of the target at the current instant n by equating them to the coordinates (xn-1, yn-1) of the target at the previous instant n-1; - if the first image frame and the second image frame are the same, calculate the coordinates (xn, yn) of the target at the current instant as equal to the coordinates (xn-1, yn-1) of the target at the previous instant; - otherwise, obtain the image coordinates of the at least one detected marker, and its radius, to compare the radii at the current time instant n and at the previous time instant n-1, and: - if the radii are equal and there is a plurality of markers, the coordinates (xn, yn) of the target at the present time are obtained by triangulation using the first image frame and the second image
frame; - if the radii are different and there is also more than one marker, the coordinates (xn, yn) of the target at the current instant are obtained by triangulation but using a single image frame, the one captured at the current instant; - if the radii are different and there is a single marker detected, the coordinates (xn, yn) of the target at the present time are obtained by the stereo geometry algorithm known in the state of the art; - if the radii are equal and there is a single marker detected, the coordinates (xn, yn) of the target at the current time are obtained by an algorithm reminiscent of stereo geometry but using the image coordinates of the marker at the current time instant and at the previous time instant, instead of a left and a right image of the same time instant. Another aspect of the invention relates to a system for locating a target, which can be an object or an individual, from at least one reference marker in a 3D space or environment, comprising the following means: a stereo camera to capture image frames in which one or more markers are detected; an angle measuring device to obtain the angle of rotation of the target at each time instant; a signal processor, with access to a storage device (a memory), configured to perform the steps of the method described above to obtain at its output the coordinates (xn, yn) of the target calculated at the current time instant, using, according to each case indicated above, the data obtained at the previous time instant stored in the memory. A light source, identifiable in the environment of use, is used as a reference marker. In a possible field of application, the described invention can be used for Simulated Reality applications. To do this, Virtual Reality glasses are incorporated into the system. Both the stereo camera and the glasses can be part of a helmet or fastening equipment that is placed on the user's head, connecting the camera with the glasses.
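The case analysis above can be sketched as follows. This is a minimal illustration, not the patent's implementation: all names are hypothetical, and the four geometric routines (triangulation with both frames, triangulation with one frame, stereo geometry, and the temporal variant of stereo geometry) are passed in as stubs.

```python
# Sketch of the position-update case analysis described in the method.
# prev/curr are dicts with illustrative keys: "angle" (rotation angle),
# "frame" (image frame), "radii" (detected marker radii), "coords" ((x, y)).

def update_position(prev, curr, triangulate_two, triangulate_one,
                    stereo_depth, temporal_stereo):
    # Case 1: the target has rotated -> coordinates stay the same.
    if curr["angle"] != prev["angle"]:
        return prev["coords"]
    # Case 2: identical frames -> no movement, coordinates stay the same.
    if curr["frame"] == prev["frame"]:
        return prev["coords"]
    several = len(curr["radii"]) > 1
    radii_equal = curr["radii"] == prev["radii"]
    # Case 3: choose the geometric method from radii and marker count.
    if radii_equal and several:
        return triangulate_two(prev, curr)   # triangulation, both frames
    if not radii_equal and several:
        return triangulate_one(curr)         # triangulation, current frame only
    if not radii_equal:
        return stereo_depth(curr)            # single marker, stereo geometry
    return temporal_stereo(prev, curr)       # single marker, two time instants
```

The routine only dispatches; the distance calculations themselves are described (and sketched) further below in the embodiment section.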
The system can additionally incorporate an accelerometer, which measures the displacement made in a finite time, which would reduce the cumulative errors. The present invention has a series of differentiating characteristics with respect to the existing solutions discussed in the prior art, with technical advantages such as the following: With respect to US 7,231,063 B2, the present invention solves the computation time problem of existing systems such as the one described there, because no contrast enhancement algorithms and/or specific markers stored in a database are required: the present invention uses light markers that work in the visible or infrared spectrum, such as light emitting diodes (LEDs). With respect to WO 2013/120041 A1, one of the differences of the present invention is that it uses fixed light sources, and it solves the problem that occurs in environments where there are light sources with a luminance much greater than that of the marker itself. To solve this problem, the present invention uses an element that prevents the light conditions of an environment from having a significant effect, namely a background placed behind the light source. BRIEF DESCRIPTION OF THE FIGURES A series of drawings that help to better understand the invention and that expressly relate to an embodiment of said invention, presented as a non-limiting example thereof, is described very briefly below. FIGURE 1.- Shows a schematic block diagram of the spatial location system of individuals or objects, according to a preferred embodiment of the invention. FIGURE 2.- Shows a simplified representation of a type of luminous marker which the system of Figure 1 can use. FIGURE 3.- Shows an environment of use of the markers of Figure 2 in which the system of Figure 1 is applicable, according to a possible embodiment.
FIGURES 4A-4B.- They show a scheme of the markers and parameters that the system uses to locate in the environment individuals or objects that move vertically, when only a single marker is detected. FIGURES 5A-5B.- They show a scheme of the markers and parameters that the system uses to locate in the environment individuals or objects that move vertically, when more than one marker is detected. FIGURE 6A.- Shows a scheme of the markers and parameters that the system uses to locate in the environment individuals or objects that move horizontally, when only a single marker is detected. FIGURE 6B.- Shows a scheme of the markers and parameters that the system uses to locate in the environment individuals or objects that move horizontally, when more than one marker is detected. FIGURE 7.- Shows a scheme of the operation of the method, merely as an example of data flow. PREFERRED EMBODIMENT OF THE INVENTION Possible embodiments are proposed below of the system for obtaining, from the use of one or more luminous markers, the position and orientation of a user in different possible environments, which can be indoors or outdoors, within a controlled scenario. Figure 1 shows a schematic diagram of the block architecture of the system to locate in space the objects or individuals that constitute a target (10) in a three-dimensional environment (11) under any environmental condition defined by a number m >= 1 of light sources (fL1, fL2, fL3, ..., fLm), having one or more light markers (20) as shown in Figures 2-3, 4A-4B, 5A-5B and 6A-6B. The system comprises a stereo camera (12) to detect the luminous markers (20) and an electronic angle measuring device (13), for example a gyroscope or electronic compass, with which the rotation angles of the target (10) are obtained.
In addition, the system comprises a digital signal processor (14) that calculates the positional coordinates in space of each luminous marker (20) over time and stores them in a memory or storage device (15). The digital signal processor (14) uses the stored coordinates and the output parameters it obtains from the stereo camera (12) and from the angle measuring device (13) to determine at its output (16) the position of the target user (10). Figure 2 shows a type of reference marker (20) among those used, which is a light marker and comprises two main elements: a light source (21) and a contrast surface (22). The preferred light source (21) is an LED that emits in the visible range, 400-700 nm. This type of source is a point light source that achieves ranges greater than 50 m for powers greater than 1 W. In addition, an LED can be considered a non-hazardous product at the optical powers at which it works, since in the worst case the exposure time is very low (aversion reaction time of about 250 ms). Even so, the system can use luminous markers (20) with other types of light sources (21), because the device that detects the luminous marker (20), that is, the stereo camera (12) used as light receiver, detects light sources (21) that work both in the visible and in the infrared spectrum. The image sensor of the stereo camera (12) has a spectral curve that, for the wavelength of the LED used, indicates a spectral response with a value greater than 0.005 A/W. Filament bulbs are another example of light sources (21), although they are diffuse sources with emitted optical powers lower than those attainable with an LED.
Another possible light source (21) is a laser diode, although it is a collimated source capable of focusing the light on a very small point and, in most cases, optical powers greater than 1 mW can be dangerous. The last type of light source (21) that can be used is an infrared LED; due to its range, the drawback is that the user is not able to perceive it and it could cause eye damage. On the other hand, the contrast surface (22) is of a color - for example, black - and dimensions that allow distinguishing between the light marker (20) and any external light source. The contrast screen or surface (22) allows the method proposed here to be applied in environments with low or high brightness, and over long distances. The shape of the contrast surface (22) can be any, for example square, as in Figure 2. The dimensions of the contrast surface (22) depend on the ambient light conditions of the environment (11), on the luminous flux of the light source (21) and on the maximum distance between the target (10) and the light markers (20). The template or contrast surface (22) is located on the outside of the light source (21), specifically at the rear, the light source (21) being visible to the user. In the case that the environment or the background behind the light source (21) is dark enough, it is not necessary to add the contrast surface (22). The system supports the use of other types of markers (20), such as white printed markers with a black border, although these cannot be used in any type of environment. Figure 3 shows a possible system application scenario, in which the distribution of the markers (20) is depicted. The markers (20) can, however, be placed at different distances from each other, which the system must know in advance.
The height between each light marker (20) and the ground is not preset, but it is recommended to be such that it allows direct vision between the stereo camera (12) and the light sources (fL1, fL2, fL3, ..., fLm) of the luminous markers (20). In the case of outdoor environments, the markers (20) are placed on vertical supports to achieve the necessary height. In the case of indoor environments, the luminous markers (20) can also go on vertical supports or be attached to the surrounding walls or objects. The relationship between the number m of markers (20) and the distance d between them (d1, d2) depends on the opening angle (2φ) of the stereo camera (12), the emission angle (2θ) of the light source (21), for example the LED, of the light marker (20), and the minimum distance (L) that must exist between the target (10), user of the system, and the light sources (fL1, fL2, fL3, ..., fLm), i.e. the LEDs, so that at least one pair of sources can be displayed, as reflected in the following equation:

m/d >= 1 / (tg(θ) · L)

The scenario where the method is applied does not present any predefined characteristics with respect to distribution, floor plan or obstacles, so the system adapts to it. The type of environment, as explained above, can be indoor or outdoor. The only restriction is the maximum dimensions of this environment, which are limited by the range of the light sources (fL1, fL2, fL3, ..., fLm) chosen. Said range is measured as a function of the intensity and luminous flux of the light sources (fL1, fL2, fL3, ..., fLm) and the sensitivity of the image sensor of the stereo camera (12). From the images captured by the stereo camera (12) and the image coordinates of the light markers (20) calculated in a previous capture, the digital signal processor (14) of the system allows specific reference points to be located in the image by means of a luminous marker (20) detection algorithm, such as the one described below.
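As a numeric illustration of the marker-spacing constraint: the inequality above is reconstructed from a garbled original, so its exact form should be treated as an assumption, and the function names below are illustrative.

```python
import math

def markers_per_meter_required(theta_deg, L):
    """Lower bound on m/d so that at least one pair of light sources
    stays visible, per the (reconstructed) constraint m/d >= 1/(tg(theta)*L).
    theta_deg: half the emission angle of the source, in degrees;
    L: minimum target-to-source distance, in meters."""
    return 1.0 / (math.tan(math.radians(theta_deg)) * L)

def spacing_ok(m, d, theta_deg, L):
    """Check whether m markers spread over distance d meet the bound."""
    return m / d >= markers_per_meter_required(theta_deg, L)
```

For example, with a half emission angle of 30 degrees and a minimum distance of 5 m, about 0.35 markers per meter would be required, so 10 markers over 20 m would satisfy the bound while 2 markers over 20 m would not.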
The method to be described is not unique; other variants can be used, returning as output parameters the image coordinates (u, v) and the diameter of the luminous markers (20) detected. In the 2-dimensional image coordinates (u, v), the first coordinate u denotes the coordinate along a horizontal axis and the second coordinate v denotes the coordinate along a vertical axis, in the 2D plane of the image where movements are detected. The detection of light markers (20) is divided into the following steps: Conversion of the image to grayscale, to significantly reduce the size of the image, as it goes from having three channels, red, green and blue, to only one, black and white; that is, each pixel in the image reduces its value from 3 bytes to 1 byte. Noise-elimination filtering, to remove erroneous pixels and noise from the images captured by the cameras. The type of filter depends on how clean the images are desired to be and on the delay time that can be introduced into the system. Localization of neighboring pixels with strong contrasts, analyzing the image by windows and looking for those regions where the contrasts between neighboring pixels are greatest. This algorithm makes sense because the light sources (21) have pixel values in the image around 255 and the black template (22) has values around 0. Obtaining of the coordinates of the luminous markers (20) once the regions that can correspond to light sources (21) are located, verifying that they really are such. The first thing checked is the shape of the light sources (21), which approximates a circle or ellipse, and the image coordinates (u, v) of the central point, as well as its radius, are obtained.
In addition, these regions have to be checked against each other, verifying that they all lie in very similar rows of pixels and that they have similar intensity values, since it is assumed that all are light sources (21) with the same luminance. Final verification, comparing the coordinates of the luminous markers (20) calculated with those obtained in an earlier capture. Once the regions checked have been found to correspond to luminous markers (20), a final check is made, in this case comparing the positions of the current markers with those of a previous instant, taking into account that, being consecutive instants, the coordinates do not change very significantly from one site to another. To locate in the image of an environment (11) the reference points that give the location of the target user (10), it is necessary to know the following information: a) the image coordinates (u, v) and radius of each marker (20) detected by the marker-obtaining algorithm described above from the image captured by the stereo camera (12); b) the value in degrees δ of the rotation of the target user, returned by the angle measuring device (13), at the time of capture by the stereo camera (12) of each image; and c) the data stored in the memory (15), such as: previous position, actual distance between markers, focal length of the cameras, camera opening angle, distance between cameras (baseline), image frame, previous radii of the markers, previous position vectors of the markers and previous angle of rotation. Considering the particular case of a continuous, unobstructed and square-shaped environment (11), for example such as the scenario depicted in Figure 3, the position of the target user (10) depends on the turns and the type of movements performed (vertical: up or down; horizontal: left or right), or on whether no movement is made.
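The detection steps above can be sketched as follows. This is a minimal NumPy-only sketch under stated assumptions: the thresholds, the 3x3 box blur standing in for the noise filter, and the flood fill standing in for the window analysis are all illustrative choices, not values or methods from the patent.

```python
import numpy as np

def detect_markers(rgb, bright=200.0):
    """Return (u, v, radius) candidates for luminous markers: bright,
    point-like regions that stand out against a dark contrast surface."""
    # Step 1: grayscale conversion (3 bytes/pixel -> 1 byte/pixel).
    gray = rgb.mean(axis=2)
    # Step 2: crude noise filtering with a 3x3 box blur (illustrative).
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Step 3: locate strong contrasts (source near 255 on template near 0).
    mask = blur >= bright
    # Step 4: extract each bright region's centre (u, v) and radius.
    markers, seen = [], np.zeros_like(mask, dtype=bool)
    for v0, u0 in zip(*np.nonzero(mask)):
        if seen[v0, u0]:
            continue
        stack, region = [(v0, u0)], []
        while stack:  # flood fill the connected bright region
            v, u = stack.pop()
            if 0 <= v < h and 0 <= u < w and mask[v, u] and not seen[v, u]:
                seen[v, u] = True
                region.append((v, u))
                stack += [(v + 1, u), (v - 1, u), (v, u + 1), (v, u - 1)]
        vs, us = zip(*region)
        radius = max(max(vs) - min(vs), max(us) - min(us)) / 2.0 + 0.5
        markers.append((float(np.mean(us)), float(np.mean(vs)), radius))
    return markers
```

The cross-checks described in the text (similar rows, similar intensities, consistency with the previous capture) would follow as additional filters on the returned list.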
The methods for calculating the position described below are summarized in Figure 7 and implemented in different ways, illustrated in Figures 4A-4B, 5A-5B and 6A-6B, depending on the kind of displacement that has been registered and the number of markers detected, being valid for any type of marker, both luminous and printed. As Figure 7 shows, the first step is to check the value, in degrees, returned by the angle measuring device (13) to determine whether there is a significant turn, which occurs if the angle obtained at the current instant δ(n) differs from that of the previous instant δ(n-1); conversely, δ(n) = δ(n-1) indicates that the target user (10) has not turned. If there is rotation, the user coordinates remain the same even though the images captured by the stereo camera (12) change. When the angle of rotation, obtained by the angle measuring device (13), is constant over time, the image frame captured at the current instant, frame(n), is compared with the immediately previous one, frame(n-1), and a match is interpreted as absence of user movement. In the case that there is no displacement, the method returns the same user coordinates as in the previous instant (xn-1, yn-1); otherwise, the position is calculated with all the information, a)-c), mentioned above. In this way, redundant and unnecessary operations are avoided. When a position change is detected, the marker detection algorithm is applied. By comparing the radii of the markers detected at the current instant, r(n), with those of the previous instant, r(n-1), the type of displacement of the target user (10) can be identified: If those radii are different, r(n-1) ≠ r(n), the displacement is up or down. To know the position of the objective (10) it is necessary to know the distance between it and the markers, that is, the displacement made vertically. If those radii are equal, r(n-1) = r(n), the displacement is to the right or left.
To know the position of the objective (10) it is then necessary to know how much it has moved horizontally. Once the type of movement performed by the objective (10) has been identified, it can be located in the environment (11) according to the following methods, which depend on the type of movement and the number m of markers (20) detected. Figures 4A-4B show the case in which a vertical movement of the target (10) has been determined and only a single marker (20) is detected in the binocular image (40) captured by the stereo camera (12). In this case a triangulation algorithm cannot be used, because the pixels cannot be related to a real distance; therefore, the stereo vision technique must be used and the following parameters are needed: the binocular disparity of stereo vision, given by the uL and uR coordinates, rectified and undistorted, of the marker (20) obtained from the two image components, left (41) and right (42), captured by the stereo camera (12); the values of the baseline distance B and the focal length f of the stereo camera (12); and the angle of rotation (δ) of the target user (10). In order to transform the image coordinates to depth, the projective geometry is calculated at the current time n according to the equation:

L_marker(n) = (baseline · focal_length) / disparity = (B · f) / (uL - uR)

Once the distance L_marker(n) between the objective (10) and the marker (20) is calculated, its position in the scene can be obtained. As it has moved vertically, the only coordinate that has apparently changed is y, but it is necessary to take into account the angle of rotation δ to obtain the absolute coordinates. The coordinates (xn, yn) at the current instant are equal to the coordinates at the previous instant (xn-1, yn-1) plus the displacement made. If r(n-1) > r(n):

xn = xn-1 - sin(δ) · |L_marker(n) - L_marker(n-1)|
yn = yn-1 - cos(δ) · |L_marker(n) - L_marker(n-1)|

Figures 5A-5B show the case in which a vertical movement of the target (10) has been determined and two or more markers (20, 20', 20") are detected in the image (50) captured by the stereo camera (12). In this case triangulation can be applied, since more than one marker is available, using the actual distance (d/m) between markers (20, 20', 20"), the angle of rotation (δ), the opening angle (2φ) of the camera (12) and the number of pixels (A×B) of the image (50). Knowing the horizontal image coordinates of the markers (20, 20', 20"), the distance in pixels q between them is calculated, q = u2 - u1, which in the real world is equal to d/m meters, where m is the number of markers, so the actual distance in meters L_marker(n) between the target (10) and one of the markers, marker (20), at the present time n is:

L_marker(n) = (d/m) · A / (2 · tg(φ) · |u2 - u1|)

In Figures 4A-4B and 5A-5B, the angle φ shown refers to half of the opening angle (2φ) of the camera (12). Knowing the distance to the marker and the distance at the previous instant n-1, the meters traveled are calculated as the difference between the two. From that value and the user's previous position, the coordinates of the target (10) at the previous instant (xn-1, yn-1), its new coordinates (xn, yn) at the current time can be calculated. In this case stereo vision can also be used to obtain depth to the markers, but it is necessary to apply stereo correspondences, that is, to relate the markers of the left image with their equivalents in the right image. Once the correspondences are obtained, projective geometry can be applied, as in the case of a single marker, to obtain the real distance to each marker. Figures 6A-6B show the case in which a horizontal movement of the objective (10) has been determined.
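The two distance expressions for vertical movement can be sketched as follows. This is a sketch, not the patent's implementation: the function and parameter names are illustrative, and the triangulation formula follows a reconstruction of a garbled original, so its exact form is an assumption.

```python
import math

def depth_from_stereo(u_left, u_right, baseline, focal_length):
    """Distance to a single marker from binocular disparity:
    L_marker(n) = (baseline * focal_length) / (u_left - u_right)."""
    disparity = u_left - u_right
    return baseline * focal_length / disparity

def depth_from_triangulation(u1, u2, d_over_m, image_width, phi_deg):
    """Distance to a marker when several are visible, from the known real
    spacing d/m between markers, the image width A in pixels and half the
    camera opening angle phi (reconstructed formula):
    L_marker(n) = (d/m) * A / (2 * tg(phi) * |u2 - u1|)."""
    q = abs(u2 - u1)  # marker separation in pixels
    return d_over_m * image_width / (2.0 * math.tan(math.radians(phi_deg)) * q)
```

For instance, a 0.1 m baseline, a 500-pixel focal length and a 10-pixel disparity give a stereo depth of 5 m; two markers 1 m apart seen 100 pixels apart in a 640-pixel image with a 90-degree opening angle give a triangulated depth of 3.2 m.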
Figure 6A refers to the case in which only a single marker (20) is detected in the image (61, 62). An algorithm reminiscent of stereo geometry is applied, but in this case, instead of two images of the same instant taken from two different angles, two images of contiguous instants and the same perspective are used: the image captured at the current instant (61) and the one captured at the immediately previous instant (62). Likewise, the horizontal coordinates of the marker at the current instant (un) and those obtained from the previous frame (un-1) are available, as well as the previous distance between the marker and the user (Lmarcador(n-1)) and the focal length of the camera (12), so the horizontal displacement D made by the target user (10) is calculated according to the following expression:

D = Lmarcador(n-1) · |un-1 − un| / focal length

Once the displacement D, in metres, made by the target user (10) is known, its real coordinates can be obtained; they depend on its position at the previous instant n-1 and on the type of displacement, left or right, performed:

If un-1 < un:
xn = xn-1 − cos(δ) · D
yn = yn-1 + sin(δ) · D

If un-1 > un:
xn = xn-1 + cos(δ) · D
yn = yn-1 − sin(δ) · D

Figure 6B refers to the case in which more than one marker (20, 20', 20") is detected in the image. A technique similar to the triangulation explained for a vertical movement of the user with a plurality of markers detected is applied, but in this case two images (63, 64) captured consecutively in time by the same image sensor are used: the current image (63) and the image captured at the previous instant (64). Knowing the real distance between markers and the pixels between them, p pixels at the current instant n and q pixels at the previous instant n-1, the length that the user has moved can be extrapolated.
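The single-marker horizontal update of Figure 6A can be sketched as follows. The sign convention for the two branches is reconstructed from the partially garbled original, and the names are mine.

```python
import math

def horizontal_update_single(x_prev, y_prev, u_prev, u_now,
                             L_prev, focal_length, delta_rad):
    """Displacement D = L(n-1) * |u(n-1) - u(n)| / f, then the new (x, y).

    Sign convention (reconstructed): when the marker's image coordinate
    grows (u(n-1) < u(n)), cos(delta)*D is subtracted from x and
    sin(delta)*D is added to y; the opposite movement reverses the signs."""
    D = L_prev * abs(u_prev - u_now) / focal_length
    if u_prev < u_now:
        return (x_prev - math.cos(delta_rad) * D,
                y_prev + math.sin(delta_rad) * D)
    if u_prev > u_now:
        return (x_prev + math.cos(delta_rad) * D,
                y_prev - math.sin(delta_rad) * D)
    return x_prev, y_prev  # no horizontal change observed
```

For instance, a marker 5 m away whose image coordinate shifts by 50 pixels under a 500-pixel focal length yields D = 0.5 m.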
This requires knowing, in addition to the distance between markers (d/m), the angle of rotation (δ) and the image coordinates (un-1) of the markers (20, 20', 20") in the previous image (64):

D = cos(δ) · (d/m) · |u1(n-1) − u1(n)| / p

where p = |u2(n) − u1(n)| is the separation in pixels between the markers at the current instant n (and q the analogous separation at the previous instant n-1), so that (d/m)/p converts pixels into metres.

As in the case of a single marker, once the displacement D is known, the real coordinates of the target user (10) can be obtained:

If un-1 < un:
xn = xn-1 − cos(δ) · D
yn = yn-1 + sin(δ) · D

If un-1 > un:
xn = xn-1 + cos(δ) · D
yn = yn-1 − sin(δ) · D

In this case the previous single-marker method can also be applied to obtain the displacement D made by the target user (10): from the coordinates of the same marker in two adjacent images, neglecting the rest of the detected markers, and with the previous distance between the marker and the user, the displacement D is calculated.
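The multi-marker horizontal displacement of Figure 6B can be sketched as below. The expression is a reconstruction of the garbled original (pixel shift of one marker scaled by the metres-per-pixel ratio of the current frame), with names of my choosing.

```python
import math

def horizontal_displacement_multi(u1_prev, u1_now, u2_now,
                                  d_between, delta_rad):
    """D = cos(delta) * (d/m) * |u1(n-1) - u1(n)| / p, with p = |u2(n) - u1(n)|.

    d_between is the real separation d/m between the two markers; dividing
    it by the current pixel separation p gives the metres-per-pixel scale,
    which converts the marker's pixel shift into a real displacement."""
    p = abs(u2_now - u1_now)
    if p == 0:
        raise ValueError("markers coincide in the image: no pixel scale")
    return math.cos(delta_rad) * d_between * abs(u1_prev - u1_now) / p
```

With markers 0.5 m apart appearing 50 pixels apart, a 20-pixel shift of one marker and no rotation, D = 0.2 m.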
Claims (12)
[1] 1. Method for spatial location of a target (10) using at least one luminous marker (20) identifiable in the reference use environment, which at a time instant i calculates coordinates (xi, yi) of the target (10), characterized in that it comprises:
- capturing by means of a stereo camera (12) a first image frame at a current time instant and a second image frame at a previous time instant, detecting in the first and second image frames at least one marker (20);
- obtaining a radius at the current time instant and a radius at the previous time instant of the at least one marker (20) detected in the first image frame and second image frame;
- obtaining an angle of rotation of the target (10) by means of an angle measuring device (13) at the current time instant and at the previous time instant;
- if the angle of rotation at the current time instant and the angle of rotation at the previous time instant are different, calculating the coordinates (xn, yn) of the target (10) at the current instant equal to the coordinates (xn-1, yn-1) of the target (10) at the previous instant;
- if the first image frame and the second image frame are the same, calculating the coordinates (xn, yn) of the target (10) at the current instant equal to the coordinates (xn-1, yn-1) of the target (10) at the previous instant;
- otherwise, comparing the radii at the current time instant and at the previous time instant of the at least one marker (20) detected and:
- if the radii are equal and there is more than one marker (20, 20', 20") detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant by triangulation using the first image frame and the second image frame;
- if the radii are different and there is more than one marker (20, 20', 20") detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant by triangulation using a single image frame, which is the first image frame;
- if the radii are different and there is a single marker (20) detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant by stereo geometry;
- if the radii are equal and there is a single marker (20) detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant by calculating image coordinates of the marker (20) at the current time instant in the first image frame and image coordinates of the marker (20) obtained at the previous time instant in the second image frame.
[2] 2. Spatial location method according to claim 1, characterized in that it uses a luminous marker (20) comprising a light source (21) and a contrast surface (22).
[3] 3. Spatial location method according to claim 2, characterized in that it uses a luminous marker (20) comprising a light source (21) that is an LED diode.
[4] 4. Spatial location method according to claim 1, characterized in that, if the radii are equal and there is more than one marker (20, 20', 20") detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant comprises:
- for each marker (20, 20', 20"), obtaining horizontal image coordinates un in the first image frame at the current time instant and horizontal image coordinates un-1 in the second image frame at the previous time instant;
- measuring a displacement D of each marker (20, 20', 20") by the expression:
D = cos(δ) · (d/m) · |u1(n-1) − u1(n)| / p
where p is the separation in pixels between markers at the current instant n and q the analogous separation at the previous instant n-1, m is the total number of markers, d/m is the real distance between markers (20, 20', 20") and δ is the obtained angle of rotation of the target (10);
- calculating the coordinates (xn, yn) of the target (10) at the current instant using the equations:
If un-1 < un: xn = xn-1 − cos(δ) · D; yn = yn-1 + sin(δ) · D
If un-1 > un: xn = xn-1 + cos(δ) · D; yn = yn-1 − sin(δ) · D
[5] 5. Spatial location method according to claim 1, characterized in that, if the radii are equal and there is a single marker (20) detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant comprises:
- for the marker (20), obtaining horizontal image coordinates un in the first image frame at the current time instant and horizontal image coordinates un-1 in the second image frame at the previous time instant;
- measuring a displacement D of the target (10) by the expression:
D = Lmarcador(n-1) · |un-1 − un| / focal length
where the camera (12) has a focal length f and Lmarcador(n-1) is the distance between the marker (20) and the target (10) measured at the previous time instant;
- calculating the coordinates (xn, yn) of the target (10) at the current instant using the equations:
If un-1 < un: xn = xn-1 − cos(δ) · D; yn = yn-1 + sin(δ) · D
If un-1 > un: xn = xn-1 + cos(δ) · D; yn = yn-1 − sin(δ) · D
[6] 6. Spatial location method according to claim 1, characterized in that, if the radii, which are the radius at the current instant r(n) and the radius at the previous instant r(n-1), are different and there is more than one marker (20, 20', 20") detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant comprises:
- obtaining in the first image frame first horizontal image coordinates u1 of a first marker (20) and second horizontal image coordinates u2 of a second marker (20');
- measuring a distance at the current instant between the target (10) and the first marker (20) by the expression:
Lmarcador(n) = cos(δ) · (d/m) / tan(2φ · (u2 − u1) / A)
where the camera (12) has an opening angle 2φ, A×B is the number of pixels of the two-dimensional image at the current instant, m is the total number of markers, d/m is the real distance between the markers (20, 20') and δ is the obtained angle of rotation of the target (10);
- obtaining a distance Lmarcador(n-1) measured at the previous time instant between the target (10) and the first marker (20);
- calculating the coordinates (xn, yn) of the target (10) at the current instant using the equations:
xn = xn-1 − sin(δ) · |Lmarcador(n-1) − Lmarcador(n)|
yn = yn-1 − cos(δ) · |Lmarcador(n-1) − Lmarcador(n)|
[7] 7. Spatial location method according to claim 1, characterized in that, if the radii, which are the radius at the current instant r(n) and the radius at the previous instant r(n-1), are different and there is a single marker (20) detected, obtaining the coordinates (xn, yn) of the target (10) at the current instant comprises:
- obtaining rectified coordinates uL and uR of the marker (20) in a left image component (41) and a right image component (42) captured by the stereo camera (12), giving a binocular disparity;
- measuring a distance at the current instant between the target (10) and the marker (20) by the expression:
Lmarcador(n) = B · f / (uL − uR)
where the stereo camera (12) has a binocular disparity equal to uL − uR, a baseline distance B and a focal length f;
- obtaining a distance Lmarcador(n-1) measured at the previous time instant between the target (10) and the marker (20);
- calculating the coordinates (xn, yn) of the target (10) at the current instant using the equations, where δ is the angle of rotation of the target (10) obtained at the current instant:
xn = xn-1 − sin(δ) · |Lmarcador(n-1) − Lmarcador(n)|
yn = yn-1 − cos(δ) · |Lmarcador(n-1) − Lmarcador(n)|
[8] 8. Spatial location system of a target (10) in a three-dimensional environment (11) comprising at least one reference luminous marker (20) and a digital signal processor (14) to calculate coordinates (xi, yi) of the target (10) at a time instant i, characterized in that it comprises:
- a stereo camera (12) to capture a first image frame at a current time instant and a second image frame at a previous time instant;
- an angle measuring device (13) to obtain an angle of rotation of the target (10) at the current time instant and at the previous time instant;
- the signal processor (14) with access to a memory (15) that stores a radius at the current time instant and a radius at the previous time instant of the at least one marker (20) detected in the first image frame and second image frame; the signal processor (14) configured to:
- if the angle of rotation at the current time instant and the angle of rotation at the previous time instant are different, calculate the coordinates (xn, yn) of the target (10) at the current instant equal to the coordinates (xn-1, yn-1) of the target (10) at the previous instant;
- if the first image frame and the second image frame are the same, calculate the coordinates (xn, yn) of the target (10) at the current instant equal to the coordinates (xn-1, yn-1) of the target (10) at the previous instant;
- otherwise, compare the radii at the current time instant and at the previous time instant of the at least one marker (20) detected and:
- if the radii are equal and there is more than one marker (20, 20', 20") detected, calculate the coordinates (xn, yn) of the target (10) at the current instant by triangulation using the first image frame and the second image frame;
- if the radii are different and there is more than one marker (20, 20', 20") detected, calculate the coordinates (xn, yn) of the target (10) at the current instant by triangulation using a single image frame, which is the first image frame;
- if the radii are different and there is a single marker (20) detected, calculate the coordinates (xn, yn) of the target (10) at the current instant by stereo geometry;
- if the radii are equal and there is a single marker (20) detected, calculate the coordinates (xn, yn) of the target (10) at the current instant using image coordinates of the marker (20) at the current time instant in the first image frame and image coordinates of the marker (20) obtained at the previous time instant in the second image frame.
[9] 9. Spatial location system according to claim 8, characterized in that the luminous marker (20) comprises a light source (21) that is an LED diode.
[10] 10. Spatial location system according to any of claims 8-9, characterized in that the luminous marker (20) comprises a contrast surface (22).
[11] 11. Spatial location system according to any of claims 8-10, characterized in that the environment (11) is interior.
[12] 12. Spatial location system according to any of claims 8-10, characterized in that the environment (11) is exterior.
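The branch selection shared by claims 1 and 8 can be sketched as follows. This is an illustrative reading of the claimed decision logic only; the function name, parameters and the four injected branch callables are hypothetical, not part of the patent.

```python
def locate(prev_xy, angle_now, angle_prev, frame_now, frame_prev,
           radii_now, radii_prev, num_markers,
           triangulate_two_frames, triangulate_one_frame,
           stereo_geometry, same_marker_two_frames):
    """Choose the localisation method as in claims 1 and 8 (hypothetical API).

    The four trailing arguments are callables implementing the four
    computation branches named in the claims."""
    if angle_now != angle_prev:      # rotation detected: keep previous position
        return prev_xy
    if frame_now == frame_prev:      # identical frames: no movement observed
        return prev_xy
    radii_equal = radii_now == radii_prev
    if radii_equal and num_markers > 1:
        return triangulate_two_frames(frame_now, frame_prev)
    if not radii_equal and num_markers > 1:
        return triangulate_one_frame(frame_now)
    if not radii_equal:              # single marker, radii differ
        return stereo_geometry(frame_now)
    return same_marker_two_frames(frame_now, frame_prev)  # single marker, radii equal
```

Dependency injection keeps the selection logic separate from the geometry of each branch, mirroring how the claims name four distinct computation methods.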
Family patents:
Publication number | Publication date
ES2543038B2 | 2015-11-26
WO2016102721A1 | 2016-06-30
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US7231063B2 | 2002-08-09 | 2007-06-12 | InterSense, Inc. | Fiducial detection system
EP1501051A2 | 2003-07-08 | 2005-01-26 | Canon Kabushiki Kaisha | Position and orientation detection method and apparatus
US20100045701A1 | 2008-08-22 | 2010-02-25 | Cybernet Systems Corporation | Automatic mapping of augmented reality fiducials
US8761439B1 | 2011-08-24 | 2014-06-24 | SRI International | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit
CN112955930A | 2018-10-30 | 2021-06-11 | Alt LLC | System and method for reverse optical tracking of a moving object
Legal status:
2015-11-26 | FG2A | Definitive protection | Ref document number: 2543038, Country: ES, Kind code: B2, Effective date: 2015-11-26
Priority:
Application number (publication) | Filing date | Patent title
ES201500011A (ES2543038B2) | 2014-12-23 | Spatial location method and system using light markers for any environment
PCT/ES2015/000182 (WO2016102721A1) | 2015-12-16 | Method and system for spatial localisation using luminous markers for any environment