METHOD, SYSTEM AND DEVICE FOR MOBILE LOCATION DETERMINATION
Patent abstract:
A computing device comprising: a tracking assembly including a camera; a data capture module; and a controller associated with the tracking assembly and the data capture module, the controller being configured to: control the tracking assembly to track successive poses of the computing device in a first frame of reference; control the data capture module to capture and decode an indicium placed in a facility; based on the indicium, obtain a position of a point in the facility in a second frame of reference corresponding to the facility; via the camera, detect features of a marker placed in the facility; based on the detected features, determine an orientation of the marker in the second frame of reference; and for each pose in the first frame of reference, generate a corresponding pose in the second frame of reference according to the position of the point and the orientation.

Publication number: BE1027971B1
Application number: E20215049
Filing date: 2021-01-22
Publication date: 2022-03-09
Inventors: Alan J Epshteyn; Meena Avasarala; Sundaresan Sundaram; Vaibhav Srinivasa; Anup Kulkarni; Kaushik Samudrala; Shruti Kulkarni; Babu Induja Seshagiri
Applicant: Zebra Tech
IPC main classification:
Patent description:
METHOD, SYSTEM AND DEVICE FOR MOBILE LOCATION DETERMINATION

BACKGROUND

Location tracking of mobile computing devices within facilities or other work environments can be used to deliver specific content (e.g., location-specific tasks) to devices, for device fleet management activities, and the like. However, in some environments, certain location-finding technologies may be unavailable or insufficiently accurate in determining location.

SUMMARY OF THE INVENTION

According to one aspect of the invention, there is provided a computing device comprising: a tracking assembly including a camera; a data capture module; and a controller associated with the tracking assembly and the data capture module, the controller being configured to: control the tracking assembly to track successive poses of the computing device in a first frame of reference; control the data capture module to capture and decode an indicium placed in a facility; based on the indicium, obtain a position of a point in the facility in a second frame of reference corresponding to the facility; via the camera, detect features of a marker placed in the facility; based on the detected features, determine an orientation of the marker in the second frame of reference; and for each pose in the first frame of reference, generate a corresponding pose in the second frame of reference according to the position of the point and the orientation.

The controller may further be configured to: generate a transformation between the first frame of reference and the second frame of reference based on the position of the point and the orientation; generate the corresponding pose in the second frame of reference according to the transformation; and publish the corresponding pose in the second frame of reference. The controller may further be configured to publish the corresponding pose by transferring the corresponding pose to a server. The controller may further be configured to establish the first frame of reference prior to tracking the successive poses. The controller may further be configured to detect the features of the marker in response to the capture and decoding of the indicium. The computing device may further comprise an input assembly, wherein the controller is configured to control the data capture module to capture and decode the indicium in response to an activation of the input assembly. The computing device may further comprise a memory having stored thereon a reference image of the marker, wherein the controller is further configured to detect the features of the marker by comparing an image captured with the camera against the reference image. The point in the facility may be one of the features of the marker. The controller may further be configured to obtain the orientation in the second frame of reference based on the indicium. The controller may further be configured to, in response to capturing the indicium: obtain an identifier of the indicium; transmit a request including the identifier to a server; and receive, from the server, the position in the second frame of reference.
According to another aspect of the invention, there is provided a method comprising: controlling a tracking assembly of a computing device to track successive poses of the computing device in a first frame of reference; controlling a data capture module of the computing device to capture and decode an indicium placed in a facility; obtaining, based on the indicium, a position of a point in the facility in a second frame of reference corresponding to the facility; detecting, via a camera of the computing device, features of a marker placed in the facility; determining, based on the detected features, an orientation of the marker in the second frame of reference; and generating, for each pose in the first frame of reference, a corresponding pose in the second frame of reference according to the position of the point and the orientation.

The method may further comprise: generating a transformation between the first frame of reference and the second frame of reference based on the position of the point and the orientation; generating the corresponding pose in the second frame of reference according to the transformation; and publishing the corresponding pose in the second frame of reference. Publishing the corresponding pose may include transferring the corresponding pose to a server. The method may further comprise, prior to tracking the successive poses, establishing the first frame of reference. The method may further comprise detecting the features of the marker in response to the capture and decoding of the indicium. The method may further comprise detecting an activation of an input assembly, and controlling the data capture module to capture and decode the indicium in response to detecting the activation. The method may further comprise storing a reference image of the marker, wherein detecting the features of the marker comprises comparing an image captured with the camera against the reference image. The point in the facility may be one of the features of the marker. The method may further comprise obtaining, based on the indicium, the orientation in the second frame of reference. The method may further comprise, in response to capturing the indicium: obtaining an identifier of the indicium; transmitting a request including the identifier to a server; and receiving, from the server, the position in the second frame of reference.

BRIEF DESCRIPTION OF THE VARIOUS VIEWS OF THE DRAWINGS

The accompanying figures, where like reference numerals refer to identical or functionally equivalent elements throughout the various views, together with the detailed description below, are incorporated into and form part of the description, and serve to further illustrate embodiments of the concepts of the invention and to explain various principles and advantages of those embodiments.

FIG. 1 is a schematic view illustrating a mobile computing device.
FIG. 2 is a schematic diagram illustrating a rear view of the mobile computing device of FIG. 1.
FIG. 3 is a block diagram of certain internal hardware components of the mobile computing device of FIG. 1.
FIG. 4 is a flowchart of a location determination method.
FIG. 5 is a schematic representation illustrating an embodiment of block 405 of the method of FIG. 4.
FIG. 6 is a further schematic representation illustrating the implementation of block 405 of the method of FIG. 4.
FIG. 7 is a schematic representation illustrating an embodiment of blocks 410 and 415 of the method of FIG. 4.
FIG. 8 is a schematic view illustrating a further location determination mechanism used by the mobile computing device of FIG. 1.

Elements in the figures are illustrated for simplicity and clarity and are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention. The apparatus and method components are represented, where appropriate, by conventional symbols in the drawings, showing only those specific details necessary for understanding the embodiments of the present invention, so that the description is not obscured by details that will be readily apparent to those skilled in the art having the benefit of the description herein.

DETAILED DESCRIPTION

Examples described herein are directed to a computing device comprising: a tracking assembly including a camera; a data capture module; and a controller associated with the tracking assembly and the data capture module, the controller being configured to: control the tracking assembly to track successive poses of the computing device in a first frame of reference; control the data capture module to capture and decode an indicium placed in a facility; based on the indicium, obtain a position of a point in the facility in a second frame of reference corresponding to the facility; via the camera, detect features of a marker placed in the facility; based on the detected features, determine an orientation of the marker in the second frame of reference; and for each pose in the first frame of reference, generate a corresponding pose in the second frame of reference according to the position of the point and the orientation.

Additional examples described herein are directed to a method comprising: controlling a tracking assembly of a computing device to track successive poses of the computing device in a first frame of reference; controlling a data capture module of the computing device to capture and decode an indicium placed in a facility; obtaining, based on the indicium, a position of a point in the facility in a second frame of reference corresponding to the facility; detecting, via a camera of the computing device, features of a marker placed in the facility; determining, based on the detected features, an orientation of the marker in the second frame of reference; and generating, for each pose in the first frame of reference, a corresponding pose in the second frame of reference according to the position of the point and the orientation.

FIG. 1 illustrates a mobile computing device 100 (also referred to herein as the mobile device 100 or simply the device 100) capable of tracking its location within a facility, such as a warehouse or other work environment. The facility may be constructed in such a way that location-finding technologies such as the Global Positioning System (GPS) are unavailable or insufficiently accurate there. Instead, the device 100 uses a combination of local pose tracking and facility infrastructure to track a pose (i.e., a position and an orientation) of the device 100 according to a frame of reference previously established within the facility. The device 100 includes a housing 104 that supports various other components of the device 100. Among the components supported by the housing 104 are a display 108, which may include an integrated touch screen, as well as a data capture module 112, such as a barcode scanner.
As can be seen in FIG. 1, the data capture module 112 includes a scan window 116 through which the module 112 can capture images and/or emit laser beams to detect and decode indicia, such as barcodes, attached to objects within the facility. The device 100 may also include other output devices, such as a speaker 120, and other input devices, including one or more of a microphone, one or more buttons, a trigger, and the like. Such input devices are indicated as an input assembly 124 in FIG. 1 (including buttons on one side of the housing 104).

As shown in FIG. 2, which illustrates a rear view of the device 100, the device 100 also includes a camera 200 comprising a suitable image sensor or combination of image sensors. The camera 200 is controllable to capture a sequence of images (e.g., a live video stream) for subsequent processing. The camera 200 has a field of view (FOV) 204 extending from a rear surface 208 of the device 100 (opposite the display 108). In the illustrated example, the FOV 204 is substantially perpendicular to the rear surface 208. The data capture module 112, as also illustrated in FIG. 2, has a FOV 212 extending from the scan window 116 and substantially perpendicular to the FOV 204.

As will be discussed below, data captured with the data capture module 112 and images captured with the camera 200 are used to track a pose of the device 100 within the facility. In particular, images captured via the camera 200, in combination with other motion detection data, e.g., from an inertial measurement unit (IMU) of the device 100, are used to track successive poses of the device 100 according to a first frame of reference. However, the first frame of reference is arbitrarily defined and need not correspond to a second frame of reference previously established in the facility. Therefore, the device 100 is arranged to also use data obtained through the data capture module 112 to determine a transformation (e.g., a function combining a translation and a rotation) between the first and second frames of reference. The transformation, once generated, allows the device to continue pose tracking in the first frame of reference while converting the poses obtained thereby into the second frame of reference.

Before further discussing the functionality implemented by the device 100, certain components of the device 100 will be described with reference to FIG. 3, which illustrates a block diagram of certain components of the device 100. In addition to the display and touch screen 108, the data capture module 112, and the input assembly 124, the device 100 includes a special-purpose controller, such as a processor 300, interconnected with a non-transitory computer-readable storage medium, such as a memory 304. The memory 304 includes a combination of volatile memory (e.g., working memory or RAM) and non-volatile memory (e.g., read-only memory or ROM, electrically erasable programmable read-only memory or EEPROM, flash memory). The processor 300 and the memory 304 each comprise one or more integrated circuits. The device 100 also includes a motion sensor, such as an inertial measurement unit (IMU) 308, that includes one or more accelerometers, one or more gyroscopes, and/or one or more magnetometers. The IMU 308 is configured to generate data indicating detected motion of the device 100 and to provide that data to the processor 300.
The processor 300 may be configured to process the motion data from the IMU 308, along with the images from the camera 200, to track a current pose of the device 100 in the above-mentioned first frame of reference. The camera 200 and the IMU 308 can therefore also be referred to as a tracking assembly. Other combinations of image sensors and/or motion sensors can be used to implement a tracking assembly. For example, a tracking assembly may be implemented with one or more of an ultra-wideband sensor, a lidar sensor, an ultrasonic sensor, and the like.

The device 100 also includes a communications interface 310 that allows the device 100 to exchange data with other computing devices, e.g., over a network 312. The other computing devices may include a server 316, which may be deployed within the facility or remote from the facility.

The memory 304 stores computer-readable instructions for execution by the processor 300. In particular, the memory 304 stores a pose-tracking application 320 which, when executed by the processor 300, configures the processor 300 to track a sequence of poses of the device 100 based on data from the IMU 308 and the camera 200, as mentioned above. The poses generated via execution of the application 320 are in the first frame of reference, which may be arbitrarily defined by the application 320. The memory 304 also stores a location management application 324. Execution of the application 324 configures the processor 300 to use data from the camera 200 and the data capture module 112 to convert poses generated by the application 320 (i.e., poses in the first frame of reference) into poses in the second frame of reference, which may be a predetermined frame of reference corresponding to the facility, rather than an arbitrary and temporary (e.g., session-specific) frame of reference such as the first frame of reference used by the application 320. Each of the applications 320 and 324 may also be implemented as a set of distinct applications in other examples, and the applications 320 and 324 may be combined into a single application in further examples. The processor 300, when so configured by executing the applications 320 and 324, may also be referred to as a locating controller, or simply a controller. In other embodiments, the functionality implemented by the processor 300 through the execution of the applications 320 and 324 may also be implemented by one or more specially designed hardware and firmware components, such as FPGAs, ASICs, and the like.

With reference to FIG. 4, the functionality implemented by the device 100 will be described in more detail. FIG. 4 illustrates a location determination method 400, which will be discussed below in conjunction with its implementation by the device 100. At block 405, the device 100 initiates pose tracking. That is, the device 100 begins tracking successive poses (i.e., positions and orientations of the device 100 in three dimensions) at any suitable frequency (e.g., at a frequency of about 30 or 60 Hz, although a wide variety of other pose estimation frequencies may also be used). For example, the frequency at which pose estimates are generated by the device 100 may depend on the frame rate of the camera 200 and/or the sampling rate of the IMU 308. Pose tracking may be initiated in response to receiving an input command, e.g., from the user of the device 100 through the input assembly 124.
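As a structural sketch of the division of labor between the applications 320 and 324 described above — the tracker emitting poses in the session-local frame, and the location manager converting and publishing them — consider the following Python outline. The interfaces (tracker, transform, publish) are hypothetical assumptions for illustration, not the patent's actual software:

```python
def local_poses(tracker):
    """Application 320's role: yield successive poses in the local frame
    (e.g., at the camera frame rate of roughly 30 or 60 Hz)."""
    while tracker.running():
        yield tracker.latest_pose()

def manage_location(tracker, transform, publish):
    """Application 324's role: convert each local pose into the facility
    frame using a previously determined transform, then publish it."""
    for pose in local_poses(tracker):
        publish(transform.apply(pose))
```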
To track the pose of the device 100, the processor 300 directs the tracking assembly (i.e., the camera 200 and the IMU 308) to capture data representing the environment of the device 100, as well as movement of the device 100. In the present example, the processor 300 directs the camera 200 to begin capturing a stream of images and to provide the images to the processor 300. The processor 300 also directs the IMU 308 to provide motion data (e.g., defining accelerations affecting the device 100, as well as changes in orientation of the device 100). The processor 300 detects at least one image feature in the images from the camera 200 and tracks the changes in position of such features between images. Exemplary features include corners, edges (e.g., changes in gradient), and the like, detectable by any suitable feature-detection algorithm. The movement of such features between images is indicative of movement of the device 100. The positions of the above image features, as well as the motion data from the IMU 308, may be provided as input to a pose estimator implemented by the processor 300, such as a Kalman filter. Various mechanisms can be used to combine image and/or motion sensor data to generate pose estimates; examples of such mechanisms include those implemented by the ARCore software development kit provided by Google LLC and the ARKit software development kit provided by Apple Inc. In other words, the application 320 shown in FIG. 3 can implement the above functionality.

FIG. 5 illustrates an exemplary pose estimate as determined at block 405, including a position 500 and an orientation 504. The position 500 represents the location of a reference point on the device 100, such as a center of gravity of the device 100. In other embodiments, the position 500 corresponds to another point on the device 100 (e.g., a center point of a lens assembly of the camera 200). The orientation 504 represents the direction in which the front of the device 100 (e.g., the forward surface carrying the scan window 116) is currently facing. The position 500 and orientation 504 are defined relative to a three-dimensional frame of reference. In particular, the position 500 and orientation 504 are defined according to a first frame of reference arbitrarily generated by the device 100 when execution of block 405 begins.

FIG. 6 illustrates an exemplary first frame of reference 600-1 as mentioned above. The frame of reference 600-1 may also be referred to as a local frame of reference, identified by the suffix "1", due to its arbitrary and session-specific nature. As shown in FIG. 6, the position 500 is at an origin O1 of the frame of reference 600-1, for example because the frame of reference 600-1 was initialized to the current position of the device 100 when execution of the method 400 began. The position 500 is defined by coordinates along each of the three axes X1, Y1, and Z1 of the frame of reference 600-1, and the orientation 504 is defined by angles in each of the three planes formed by those axes. For example, the orientation 504 can be defined by a pitch angle in the X1-Y1 plane, a roll angle in the Y1-Z1 plane, and a yaw angle in the X1-Z1 plane. For example, the pose consisting of the position 500 and orientation 504 can be defined by the following trios of X, Y, and Z coordinates and roll, pitch, and yaw angles: [0, 0, 0] and [0°, -15°, -30°].
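To make the pose representation concrete, the following is a minimal Python sketch of such a record; the class and field names are illustrative assumptions, not the device's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A pose in some frame of reference: a 3-D position plus roll,
    pitch, and yaw angles (in degrees), as described above."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

# The example pose from the text: the device sits at the local origin O1,
# pitched -15 degrees and yawed -30 degrees relative to the local axes.
pose_500_504 = Pose(x=0.0, y=0.0, z=0.0, roll=0.0, pitch=-15.0, yaw=-30.0)
```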
FIG. 6 also shows a frame of reference 600-c previously established at the facility mentioned above and defined by an origin Oc and axes Xc, Yc, and Zc. A map 604 of the facility is also shown, indicating the locations of the origins O1 and Oc of the frames of reference 600, as well as the physical location of the device 100 (via the position 500 and orientation 504). The frame of reference 600-c uses the suffix "c" to indicate that it defines a common frame of reference, which may also be referred to as a world frame of reference, a map frame of reference, or the like. In general, the device 100 and other computing devices (e.g., the server 316) use the frame of reference 600-c to indicate locations of tasks to be performed, to determine which data to send to devices 100, and the like.

It is clear from FIG. 6 that although the Y axes Yc and Y1 of the frames of reference 600 are parallel, the origins Oc and O1 do not coincide, and the X and Z axes of the frames of reference 600 are at different angles from each other. Because the frame of reference 600-c is substantially constant while the frame of reference 600-1 is dynamically generated by the device 100, the coordinates and angles defining the position 500 and orientation 504 as obtained at block 405 may be unusable for other devices (e.g., the server 316). The device 100 is therefore arranged to perform additional actions to convert tracked poses in the frame of reference 600-1 into poses defined in the frame of reference 600-c. More specifically, as will be discussed below, the device 100 is configured to determine a translation adjustment 608 between the origins Oc and O1 and a yaw adjustment 612 between the axes X1 and Xc. The adjustments 608 and 612 can then be saved and applied to future poses generated via the application 320 to convert such poses to representations in the common frame of reference 600-c. The remainder of the method 400, which can be performed via execution of the application 324, serves to generate the above adjustments 608 and 612.
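As a rough illustration of how the saved adjustments could be applied, the sketch below converts a pose from the frame of reference 600-1 to the frame of reference 600-c under the geometry of FIG. 6 (parallel Y axes, so the frames differ only by yaw and translation). The function name and the sign conventions of the yaw rotation are assumptions; the patent does not prescribe an implementation:

```python
import math

def to_common_frame(x, y, z, yaw_deg, trans_adj, yaw_adj_deg):
    """Apply the yaw adjustment 612, then the translation adjustment 608,
    to a pose tracked in the local frame 600-1. trans_adj is the (x, y, z)
    offset between the frames; only X and Z change under the yaw rotation
    because the Y axes are parallel."""
    a = math.radians(yaw_adj_deg)
    xr = x * math.cos(a) + z * math.sin(a)   # rotate about the Y axis
    zr = -x * math.sin(a) + z * math.cos(a)
    tx, ty, tz = trans_adj
    return (xr + tx, y + ty, zr + tz, yaw_deg + yaw_adj_deg)
```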
Returning to FIG. 4, once pose tracking is initiated at block 405, it is performed continuously for the remainder of the method 400. That is, the device 100 generates a stream of poses in the local frame of reference 600-1. At block 410, while the pose tracking continues, the device 100 is arranged to capture and decode an indicium, such as a barcode, contained within the FOV 212 of the data capture module 112. Various forms of indicia are contemplated, including one- and two-dimensional barcodes. In response to the capture and decoding of the indicium, the device 100 is arranged to obtain a position in the common frame of reference 600-c. The position obtained at block 410 is then used to determine the translation adjustment 608 mentioned above. The indicium is applied to any suitable surface within the facility, e.g., a wall or the like, and the above position corresponds to the position of the indicium, or to an associated infrastructure component nearby, as described below. The capture of the indicium at block 410 may be performed in response to an activation of the input assembly 124 (e.g., a trigger or a button) or another suitable input device, such as the touch screen integrated with the display 108.

The above position may be obtained by various mechanisms. For example, the indicium itself may encode the position, for example as a trio of coordinates defining the position in the frame of reference 600-c. In other examples, the indicium may encode an identifier specific to that indicium (e.g., when the facility includes multiple such indicia), and the position obtained at block 410 may be requested from the server 316. That is, the device 100 may transmit a request including the identifier decoded from the indicium to the server 316, and the server 316 may return the position. A combination of the above mechanisms can also be used, wherein the indicium encodes at least a portion of the information defining the position, as well as an identifier that can be used to request further information from the server.

At block 415, the device 100 is configured to detect image features corresponding to a marker fixed in place in the facility. The marker may be painted, applied as a label, or the like, near the indicium captured at block 410. More particularly, the marker is positioned such that when the indicium is within the FOV 212, the marker is within the FOV 204 of the camera 200. In other words, the indicium and the marker are placed in the facility such that blocks 410 and 415 can be executed substantially simultaneously. Execution of block 415 may be initiated by the same input (e.g., a trigger press as mentioned above) as is used to initiate execution of block 410. At block 415, the processor 300 (as configured via execution of the application 324) obtains a frame captured by the camera 200 and searches the frame for predetermined image features corresponding to the shape of the marker. A wide variety of marker shapes and features can be considered. The execution of block 415 may in some examples include retrieving a reference image of the marker from the memory 304 and comparing the above frame with the reference image.

FIG. 7 illustrates an exemplary embodiment of blocks 410 and 415. In particular, an exemplary indicium 700 (a QR code in the present example) is shown, e.g., applied to a wall or other suitable surface. Additionally, a marker 704 is shown, e.g., applied to a floor or other suitable surface. The placement of the indicium 700 and the marker 704 is such that the indicium 700 may be located within the FOV 212 at the same time as the marker 704 is located within the FOV 204. Upon activation of an input such as a trigger, a button, or the like, the device 100 captures the indicium 700 and also captures an image through the camera 200. Because the camera 200 is controlled to capture a continuous stream of images for the pose tracking initiated at block 405, the execution of block 415 may simply include retrieving, from that stream, a frame that coincides in time with the activation of the input mentioned above.

At block 410, position information 708 is decoded from the indicium 700. As illustrated, the indicium 700 encodes both the position information and an identifier "A1" that can be used to request the position information from the server 316. In addition to a position [120, 131, 0], the indicium encodes an orientation of [0, 0, 0]. The position and orientation correspond to the marker 704. In particular, the position encoded in the indicium 700 (or retrieved from the server 316 based on the indicium 700) is the position in the frame of reference 600-c of a predetermined feature of the marker 704, such as the point 712 of the arrow. The point 712 of the marker 704, as well as other features (e.g., the corners 714), are identified in the image from the camera 200 to detect the marker 704. The orientation encoded in the indicium 700, meanwhile, represents the orientation of a longitudinal axis 716 of the marker 704.
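As a sketch of the two mechanisms described above — the indicium encoding the position directly, or encoding only an identifier used to query the server 316 — the snippet below parses a hypothetical payload matching the content of FIG. 7. The JSON layout and the server interface are illustrative assumptions; the patent does not specify an encoding:

```python
import json

def marker_pose_from_indicium(payload, server=None):
    """Return the marker's (position, orientation) in the common frame
    600-c, either decoded directly from the indicium or requested from
    the server by the indicium's identifier."""
    data = json.loads(payload)
    if "position" in data and "orientation" in data:
        return data["position"], data["orientation"]
    return server.lookup(data["id"])  # fall back to a server request

# Example payload mirroring FIG. 7: identifier "A1", the point 712 at
# [120, 131, 0], and the axis 716 aligned with Xc (all angles zero).
position, orientation = marker_pose_from_indicium(
    '{"id": "A1", "position": [120, 131, 0], "orientation": [0, 0, 0]}')
```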
In the present example, the indicium 700 indicates that the axis 716 is parallel to the axis Xc (i.e., the roll, pitch, and yaw angles are all zero degrees). Therefore, after executing blocks 410 and 415, the device 100 has obtained a position and an orientation corresponding to the marker 704 in the frame of reference 600-c.

Returning to FIG. 4, at block 420, the device 100 is arranged to determine a transformation between the frames of reference 600-1 and 600-c using the position and orientation obtained at blocks 410 and 415 (i.e., the position of the point 712 and the orientation of the axis 716). At block 420, the device 100 also uses the position and orientation of the marker 704 in the local frame of reference 600-1, which are available as a result of the pose-tracking activity performed throughout the execution of the method 400. Specifically, at block 420 the device 100 determines a difference between the position of the point 712 as expressed in the frame of reference 600-1 and the position of the point 712 as expressed in the frame of reference 600-c. That difference is the translation adjustment 608 shown in FIG. 6. In addition, the device 100 determines a difference between the orientation of the axis 716 as expressed in the frame of reference 600-1 and the orientation of the axis 716 as expressed in the frame of reference 600-c. That difference is the yaw adjustment 612 shown in FIG. 6. As is apparent from FIG. 6, the axes Yc and Y1 are parallel, and therefore no adjustments are needed for the pitch and roll angles in the present embodiment. However, in other examples, the frame of reference 600-1 may also have a different pitch and roll than the frame of reference 600-c. In such examples, the device 100 is arranged to determine the orientations of at least two features of the marker 704, such as the axis 716 and a transverse axis substantially perpendicular to the axis 716.

At block 425, after the transformation is determined (defined by the adjustments 608 and 612), the device 100 converts and publishes at least a subset of the tracked poses generated through the execution of block 405. That is, for at least the above subset, the device 100 converts a tracked pose from the frame of reference 600-1 to the frame of reference 600-c by applying the transformation consisting of the adjustments 608 and 612 to the tracked pose. In some examples, the device 100 converts each tracked pose generated via block 405. In other examples, the device 100 converts tracked poses at a configurable rate, such as once per second. The device 100 also publishes the converted tracked poses. Publishing can be done via transfer to other local applications executed by the device 100, via transfer to the server 316, or a combination thereof. The processor 300 may also control the display 108 to present successive converted tracked poses thereon.

At block 430, the device 100 determines whether the tracking session has ended. Tracking can be stopped by an explicit input (e.g., receiving a command via the display/touch screen 108). In other examples, tracking may be stopped if the local pose-tracking process initiated at block 405 suffers decreased accuracy, resulting in a discontinuity in pose tracking. In such cases, the frame of reference 600-1 can be discarded and replaced with a new local frame of reference. If the determination at block 430 is affirmative, the current execution of the method 400 ends, and a new execution can be initiated.
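Blocks 420 and 425 together amount to solving for a rigid transform given one point and one axis known in both frames of reference. Under the same yaw-only assumption as above (parallel Y axes), a minimal sketch might look as follows; all names are illustrative assumptions, and a full implementation would also handle pitch and roll using a second marker axis, as the text notes:

```python
import math

def build_adjustments(p_local, yaw_local_deg, p_common, yaw_common_deg):
    """Derive the yaw adjustment 612 and the translation adjustment 608
    from the marker's pose as tracked in the local frame 600-1 (p_local,
    yaw_local_deg) and as reported for the common frame 600-c."""
    yaw_adj = yaw_common_deg - yaw_local_deg                 # adjustment 612
    a = math.radians(yaw_adj)
    xr = p_local[0] * math.cos(a) + p_local[2] * math.sin(a)
    zr = -p_local[0] * math.sin(a) + p_local[2] * math.cos(a)
    # Adjustment 608: the offset that maps the rotated local point onto
    # its known position in the common frame.
    trans_adj = (p_common[0] - xr, p_common[1] - p_local[1], p_common[2] - zr)
    return trans_adj, yaw_adj

# Hypothetical example: the point 712 tracked at (2.0, 0.0, 1.0) with the
# axis 716 at 30 degrees in frame 600-1, while the indicium reports
# [120, 131, 0] and 0 degrees in frame 600-c.
trans_adj, yaw_adj = build_adjustments((2.0, 0.0, 1.0), 30.0, (120, 131, 0), 0.0)
```

Once saved, the pair (trans_adj, yaw_adj) could then be applied to every subsequent tracked pose, as in the earlier conversion sketch, with the converted poses republished at a configurable rate.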
In another example, with reference to FIG. 8, the marker 704 may be omitted in some embodiments, and a gantry or other physical support 800 may instead be deployed within the facility. The gantry 800 includes a designated cradle 804 for receiving the device 100 and fixing the position of the device 100. When positioned in the cradle 804, the device 100 is oriented such that an indicium 808 is within the FOV 212 of the data capture module 112. The indicium 808 may then encode a position and an orientation of the device 100 itself in the frame of reference 600-c. The device 100 can therefore determine the transformation from its currently tracked pose in the local frame of reference 600-1 and its current pose in the common frame of reference 600-c, as obtained from the indicium 808.

In the foregoing description, specific embodiments have been described. However, it will be recognized that various modifications and changes may be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the description and figures are to be understood as illustrative rather than limiting, and all such modifications are intended to be included within the scope of the present invention. The benefits, solutions to problems, and any element(s) that may cause any benefit or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.

In addition, relational terms such as first and second, top and bottom, and the like may be used throughout this document only to distinguish one entity or action from another entity or action, without necessarily requiring or implying an actual relationship or order between such entities or actions. The terms "comprise", "comprising", "has", "having", "contains", "containing", or any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or assembly that comprises, has, or contains a list of elements does not include only those elements, but may also include other elements not expressly listed or inherent in such a process, method, article, or assembly. An element preceded by "comprises ... a", "has ... a", or "contains ... a" does not, without further limitation, exclude the existence of additional identical elements in the process, method, article, or device that comprises, has, or contains the element. The terms "a" and "an" are defined as one or more unless expressly stated otherwise. The terms "substantially", "essentially", "near", "approximately", or any other version thereof are defined as being close to as understood by those skilled in the art, and in one non-limiting embodiment the term is defined as being within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term "coupled" is defined herein as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways not described.

It will be appreciated that some embodiments may comprise one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware)
that control the one or more processors to implement, in combination with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all of the functions may be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function, or some combinations of certain functions, is implemented as custom logic. Of course, a combination of the two approaches could be used.

In addition, an embodiment may be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (read-only memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), and a flash memory. It is further expected that, notwithstanding potentially significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, those skilled in the art, when guided by the concepts and principles described herein, will readily be capable of generating such software instructions and programs and ICs with minimal experimentation.

The abstract of the description is provided to give the reader a quick impression of the nature of the technical description. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, it can be seen from the foregoing detailed description that various features are grouped together in different embodiments to streamline the description. This manner of description should not be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single described embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
Claims:
Claims (20)

[1] A computing device (100), comprising: a tracking assembly including a camera (200); a data capture module (112); and a controller associated with the tracking assembly and the data capture module (112), the controller being configured to: control the tracking assembly to track successive poses of the computing device in a first frame of reference; control the data capture module to capture and decode an indicium placed in a facility; based on the indicium, obtain a position of a point in the facility in a second frame of reference corresponding to the facility; via the camera, detect features of a marker placed in the facility; based on the detected features, determine an orientation of the marker in the second frame of reference; and for each pose in the first frame of reference, generate a corresponding pose in the second frame of reference according to the position of the point and the orientation.

[2] The computing device of claim 1, wherein the controller is further configured to: generate a transformation between the first frame of reference and the second frame of reference based on the position of the point and the orientation; generate the corresponding pose in the second frame of reference according to the transformation; and publish the corresponding pose in the second frame of reference.

[3] The computing device of claim 1 or 2, wherein the controller is further configured to publish the corresponding pose by transferring the corresponding pose to a server.

[4] The computing device of any one of the preceding claims, wherein the controller is further configured to establish the first frame of reference prior to tracking the successive poses.

[5] The computing device of any one of the preceding claims, wherein the controller is further configured to detect the features of the marker in response to the capture and decoding of the indicium.

[6] The computing device of any one of the preceding claims, further comprising an input assembly, wherein the controller is configured to control the data capture module to capture and decode the indicium in response to an activation of the input assembly.

[7] The computing device of any one of the preceding claims, further comprising a memory having stored thereon a reference image of the marker, wherein the controller is further configured to compare an image captured with the camera with the reference image in order to detect the features of the marker.

[8] The computing device of any one of the preceding claims, wherein the point in the facility is one of the features of the marker.

[9] The computing device of any one of the preceding claims, wherein the controller is further configured to obtain the orientation in the second frame of reference based on the indicium.

[10] The computing device of any one of the preceding claims, wherein the controller is further configured to, in response to capturing the indicium: obtain an identifier of the indicium; transmit a request including the identifier to a server; and receive, from the server, the position in the second frame of reference.
[11] A method comprising: controlling a tracking assembly of a computing device to track successive poses of the computing device in a first frame of reference; controlling a data capture module of the computing device to capture and decode an indicium placed in a facility; obtaining, based on the indicium, a position of a point in the facility in a second frame of reference corresponding to the facility; detecting, via a camera of the computing device, features of a marker placed in the facility; determining, based on the detected features, an orientation of the marker in the second frame of reference; and generating, for each pose in the first frame of reference, a corresponding pose in the second frame of reference according to the position of the point and the orientation.

[12] The method of claim 11, further comprising: generating a transformation between the first frame of reference and the second frame of reference based on the position of the point and the orientation; generating the corresponding pose in the second frame of reference according to the transformation; and publishing the corresponding pose in the second frame of reference.

[13] The method of claim 12, wherein publishing the corresponding pose comprises transferring the corresponding pose to a server.

[14] The method of any one of claims 11 to 13, further comprising, prior to tracking the successive poses, establishing the first frame of reference.

[15] The method of any one of claims 11 to 14, further comprising detecting the features of the marker in response to the capture and decoding of the indicium.

[16] The method of any one of claims 11 to 15, further comprising detecting an activation of an input assembly, and controlling the data capture module to capture and decode the indicium in response to detecting the activation.

[17] The method of any one of claims 11 to 16, further comprising storing a reference image of the marker, wherein detecting the features of the marker comprises comparing an image captured with the camera with the reference image.

[18] The method of any one of claims 11 to 17, wherein the point in the facility is one of the features of the marker.

[19] The method of any one of claims 11 to 18, further comprising obtaining, based on the indicium, the orientation in the second frame of reference.

[20] The method of any one of claims 11 to 19, further comprising, in response to capturing the indicium: obtaining an identifier of the indicium; transmitting a request including the identifier to a server; and receiving, from the server, the position in the second frame of reference.
Similar technologies:
Publication number | Publication date | Patent title
JP6374107B2 | 2018-08-15 | Improved calibration for eye tracking system
US20170261993A1 | 2017-09-14 | Systems and methods for robot motion control and improved positional accuracy
US20210019854A1 | 2021-01-21 | Location Signaling with Respect to an Autonomous Vehicle and a Rider
BE1027971B1 | 2022-03-09 | METHOD, SYSTEM AND DEVICE FOR MOBILE LOCATION DETERMINATION
KR20140121861A | 2014-10-16 | System and method for determining location of a device using opposing cameras
US9661470B1 | 2017-05-23 | Methods and systems for locating an actor within an environment
US20180196415A1 | 2018-07-12 | Location Signaling with Respect to an Autonomous Vehicle and a Rider
KR20140090078A | 2014-07-16 | Method for processing an image and an electronic device thereof
TW201142749A | 2011-12-01 | Orientation determination of a mobile station using side and top view images
JP2020067439A | 2020-04-30 | System and method for estimating position of moving body
WO2019089210A1 | 2019-05-09 | Methods and apparatus for initializing object dimensioning systems
US11227395B2 | 2022-01-18 | Method and apparatus for determining motion vector field, device, storage medium and vehicle
US20140079320A1 | 2014-03-20 | Feature Searching Along a Path of Increasing Similarity
US20190310652A1 | 2019-10-10 | Method, system and apparatus for mobile automation apparatus localization
EP3566022B1 | 2021-03-10 | Location signaling with respect to an autonomous vehicle and a rider
JP2009266155A | 2009-11-12 | Apparatus and method for mobile object tracking
WO2019094125A1 | 2019-05-16 | Methods and apparatus for dimensioning an object using proximate devices
US11079240B2 | 2021-08-03 | Method, system and apparatus for adaptive particle filter localization
CN110291771B | 2021-11-16 | Depth information acquisition method of target object and movable platform
JP6393000B2 | 2018-09-19 | Hypothetical line mapping and validation for 3D maps
JP2007003448A | 2007-01-11 | Movement information generating device, movement information generating method, program, and storage medium
JP2017219389A | 2017-12-14 | Object tracking device, object tracking method, and object tracking program
US11015938B2 | 2021-05-25 | Method, system and apparatus for navigational assistance
US20210343035A1 | 2021-11-04 | Reference Surface Detection for Mobile Dimensioning
Ergun et al. 2018 | Real-time relative mobile target positioning using GPS-assisted stereo videogrammetry
Patent family:
Publication number | Publication date
WO2021155231A1 | 2021-08-05
US20210233256A1 | 2021-07-29
BE1027971A1 | 2021-08-02
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title
US20190171780A1 | 2017-02-22 | 2019-06-06 | Middle Chart, LLC | Orienteering system for responding to an emergency in a structure
US20190073550A1 | 2017-09-07 | 2019-03-07 | Symbol Technologies, Llc | Imaging-based sensor calibration
US20190156086A1 | 2017-11-17 | 2019-05-23 | Divine Logic, Inc. | Systems and methods for tracking items
USD914684S1 | 2018-06-11 | 2021-03-30 | Zebra Technologies Corporation | Mobile computing device
Legal status:
Priority:
Application number | Application date | Patent title
US16/776,054 (published as US20210233256A1) | 2020-01-29 | Method, System and Apparatus for Mobile Locationing